Dec 16 12:44:19.099870 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490] Dec 16 12:44:19.099888 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Fri Dec 12 15:17:36 -00 2025 Dec 16 12:44:19.099895 kernel: KASLR enabled Dec 16 12:44:19.099899 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Dec 16 12:44:19.099904 kernel: printk: legacy bootconsole [pl11] enabled Dec 16 12:44:19.099908 kernel: efi: EFI v2.7 by EDK II Dec 16 12:44:19.099913 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89d018 RNG=0x3f979998 MEMRESERVE=0x3db7d598 Dec 16 12:44:19.099917 kernel: random: crng init done Dec 16 12:44:19.099921 kernel: secureboot: Secure boot disabled Dec 16 12:44:19.099926 kernel: ACPI: Early table checksum verification disabled Dec 16 12:44:19.099930 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL) Dec 16 12:44:19.099934 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:44:19.099938 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:44:19.099943 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628) Dec 16 12:44:19.099948 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:44:19.099953 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:44:19.099958 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:44:19.099963 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:44:19.099967 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:44:19.099972 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:44:19.099976 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Dec 16 12:44:19.099981 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:44:19.099985 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Dec 16 12:44:19.099990 kernel: ACPI: Use ACPI SPCR as default console: Yes Dec 16 12:44:19.099994 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Dec 16 12:44:19.099999 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug Dec 16 12:44:19.100003 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug Dec 16 12:44:19.100008 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Dec 16 12:44:19.100013 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Dec 16 12:44:19.100018 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Dec 16 12:44:19.100022 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Dec 16 12:44:19.100026 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Dec 16 12:44:19.100031 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Dec 16 12:44:19.100035 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Dec 16 12:44:19.100039 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Dec 16 12:44:19.100044 kernel: ACPI: SRAT: Node 0 PXM 
0 [mem 0x800000000000-0xffffffffffff] hotplug Dec 16 12:44:19.100048 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff] Dec 16 12:44:19.100053 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff] Dec 16 12:44:19.100058 kernel: Zone ranges: Dec 16 12:44:19.100062 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Dec 16 12:44:19.100069 kernel: DMA32 empty Dec 16 12:44:19.100074 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Dec 16 12:44:19.100078 kernel: Device empty Dec 16 12:44:19.100084 kernel: Movable zone start for each node Dec 16 12:44:19.100089 kernel: Early memory node ranges Dec 16 12:44:19.100093 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Dec 16 12:44:19.100098 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff] Dec 16 12:44:19.100103 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff] Dec 16 12:44:19.100107 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff] Dec 16 12:44:19.100112 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff] Dec 16 12:44:19.100116 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff] Dec 16 12:44:19.100121 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Dec 16 12:44:19.100127 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Dec 16 12:44:19.100131 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Dec 16 12:44:19.100136 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1 Dec 16 12:44:19.100140 kernel: psci: probing for conduit method from ACPI. Dec 16 12:44:19.100145 kernel: psci: PSCIv1.3 detected in firmware. Dec 16 12:44:19.100150 kernel: psci: Using standard PSCI v0.2 function IDs Dec 16 12:44:19.100154 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Dec 16 12:44:19.100159 kernel: psci: SMC Calling Convention v1.4 Dec 16 12:44:19.100164 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Dec 16 12:44:19.100168 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Dec 16 12:44:19.100173 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Dec 16 12:44:19.100178 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Dec 16 12:44:19.100183 kernel: pcpu-alloc: [0] 0 [0] 1 Dec 16 12:44:19.100188 kernel: Detected PIPT I-cache on CPU0 Dec 16 12:44:19.100193 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) Dec 16 12:44:19.100197 kernel: CPU features: detected: GIC system register CPU interface Dec 16 12:44:19.100213 kernel: CPU features: detected: Spectre-v4 Dec 16 12:44:19.100218 kernel: CPU features: detected: Spectre-BHB Dec 16 12:44:19.100222 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 16 12:44:19.100227 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 16 12:44:19.100232 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 Dec 16 12:44:19.100236 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 16 12:44:19.100242 kernel: alternatives: applying boot alternatives Dec 16 12:44:19.100248 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=f511955c7ec069359d088640c1194932d6d915b5bb2829e8afbb591f10cd0849 Dec 16 12:44:19.100253 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 16 12:44:19.100257 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 16 12:44:19.100262 kernel: Fallback order for Node 0: 0 Dec 16 12:44:19.100267 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 Dec 16 12:44:19.100271 kernel: Policy zone: Normal Dec 16 12:44:19.100276 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 16 12:44:19.100281 kernel: software IO TLB: area num 2. Dec 16 12:44:19.100285 kernel: software IO TLB: mapped [mem 0x0000000037380000-0x000000003b380000] (64MB) Dec 16 12:44:19.100290 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 16 12:44:19.100296 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 16 12:44:19.100301 kernel: rcu: RCU event tracing is enabled. Dec 16 12:44:19.100306 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 16 12:44:19.100311 kernel: Trampoline variant of Tasks RCU enabled. Dec 16 12:44:19.100315 kernel: Tracing variant of Tasks RCU enabled. Dec 16 12:44:19.100320 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 16 12:44:19.100325 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 16 12:44:19.100329 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 12:44:19.100334 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Dec 16 12:44:19.100339 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 16 12:44:19.100343 kernel: GICv3: 960 SPIs implemented Dec 16 12:44:19.100349 kernel: GICv3: 0 Extended SPIs implemented Dec 16 12:44:19.100354 kernel: Root IRQ handler: gic_handle_irq Dec 16 12:44:19.100358 kernel: GICv3: GICv3 features: 16 PPIs, RSS Dec 16 12:44:19.100363 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 Dec 16 12:44:19.100367 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Dec 16 12:44:19.100372 kernel: ITS: No ITS available, not enabling LPIs Dec 16 12:44:19.100377 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 16 12:44:19.100382 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). Dec 16 12:44:19.100386 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 16 12:44:19.100391 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns Dec 16 12:44:19.100396 kernel: Console: colour dummy device 80x25 Dec 16 12:44:19.100402 kernel: printk: legacy console [tty1] enabled Dec 16 12:44:19.100407 kernel: ACPI: Core revision 20240827 Dec 16 12:44:19.100412 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) Dec 16 12:44:19.100417 kernel: pid_max: default: 32768 minimum: 301 Dec 16 12:44:19.100422 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 16 12:44:19.100427 kernel: landlock: Up and running. Dec 16 12:44:19.100432 kernel: SELinux: Initializing. Dec 16 12:44:19.100438 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 16 12:44:19.100443 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 16 12:44:19.100448 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1 Dec 16 12:44:19.100453 kernel: Hyper-V: Host Build 10.0.26102.1172-1-0 Dec 16 12:44:19.100461 kernel: Hyper-V: enabling crash_kexec_post_notifiers Dec 16 12:44:19.100467 kernel: rcu: Hierarchical SRCU implementation. Dec 16 12:44:19.100472 kernel: rcu: Max phase no-delay instances is 400. Dec 16 12:44:19.100478 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 16 12:44:19.100483 kernel: Remapping and enabling EFI services. Dec 16 12:44:19.100489 kernel: smp: Bringing up secondary CPUs ... Dec 16 12:44:19.100494 kernel: Detected PIPT I-cache on CPU1 Dec 16 12:44:19.100499 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Dec 16 12:44:19.100504 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] Dec 16 12:44:19.100510 kernel: smp: Brought up 1 node, 2 CPUs Dec 16 12:44:19.100515 kernel: SMP: Total of 2 processors activated. 
Dec 16 12:44:19.100520 kernel: CPU: All CPU(s) started at EL1 Dec 16 12:44:19.100525 kernel: CPU features: detected: 32-bit EL0 Support Dec 16 12:44:19.100531 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Dec 16 12:44:19.100536 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 16 12:44:19.100541 kernel: CPU features: detected: Common not Private translations Dec 16 12:44:19.100547 kernel: CPU features: detected: CRC32 instructions Dec 16 12:44:19.100552 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) Dec 16 12:44:19.100557 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 16 12:44:19.100562 kernel: CPU features: detected: LSE atomic instructions Dec 16 12:44:19.100568 kernel: CPU features: detected: Privileged Access Never Dec 16 12:44:19.100573 kernel: CPU features: detected: Speculation barrier (SB) Dec 16 12:44:19.100578 kernel: CPU features: detected: TLB range maintenance instructions Dec 16 12:44:19.100584 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Dec 16 12:44:19.100589 kernel: CPU features: detected: Scalable Vector Extension Dec 16 12:44:19.100594 kernel: alternatives: applying system-wide alternatives Dec 16 12:44:19.100599 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Dec 16 12:44:19.100604 kernel: SVE: maximum available vector length 16 bytes per vector Dec 16 12:44:19.100609 kernel: SVE: default vector length 16 bytes per vector Dec 16 12:44:19.100615 kernel: Memory: 3979964K/4194160K available (11200K kernel code, 2456K rwdata, 9084K rodata, 12416K init, 1038K bss, 193008K reserved, 16384K cma-reserved) Dec 16 12:44:19.102234 kernel: devtmpfs: initialized Dec 16 12:44:19.102246 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 16 12:44:19.102252 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 16 12:44:19.102258 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 16 12:44:19.102263 kernel: 0 pages in range for non-PLT usage Dec 16 12:44:19.102268 kernel: 515184 pages in range for PLT usage Dec 16 12:44:19.102274 kernel: pinctrl core: initialized pinctrl subsystem Dec 16 12:44:19.102283 kernel: SMBIOS 3.1.0 present. Dec 16 12:44:19.102289 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025 Dec 16 12:44:19.102294 kernel: DMI: Memory slots populated: 2/2 Dec 16 12:44:19.102299 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 16 12:44:19.102304 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 16 12:44:19.102310 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 16 12:44:19.102315 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 16 12:44:19.102321 kernel: audit: initializing netlink subsys (disabled) Dec 16 12:44:19.102327 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1 Dec 16 12:44:19.102333 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 16 12:44:19.102338 kernel: cpuidle: using governor menu Dec 16 12:44:19.102343 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Dec 16 12:44:19.102349 kernel: ASID allocator initialised with 32768 entries Dec 16 12:44:19.102354 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 16 12:44:19.102359 kernel: Serial: AMBA PL011 UART driver Dec 16 12:44:19.102366 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 16 12:44:19.102371 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 16 12:44:19.102376 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 16 12:44:19.102381 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 16 12:44:19.102387 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 16 12:44:19.102392 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 16 12:44:19.102397 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 16 12:44:19.102403 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 16 12:44:19.102408 kernel: ACPI: Added _OSI(Module Device) Dec 16 12:44:19.102413 kernel: ACPI: Added _OSI(Processor Device) Dec 16 12:44:19.102419 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 16 12:44:19.102424 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 16 12:44:19.102429 kernel: ACPI: Interpreter enabled Dec 16 12:44:19.102434 kernel: ACPI: Using GIC for interrupt routing Dec 16 12:44:19.102440 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Dec 16 12:44:19.102446 kernel: printk: legacy console [ttyAMA0] enabled Dec 16 12:44:19.102451 kernel: printk: legacy bootconsole [pl11] disabled Dec 16 12:44:19.102456 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Dec 16 12:44:19.102461 kernel: ACPI: CPU0 has been hot-added Dec 16 12:44:19.102467 kernel: ACPI: CPU1 has been hot-added Dec 16 12:44:19.102472 kernel: iommu: Default domain type: Translated Dec 16 12:44:19.102478 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 16 12:44:19.102483 kernel: efivars: Registered efivars operations Dec 16 12:44:19.102488 kernel: vgaarb: loaded Dec 16 12:44:19.102493 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 16 12:44:19.102498 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 12:44:19.102504 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 12:44:19.102509 kernel: pnp: PnP ACPI init Dec 16 12:44:19.102515 kernel: pnp: PnP ACPI: found 0 devices Dec 16 12:44:19.102520 kernel: NET: Registered PF_INET protocol family Dec 16 12:44:19.102525 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 16 12:44:19.102530 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 16 12:44:19.102536 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 12:44:19.102541 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 16 12:44:19.102546 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 16 12:44:19.102552 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 16 12:44:19.102558 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 16 12:44:19.102563 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 16 12:44:19.102568 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 12:44:19.102573 kernel: PCI: CLS 0 bytes, default 64 Dec 16 12:44:19.102578 kernel: kvm [1]: HYP mode not available Dec 
16 12:44:19.102584 kernel: Initialise system trusted keyrings Dec 16 12:44:19.102589 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 16 12:44:19.102595 kernel: Key type asymmetric registered Dec 16 12:44:19.102600 kernel: Asymmetric key parser 'x509' registered Dec 16 12:44:19.102605 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 16 12:44:19.102610 kernel: io scheduler mq-deadline registered Dec 16 12:44:19.102616 kernel: io scheduler kyber registered Dec 16 12:44:19.102621 kernel: io scheduler bfq registered Dec 16 12:44:19.102626 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 12:44:19.102632 kernel: thunder_xcv, ver 1.0 Dec 16 12:44:19.102637 kernel: thunder_bgx, ver 1.0 Dec 16 12:44:19.102642 kernel: nicpf, ver 1.0 Dec 16 12:44:19.102647 kernel: nicvf, ver 1.0 Dec 16 12:44:19.102794 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 16 12:44:19.102864 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-16T12:44:15 UTC (1765889055) Dec 16 12:44:19.102872 kernel: efifb: probing for efifb Dec 16 12:44:19.102878 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Dec 16 12:44:19.102883 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Dec 16 12:44:19.102888 kernel: efifb: scrolling: redraw Dec 16 12:44:19.102893 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 16 12:44:19.102899 kernel: Console: switching to colour frame buffer device 128x48 Dec 16 12:44:19.102904 kernel: fb0: EFI VGA frame buffer device Dec 16 12:44:19.102910 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Dec 16 12:44:19.102915 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 16 12:44:19.102920 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Dec 16 12:44:19.102925 kernel: watchdog: NMI not fully supported Dec 16 12:44:19.102931 kernel: watchdog: Hard watchdog permanently disabled Dec 16 12:44:19.102936 kernel: NET: Registered PF_INET6 protocol family Dec 16 12:44:19.102941 kernel: Segment Routing with IPv6 Dec 16 12:44:19.102947 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 12:44:19.102952 kernel: NET: Registered PF_PACKET protocol family Dec 16 12:44:19.102957 kernel: Key type dns_resolver registered Dec 16 12:44:19.102962 kernel: registered taskstats version 1 Dec 16 12:44:19.102967 kernel: Loading compiled-in X.509 certificates Dec 16 12:44:19.102973 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: a5d527f63342895c4af575176d4ae6e640b6d0e9' Dec 16 12:44:19.102978 kernel: Demotion targets for Node 0: null Dec 16 12:44:19.102984 kernel: Key type .fscrypt registered Dec 16 12:44:19.102989 kernel: Key type fscrypt-provisioning registered Dec 16 12:44:19.102994 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 16 12:44:19.102999 kernel: ima: Allocated hash algorithm: sha1 Dec 16 12:44:19.103005 kernel: ima: No architecture policies found Dec 16 12:44:19.103010 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 16 12:44:19.103015 kernel: clk: Disabling unused clocks Dec 16 12:44:19.103020 kernel: PM: genpd: Disabling unused power domains Dec 16 12:44:19.103026 kernel: Freeing unused kernel memory: 12416K Dec 16 12:44:19.103031 kernel: Run /init as init process Dec 16 12:44:19.103036 kernel: with arguments: Dec 16 12:44:19.103042 kernel: /init Dec 16 12:44:19.103047 kernel: with environment: Dec 16 12:44:19.103052 kernel: HOME=/ Dec 16 12:44:19.103057 kernel: TERM=linux Dec 16 12:44:19.103063 kernel: hv_vmbus: Vmbus version:5.3 Dec 16 12:44:19.103068 kernel: hv_vmbus: registering driver hid_hyperv Dec 16 12:44:19.103073 kernel: SCSI subsystem initialized Dec 16 12:44:19.103079 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Dec 16 12:44:19.103164 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 16 12:44:19.103172 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 16 12:44:19.103178 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Dec 16 12:44:19.103184 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 16 12:44:19.103189 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 16 12:44:19.103194 kernel: PTP clock support registered Dec 16 12:44:19.103210 kernel: hv_utils: Registering HyperV Utility Driver Dec 16 12:44:19.103215 kernel: hv_vmbus: registering driver hv_utils Dec 16 12:44:19.103221 kernel: hv_utils: Heartbeat IC version 3.0 Dec 16 12:44:19.103227 kernel: hv_utils: Shutdown IC version 3.2 Dec 16 12:44:19.103232 kernel: hv_utils: TimeSync IC version 4.0 Dec 16 12:44:19.103237 kernel: hv_vmbus: registering driver hv_storvsc Dec 16 12:44:19.103332 kernel: scsi host0: storvsc_host_t Dec 16 12:44:19.103410 kernel: scsi host1: storvsc_host_t Dec 16 12:44:19.103498 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Dec 16 12:44:19.103583 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Dec 16 12:44:19.103657 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Dec 16 12:44:19.103730 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Dec 16 12:44:19.103803 kernel: sd 1:0:0:0: [sda] Write Protect is off Dec 16 12:44:19.103876 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00 Dec 16 12:44:19.103948 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Dec 16 12:44:19.104028 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#125 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Dec 16 12:44:19.104104 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Dec 16 12:44:19.104111 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 12:44:19.104184 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Dec 16 12:44:19.104265 kernel: sr 1:0:0:2: [sr0] scsi-1 drive Dec 16 12:44:19.104273 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 16 12:44:19.104345 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0 Dec 16 12:44:19.104351 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 16 12:44:19.104356 kernel: device-mapper: uevent: version 1.0.3 Dec 16 12:44:19.104362 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 12:44:19.104367 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 16 12:44:19.104373 kernel: raid6: neonx8 gen() 18563 MB/s Dec 16 12:44:19.104379 kernel: raid6: neonx4 gen() 18577 MB/s Dec 16 12:44:19.104384 kernel: raid6: neonx2 gen() 17096 MB/s Dec 16 12:44:19.104389 kernel: raid6: neonx1 gen() 15032 MB/s Dec 16 12:44:19.104394 kernel: raid6: int64x8 gen() 10560 MB/s Dec 16 12:44:19.104399 kernel: raid6: int64x4 gen() 10614 MB/s Dec 16 12:44:19.104405 kernel: raid6: int64x2 gen() 8982 MB/s Dec 16 12:44:19.104410 kernel: raid6: int64x1 gen() 7006 MB/s Dec 16 12:44:19.104416 kernel: raid6: using algorithm neonx4 gen() 18577 MB/s Dec 16 12:44:19.104421 kernel: raid6: .... xor() 15136 MB/s, rmw enabled Dec 16 12:44:19.104426 kernel: raid6: using neon recovery algorithm Dec 16 12:44:19.104432 kernel: xor: measuring software checksum speed Dec 16 12:44:19.104437 kernel: 8regs : 28649 MB/sec Dec 16 12:44:19.104442 kernel: 32regs : 28266 MB/sec Dec 16 12:44:19.104447 kernel: arm64_neon : 37371 MB/sec Dec 16 12:44:19.104453 kernel: xor: using function: arm64_neon (37371 MB/sec) Dec 16 12:44:19.104459 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 12:44:19.104464 kernel: BTRFS: device fsid d09b8b5a-fb5f-4a17-94ef-0a452535b2bc devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (423) Dec 16 12:44:19.104469 kernel: BTRFS info (device dm-0): first mount of filesystem d09b8b5a-fb5f-4a17-94ef-0a452535b2bc Dec 16 12:44:19.104475 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:44:19.104480 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 12:44:19.104485 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 12:44:19.104490 kernel: loop: module loaded Dec 16 12:44:19.104496 kernel: loop0: detected capacity change from 0 to 91480 Dec 16 12:44:19.104501 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 12:44:19.104508 systemd[1]: Successfully made /usr/ read-only. Dec 16 12:44:19.104515 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:44:19.104521 systemd[1]: Detected virtualization microsoft. Dec 16 12:44:19.104527 systemd[1]: Detected architecture arm64. Dec 16 12:44:19.104533 systemd[1]: Running in initrd. Dec 16 12:44:19.104538 systemd[1]: No hostname configured, using default hostname. Dec 16 12:44:19.104544 systemd[1]: Hostname set to . Dec 16 12:44:19.104550 systemd[1]: Initializing machine ID from random generator. Dec 16 12:44:19.104555 systemd[1]: Queued start job for default target initrd.target. Dec 16 12:44:19.104561 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 12:44:19.104567 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:44:19.104573 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Dec 16 12:44:19.104579 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 12:44:19.104585 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:44:19.104591 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 12:44:19.104597 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 12:44:19.104604 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:44:19.104610 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:44:19.104615 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:44:19.104621 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:44:19.104626 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:44:19.104632 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:44:19.104638 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:44:19.104644 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:44:19.104650 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 12:44:19.104655 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 12:44:19.104661 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 12:44:19.104667 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 12:44:19.104672 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:44:19.104683 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:44:19.104689 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:44:19.104695 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:44:19.104701 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 12:44:19.104707 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 12:44:19.104714 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:44:19.104720 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 12:44:19.104726 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 12:44:19.104732 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 12:44:19.104738 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:44:19.104744 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:44:19.104764 systemd-journald[560]: Collecting audit messages is enabled. Dec 16 12:44:19.104779 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:44:19.104786 systemd-journald[560]: Journal started Dec 16 12:44:19.104799 systemd-journald[560]: Runtime Journal (/run/log/journal/8121421ec7544d63acb0fc157f7e0bcf) is 8M, max 78.3M, 70.3M free. Dec 16 12:44:19.125870 systemd[1]: Started systemd-journald.service - Journal Service. 
Dec 16 12:44:19.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:19.131561 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 12:44:19.163388 kernel: audit: type=1130 audit(1765889059.125:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:19.163412 kernel: audit: type=1130 audit(1765889059.150:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:19.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:19.151084 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:44:19.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:19.175279 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 12:44:19.219495 kernel: audit: type=1130 audit(1765889059.173:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:19.219513 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 12:44:19.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:19.227827 systemd-modules-load[563]: Inserted module 'br_netfilter' Dec 16 12:44:19.242997 kernel: Bridge firewalling registered Dec 16 12:44:19.243019 kernel: audit: type=1130 audit(1765889059.225:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:19.228677 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:44:19.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:19.269280 kernel: audit: type=1130 audit(1765889059.247:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:19.270342 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:44:19.282167 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 12:44:19.298740 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 12:44:19.305123 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 16 12:44:19.344538 kernel: audit: type=1130 audit(1765889059.318:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:19.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:19.327975 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 12:44:19.351479 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:44:19.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:19.364178 systemd-tmpfiles[578]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 12:44:19.381045 kernel: audit: type=1130 audit(1765889059.356:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:19.380924 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:44:19.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:19.440170 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:44:19.455621 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:44:19.479976 kernel: audit: type=1130 audit(1765889059.386:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:19.480008 kernel: audit: type=1130 audit(1765889059.445:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:19.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:19.453000 audit: BPF prog-id=6 op=LOAD Dec 16 12:44:19.483457 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:44:19.498299 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:44:19.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:19.515338 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 12:44:19.529436 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:44:19.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:44:19.610180 systemd-resolved[588]: Positive Trust Anchors: Dec 16 12:44:19.610193 systemd-resolved[588]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:44:19.610195 systemd-resolved[588]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 12:44:19.610228 systemd-resolved[588]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:44:19.632723 systemd-resolved[588]: Defaulting to hostname 'linux'. Dec 16 12:44:19.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:19.672123 dracut-cmdline[601]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=f511955c7ec069359d088640c1194932d6d915b5bb2829e8afbb591f10cd0849 Dec 16 12:44:19.633379 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:44:19.668131 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:44:19.820220 kernel: Loading iSCSI transport class v2.0-870. Dec 16 12:44:19.864224 kernel: iscsi: registered transport (tcp) Dec 16 12:44:19.892514 kernel: iscsi: registered transport (qla4xxx) Dec 16 12:44:19.892535 kernel: QLogic iSCSI HBA Driver Dec 16 12:44:19.949845 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:44:19.976034 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:44:19.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:19.982997 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:44:20.032159 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 12:44:20.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:20.038536 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 12:44:20.069812 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 12:44:20.092333 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Dec 16 12:44:20.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:20.106103 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 16 12:44:20.106135 kernel: audit: type=1130 audit(1765889060.101:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:20.106000 audit: BPF prog-id=7 op=LOAD Dec 16 12:44:20.121451 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:44:20.140455 kernel: audit: type=1334 audit(1765889060.106:18): prog-id=7 op=LOAD Dec 16 12:44:20.140474 kernel: audit: type=1334 audit(1765889060.118:19): prog-id=8 op=LOAD Dec 16 12:44:20.118000 audit: BPF prog-id=8 op=LOAD Dec 16 12:44:20.207290 systemd-udevd[823]: Using default interface naming scheme 'v257'. Dec 16 12:44:20.212756 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:44:20.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:20.240248 kernel: audit: type=1130 audit(1765889060.219:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:20.240391 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 12:44:20.261440 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:44:20.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:20.286000 audit: BPF prog-id=9 op=LOAD Dec 16 12:44:20.289357 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:44:20.297219 kernel: audit: type=1130 audit(1765889060.272:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:20.297236 kernel: audit: type=1334 audit(1765889060.286:22): prog-id=9 op=LOAD Dec 16 12:44:20.302537 dracut-pre-trigger[942]: rd.md=0: removing MD RAID activation Dec 16 12:44:20.325596 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:44:20.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:20.348355 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:44:20.359269 kernel: audit: type=1130 audit(1765889060.329:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:44:20.356900 systemd-networkd[949]: lo: Link UP Dec 16 12:44:20.356902 systemd-networkd[949]: lo: Gained carrier Dec 16 12:44:20.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:20.358897 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:44:20.389473 kernel: audit: type=1130 audit(1765889060.365:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:20.366847 systemd[1]: Reached target network.target - Network. Dec 16 12:44:20.414400 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:44:20.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:20.443226 kernel: audit: type=1130 audit(1765889060.420:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:20.448762 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 12:44:20.497247 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#119 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 16 12:44:20.520241 kernel: hv_vmbus: registering driver hv_netvsc Dec 16 12:44:20.523847 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:44:20.523999 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:44:20.560540 kernel: audit: type=1131 audit(1765889060.533:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:20.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:20.534562 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:44:20.554054 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:44:20.581100 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:44:20.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:20.609978 kernel: hv_netvsc 002248b9-ab26-0022-48b9-ab26002248b9 eth0: VF slot 1 added Dec 16 12:44:20.631871 systemd-networkd[949]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:44:20.631881 systemd-networkd[949]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 16 12:44:20.654948 kernel: hv_vmbus: registering driver hv_pci Dec 16 12:44:20.654964 kernel: hv_pci c390f072-1f04-49c4-8cd2-a02018c33b81: PCI VMBus probing: Using version 0x10004 Dec 16 12:44:20.633528 systemd-networkd[949]: eth0: Link UP Dec 16 12:44:20.633639 systemd-networkd[949]: eth0: Gained carrier Dec 16 12:44:20.633648 systemd-networkd[949]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:44:20.680760 kernel: hv_pci c390f072-1f04-49c4-8cd2-a02018c33b81: PCI host bridge to bus 1f04:00 Dec 16 12:44:20.680924 kernel: pci_bus 1f04:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Dec 16 12:44:20.681022 kernel: pci_bus 1f04:00: No busn resource found for root bus, will use [bus 00-ff] Dec 16 12:44:20.688239 systemd-networkd[949]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 16 12:44:20.702855 kernel: pci 1f04:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Dec 16 12:44:20.702940 kernel: pci 1f04:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 16 12:44:20.709210 kernel: pci 1f04:00:02.0: enabling Extended Tags Dec 16 12:44:20.726258 kernel: pci 1f04:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 1f04:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Dec 16 12:44:20.737070 kernel: pci_bus 1f04:00: busn_res: [bus 00-ff] end is updated to 00 Dec 16 12:44:20.737196 kernel: pci 1f04:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Dec 16 12:44:20.886636 kernel: mlx5_core 1f04:00:02.0: enabling device (0000 -> 0002) Dec 16 12:44:20.895441 kernel: mlx5_core 1f04:00:02.0: PTM is not supported by PCIe Dec 16 12:44:20.895566 kernel: mlx5_core 1f04:00:02.0: firmware version: 16.30.5006 Dec 16 12:44:21.070221 kernel: hv_netvsc 002248b9-ab26-0022-48b9-ab26002248b9 eth0: VF registering: eth1 Dec 16 12:44:21.070436 kernel: mlx5_core 1f04:00:02.0 eth1: joined to eth0 Dec 16 12:44:21.077092 kernel: mlx5_core 1f04:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Dec 16 12:44:21.088286 kernel: mlx5_core 1f04:00:02.0 enP7940s1: renamed from eth1 Dec 16 12:44:21.091328 systemd-networkd[949]: eth1: Interface name change detected, renamed to enP7940s1. Dec 16 12:44:21.186880 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Dec 16 12:44:21.193517 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 12:44:21.218246 kernel: mlx5_core 1f04:00:02.0 enP7940s1: Link up Dec 16 12:44:21.247786 systemd-networkd[949]: enP7940s1: Link UP Dec 16 12:44:21.251039 kernel: hv_netvsc 002248b9-ab26-0022-48b9-ab26002248b9 eth0: Data path switched to VF: enP7940s1 Dec 16 12:44:21.287995 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 16 12:44:21.308662 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Dec 16 12:44:21.331986 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Dec 16 12:44:21.415608 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 12:44:21.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:21.424331 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Dec 16 12:44:21.432250 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:44:21.440477 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:44:21.455315 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 12:44:21.481315 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:44:21.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:21.641715 systemd-networkd[949]: enP7940s1: Gained carrier Dec 16 12:44:22.425482 disk-uuid[1064]: Warning: The kernel is still using the old partition table. Dec 16 12:44:22.425482 disk-uuid[1064]: The new table will be used at the next reboot or after you Dec 16 12:44:22.425482 disk-uuid[1064]: run partprobe(8) or kpartx(8) Dec 16 12:44:22.425482 disk-uuid[1064]: The operation has completed successfully. Dec 16 12:44:22.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:22.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:22.434891 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 12:44:22.434981 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 12:44:22.444649 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 12:44:22.502220 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1216) Dec 16 12:44:22.512125 kernel: BTRFS info (device sda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 16 12:44:22.512153 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:44:22.535874 kernel: BTRFS info (device sda6): turning on async discard Dec 16 12:44:22.535906 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 12:44:22.537290 systemd-networkd[949]: eth0: Gained IPv6LL Dec 16 12:44:22.548245 kernel: BTRFS info (device sda6): last unmount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 16 12:44:22.548690 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 12:44:22.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:22.553696 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 12:44:23.468622 ignition[1235]: Ignition 2.22.0 Dec 16 12:44:23.468635 ignition[1235]: Stage: fetch-offline Dec 16 12:44:23.471282 ignition[1235]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:44:23.474721 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:44:23.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:23.471296 ignition[1235]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:44:23.485064 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 16 12:44:23.471372 ignition[1235]: parsed url from cmdline: "" Dec 16 12:44:23.471374 ignition[1235]: no config URL provided Dec 16 12:44:23.471378 ignition[1235]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 12:44:23.471384 ignition[1235]: no config at "/usr/lib/ignition/user.ign" Dec 16 12:44:23.471387 ignition[1235]: failed to fetch config: resource requires networking Dec 16 12:44:23.471510 ignition[1235]: Ignition finished successfully Dec 16 12:44:23.525194 ignition[1242]: Ignition 2.22.0 Dec 16 12:44:23.525218 ignition[1242]: Stage: fetch Dec 16 12:44:23.525392 ignition[1242]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:44:23.525399 ignition[1242]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:44:23.525462 ignition[1242]: parsed url from cmdline: "" Dec 16 12:44:23.525464 ignition[1242]: no config URL provided Dec 16 12:44:23.525467 ignition[1242]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 12:44:23.525472 ignition[1242]: no config at "/usr/lib/ignition/user.ign" Dec 16 12:44:23.525486 ignition[1242]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 16 12:44:23.587329 ignition[1242]: GET result: OK Dec 16 12:44:23.587394 ignition[1242]: config has been read from IMDS userdata Dec 16 12:44:23.587408 ignition[1242]: parsing config with SHA512: 7d72621320c2ee9f76beb5139f96203257bccd092283faf244861437989ff3de78d7f7f10b1455fd890d196f8d08f0a9430539e0c4031e9cf9c4f05908a74d5f Dec 16 12:44:23.590910 unknown[1242]: fetched base config from "system" Dec 16 12:44:23.591229 ignition[1242]: fetch: fetch complete Dec 16 12:44:23.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:23.590916 unknown[1242]: fetched base config from "system" Dec 16 12:44:23.591232 ignition[1242]: fetch: fetch passed Dec 16 12:44:23.590920 unknown[1242]: fetched user config from "azure" Dec 16 12:44:23.591273 ignition[1242]: Ignition finished successfully Dec 16 12:44:23.596452 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 12:44:23.605063 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 12:44:23.641932 ignition[1249]: Ignition 2.22.0 Dec 16 12:44:23.641948 ignition[1249]: Stage: kargs Dec 16 12:44:23.642175 ignition[1249]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:44:23.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:23.646162 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 12:44:23.642185 ignition[1249]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:44:23.653734 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 12:44:23.642713 ignition[1249]: kargs: kargs passed Dec 16 12:44:23.642749 ignition[1249]: Ignition finished successfully Dec 16 12:44:23.687985 ignition[1255]: Ignition 2.22.0 Dec 16 12:44:23.687996 ignition[1255]: Stage: disks Dec 16 12:44:23.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:23.690407 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Dec 16 12:44:23.688158 ignition[1255]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:44:23.696115 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 12:44:23.688164 ignition[1255]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:44:23.704053 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 12:44:23.688767 ignition[1255]: disks: disks passed Dec 16 12:44:23.714601 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:44:23.688810 ignition[1255]: Ignition finished successfully Dec 16 12:44:23.723507 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:44:23.734186 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:44:23.744338 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 12:44:23.853288 systemd-fsck[1263]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks Dec 16 12:44:23.860811 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 12:44:23.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:23.869257 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 12:44:24.131140 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 12:44:24.138309 kernel: EXT4-fs (sda9): mounted filesystem fa93fc03-2e23-46f9-9013-1e396e3304a8 r/w with ordered data mode. Quota mode: none. Dec 16 12:44:24.135433 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 12:44:24.184077 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 12:44:24.189002 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 12:44:24.207231 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 16 12:44:24.218071 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 12:44:24.218103 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:44:24.223997 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 12:44:24.241348 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 16 12:44:24.265214 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1277) Dec 16 12:44:24.275536 kernel: BTRFS info (device sda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 16 12:44:24.275545 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:44:24.285805 kernel: BTRFS info (device sda6): turning on async discard Dec 16 12:44:24.285842 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 12:44:24.287493 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 16 12:44:24.798209 coreos-metadata[1279]: Dec 16 12:44:24.797 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 16 12:44:24.805085 coreos-metadata[1279]: Dec 16 12:44:24.804 INFO Fetch successful Dec 16 12:44:24.805085 coreos-metadata[1279]: Dec 16 12:44:24.804 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 16 12:44:24.818368 coreos-metadata[1279]: Dec 16 12:44:24.818 INFO Fetch successful Dec 16 12:44:24.837554 coreos-metadata[1279]: Dec 16 12:44:24.837 INFO wrote hostname ci-4515.1.0-a-4ca6cdd03e to /sysroot/etc/hostname Dec 16 12:44:24.845070 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 12:44:24.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:25.186850 initrd-setup-root[1308]: cut: /sysroot/etc/passwd: No such file or directory Dec 16 12:44:25.224118 initrd-setup-root[1315]: cut: /sysroot/etc/group: No such file or directory Dec 16 12:44:25.230679 initrd-setup-root[1322]: cut: /sysroot/etc/shadow: No such file or directory Dec 16 12:44:25.237714 initrd-setup-root[1329]: cut: /sysroot/etc/gshadow: No such file or directory Dec 16 12:44:26.179016 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 12:44:26.192232 kernel: kauditd_printk_skb: 12 callbacks suppressed Dec 16 12:44:26.192271 kernel: audit: type=1130 audit(1765889066.186:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:26.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:26.189296 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 12:44:26.213767 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 12:44:26.244774 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 12:44:26.249222 kernel: BTRFS info (device sda6): last unmount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 16 12:44:26.258290 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 12:44:26.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:26.282270 kernel: audit: type=1130 audit(1765889066.266:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:44:26.284144 ignition[1399]: INFO : Ignition 2.22.0 Dec 16 12:44:26.284144 ignition[1399]: INFO : Stage: mount Dec 16 12:44:26.292256 ignition[1399]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:44:26.292256 ignition[1399]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:44:26.292256 ignition[1399]: INFO : mount: mount passed Dec 16 12:44:26.292256 ignition[1399]: INFO : Ignition finished successfully Dec 16 12:44:26.328505 kernel: audit: type=1130 audit(1765889066.294:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:26.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:26.291167 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 12:44:26.296066 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 12:44:26.329446 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 12:44:26.371734 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1408) Dec 16 12:44:26.371767 kernel: BTRFS info (device sda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 16 12:44:26.376558 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:44:26.386792 kernel: BTRFS info (device sda6): turning on async discard Dec 16 12:44:26.386818 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 12:44:26.388104 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 12:44:26.413061 ignition[1426]: INFO : Ignition 2.22.0 Dec 16 12:44:26.417502 ignition[1426]: INFO : Stage: files Dec 16 12:44:26.417502 ignition[1426]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:44:26.417502 ignition[1426]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:44:26.417502 ignition[1426]: DEBUG : files: compiled without relabeling support, skipping Dec 16 12:44:26.435153 ignition[1426]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 12:44:26.435153 ignition[1426]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 12:44:26.491718 ignition[1426]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 12:44:26.497630 ignition[1426]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 12:44:26.497630 ignition[1426]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 12:44:26.492134 unknown[1426]: wrote ssh authorized keys file for user: core Dec 16 12:44:26.527475 ignition[1426]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 16 12:44:26.536482 ignition[1426]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Dec 16 12:44:26.697430 ignition[1426]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 12:44:26.788226 ignition[1426]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 16 12:44:26.788226 ignition[1426]: INFO : files: createFilesystemsFiles: createFiles: op(4): 
[started] writing file "/sysroot/home/core/install.sh" Dec 16 12:44:26.788226 ignition[1426]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 12:44:26.810832 ignition[1426]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:44:26.810832 ignition[1426]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:44:26.810832 ignition[1426]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:44:26.810832 ignition[1426]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:44:26.810832 ignition[1426]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:44:26.810832 ignition[1426]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:44:26.810832 ignition[1426]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:44:26.810832 ignition[1426]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:44:26.810832 ignition[1426]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 16 12:44:26.810832 ignition[1426]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 16 12:44:26.810832 ignition[1426]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 16 12:44:26.810832 ignition[1426]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Dec 16 12:44:27.331949 ignition[1426]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 12:44:27.520592 ignition[1426]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 16 12:44:27.520592 ignition[1426]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 12:44:27.546721 ignition[1426]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:44:27.560245 ignition[1426]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:44:27.560245 ignition[1426]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 12:44:27.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:44:27.592384 ignition[1426]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 16 12:44:27.592384 ignition[1426]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 12:44:27.592384 ignition[1426]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:44:27.592384 ignition[1426]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:44:27.592384 ignition[1426]: INFO : files: files passed Dec 16 12:44:27.592384 ignition[1426]: INFO : Ignition finished successfully Dec 16 12:44:27.673444 kernel: audit: type=1130 audit(1765889067.573:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:27.673468 kernel: audit: type=1130 audit(1765889067.638:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:27.673476 kernel: audit: type=1131 audit(1765889067.638:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:27.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:27.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:27.569634 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 12:44:27.575914 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 12:44:27.598452 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 12:44:27.626527 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 12:44:27.694236 initrd-setup-root-after-ignition[1456]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:44:27.694236 initrd-setup-root-after-ignition[1456]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:44:27.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:27.626596 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 12:44:27.739297 kernel: audit: type=1130 audit(1765889067.706:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:27.739315 initrd-setup-root-after-ignition[1460]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:44:27.694764 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:44:27.707383 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Dec 16 12:44:27.738354 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 12:44:27.796951 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 12:44:27.797062 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 12:44:27.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:27.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:27.807131 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 12:44:27.846722 kernel: audit: type=1130 audit(1765889067.806:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:27.846747 kernel: audit: type=1131 audit(1765889067.806:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:27.845295 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 12:44:27.851612 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 12:44:27.856350 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 12:44:27.891430 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:44:27.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:27.915230 kernel: audit: type=1130 audit(1765889067.896:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:27.915305 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 12:44:27.943578 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 12:44:27.943721 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:44:27.954617 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:44:27.964417 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 12:44:27.973596 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 12:44:27.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:27.973705 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:44:27.987235 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 12:44:27.992157 systemd[1]: Stopped target basic.target - Basic System. Dec 16 12:44:28.001637 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 12:44:28.010600 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
Dec 16 12:44:28.019793 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 12:44:28.029551 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:44:28.039704 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 12:44:28.049891 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:44:28.060643 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 12:44:28.070935 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 12:44:28.081237 systemd[1]: Stopped target swap.target - Swaps. Dec 16 12:44:28.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.089957 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 12:44:28.090071 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:44:28.103222 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:44:28.108582 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:44:28.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.118074 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 12:44:28.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.118137 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:44:28.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.127980 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 12:44:28.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.128074 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 12:44:28.141978 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 12:44:28.142069 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:44:28.148101 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 12:44:28.148167 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 12:44:28.156839 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 16 12:44:28.156915 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 12:44:28.169535 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 12:44:28.200381 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 12:44:28.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:44:28.242261 ignition[1480]: INFO : Ignition 2.22.0 Dec 16 12:44:28.242261 ignition[1480]: INFO : Stage: umount Dec 16 12:44:28.242261 ignition[1480]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:44:28.242261 ignition[1480]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:44:28.242261 ignition[1480]: INFO : umount: umount passed Dec 16 12:44:28.242261 ignition[1480]: INFO : Ignition finished successfully Dec 16 12:44:28.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.220669 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 12:44:28.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.220793 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:44:28.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.237679 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 12:44:28.237768 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:44:28.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.253197 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 12:44:28.253352 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:44:28.268532 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 12:44:28.268739 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 12:44:28.279961 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 12:44:28.280032 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 12:44:28.286386 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 12:44:28.286427 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 12:44:28.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.295404 systemd[1]: ignition-fetch.service: Deactivated successfully. 
Dec 16 12:44:28.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.295438 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 16 12:44:28.305274 systemd[1]: Stopped target network.target - Network. Dec 16 12:44:28.315647 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 12:44:28.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.315690 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:44:28.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.328833 systemd[1]: Stopped target paths.target - Path Units. Dec 16 12:44:28.338155 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 12:44:28.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.341217 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:44:28.478000 audit: BPF prog-id=9 op=UNLOAD Dec 16 12:44:28.485000 audit: BPF prog-id=6 op=UNLOAD Dec 16 12:44:28.348942 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 12:44:28.361562 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 12:44:28.370711 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 12:44:28.370757 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:44:28.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.380026 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 12:44:28.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.380067 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 12:44:28.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.388701 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 16 12:44:28.388717 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 16 12:44:28.399700 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 12:44:28.399754 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 12:44:28.409425 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 12:44:28.409457 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Dec 16 12:44:28.418730 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 12:44:28.429194 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 12:44:28.441002 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 12:44:28.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.441532 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 12:44:28.441597 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 12:44:28.451956 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 12:44:28.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.452042 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 12:44:28.467005 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 12:44:28.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.467088 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 12:44:28.648472 kernel: hv_netvsc 002248b9-ab26-0022-48b9-ab26002248b9 eth0: Data path switched from VF: enP7940s1 Dec 16 12:44:28.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.481664 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 12:44:28.491292 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 12:44:28.491331 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:44:28.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.501310 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 12:44:28.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.508773 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 12:44:28.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.508832 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:44:28.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.518529 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Dec 16 12:44:28.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.518568 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:44:28.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.526465 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 12:44:28.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.526496 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 12:44:28.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.539769 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:44:28.580854 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 12:44:28.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.581085 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:44:28.590849 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 12:44:28.590884 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 12:44:28.598908 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 12:44:28.598933 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:44:28.607039 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 12:44:28.607082 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 12:44:28.620511 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 12:44:28.620552 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 12:44:28.638482 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 12:44:28.638518 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:44:28.649143 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 12:44:28.660448 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 12:44:28.660505 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:44:28.669298 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 12:44:28.669339 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:44:28.685106 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Dec 16 12:44:28.685154 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:44:28.695154 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 12:44:28.695196 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:44:28.705228 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:44:28.705261 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:44:28.716637 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 12:44:28.716717 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 12:44:28.724841 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 12:44:28.724907 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 12:44:28.733790 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 12:44:28.733857 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 12:44:28.905192 systemd-journald[560]: Received SIGTERM from PID 1 (systemd). Dec 16 12:44:28.742835 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 12:44:28.752160 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 12:44:28.752238 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 12:44:28.763303 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 12:44:28.783795 systemd[1]: Switching root. Dec 16 12:44:28.924309 systemd-journald[560]: Journal stopped Dec 16 12:44:33.301861 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 12:44:33.301879 kernel: SELinux: policy capability open_perms=1 Dec 16 12:44:33.301887 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 12:44:33.301892 kernel: SELinux: policy capability always_check_network=0 Dec 16 12:44:33.301899 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 12:44:33.301905 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 12:44:33.301911 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 12:44:33.301916 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 12:44:33.301922 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 12:44:33.301928 systemd[1]: Successfully loaded SELinux policy in 140.784ms. Dec 16 12:44:33.301936 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.512ms. Dec 16 12:44:33.301944 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:44:33.301951 systemd[1]: Detected virtualization microsoft. Dec 16 12:44:33.301958 systemd[1]: Detected architecture arm64. Dec 16 12:44:33.301965 systemd[1]: Detected first boot. Dec 16 12:44:33.301972 systemd[1]: Hostname set to . Dec 16 12:44:33.301978 systemd[1]: Initializing machine ID from random generator. Dec 16 12:44:33.301984 zram_generator::config[1522]: No configuration found. Dec 16 12:44:33.301991 kernel: NET: Registered PF_VSOCK protocol family Dec 16 12:44:33.301998 systemd[1]: Populated /etc with preset unit settings. 
Dec 16 12:44:33.302004 kernel: kauditd_printk_skb: 45 callbacks suppressed Dec 16 12:44:33.302010 kernel: audit: type=1334 audit(1765889072.453:94): prog-id=12 op=LOAD Dec 16 12:44:33.302016 kernel: audit: type=1334 audit(1765889072.453:95): prog-id=3 op=UNLOAD Dec 16 12:44:33.302021 kernel: audit: type=1334 audit(1765889072.457:96): prog-id=13 op=LOAD Dec 16 12:44:33.302027 kernel: audit: type=1334 audit(1765889072.460:97): prog-id=14 op=LOAD Dec 16 12:44:33.302034 kernel: audit: type=1334 audit(1765889072.460:98): prog-id=4 op=UNLOAD Dec 16 12:44:33.302040 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 12:44:33.302046 kernel: audit: type=1334 audit(1765889072.460:99): prog-id=5 op=UNLOAD Dec 16 12:44:33.302052 kernel: audit: type=1131 audit(1765889072.465:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.302058 kernel: audit: type=1334 audit(1765889072.496:101): prog-id=12 op=UNLOAD Dec 16 12:44:33.302064 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 12:44:33.302071 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 12:44:33.302077 kernel: audit: type=1130 audit(1765889072.509:102): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.302085 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 12:44:33.302092 kernel: audit: type=1131 audit(1765889072.509:103): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.302098 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 12:44:33.302105 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 12:44:33.302112 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 12:44:33.302118 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 12:44:33.302125 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 12:44:33.302133 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 12:44:33.302140 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 12:44:33.302146 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:44:33.302154 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:44:33.302161 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 12:44:33.302167 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 12:44:33.302174 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 12:44:33.302180 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:44:33.302187 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... 
Dec 16 12:44:33.302193 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:44:33.302212 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:44:33.302219 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 12:44:33.302226 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 12:44:33.302233 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 12:44:33.302240 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 12:44:33.302247 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:44:33.302254 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:44:33.302261 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 12:44:33.302267 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:44:33.302274 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:44:33.302280 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 12:44:33.302287 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 12:44:33.302295 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 12:44:33.302301 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 12:44:33.302308 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 16 12:44:33.302314 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:44:33.302322 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 12:44:33.302329 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 12:44:33.302335 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:44:33.302342 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:44:33.302348 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 12:44:33.302355 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 12:44:33.302362 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 12:44:33.302369 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 12:44:33.302376 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 12:44:33.302384 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 12:44:33.302390 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 12:44:33.302397 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 12:44:33.302404 systemd[1]: Reached target machines.target - Containers. Dec 16 12:44:33.302410 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 12:44:33.302418 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:44:33.302425 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:44:33.302431 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Dec 16 12:44:33.302438 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:44:33.302444 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:44:33.302451 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:44:33.302458 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 12:44:33.302465 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:44:33.302472 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 12:44:33.302478 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 12:44:33.302485 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 12:44:33.302491 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 12:44:33.302498 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 12:44:33.302506 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:44:33.302512 kernel: ACPI: bus type drm_connector registered Dec 16 12:44:33.302518 kernel: fuse: init (API version 7.41) Dec 16 12:44:33.302525 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:44:33.302532 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:44:33.302538 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:44:33.302545 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 12:44:33.302552 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 12:44:33.302559 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:44:33.302576 systemd-journald[1603]: Collecting audit messages is enabled. Dec 16 12:44:33.302592 systemd-journald[1603]: Journal started Dec 16 12:44:33.302606 systemd-journald[1603]: Runtime Journal (/run/log/journal/c3ab81714a7a4143a189a70d0c8fd141) is 8M, max 78.3M, 70.3M free. Dec 16 12:44:32.832000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 16 12:44:33.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:44:33.191000 audit: BPF prog-id=14 op=UNLOAD Dec 16 12:44:33.191000 audit: BPF prog-id=13 op=UNLOAD Dec 16 12:44:33.193000 audit: BPF prog-id=15 op=LOAD Dec 16 12:44:33.194000 audit: BPF prog-id=16 op=LOAD Dec 16 12:44:33.194000 audit: BPF prog-id=17 op=LOAD Dec 16 12:44:33.298000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 12:44:33.298000 audit[1603]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=fffff036dab0 a2=4000 a3=0 items=0 ppid=1 pid=1603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:44:33.298000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 16 12:44:32.442598 systemd[1]: Queued start job for default target multi-user.target. Dec 16 12:44:32.462067 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 16 12:44:32.466589 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 12:44:32.466865 systemd[1]: systemd-journald.service: Consumed 2.667s CPU time. Dec 16 12:44:33.314435 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 12:44:33.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.315278 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 12:44:33.319804 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 12:44:33.325150 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 12:44:33.329495 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 12:44:33.334243 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 12:44:33.339779 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 12:44:33.344530 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 12:44:33.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.350250 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:44:33.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.356662 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 12:44:33.356783 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 12:44:33.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.362461 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Dec 16 12:44:33.362577 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:44:33.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.368298 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:44:33.368407 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:44:33.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.373726 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:44:33.373847 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:44:33.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.379319 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 12:44:33.379433 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 12:44:33.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.384392 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:44:33.384495 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:44:33.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.389663 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:44:33.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:44:33.395156 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:44:33.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.401884 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 12:44:33.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.409304 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 12:44:33.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.415679 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:44:33.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.430418 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:44:33.436048 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 16 12:44:33.442813 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 12:44:33.457319 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 12:44:33.462770 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 12:44:33.462864 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:44:33.468101 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 12:44:33.473587 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:44:33.473744 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:44:33.476022 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 12:44:33.481558 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 12:44:33.486654 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:44:33.487356 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 12:44:33.491992 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:44:33.493343 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:44:33.499832 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 12:44:33.506419 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Dec 16 12:44:33.514651 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 12:44:33.521700 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 12:44:33.529808 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 12:44:33.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.536183 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 12:44:33.537112 systemd-journald[1603]: Time spent on flushing to /var/log/journal/c3ab81714a7a4143a189a70d0c8fd141 is 18.243ms for 1083 entries. Dec 16 12:44:33.537112 systemd-journald[1603]: System Journal (/var/log/journal/c3ab81714a7a4143a189a70d0c8fd141) is 8M, max 2.2G, 2.2G free. Dec 16 12:44:33.685256 systemd-journald[1603]: Received client request to flush runtime journal. Dec 16 12:44:33.685346 kernel: loop1: detected capacity change from 0 to 100192 Dec 16 12:44:33.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.548404 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 12:44:33.594282 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:44:33.688654 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 12:44:33.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.703598 systemd-tmpfiles[1664]: ACLs are not supported, ignoring. Dec 16 12:44:33.703614 systemd-tmpfiles[1664]: ACLs are not supported, ignoring. Dec 16 12:44:33.707315 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:44:33.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.715946 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 12:44:33.720904 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 12:44:33.725351 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 12:44:33.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.847323 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 12:44:33.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:44:33.852000 audit: BPF prog-id=18 op=LOAD Dec 16 12:44:33.852000 audit: BPF prog-id=19 op=LOAD Dec 16 12:44:33.852000 audit: BPF prog-id=20 op=LOAD Dec 16 12:44:33.854297 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 16 12:44:33.859000 audit: BPF prog-id=21 op=LOAD Dec 16 12:44:33.860877 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:44:33.865893 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:44:33.883953 systemd-tmpfiles[1683]: ACLs are not supported, ignoring. Dec 16 12:44:33.884184 systemd-tmpfiles[1683]: ACLs are not supported, ignoring. Dec 16 12:44:33.886780 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:44:33.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.907000 audit: BPF prog-id=22 op=LOAD Dec 16 12:44:33.907000 audit: BPF prog-id=23 op=LOAD Dec 16 12:44:33.907000 audit: BPF prog-id=24 op=LOAD Dec 16 12:44:33.909499 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 16 12:44:33.918000 audit: BPF prog-id=25 op=LOAD Dec 16 12:44:33.918000 audit: BPF prog-id=26 op=LOAD Dec 16 12:44:33.918000 audit: BPF prog-id=27 op=LOAD Dec 16 12:44:33.922318 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 12:44:33.958025 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 12:44:33.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.967600 systemd-nsresourced[1686]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 16 12:44:33.968676 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 16 12:44:33.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:34.033875 systemd-oomd[1681]: No swap; memory pressure usage will be degraded Dec 16 12:44:34.036597 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 16 12:44:34.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:34.052215 kernel: loop2: detected capacity change from 0 to 200800 Dec 16 12:44:34.057018 systemd-resolved[1682]: Positive Trust Anchors: Dec 16 12:44:34.057039 systemd-resolved[1682]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:44:34.057042 systemd-resolved[1682]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 12:44:34.057061 systemd-resolved[1682]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:44:34.141220 kernel: loop3: detected capacity change from 0 to 27736 Dec 16 12:44:34.147263 systemd-resolved[1682]: Using system hostname 'ci-4515.1.0-a-4ca6cdd03e'. Dec 16 12:44:34.148323 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:44:34.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:34.153830 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:44:34.479811 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 12:44:34.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:34.483000 audit: BPF prog-id=8 op=UNLOAD Dec 16 12:44:34.483000 audit: BPF prog-id=7 op=UNLOAD Dec 16 12:44:34.484000 audit: BPF prog-id=28 op=LOAD Dec 16 12:44:34.484000 audit: BPF prog-id=29 op=LOAD Dec 16 12:44:34.486073 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:44:34.507695 systemd-udevd[1706]: Using default interface naming scheme 'v257'. Dec 16 12:44:34.524225 kernel: loop4: detected capacity change from 0 to 109872 Dec 16 12:44:34.735534 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:44:34.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:34.742000 audit: BPF prog-id=30 op=LOAD Dec 16 12:44:34.744886 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:44:34.788463 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 16 12:44:34.832064 systemd-networkd[1717]: lo: Link UP Dec 16 12:44:34.833134 systemd-networkd[1717]: lo: Gained carrier Dec 16 12:44:34.834561 systemd-networkd[1717]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:44:34.834568 systemd-networkd[1717]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:44:34.840127 systemd[1]: Started systemd-networkd.service - Network Configuration. 
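(Editorial aside, not part of the log: the "Positive Trust Anchors" printed by systemd-resolved above are root-zone DS records — owner, class, type, key tag, algorithm, digest type, digest. A hypothetical sketch splitting the first record shown above into named parts.)

```python
# DS record copied from the systemd-resolved output above.
DS = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"

owner, rr_class, rr_type, key_tag, algorithm, digest_type, digest = DS.split()
assert (rr_class, rr_type) == ("IN", "DS")
print(f"key tag {key_tag}, algorithm {algorithm}, digest type {digest_type}")
print(f"digest: {digest[:16]}...")
```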
Dec 16 12:44:34.844130 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#201 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 16 12:44:34.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:34.847178 systemd[1]: Reached target network.target - Network. Dec 16 12:44:34.863958 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 12:44:34.870424 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 12:44:34.873221 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 12:44:34.873280 kernel: loop5: detected capacity change from 0 to 100192 Dec 16 12:44:34.898997 kernel: loop6: detected capacity change from 0 to 200800 Dec 16 12:44:34.899063 kernel: hv_vmbus: registering driver hv_balloon Dec 16 12:44:34.918273 kernel: hv_vmbus: registering driver hyperv_fb Dec 16 12:44:34.918334 kernel: mlx5_core 1f04:00:02.0 enP7940s1: Link up Dec 16 12:44:34.925253 kernel: loop7: detected capacity change from 0 to 27736 Dec 16 12:44:34.941296 kernel: hv_netvsc 002248b9-ab26-0022-48b9-ab26002248b9 eth0: Data path switched to VF: enP7940s1 Dec 16 12:44:34.941488 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Dec 16 12:44:34.943650 systemd-networkd[1717]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:44:34.944976 systemd-networkd[1717]: enP7940s1: Link UP Dec 16 12:44:34.945331 systemd-networkd[1717]: eth0: Link UP Dec 16 12:44:34.945406 systemd-networkd[1717]: eth0: Gained carrier Dec 16 12:44:34.945466 systemd-networkd[1717]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:44:34.958222 kernel: loop1: detected capacity change from 0 to 109872 Dec 16 12:44:34.958279 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Dec 16 12:44:34.958295 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Dec 16 12:44:34.958323 kernel: hv_balloon: Memory hot add disabled on ARM64 Dec 16 12:44:34.961568 systemd-networkd[1717]: enP7940s1: Gained carrier Dec 16 12:44:34.964190 kernel: Console: switching to colour dummy device 80x25 Dec 16 12:44:34.964350 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 12:44:34.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:34.978605 kernel: Console: switching to colour frame buffer device 128x48 Dec 16 12:44:34.979272 systemd-networkd[1717]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 16 12:44:34.987583 (sd-merge)[1747]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'. Dec 16 12:44:34.990286 (sd-merge)[1747]: Merged extensions into '/usr'. Dec 16 12:44:35.008875 systemd[1]: Reload requested from client PID 1662 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 12:44:35.011227 systemd[1]: Reloading... 
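(Editorial aside, not part of the log: the DHCPv4 acquisition message above carries the interface, address/prefix, gateway, and server in a fixed layout. A hypothetical sketch parsing that one line and sanity-checking that the gateway sits inside the leased subnet.)

```python
import ipaddress
import re

# Lease line copied from the systemd-networkd output above.
LINE = "eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16"

m = re.search(r"(\S+): DHCPv4 address (\S+), gateway (\S+) acquired from (\S+)", LINE)
iface, cidr, gateway, server = m.groups()

net = ipaddress.ip_interface(cidr)                 # 10.200.20.37/24
assert ipaddress.ip_address(gateway) in net.network  # gateway is on-link
print(iface, net.ip, net.network, gateway, server)
```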
Dec 16 12:44:35.084223 kernel: MACsec IEEE 802.1AE Dec 16 12:44:35.116222 zram_generator::config[1858]: No configuration found. Dec 16 12:44:35.289287 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 16 12:44:35.295307 systemd[1]: Reloading finished in 283 ms. Dec 16 12:44:35.319447 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 12:44:35.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:35.357005 systemd[1]: Starting ensure-sysext.service... Dec 16 12:44:35.363074 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 12:44:35.369784 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 12:44:35.380054 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:44:35.384000 audit: BPF prog-id=31 op=LOAD Dec 16 12:44:35.384000 audit: BPF prog-id=21 op=UNLOAD Dec 16 12:44:35.384000 audit: BPF prog-id=32 op=LOAD Dec 16 12:44:35.384000 audit: BPF prog-id=33 op=LOAD Dec 16 12:44:35.384000 audit: BPF prog-id=28 op=UNLOAD Dec 16 12:44:35.384000 audit: BPF prog-id=29 op=UNLOAD Dec 16 12:44:35.385000 audit: BPF prog-id=34 op=LOAD Dec 16 12:44:35.385000 audit: BPF prog-id=15 op=UNLOAD Dec 16 12:44:35.385000 audit: BPF prog-id=35 op=LOAD Dec 16 12:44:35.385000 audit: BPF prog-id=36 op=LOAD Dec 16 12:44:35.385000 audit: BPF prog-id=16 op=UNLOAD Dec 16 12:44:35.385000 audit: BPF prog-id=17 op=UNLOAD Dec 16 12:44:35.385000 audit: BPF prog-id=37 op=LOAD Dec 16 12:44:35.385000 audit: BPF prog-id=22 op=UNLOAD Dec 16 12:44:35.385000 audit: BPF prog-id=38 op=LOAD Dec 16 12:44:35.385000 audit: BPF prog-id=39 op=LOAD Dec 16 12:44:35.385000 audit: BPF prog-id=23 op=UNLOAD Dec 16 12:44:35.385000 audit: BPF prog-id=24 op=UNLOAD Dec 16 12:44:35.386000 audit: BPF prog-id=40 op=LOAD Dec 16 12:44:35.386000 audit: BPF prog-id=30 op=UNLOAD Dec 16 12:44:35.386000 audit: BPF prog-id=41 op=LOAD Dec 16 12:44:35.386000 audit: BPF prog-id=18 op=UNLOAD Dec 16 12:44:35.386000 audit: BPF prog-id=42 op=LOAD Dec 16 12:44:35.386000 audit: BPF prog-id=43 op=LOAD Dec 16 12:44:35.386000 audit: BPF prog-id=19 op=UNLOAD Dec 16 12:44:35.386000 audit: BPF prog-id=20 op=UNLOAD Dec 16 12:44:35.387000 audit: BPF prog-id=44 op=LOAD Dec 16 12:44:35.387000 audit: BPF prog-id=25 op=UNLOAD Dec 16 12:44:35.387000 audit: BPF prog-id=45 op=LOAD Dec 16 12:44:35.387000 audit: BPF prog-id=46 op=LOAD Dec 16 12:44:35.387000 audit: BPF prog-id=26 op=UNLOAD Dec 16 12:44:35.387000 audit: BPF prog-id=27 op=UNLOAD Dec 16 12:44:35.392895 systemd[1]: Reload requested from client PID 1917 ('systemctl') (unit ensure-sysext.service)... Dec 16 12:44:35.392913 systemd[1]: Reloading... Dec 16 12:44:35.397260 systemd-tmpfiles[1919]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 12:44:35.397281 systemd-tmpfiles[1919]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 12:44:35.397468 systemd-tmpfiles[1919]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 12:44:35.398104 systemd-tmpfiles[1919]: ACLs are not supported, ignoring. Dec 16 12:44:35.398146 systemd-tmpfiles[1919]: ACLs are not supported, ignoring. 
Dec 16 12:44:35.419223 systemd-tmpfiles[1919]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:44:35.419325 systemd-tmpfiles[1919]: Skipping /boot Dec 16 12:44:35.427882 systemd-tmpfiles[1919]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:44:35.428232 systemd-tmpfiles[1919]: Skipping /boot Dec 16 12:44:35.468371 zram_generator::config[1962]: No configuration found. Dec 16 12:44:35.616948 systemd[1]: Reloading finished in 223 ms. Dec 16 12:44:35.636386 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 12:44:35.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:35.642000 audit: BPF prog-id=47 op=LOAD Dec 16 12:44:35.642000 audit: BPF prog-id=31 op=UNLOAD Dec 16 12:44:35.643000 audit: BPF prog-id=48 op=LOAD Dec 16 12:44:35.643000 audit: BPF prog-id=34 op=UNLOAD Dec 16 12:44:35.643000 audit: BPF prog-id=49 op=LOAD Dec 16 12:44:35.643000 audit: BPF prog-id=50 op=LOAD Dec 16 12:44:35.643000 audit: BPF prog-id=35 op=UNLOAD Dec 16 12:44:35.643000 audit: BPF prog-id=36 op=UNLOAD Dec 16 12:44:35.643000 audit: BPF prog-id=51 op=LOAD Dec 16 12:44:35.643000 audit: BPF prog-id=40 op=UNLOAD Dec 16 12:44:35.643000 audit: BPF prog-id=52 op=LOAD Dec 16 12:44:35.643000 audit: BPF prog-id=53 op=LOAD Dec 16 12:44:35.643000 audit: BPF prog-id=32 op=UNLOAD Dec 16 12:44:35.643000 audit: BPF prog-id=33 op=UNLOAD Dec 16 12:44:35.644000 audit: BPF prog-id=54 op=LOAD Dec 16 12:44:35.644000 audit: BPF prog-id=44 op=UNLOAD Dec 16 12:44:35.644000 audit: BPF prog-id=55 op=LOAD Dec 16 12:44:35.644000 audit: BPF prog-id=56 op=LOAD Dec 16 12:44:35.644000 audit: BPF prog-id=45 op=UNLOAD Dec 16 12:44:35.644000 audit: BPF prog-id=46 op=UNLOAD Dec 16 12:44:35.652000 audit: BPF prog-id=57 op=LOAD Dec 16 12:44:35.652000 audit: BPF prog-id=41 op=UNLOAD Dec 16 12:44:35.652000 audit: BPF prog-id=58 op=LOAD Dec 16 12:44:35.652000 audit: BPF prog-id=59 op=LOAD Dec 16 12:44:35.652000 audit: BPF prog-id=42 op=UNLOAD Dec 16 12:44:35.652000 audit: BPF prog-id=43 op=UNLOAD Dec 16 12:44:35.653000 audit: BPF prog-id=60 op=LOAD Dec 16 12:44:35.653000 audit: BPF prog-id=37 op=UNLOAD Dec 16 12:44:35.653000 audit: BPF prog-id=61 op=LOAD Dec 16 12:44:35.653000 audit: BPF prog-id=62 op=LOAD Dec 16 12:44:35.653000 audit: BPF prog-id=38 op=UNLOAD Dec 16 12:44:35.653000 audit: BPF prog-id=39 op=UNLOAD Dec 16 12:44:35.657331 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:44:35.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:35.669859 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:44:35.679958 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 12:44:35.688236 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 12:44:35.694936 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 12:44:35.703423 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Dec 16 12:44:35.713047 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:44:35.721394 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:44:35.721000 audit[2021]: SYSTEM_BOOT pid=2021 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 16 12:44:35.727551 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:44:35.736129 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:44:35.741139 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:44:35.741282 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:44:35.741346 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:44:35.741979 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:44:35.743501 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:44:35.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:35.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:35.749550 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:44:35.749699 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:44:35.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:35.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:35.755735 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:44:35.755869 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:44:35.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:35.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:35.766163 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:44:35.769403 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Dec 16 12:44:35.777803 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:44:35.786386 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:44:35.791828 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:44:35.791967 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:44:35.792030 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:44:35.794715 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 12:44:35.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:35.802392 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 12:44:35.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:35.809296 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:44:35.809450 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:44:35.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:35.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:35.816673 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:44:35.816831 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:44:35.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:35.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:35.822928 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:44:35.823084 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:44:35.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:35.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:44:35.834440 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:44:35.835378 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:44:35.844746 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:44:35.852448 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:44:35.860402 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:44:35.865347 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:44:35.865483 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:44:35.865550 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:44:35.865648 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 12:44:35.872443 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:44:35.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:35.879870 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:44:35.880025 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:44:35.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:35.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:35.885849 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:44:35.885992 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:44:35.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:35.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:35.891492 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:44:35.891630 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:44:35.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:35.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:44:35.897695 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:44:35.897833 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:44:35.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:35.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:35.905185 systemd[1]: Finished ensure-sysext.service. Dec 16 12:44:35.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:35.911944 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:44:35.911999 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:44:35.999000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 16 12:44:35.999000 audit[2067]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc114f290 a2=420 a3=0 items=0 ppid=2017 pid=2067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:44:35.999000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:44:36.000561 augenrules[2067]: No rules Dec 16 12:44:36.001882 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:44:36.002152 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:44:36.233361 systemd-networkd[1717]: eth0: Gained IPv6LL Dec 16 12:44:36.235331 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 12:44:36.241590 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 12:44:36.410296 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 12:44:36.416194 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 12:44:40.586168 ldconfig[2019]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 12:44:40.597792 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 12:44:40.604568 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 12:44:40.633792 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 12:44:40.638884 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:44:40.643598 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 12:44:40.648906 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
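(Editorial aside, not part of the log: the PROCTITLE value in the audit-rules entries above is the process's argv, hex-encoded with NUL separators. A small hypothetical decoder, run against the value printed above, recovers the auditctl command line.)

```python
# proctitle value copied from the audit PROCTITLE record above.
PROCTITLE = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"

# Hex-decode, then split on NUL bytes to get the original argv.
argv = [part.decode() for part in bytes.fromhex(PROCTITLE).split(b"\x00")]
print(argv)  # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
```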
Dec 16 12:44:40.654404 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 12:44:40.658922 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 12:44:40.664961 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 16 12:44:40.670685 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 16 12:44:40.675826 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 12:44:40.681117 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 12:44:40.681141 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:44:40.685379 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:44:40.707541 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 12:44:40.713766 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 12:44:40.719286 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 12:44:40.724948 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 12:44:40.730341 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 12:44:40.736660 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 12:44:40.741601 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 12:44:40.747058 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 12:44:40.751984 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:44:40.756167 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:44:40.760279 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:44:40.760297 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:44:40.762131 systemd[1]: Starting chronyd.service - NTP client/server... Dec 16 12:44:40.776308 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 12:44:40.781454 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 12:44:40.788341 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 12:44:40.793939 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 12:44:40.801308 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 12:44:40.810409 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 12:44:40.815498 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 12:44:40.815851 chronyd[2080]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Dec 16 12:44:40.820635 jq[2088]: false Dec 16 12:44:40.820626 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Dec 16 12:44:40.825106 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Dec 16 12:44:40.825837 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 12:44:40.831072 KVP[2090]: KVP starting; pid is:2090 Dec 16 12:44:40.832067 chronyd[2080]: Timezone right/UTC failed leap second check, ignoring Dec 16 12:44:40.832194 chronyd[2080]: Loaded seccomp filter (level 2) Dec 16 12:44:40.836457 kernel: hv_utils: KVP IC version 4.0 Dec 16 12:44:40.836264 KVP[2090]: KVP LIC Version: 3.1 Dec 16 12:44:40.837577 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 12:44:40.849362 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 12:44:40.857110 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 12:44:40.864315 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 12:44:40.871360 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 12:44:40.872164 extend-filesystems[2089]: Found /dev/sda6 Dec 16 12:44:40.889325 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 12:44:40.894533 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 12:44:40.894886 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 12:44:40.895749 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 12:44:40.900195 extend-filesystems[2089]: Found /dev/sda9 Dec 16 12:44:40.912609 extend-filesystems[2089]: Checking size of /dev/sda9 Dec 16 12:44:40.906956 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 12:44:40.917103 systemd[1]: Started chronyd.service - NTP client/server. Dec 16 12:44:40.919604 jq[2118]: true Dec 16 12:44:40.925863 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 12:44:40.933658 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 12:44:40.936399 extend-filesystems[2089]: Resized partition /dev/sda9 Dec 16 12:44:40.941380 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 12:44:40.942549 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 12:44:40.942716 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 12:44:40.951298 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 12:44:40.959273 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 12:44:40.959453 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 12:44:40.974939 extend-filesystems[2134]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 12:44:40.985156 update_engine[2112]: I20251216 12:44:40.976084 2112 main.cc:92] Flatcar Update Engine starting Dec 16 12:44:40.996107 jq[2136]: true Dec 16 12:44:41.005210 kernel: EXT4-fs (sda9): resizing filesystem from 6359552 to 6376955 blocks Dec 16 12:44:41.005256 kernel: EXT4-fs (sda9): resized filesystem to 6376955 Dec 16 12:44:41.007802 systemd-logind[2110]: New seat seat0. Dec 16 12:44:41.040499 systemd-logind[2110]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Dec 16 12:44:41.040882 systemd[1]: Started systemd-logind.service - User Login Management. 
Dec 16 12:44:41.049788 tar[2133]: linux-arm64/LICENSE Dec 16 12:44:41.049788 tar[2133]: linux-arm64/helm Dec 16 12:44:41.055548 extend-filesystems[2134]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 16 12:44:41.055548 extend-filesystems[2134]: old_desc_blocks = 4, new_desc_blocks = 4 Dec 16 12:44:41.055548 extend-filesystems[2134]: The filesystem on /dev/sda9 is now 6376955 (4k) blocks long. Dec 16 12:44:41.112133 extend-filesystems[2089]: Resized filesystem in /dev/sda9 Dec 16 12:44:41.066834 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 12:44:41.067259 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 12:44:41.144161 dbus-daemon[2083]: [system] SELinux support is enabled Dec 16 12:44:41.144848 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 12:44:41.148243 bash[2178]: Updated "/home/core/.ssh/authorized_keys" Dec 16 12:44:41.155119 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 12:44:41.163447 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 16 12:44:41.163536 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 12:44:41.163580 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 12:44:41.168366 dbus-daemon[2083]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 16 12:44:41.169497 update_engine[2112]: I20251216 12:44:41.169350 2112 update_check_scheduler.cc:74] Next update check in 10m50s Dec 16 12:44:41.170078 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 12:44:41.170103 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 12:44:41.179133 systemd[1]: Started update-engine.service - Update Engine. Dec 16 12:44:41.185310 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 12:44:41.214124 coreos-metadata[2082]: Dec 16 12:44:41.214 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 16 12:44:41.226868 coreos-metadata[2082]: Dec 16 12:44:41.224 INFO Fetch successful Dec 16 12:44:41.226868 coreos-metadata[2082]: Dec 16 12:44:41.224 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Dec 16 12:44:41.226868 coreos-metadata[2082]: Dec 16 12:44:41.224 INFO Fetch successful Dec 16 12:44:41.226868 coreos-metadata[2082]: Dec 16 12:44:41.224 INFO Fetching http://168.63.129.16/machine/a950769d-8366-48d1-87b4-abc164b1c146/b42e9abb%2D6ffb%2D4539%2Dae9c%2D1c7561e24b1e.%5Fci%2D4515.1.0%2Da%2D4ca6cdd03e?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Dec 16 12:44:41.251070 coreos-metadata[2082]: Dec 16 12:44:41.251 INFO Fetch successful Dec 16 12:44:41.251070 coreos-metadata[2082]: Dec 16 12:44:41.251 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Dec 16 12:44:41.258807 coreos-metadata[2082]: Dec 16 12:44:41.258 INFO Fetch successful Dec 16 12:44:41.318252 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
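(Editorial aside, not part of the log: the EXT4 resize messages above report the old and new block counts in 4 KiB blocks. A hypothetical sketch of the arithmetic, using the numbers printed above, shows how much /dev/sda9 actually grew.)

```python
# Block counts copied from the EXT4/resize2fs messages above (4 KiB blocks).
OLD_BLOCKS, NEW_BLOCKS, BLOCK_SIZE = 6_359_552, 6_376_955, 4096

grown_bytes = (NEW_BLOCKS - OLD_BLOCKS) * BLOCK_SIZE
total_bytes = NEW_BLOCKS * BLOCK_SIZE

print(f"grew by {grown_bytes / 2**20:.1f} MiB")   # ~68.0 MiB
print(f"new size {total_bytes / 2**30:.2f} GiB")  # ~24.33 GiB
```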
Dec 16 12:44:41.326136 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 12:44:41.366557 locksmithd[2221]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 12:44:41.383309 sshd_keygen[2114]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 12:44:41.433400 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 12:44:41.437870 containerd[2138]: time="2025-12-16T12:44:41Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 12:44:41.437870 containerd[2138]: time="2025-12-16T12:44:41.435090608Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 16 12:44:41.441719 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 12:44:41.448170 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Dec 16 12:44:41.455303 containerd[2138]: time="2025-12-16T12:44:41.454090920Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.336µs" Dec 16 12:44:41.455303 containerd[2138]: time="2025-12-16T12:44:41.454124840Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 12:44:41.455303 containerd[2138]: time="2025-12-16T12:44:41.454158256Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 12:44:41.455303 containerd[2138]: time="2025-12-16T12:44:41.454166576Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 12:44:41.455303 containerd[2138]: time="2025-12-16T12:44:41.454700752Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 12:44:41.455303 containerd[2138]: time="2025-12-16T12:44:41.454720616Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:44:41.455303 containerd[2138]: time="2025-12-16T12:44:41.454759200Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:44:41.455303 containerd[2138]: time="2025-12-16T12:44:41.454766008Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:44:41.455303 containerd[2138]: time="2025-12-16T12:44:41.454934944Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:44:41.455303 containerd[2138]: time="2025-12-16T12:44:41.454947440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:44:41.455303 containerd[2138]: time="2025-12-16T12:44:41.454955104Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:44:41.455303 containerd[2138]: time="2025-12-16T12:44:41.454961256Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs 
type=io.containerd.snapshotter.v1 Dec 16 12:44:41.455494 containerd[2138]: time="2025-12-16T12:44:41.455079040Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 12:44:41.455494 containerd[2138]: time="2025-12-16T12:44:41.455088328Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 12:44:41.455494 containerd[2138]: time="2025-12-16T12:44:41.455140032Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 12:44:41.456376 containerd[2138]: time="2025-12-16T12:44:41.455620344Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:44:41.456376 containerd[2138]: time="2025-12-16T12:44:41.455654320Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:44:41.456376 containerd[2138]: time="2025-12-16T12:44:41.455661736Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 12:44:41.456376 containerd[2138]: time="2025-12-16T12:44:41.455680216Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 12:44:41.456376 containerd[2138]: time="2025-12-16T12:44:41.456182320Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 12:44:41.456376 containerd[2138]: time="2025-12-16T12:44:41.456287456Z" level=info msg="metadata content store policy set" policy=shared Dec 16 12:44:41.471890 containerd[2138]: time="2025-12-16T12:44:41.471857592Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 12:44:41.471951 containerd[2138]: time="2025-12-16T12:44:41.471909616Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 12:44:41.472003 containerd[2138]: time="2025-12-16T12:44:41.471985232Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 12:44:41.472003 containerd[2138]: time="2025-12-16T12:44:41.471997936Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 12:44:41.472051 containerd[2138]: time="2025-12-16T12:44:41.472006672Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 12:44:41.472051 containerd[2138]: time="2025-12-16T12:44:41.472015424Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 12:44:41.472051 containerd[2138]: time="2025-12-16T12:44:41.472022392Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 12:44:41.472051 containerd[2138]: time="2025-12-16T12:44:41.472028424Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 12:44:41.472051 containerd[2138]: time="2025-12-16T12:44:41.472037176Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 12:44:41.472051 
containerd[2138]: time="2025-12-16T12:44:41.472044632Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 12:44:41.472051 containerd[2138]: time="2025-12-16T12:44:41.472050960Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 12:44:41.472130 containerd[2138]: time="2025-12-16T12:44:41.472058304Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 12:44:41.472130 containerd[2138]: time="2025-12-16T12:44:41.472074272Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 12:44:41.472130 containerd[2138]: time="2025-12-16T12:44:41.472084632Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 12:44:41.472191 containerd[2138]: time="2025-12-16T12:44:41.472171592Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 12:44:41.473054 containerd[2138]: time="2025-12-16T12:44:41.473028040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 12:44:41.473099 containerd[2138]: time="2025-12-16T12:44:41.473069016Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 12:44:41.473099 containerd[2138]: time="2025-12-16T12:44:41.473078624Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 12:44:41.473099 containerd[2138]: time="2025-12-16T12:44:41.473086200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 12:44:41.473099 containerd[2138]: time="2025-12-16T12:44:41.473092552Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 12:44:41.473099 containerd[2138]: time="2025-12-16T12:44:41.473099776Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 12:44:41.473177 containerd[2138]: time="2025-12-16T12:44:41.473109360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 12:44:41.473177 containerd[2138]: time="2025-12-16T12:44:41.473116968Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 12:44:41.473177 containerd[2138]: time="2025-12-16T12:44:41.473123536Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 12:44:41.473177 containerd[2138]: time="2025-12-16T12:44:41.473141136Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 12:44:41.473177 containerd[2138]: time="2025-12-16T12:44:41.473159864Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 12:44:41.473265 containerd[2138]: time="2025-12-16T12:44:41.473189416Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 12:44:41.474602 containerd[2138]: time="2025-12-16T12:44:41.473198896Z" level=info msg="Start snapshots syncer" Dec 16 12:44:41.474602 containerd[2138]: time="2025-12-16T12:44:41.474292152Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 12:44:41.474646 containerd[2138]: 
time="2025-12-16T12:44:41.474592680Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 12:44:41.474646 containerd[2138]: time="2025-12-16T12:44:41.474636872Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 12:44:41.474731 containerd[2138]: time="2025-12-16T12:44:41.474681016Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 12:44:41.475123 containerd[2138]: time="2025-12-16T12:44:41.474891984Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 12:44:41.475123 containerd[2138]: time="2025-12-16T12:44:41.474933784Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 12:44:41.475123 containerd[2138]: time="2025-12-16T12:44:41.474942616Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 12:44:41.475123 containerd[2138]: time="2025-12-16T12:44:41.474948936Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 12:44:41.475123 containerd[2138]: time="2025-12-16T12:44:41.474956280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 12:44:41.475123 containerd[2138]: time="2025-12-16T12:44:41.474964816Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 12:44:41.475123 containerd[2138]: time="2025-12-16T12:44:41.474974032Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 12:44:41.475123 containerd[2138]: time="2025-12-16T12:44:41.474980056Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 12:44:41.475123 containerd[2138]: time="2025-12-16T12:44:41.474986536Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 12:44:41.475123 containerd[2138]: time="2025-12-16T12:44:41.475021728Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:44:41.475123 containerd[2138]: time="2025-12-16T12:44:41.475031184Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:44:41.475123 containerd[2138]: time="2025-12-16T12:44:41.475036816Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:44:41.475123 containerd[2138]: time="2025-12-16T12:44:41.475042496Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:44:41.475123 containerd[2138]: time="2025-12-16T12:44:41.475047016Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 12:44:41.475338 containerd[2138]: time="2025-12-16T12:44:41.475170040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 12:44:41.475338 containerd[2138]: time="2025-12-16T12:44:41.475184936Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 12:44:41.475338 containerd[2138]: time="2025-12-16T12:44:41.475194024Z" level=info msg="runtime interface created" Dec 16 12:44:41.475338 containerd[2138]: time="2025-12-16T12:44:41.475198144Z" level=info msg="created NRI interface" Dec 16 12:44:41.475338 containerd[2138]: time="2025-12-16T12:44:41.475215920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 12:44:41.475338 containerd[2138]: time="2025-12-16T12:44:41.475224952Z" level=info msg="Connect containerd service" Dec 16 12:44:41.475338 containerd[2138]: time="2025-12-16T12:44:41.475243400Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 12:44:41.476001 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 12:44:41.476620 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 12:44:41.481265 containerd[2138]: time="2025-12-16T12:44:41.480787272Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:44:41.489250 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 12:44:41.506132 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Dec 16 12:44:41.519176 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 12:44:41.528126 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 12:44:41.536170 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 16 12:44:41.543597 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 12:44:41.608739 tar[2133]: linux-arm64/README.md Dec 16 12:44:41.622517 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Dec 16 12:44:41.806293 containerd[2138]: time="2025-12-16T12:44:41.806229144Z" level=info msg="Start subscribing containerd event" Dec 16 12:44:41.806293 containerd[2138]: time="2025-12-16T12:44:41.806299752Z" level=info msg="Start recovering state" Dec 16 12:44:41.806511 containerd[2138]: time="2025-12-16T12:44:41.806407216Z" level=info msg="Start event monitor" Dec 16 12:44:41.806511 containerd[2138]: time="2025-12-16T12:44:41.806418624Z" level=info msg="Start cni network conf syncer for default" Dec 16 12:44:41.806511 containerd[2138]: time="2025-12-16T12:44:41.806425552Z" level=info msg="Start streaming server" Dec 16 12:44:41.806511 containerd[2138]: time="2025-12-16T12:44:41.806432152Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 12:44:41.806511 containerd[2138]: time="2025-12-16T12:44:41.806436960Z" level=info msg="runtime interface starting up..." Dec 16 12:44:41.806511 containerd[2138]: time="2025-12-16T12:44:41.806440600Z" level=info msg="starting plugins..." Dec 16 12:44:41.806511 containerd[2138]: time="2025-12-16T12:44:41.806452320Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 12:44:41.807355 containerd[2138]: time="2025-12-16T12:44:41.807329952Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 12:44:41.807500 containerd[2138]: time="2025-12-16T12:44:41.807473368Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 12:44:41.810225 containerd[2138]: time="2025-12-16T12:44:41.807594016Z" level=info msg="containerd successfully booted in 0.373201s" Dec 16 12:44:41.807791 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 12:44:41.868564 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:44:41.875293 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 12:44:41.882383 (kubelet)[2295]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:44:41.885268 systemd[1]: Startup finished in 2.858s (kernel) + 11.854s (initrd) + 12.153s (userspace) = 26.867s. Dec 16 12:44:42.173096 kubelet[2295]: E1216 12:44:42.172981 2295 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:44:42.175061 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:44:42.175167 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:44:42.175757 systemd[1]: kubelet.service: Consumed 492ms CPU time, 247.4M memory peak. Dec 16 12:44:42.460530 login[2279]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Dec 16 12:44:42.462260 login[2280]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:44:42.471254 systemd-logind[2110]: New session 1 of user core. Dec 16 12:44:42.472013 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 12:44:42.472987 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 12:44:42.488453 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 12:44:42.491123 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Dec 16 12:44:42.501057 (systemd)[2308]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 12:44:42.503294 systemd-logind[2110]: New session c1 of user core. Dec 16 12:44:42.643154 systemd[2308]: Queued start job for default target default.target. Dec 16 12:44:42.650931 systemd[2308]: Created slice app.slice - User Application Slice. Dec 16 12:44:42.650957 systemd[2308]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 16 12:44:42.650966 systemd[2308]: Reached target paths.target - Paths. Dec 16 12:44:42.651003 systemd[2308]: Reached target timers.target - Timers. Dec 16 12:44:42.651955 systemd[2308]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 12:44:42.652526 systemd[2308]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 16 12:44:42.662339 systemd[2308]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 12:44:42.662506 systemd[2308]: Reached target sockets.target - Sockets. Dec 16 12:44:42.663405 systemd[2308]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 16 12:44:42.663589 systemd[2308]: Reached target basic.target - Basic System. Dec 16 12:44:42.663799 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 12:44:42.664004 systemd[2308]: Reached target default.target - Main User Target. Dec 16 12:44:42.664336 systemd[2308]: Startup finished in 156ms. Dec 16 12:44:42.675335 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 12:44:43.307399 waagent[2273]: 2025-12-16T12:44:43.307326Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Dec 16 12:44:43.311528 waagent[2273]: 2025-12-16T12:44:43.311489Z INFO Daemon Daemon OS: flatcar 4515.1.0 Dec 16 12:44:43.314875 waagent[2273]: 2025-12-16T12:44:43.314844Z INFO Daemon Daemon Python: 3.11.13 Dec 16 12:44:43.318260 waagent[2273]: 2025-12-16T12:44:43.318190Z INFO Daemon Daemon Run daemon Dec 16 12:44:43.321222 waagent[2273]: 2025-12-16T12:44:43.321126Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4515.1.0' Dec 16 12:44:43.327578 waagent[2273]: 2025-12-16T12:44:43.327534Z INFO Daemon Daemon Using waagent for provisioning Dec 16 12:44:43.331590 waagent[2273]: 2025-12-16T12:44:43.331556Z INFO Daemon Daemon Activate resource disk Dec 16 12:44:43.335081 waagent[2273]: 2025-12-16T12:44:43.335052Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 16 12:44:43.343048 waagent[2273]: 2025-12-16T12:44:43.343008Z INFO Daemon Daemon Found device: None Dec 16 12:44:43.346235 waagent[2273]: 2025-12-16T12:44:43.346198Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 16 12:44:43.352532 waagent[2273]: 2025-12-16T12:44:43.352503Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 16 12:44:43.360601 waagent[2273]: 2025-12-16T12:44:43.360562Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 12:44:43.364901 waagent[2273]: 2025-12-16T12:44:43.364870Z INFO Daemon Daemon Running default provisioning handler Dec 16 12:44:43.373631 waagent[2273]: 2025-12-16T12:44:43.373584Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Dec 16 12:44:43.383344 waagent[2273]: 2025-12-16T12:44:43.383306Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 16 12:44:43.389979 waagent[2273]: 2025-12-16T12:44:43.389952Z INFO Daemon Daemon cloud-init is enabled: False Dec 16 12:44:43.393731 waagent[2273]: 2025-12-16T12:44:43.393706Z INFO Daemon Daemon Copying ovf-env.xml Dec 16 12:44:43.442556 waagent[2273]: 2025-12-16T12:44:43.442511Z INFO Daemon Daemon Successfully mounted dvd Dec 16 12:44:43.460869 login[2279]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:44:43.465868 systemd-logind[2110]: New session 2 of user core. Dec 16 12:44:43.469229 waagent[2273]: 2025-12-16T12:44:43.468583Z INFO Daemon Daemon Detect protocol endpoint Dec 16 12:44:43.472451 waagent[2273]: 2025-12-16T12:44:43.472418Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 12:44:43.476455 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 12:44:43.477313 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 16 12:44:43.478575 waagent[2273]: 2025-12-16T12:44:43.477962Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Dec 16 12:44:43.482878 waagent[2273]: 2025-12-16T12:44:43.482836Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 16 12:44:43.486903 waagent[2273]: 2025-12-16T12:44:43.486865Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 16 12:44:43.491216 waagent[2273]: 2025-12-16T12:44:43.491180Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 16 12:44:43.517642 waagent[2273]: 2025-12-16T12:44:43.517604Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 16 12:44:43.524891 waagent[2273]: 2025-12-16T12:44:43.523676Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 16 12:44:43.527648 waagent[2273]: 2025-12-16T12:44:43.527614Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 16 12:44:43.611701 waagent[2273]: 2025-12-16T12:44:43.611603Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 16 12:44:43.617064 waagent[2273]: 2025-12-16T12:44:43.617027Z INFO Daemon Daemon Forcing an update of the goal state. Dec 16 12:44:43.624518 waagent[2273]: 2025-12-16T12:44:43.624478Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 12:44:43.640536 waagent[2273]: 2025-12-16T12:44:43.640504Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Dec 16 12:44:43.645206 waagent[2273]: 2025-12-16T12:44:43.645170Z INFO Daemon Dec 16 12:44:43.647857 waagent[2273]: 2025-12-16T12:44:43.647820Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: c9eaec7a-bbcd-45f3-809b-893c2569097b eTag: 13541853556906550186 source: Fabric] Dec 16 12:44:43.657018 waagent[2273]: 2025-12-16T12:44:43.656981Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Dec 16 12:44:43.661890 waagent[2273]: 2025-12-16T12:44:43.661856Z INFO Daemon Dec 16 12:44:43.664185 waagent[2273]: 2025-12-16T12:44:43.664154Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Dec 16 12:44:43.672543 waagent[2273]: 2025-12-16T12:44:43.672517Z INFO Daemon Daemon Downloading artifacts profile blob Dec 16 12:44:43.809999 waagent[2273]: 2025-12-16T12:44:43.809940Z INFO Daemon Downloaded certificate {'thumbprint': 'EC5D49A70CC181937B314125535B747E5639284C', 'hasPrivateKey': True} Dec 16 12:44:43.817647 waagent[2273]: 2025-12-16T12:44:43.817609Z INFO Daemon Fetch goal state completed Dec 16 12:44:43.858395 waagent[2273]: 2025-12-16T12:44:43.858362Z INFO Daemon Daemon Starting provisioning Dec 16 12:44:43.862328 waagent[2273]: 2025-12-16T12:44:43.862265Z INFO Daemon Daemon Handle ovf-env.xml. Dec 16 12:44:43.866333 waagent[2273]: 2025-12-16T12:44:43.866305Z INFO Daemon Daemon Set hostname [ci-4515.1.0-a-4ca6cdd03e] Dec 16 12:44:43.887034 waagent[2273]: 2025-12-16T12:44:43.886992Z INFO Daemon Daemon Publish hostname [ci-4515.1.0-a-4ca6cdd03e] Dec 16 12:44:43.892151 waagent[2273]: 2025-12-16T12:44:43.892114Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 16 12:44:43.897417 waagent[2273]: 2025-12-16T12:44:43.897381Z INFO Daemon Daemon Primary interface is [eth0] Dec 16 12:44:43.908082 systemd-networkd[1717]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:44:43.908097 systemd-networkd[1717]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:44:43.908179 systemd-networkd[1717]: eth0: DHCP lease lost Dec 16 12:44:43.926751 waagent[2273]: 2025-12-16T12:44:43.926696Z INFO Daemon Daemon Create user account if not exists Dec 16 12:44:43.931589 waagent[2273]: 2025-12-16T12:44:43.931548Z INFO Daemon Daemon User core already exists, skip useradd Dec 16 12:44:43.936730 waagent[2273]: 2025-12-16T12:44:43.936690Z INFO Daemon Daemon Configure sudoer Dec 16 12:44:43.942242 systemd-networkd[1717]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 16 12:44:43.944652 waagent[2273]: 2025-12-16T12:44:43.944607Z INFO Daemon Daemon Configure sshd Dec 16 12:44:43.951827 waagent[2273]: 2025-12-16T12:44:43.951785Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Dec 16 12:44:43.961733 waagent[2273]: 2025-12-16T12:44:43.961699Z INFO Daemon Daemon Deploy ssh public key. Dec 16 12:44:45.032557 waagent[2273]: 2025-12-16T12:44:45.032509Z INFO Daemon Daemon Provisioning complete Dec 16 12:44:45.045672 waagent[2273]: 2025-12-16T12:44:45.045635Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 16 12:44:45.050137 waagent[2273]: 2025-12-16T12:44:45.050101Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
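The goal-state exchange above talks to the Azure WireServer at 168.63.129.16 using the 2012-11-30 wire protocol version the daemon negotiated. A minimal sketch of that probe; the endpoint address and protocol version come from the log, while the URL paths and headers are assumptions based on how WALinuxAgent normally fetches the supported versions and the goal state:

#!/usr/bin/env python3
# Sketch: query the Azure WireServer the way waagent's protocol detection does.
# Endpoint and x-ms-version value come from the log; the URL paths are assumptions.
import urllib.request

WIRESERVER = "168.63.129.16"
HEADERS = {"x-ms-version": "2012-11-30", "x-ms-agent-name": "WALinuxAgent"}

def wire_get(path):
    req = urllib.request.Request(f"http://{WIRESERVER}{path}", headers=HEADERS)
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    print(wire_get("/?comp=versions"))           # supported protocol versions (assumed path)
    print(wire_get("/machine/?comp=goalstate"))  # current goal state XML (assumed path)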
Dec 16 12:44:45.058062 waagent[2273]: 2025-12-16T12:44:45.058034Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Dec 16 12:44:45.158800 waagent[2360]: 2025-12-16T12:44:45.158737Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Dec 16 12:44:45.160250 waagent[2360]: 2025-12-16T12:44:45.159183Z INFO ExtHandler ExtHandler OS: flatcar 4515.1.0 Dec 16 12:44:45.160250 waagent[2360]: 2025-12-16T12:44:45.159281Z INFO ExtHandler ExtHandler Python: 3.11.13 Dec 16 12:44:45.160250 waagent[2360]: 2025-12-16T12:44:45.159320Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Dec 16 12:44:45.215525 waagent[2360]: 2025-12-16T12:44:45.215469Z INFO ExtHandler ExtHandler Distro: flatcar-4515.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Dec 16 12:44:45.215653 waagent[2360]: 2025-12-16T12:44:45.215627Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 12:44:45.215691 waagent[2360]: 2025-12-16T12:44:45.215674Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 12:44:45.220986 waagent[2360]: 2025-12-16T12:44:45.220942Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 12:44:45.225536 waagent[2360]: 2025-12-16T12:44:45.225504Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Dec 16 12:44:45.225885 waagent[2360]: 2025-12-16T12:44:45.225852Z INFO ExtHandler Dec 16 12:44:45.225939 waagent[2360]: 2025-12-16T12:44:45.225920Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: e7558782-a212-4576-974d-765e81430395 eTag: 13541853556906550186 source: Fabric] Dec 16 12:44:45.226162 waagent[2360]: 2025-12-16T12:44:45.226135Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Dec 16 12:44:45.226596 waagent[2360]: 2025-12-16T12:44:45.226564Z INFO ExtHandler Dec 16 12:44:45.226639 waagent[2360]: 2025-12-16T12:44:45.226621Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 16 12:44:45.229729 waagent[2360]: 2025-12-16T12:44:45.229701Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 16 12:44:45.283322 waagent[2360]: 2025-12-16T12:44:45.283223Z INFO ExtHandler Downloaded certificate {'thumbprint': 'EC5D49A70CC181937B314125535B747E5639284C', 'hasPrivateKey': True} Dec 16 12:44:45.283627 waagent[2360]: 2025-12-16T12:44:45.283593Z INFO ExtHandler Fetch goal state completed Dec 16 12:44:45.294922 waagent[2360]: 2025-12-16T12:44:45.294876Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.3 30 Sep 2025 (Library: OpenSSL 3.4.3 30 Sep 2025) Dec 16 12:44:45.298045 waagent[2360]: 2025-12-16T12:44:45.298000Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2360 Dec 16 12:44:45.298146 waagent[2360]: 2025-12-16T12:44:45.298117Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Dec 16 12:44:45.298428 waagent[2360]: 2025-12-16T12:44:45.298398Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Dec 16 12:44:45.299536 waagent[2360]: 2025-12-16T12:44:45.299500Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4515.1.0', '', 'Flatcar Container Linux by Kinvolk'] Dec 16 12:44:45.299859 waagent[2360]: 2025-12-16T12:44:45.299827Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4515.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Dec 16 12:44:45.299973 waagent[2360]: 2025-12-16T12:44:45.299948Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Dec 16 12:44:45.300418 waagent[2360]: 2025-12-16T12:44:45.300387Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 16 12:44:45.370079 waagent[2360]: 2025-12-16T12:44:45.370044Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 16 12:44:45.370247 waagent[2360]: 2025-12-16T12:44:45.370195Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 16 12:44:45.374549 waagent[2360]: 2025-12-16T12:44:45.374524Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 16 12:44:45.378996 systemd[1]: Reload requested from client PID 2375 ('systemctl') (unit waagent.service)... Dec 16 12:44:45.379221 systemd[1]: Reloading... Dec 16 12:44:45.458230 zram_generator::config[2426]: No configuration found. Dec 16 12:44:45.605837 systemd[1]: Reloading finished in 226 ms. Dec 16 12:44:45.616826 waagent[2360]: 2025-12-16T12:44:45.616401Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Dec 16 12:44:45.616826 waagent[2360]: 2025-12-16T12:44:45.616535Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Dec 16 12:44:45.841529 waagent[2360]: 2025-12-16T12:44:45.841456Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Dec 16 12:44:45.841804 waagent[2360]: 2025-12-16T12:44:45.841768Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Dec 16 12:44:45.842473 waagent[2360]: 2025-12-16T12:44:45.842428Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 16 12:44:45.842799 waagent[2360]: 2025-12-16T12:44:45.842726Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 16 12:44:45.842937 waagent[2360]: 2025-12-16T12:44:45.842905Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 16 12:44:45.843035 waagent[2360]: 2025-12-16T12:44:45.842945Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 16 12:44:45.843394 waagent[2360]: 2025-12-16T12:44:45.843334Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 16 12:44:45.843576 waagent[2360]: 2025-12-16T12:44:45.843399Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Dec 16 12:44:45.843576 waagent[2360]: 2025-12-16T12:44:45.843518Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 12:44:45.843720 waagent[2360]: 2025-12-16T12:44:45.843690Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 12:44:45.844227 waagent[2360]: 2025-12-16T12:44:45.843997Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 12:44:45.844227 waagent[2360]: 2025-12-16T12:44:45.844172Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 16 12:44:45.844372 waagent[2360]: 2025-12-16T12:44:45.844345Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 12:44:45.844636 waagent[2360]: 2025-12-16T12:44:45.844581Z INFO EnvHandler ExtHandler Configure routes Dec 16 12:44:45.844780 waagent[2360]: 2025-12-16T12:44:45.844753Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 16 12:44:45.845279 waagent[2360]: 2025-12-16T12:44:45.845258Z INFO EnvHandler ExtHandler Gateway:None Dec 16 12:44:45.845407 waagent[2360]: 2025-12-16T12:44:45.845386Z INFO EnvHandler ExtHandler Routes:None Dec 16 12:44:45.846028 waagent[2360]: 2025-12-16T12:44:45.845996Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 16 12:44:45.846028 waagent[2360]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 16 12:44:45.846028 waagent[2360]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Dec 16 12:44:45.846028 waagent[2360]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 16 12:44:45.846028 waagent[2360]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 16 12:44:45.846028 waagent[2360]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 12:44:45.846028 waagent[2360]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 12:44:45.851411 waagent[2360]: 2025-12-16T12:44:45.851366Z INFO ExtHandler ExtHandler Dec 16 12:44:45.851476 waagent[2360]: 2025-12-16T12:44:45.851452Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 7ec5202e-b963-4ed1-bed7-469ec29623ff correlation abffe655-2a24-4c2a-a2bf-ff5dadb0804c created: 2025-12-16T12:43:53.959244Z] Dec 16 12:44:45.851753 waagent[2360]: 2025-12-16T12:44:45.851722Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
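The MonitorHandler routing-table dump above is raw /proc/net/route output: destination, gateway and mask fields are little-endian hexadecimal IPv4 addresses, so 0114C80A is 10.200.20.1, the DHCP gateway noted earlier in the log. A short sketch of the decoding:

#!/usr/bin/env python3
# Sketch: decode the hexadecimal fields in the /proc/net/route dump above.
# Each field is a little-endian IPv4 address, so 0114C80A becomes 10.200.20.1.
import socket
import struct

def hex_to_ip(hex_addr):
    # /proc/net/route stores addresses as 8 hex digits in host (little-endian) order.
    return socket.inet_ntoa(struct.pack("<I", int(hex_addr, 16)))

if __name__ == "__main__":
    # Sample rows taken from the routing table printed above (dest, gateway, mask).
    for dest, gw, mask in [("00000000", "0114C80A", "00000000"),
                           ("0014C80A", "00000000", "00FFFFFF")]:
        print(f"dst {hex_to_ip(dest)}/{hex_to_ip(mask)} via {hex_to_ip(gw)}")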
Dec 16 12:44:45.852146 waagent[2360]: 2025-12-16T12:44:45.852120Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Dec 16 12:44:45.881269 waagent[2360]: 2025-12-16T12:44:45.881011Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Dec 16 12:44:45.881269 waagent[2360]: Try `iptables -h' or 'iptables --help' for more information.) Dec 16 12:44:45.881374 waagent[2360]: 2025-12-16T12:44:45.881336Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 69D6CBBF-DF24-4AE9-8826-E905D8972BA3;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Dec 16 12:44:45.900104 waagent[2360]: 2025-12-16T12:44:45.900050Z INFO MonitorHandler ExtHandler Network interfaces: Dec 16 12:44:45.900104 waagent[2360]: Executing ['ip', '-a', '-o', 'link']: Dec 16 12:44:45.900104 waagent[2360]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 16 12:44:45.900104 waagent[2360]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b9:ab:26 brd ff:ff:ff:ff:ff:ff\ altname enx002248b9ab26 Dec 16 12:44:45.900104 waagent[2360]: 3: enP7940s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b9:ab:26 brd ff:ff:ff:ff:ff:ff\ altname enP7940p0s2 Dec 16 12:44:45.900104 waagent[2360]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 16 12:44:45.900104 waagent[2360]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 16 12:44:45.900104 waagent[2360]: 2: eth0 inet 10.200.20.37/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 16 12:44:45.900104 waagent[2360]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 16 12:44:45.900104 waagent[2360]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Dec 16 12:44:45.900104 waagent[2360]: 2: eth0 inet6 fe80::222:48ff:feb9:ab26/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 16 12:44:45.946810 waagent[2360]: 2025-12-16T12:44:45.946756Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Dec 16 12:44:45.946810 waagent[2360]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:44:45.946810 waagent[2360]: pkts bytes target prot opt in out source destination Dec 16 12:44:45.946810 waagent[2360]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:44:45.946810 waagent[2360]: pkts bytes target prot opt in out source destination Dec 16 12:44:45.946810 waagent[2360]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:44:45.946810 waagent[2360]: pkts bytes target prot opt in out source destination Dec 16 12:44:45.946810 waagent[2360]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 12:44:45.946810 waagent[2360]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 12:44:45.946810 waagent[2360]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 12:44:45.949100 waagent[2360]: 2025-12-16T12:44:45.949053Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 16 12:44:45.949100 waagent[2360]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:44:45.949100 waagent[2360]: pkts bytes target prot opt in 
out source destination Dec 16 12:44:45.949100 waagent[2360]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:44:45.949100 waagent[2360]: pkts bytes target prot opt in out source destination Dec 16 12:44:45.949100 waagent[2360]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:44:45.949100 waagent[2360]: pkts bytes target prot opt in out source destination Dec 16 12:44:45.949100 waagent[2360]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 12:44:45.949100 waagent[2360]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 12:44:45.949100 waagent[2360]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 12:44:45.949317 waagent[2360]: 2025-12-16T12:44:45.949289Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Dec 16 12:44:52.248534 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 12:44:52.249794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:44:52.349998 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:44:52.354591 (kubelet)[2512]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:44:52.497756 kubelet[2512]: E1216 12:44:52.497703 2512 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:44:52.500340 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:44:52.500452 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:44:52.500790 systemd[1]: kubelet.service: Consumed 112ms CPU time, 107.1M memory peak. Dec 16 12:45:02.748671 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 12:45:02.750064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:45:02.852041 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:45:02.854633 (kubelet)[2527]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:45:02.983817 kubelet[2527]: E1216 12:45:02.983747 2527 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:45:02.986093 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:45:02.986324 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:45:02.988285 systemd[1]: kubelet.service: Consumed 106ms CPU time, 106.1M memory peak. Dec 16 12:45:04.624575 chronyd[2080]: Selected source PHC0 Dec 16 12:45:10.147501 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 12:45:10.149246 systemd[1]: Started sshd@0-10.200.20.37:22-10.200.16.10:40978.service - OpenSSH per-connection server daemon (10.200.16.10:40978). 
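The "Created firewall rules for the Azure Fabric" listing above shows three OUTPUT rules for 168.63.129.16 (accept DNS on tcp/53, accept root-owned traffic, drop new connections from other users), which waagent then reads back with `iptables -w -t security -L OUTPUT`. A sketch of equivalent rules; the table name and endpoint come from the log, while the exact iptables match options are assumptions reconstructed from the counters listing:

#!/usr/bin/env python3
# Sketch: reconstruct the three Azure Fabric firewall rules listed in the log.
# Match syntax is inferred from the listing (dpt:53, owner UID 0, ctstate INVALID,NEW).
import subprocess

WIRESERVER = "168.63.129.16"
RULES = [
    ["-p", "tcp", "-d", WIRESERVER, "--dport", "53", "-j", "ACCEPT"],
    ["-p", "tcp", "-d", WIRESERVER, "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    ["-p", "tcp", "-d", WIRESERVER, "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]

def apply_rules(dry_run=True):
    for rule in RULES:
        cmd = ["iptables", "-w", "-t", "security", "-A", "OUTPUT", *rule]
        print(" ".join(cmd))
        if not dry_run:
            subprocess.run(cmd, check=True)

if __name__ == "__main__":
    apply_rules()  # prints the commands; pass dry_run=False to actually apply them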
Dec 16 12:45:10.718152 sshd[2535]: Accepted publickey for core from 10.200.16.10 port 40978 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:45:10.719128 sshd-session[2535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:45:10.723803 systemd-logind[2110]: New session 3 of user core. Dec 16 12:45:10.731337 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 12:45:11.020940 systemd[1]: Started sshd@1-10.200.20.37:22-10.200.16.10:40986.service - OpenSSH per-connection server daemon (10.200.16.10:40986). Dec 16 12:45:11.414952 sshd[2541]: Accepted publickey for core from 10.200.16.10 port 40986 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:45:11.416429 sshd-session[2541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:45:11.420364 systemd-logind[2110]: New session 4 of user core. Dec 16 12:45:11.426338 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 12:45:11.722547 systemd[1]: Started sshd@2-10.200.20.37:22-10.200.16.10:40992.service - OpenSSH per-connection server daemon (10.200.16.10:40992). Dec 16 12:45:11.981407 sshd[2544]: Connection closed by 10.200.16.10 port 40986 Dec 16 12:45:11.981889 sshd-session[2541]: pam_unix(sshd:session): session closed for user core Dec 16 12:45:11.985236 systemd-logind[2110]: Session 4 logged out. Waiting for processes to exit. Dec 16 12:45:11.985984 systemd[1]: sshd@1-10.200.20.37:22-10.200.16.10:40986.service: Deactivated successfully. Dec 16 12:45:11.987811 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 12:45:11.989867 systemd-logind[2110]: Removed session 4. Dec 16 12:45:12.115795 sshd[2547]: Accepted publickey for core from 10.200.16.10 port 40992 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:45:12.116842 sshd-session[2547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:45:12.120750 systemd-logind[2110]: New session 5 of user core. Dec 16 12:45:12.132556 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 12:45:12.328376 sshd[2553]: Connection closed by 10.200.16.10 port 40992 Dec 16 12:45:12.328857 sshd-session[2547]: pam_unix(sshd:session): session closed for user core Dec 16 12:45:12.332426 systemd[1]: sshd@2-10.200.20.37:22-10.200.16.10:40992.service: Deactivated successfully. Dec 16 12:45:12.333794 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 12:45:12.335686 systemd-logind[2110]: Session 5 logged out. Waiting for processes to exit. Dec 16 12:45:12.336661 systemd-logind[2110]: Removed session 5. Dec 16 12:45:12.404423 systemd[1]: Started sshd@3-10.200.20.37:22-10.200.16.10:41004.service - OpenSSH per-connection server daemon (10.200.16.10:41004). Dec 16 12:45:12.894663 sshd[2559]: Accepted publickey for core from 10.200.16.10 port 41004 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:45:12.895664 sshd-session[2559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:45:12.899526 systemd-logind[2110]: New session 6 of user core. Dec 16 12:45:12.906333 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 12:45:12.998379 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 16 12:45:12.999621 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:45:13.090827 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 12:45:13.097525 (kubelet)[2572]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:45:13.111374 sshd[2562]: Connection closed by 10.200.16.10 port 41004 Dec 16 12:45:13.111883 sshd-session[2559]: pam_unix(sshd:session): session closed for user core Dec 16 12:45:13.115615 systemd-logind[2110]: Session 6 logged out. Waiting for processes to exit. Dec 16 12:45:13.115756 systemd[1]: sshd@3-10.200.20.37:22-10.200.16.10:41004.service: Deactivated successfully. Dec 16 12:45:13.117019 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 12:45:13.119146 systemd-logind[2110]: Removed session 6. Dec 16 12:45:13.191479 systemd[1]: Started sshd@4-10.200.20.37:22-10.200.16.10:41010.service - OpenSSH per-connection server daemon (10.200.16.10:41010). Dec 16 12:45:13.236238 kubelet[2572]: E1216 12:45:13.236173 2572 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:45:13.237890 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:45:13.237995 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:45:13.238297 systemd[1]: kubelet.service: Consumed 209ms CPU time, 107.4M memory peak. Dec 16 12:45:13.578545 sshd[2581]: Accepted publickey for core from 10.200.16.10 port 41010 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:45:13.579636 sshd-session[2581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:45:13.584033 systemd-logind[2110]: New session 7 of user core. Dec 16 12:45:13.591337 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 12:45:14.017880 sudo[2587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 12:45:14.018457 sudo[2587]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:45:14.044514 sudo[2587]: pam_unix(sudo:session): session closed for user root Dec 16 12:45:14.115708 sshd[2586]: Connection closed by 10.200.16.10 port 41010 Dec 16 12:45:14.115599 sshd-session[2581]: pam_unix(sshd:session): session closed for user core Dec 16 12:45:14.119016 systemd-logind[2110]: Session 7 logged out. Waiting for processes to exit. Dec 16 12:45:14.119127 systemd[1]: sshd@4-10.200.20.37:22-10.200.16.10:41010.service: Deactivated successfully. Dec 16 12:45:14.120564 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 12:45:14.121785 systemd-logind[2110]: Removed session 7. Dec 16 12:45:14.206705 systemd[1]: Started sshd@5-10.200.20.37:22-10.200.16.10:41024.service - OpenSSH per-connection server daemon (10.200.16.10:41024). Dec 16 12:45:14.602924 sshd[2593]: Accepted publickey for core from 10.200.16.10 port 41024 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:45:14.603972 sshd-session[2593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:45:14.608423 systemd-logind[2110]: New session 8 of user core. Dec 16 12:45:14.614360 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 16 12:45:14.748182 sudo[2598]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 12:45:14.748443 sudo[2598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:45:14.756797 sudo[2598]: pam_unix(sudo:session): session closed for user root Dec 16 12:45:14.760972 sudo[2597]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 12:45:14.761170 sudo[2597]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:45:14.767635 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:45:14.794000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 12:45:14.798493 augenrules[2620]: No rules Dec 16 12:45:14.798672 kernel: kauditd_printk_skb: 156 callbacks suppressed Dec 16 12:45:14.798702 kernel: audit: type=1305 audit(1765889114.794:256): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 12:45:14.806098 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:45:14.806287 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:45:14.794000 audit[2620]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd100cfa0 a2=420 a3=0 items=0 ppid=2601 pid=2620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:14.822399 kernel: audit: type=1300 audit(1765889114.794:256): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd100cfa0 a2=420 a3=0 items=0 ppid=2601 pid=2620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:14.794000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:45:14.823865 sudo[2597]: pam_unix(sudo:session): session closed for user root Dec 16 12:45:14.829991 kernel: audit: type=1327 audit(1765889114.794:256): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:45:14.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:14.841924 kernel: audit: type=1130 audit(1765889114.805:257): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:14.841971 kernel: audit: type=1131 audit(1765889114.805:258): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:14.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:45:14.821000 audit[2597]: USER_END pid=2597 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:45:14.866841 kernel: audit: type=1106 audit(1765889114.821:259): pid=2597 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:45:14.821000 audit[2597]: CRED_DISP pid=2597 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:45:14.879040 kernel: audit: type=1104 audit(1765889114.821:260): pid=2597 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:45:14.992161 systemd[1]: Started sshd@6-10.200.20.37:22-10.200.16.10:41036.service - OpenSSH per-connection server daemon (10.200.16.10:41036). Dec 16 12:45:14.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.37:22-10.200.16.10:41036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:15.006221 kernel: audit: type=1130 audit(1765889114.991:261): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.37:22-10.200.16.10:41036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:15.181394 sshd[2596]: Connection closed by 10.200.16.10 port 41024 Dec 16 12:45:15.181247 sshd-session[2593]: pam_unix(sshd:session): session closed for user core Dec 16 12:45:15.181000 audit[2593]: USER_END pid=2593 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:45:15.200485 systemd[1]: sshd@5-10.200.20.37:22-10.200.16.10:41024.service: Deactivated successfully. Dec 16 12:45:15.181000 audit[2593]: CRED_DISP pid=2593 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:45:15.201973 systemd[1]: session-8.scope: Deactivated successfully. 
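The audit records around here carry the audited command line in the PROCTITLE field as hex with NUL separators; the auditctl record above, for instance, decodes to "/sbin/auditctl -R /etc/audit/audit.rules". A short sketch of the decoding:

#!/usr/bin/env python3
# Sketch: decode an audit PROCTITLE field; arguments are hex-encoded and NUL-separated.
def decode_proctitle(hex_value):
    return bytes.fromhex(hex_value).split(b"\x00")

if __name__ == "__main__":
    # Value copied verbatim from the auditctl record earlier in the log.
    sample = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    print(b" ".join(decode_proctitle(sample)).decode())  # /sbin/auditctl -R /etc/audit/audit.rules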
Dec 16 12:45:15.214363 kernel: audit: type=1106 audit(1765889115.181:262): pid=2593 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:45:15.214420 kernel: audit: type=1104 audit(1765889115.181:263): pid=2593 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:45:15.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.37:22-10.200.16.10:41024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:15.215354 systemd-logind[2110]: Session 8 logged out. Waiting for processes to exit. Dec 16 12:45:15.216350 systemd-logind[2110]: Removed session 8. Dec 16 12:45:15.440000 audit[2626]: USER_ACCT pid=2626 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:45:15.442055 sshd[2626]: Accepted publickey for core from 10.200.16.10 port 41036 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:45:15.441000 audit[2626]: CRED_ACQ pid=2626 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:45:15.441000 audit[2626]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc6f458f0 a2=3 a3=0 items=0 ppid=1 pid=2626 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:15.441000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:45:15.443189 sshd-session[2626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:45:15.447421 systemd-logind[2110]: New session 9 of user core. Dec 16 12:45:15.453330 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 12:45:15.454000 audit[2626]: USER_START pid=2626 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:45:15.456000 audit[2632]: CRED_ACQ pid=2632 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:45:15.599000 audit[2633]: USER_ACCT pid=2633 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 12:45:15.600569 sudo[2633]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 12:45:15.599000 audit[2633]: CRED_REFR pid=2633 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:45:15.601131 sudo[2633]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:45:15.601000 audit[2633]: USER_START pid=2633 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:45:16.934501 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 16 12:45:16.942447 (dockerd)[2651]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 12:45:18.013897 dockerd[2651]: time="2025-12-16T12:45:18.013841937Z" level=info msg="Starting up" Dec 16 12:45:18.014447 dockerd[2651]: time="2025-12-16T12:45:18.014407911Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 12:45:18.023224 dockerd[2651]: time="2025-12-16T12:45:18.023105679Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 12:45:18.095075 dockerd[2651]: time="2025-12-16T12:45:18.095041308Z" level=info msg="Loading containers: start." Dec 16 12:45:18.122229 kernel: Initializing XFRM netlink socket Dec 16 12:45:18.166000 audit[2697]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=2697 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.166000 audit[2697]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffebd82410 a2=0 a3=0 items=0 ppid=2651 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.166000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 12:45:18.167000 audit[2699]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=2699 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.167000 audit[2699]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe4746b30 a2=0 a3=0 items=0 ppid=2651 pid=2699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.167000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 12:45:18.169000 audit[2701]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2701 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.169000 audit[2701]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcfb4f520 a2=0 a3=0 items=0 ppid=2651 pid=2701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.169000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 12:45:18.171000 audit[2703]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2703 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.171000 audit[2703]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdd404470 a2=0 a3=0 items=0 ppid=2651 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.171000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 12:45:18.172000 audit[2705]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_chain pid=2705 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.172000 audit[2705]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd85e51e0 a2=0 a3=0 items=0 ppid=2651 pid=2705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.172000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 12:45:18.174000 audit[2707]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=2707 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.174000 audit[2707]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe062fe80 a2=0 a3=0 items=0 ppid=2651 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.174000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:45:18.175000 audit[2709]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=2709 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.175000 audit[2709]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe7241db0 a2=0 a3=0 items=0 ppid=2651 pid=2709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.175000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:45:18.177000 audit[2711]: NETFILTER_CFG table=nat:12 family=2 entries=2 op=nft_register_chain pid=2711 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.177000 audit[2711]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffdda4ece0 a2=0 a3=0 items=0 ppid=2651 pid=2711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.177000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 12:45:18.204000 
audit[2714]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2714 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.204000 audit[2714]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffd87944f0 a2=0 a3=0 items=0 ppid=2651 pid=2714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.204000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 16 12:45:18.206000 audit[2716]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=2716 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.206000 audit[2716]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffeb7a8fb0 a2=0 a3=0 items=0 ppid=2651 pid=2716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.206000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 12:45:18.208000 audit[2718]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2718 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.208000 audit[2718]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffddbe20a0 a2=0 a3=0 items=0 ppid=2651 pid=2718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.208000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 12:45:18.209000 audit[2720]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2720 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.209000 audit[2720]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffc4f2bb80 a2=0 a3=0 items=0 ppid=2651 pid=2720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.209000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:45:18.211000 audit[2722]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_rule pid=2722 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.211000 audit[2722]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffe52ae800 a2=0 a3=0 items=0 ppid=2651 pid=2722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.211000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 12:45:18.267000 audit[2752]: NETFILTER_CFG table=nat:18 family=10 entries=2 op=nft_register_chain 
pid=2752 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:18.267000 audit[2752]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffeeec75c0 a2=0 a3=0 items=0 ppid=2651 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.267000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 12:45:18.269000 audit[2754]: NETFILTER_CFG table=filter:19 family=10 entries=2 op=nft_register_chain pid=2754 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:18.269000 audit[2754]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd096c870 a2=0 a3=0 items=0 ppid=2651 pid=2754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.269000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 12:45:18.270000 audit[2756]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2756 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:18.270000 audit[2756]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd27124f0 a2=0 a3=0 items=0 ppid=2651 pid=2756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.270000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 12:45:18.271000 audit[2758]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2758 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:18.271000 audit[2758]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffd6c7850 a2=0 a3=0 items=0 ppid=2651 pid=2758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.271000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 12:45:18.273000 audit[2760]: NETFILTER_CFG table=filter:22 family=10 entries=1 op=nft_register_chain pid=2760 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:18.273000 audit[2760]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffeb9c4690 a2=0 a3=0 items=0 ppid=2651 pid=2760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.273000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 12:45:18.274000 audit[2762]: NETFILTER_CFG table=filter:23 family=10 entries=1 op=nft_register_chain pid=2762 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:18.274000 audit[2762]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe4e371f0 a2=0 a3=0 items=0 ppid=2651 pid=2762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.274000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:45:18.276000 audit[2764]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_chain pid=2764 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:18.276000 audit[2764]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff4234820 a2=0 a3=0 items=0 ppid=2651 pid=2764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.276000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:45:18.278000 audit[2766]: NETFILTER_CFG table=nat:25 family=10 entries=2 op=nft_register_chain pid=2766 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:18.278000 audit[2766]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffeb74f100 a2=0 a3=0 items=0 ppid=2651 pid=2766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.278000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 12:45:18.279000 audit[2768]: NETFILTER_CFG table=nat:26 family=10 entries=2 op=nft_register_chain pid=2768 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:18.279000 audit[2768]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffd86b4fe0 a2=0 a3=0 items=0 ppid=2651 pid=2768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.279000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 12:45:18.281000 audit[2770]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=2770 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:18.281000 audit[2770]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffda9babc0 a2=0 a3=0 items=0 ppid=2651 pid=2770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.281000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 12:45:18.282000 audit[2772]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule pid=2772 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:18.282000 audit[2772]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffde917a70 a2=0 a3=0 items=0 ppid=2651 pid=2772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.282000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 12:45:18.284000 audit[2774]: NETFILTER_CFG table=filter:29 family=10 entries=1 op=nft_register_rule pid=2774 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:18.284000 audit[2774]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=fffff3906230 a2=0 a3=0 items=0 ppid=2651 pid=2774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.284000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:45:18.285000 audit[2776]: NETFILTER_CFG table=filter:30 family=10 entries=1 op=nft_register_rule pid=2776 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:18.285000 audit[2776]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffc3219670 a2=0 a3=0 items=0 ppid=2651 pid=2776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.285000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 12:45:18.289000 audit[2781]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_chain pid=2781 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.289000 audit[2781]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffcdf146f0 a2=0 a3=0 items=0 ppid=2651 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.289000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 12:45:18.291000 audit[2783]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=2783 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.291000 audit[2783]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffd9894780 a2=0 a3=0 items=0 ppid=2651 pid=2783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.291000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 12:45:18.292000 audit[2785]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2785 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.292000 audit[2785]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffcc726340 a2=0 a3=0 items=0 ppid=2651 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.292000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 12:45:18.294000 audit[2787]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=2787 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:18.294000 audit[2787]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffcdff3a00 a2=0 a3=0 items=0 ppid=2651 pid=2787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.294000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 12:45:18.295000 audit[2789]: NETFILTER_CFG table=filter:35 family=10 entries=1 op=nft_register_rule pid=2789 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:18.295000 audit[2789]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffea318660 a2=0 a3=0 items=0 ppid=2651 pid=2789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.295000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 12:45:18.297000 audit[2791]: NETFILTER_CFG table=filter:36 family=10 entries=1 op=nft_register_rule pid=2791 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:18.297000 audit[2791]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd4d10370 a2=0 a3=0 items=0 ppid=2651 pid=2791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.297000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 12:45:18.352000 audit[2796]: NETFILTER_CFG table=nat:37 family=2 entries=2 op=nft_register_chain pid=2796 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.352000 audit[2796]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffd4ccc430 a2=0 a3=0 items=0 ppid=2651 pid=2796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.352000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 12:45:18.354000 audit[2798]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=2798 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.354000 audit[2798]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffe3d308a0 a2=0 a3=0 items=0 ppid=2651 pid=2798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.354000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 12:45:18.360000 audit[2806]: 
NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2806 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.360000 audit[2806]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffd98f07a0 a2=0 a3=0 items=0 ppid=2651 pid=2806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.360000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 12:45:18.364000 audit[2811]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2811 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.364000 audit[2811]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffd4f24e00 a2=0 a3=0 items=0 ppid=2651 pid=2811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.364000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 12:45:18.366000 audit[2813]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2813 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.366000 audit[2813]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffd30d7430 a2=0 a3=0 items=0 ppid=2651 pid=2813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.366000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 12:45:18.367000 audit[2815]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=2815 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.367000 audit[2815]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc62f83e0 a2=0 a3=0 items=0 ppid=2651 pid=2815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.367000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 12:45:18.369000 audit[2817]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_rule pid=2817 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.369000 audit[2817]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffeb6546a0 a2=0 a3=0 items=0 ppid=2651 pid=2817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.369000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:45:18.371000 audit[2819]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_rule pid=2819 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.371000 audit[2819]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc0716510 a2=0 a3=0 items=0 ppid=2651 pid=2819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.371000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 12:45:18.373094 systemd-networkd[1717]: docker0: Link UP Dec 16 12:45:18.388223 dockerd[2651]: time="2025-12-16T12:45:18.388183352Z" level=info msg="Loading containers: done." Dec 16 12:45:18.437091 dockerd[2651]: time="2025-12-16T12:45:18.437043282Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 12:45:18.437471 dockerd[2651]: time="2025-12-16T12:45:18.437296664Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 12:45:18.437580 dockerd[2651]: time="2025-12-16T12:45:18.437564775Z" level=info msg="Initializing buildkit" Dec 16 12:45:18.486751 dockerd[2651]: time="2025-12-16T12:45:18.486719289Z" level=info msg="Completed buildkit initialization" Dec 16 12:45:18.492181 dockerd[2651]: time="2025-12-16T12:45:18.492146268Z" level=info msg="Daemon has completed initialization" Dec 16 12:45:18.492366 dockerd[2651]: time="2025-12-16T12:45:18.492322945Z" level=info msg="API listen on /run/docker.sock" Dec 16 12:45:18.493059 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 12:45:18.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:19.058112 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3853088612-merged.mount: Deactivated successfully. Dec 16 12:45:19.166726 containerd[2138]: time="2025-12-16T12:45:19.166673292Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Dec 16 12:45:20.049077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount203495623.mount: Deactivated successfully. 
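The audit records above log every iptables/ip6tables invocation dockerd makes while creating its DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2 and DOCKER-USER chains; the proctitle= field is the command line, hex-encoded with NUL-separated arguments. A minimal sketch for turning those values back into readable commands (plain Python, nothing assumed beyond the records shown):

```python
# Decode the hex-encoded "proctitle=" values from the audit records above.
# The kernel stores argv as one buffer with NUL bytes between arguments and
# hex-encodes it when it contains non-printable characters.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return raw.replace(b"\x00", b" ").decode("utf-8", errors="replace")

# One of the records above:
print(decode_proctitle(
    "2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552"
))
# -> /usr/bin/iptables --wait -t nat -N DOCKER
```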
Dec 16 12:45:21.045566 containerd[2138]: time="2025-12-16T12:45:21.045512269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:21.049558 containerd[2138]: time="2025-12-16T12:45:21.049514532Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=23059757" Dec 16 12:45:21.052950 containerd[2138]: time="2025-12-16T12:45:21.052907543Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:21.057751 containerd[2138]: time="2025-12-16T12:45:21.057706455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:21.058438 containerd[2138]: time="2025-12-16T12:45:21.058255953Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 1.891547268s" Dec 16 12:45:21.058438 containerd[2138]: time="2025-12-16T12:45:21.058286994Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\"" Dec 16 12:45:21.059002 containerd[2138]: time="2025-12-16T12:45:21.058934310Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Dec 16 12:45:22.236274 containerd[2138]: time="2025-12-16T12:45:22.236220572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:22.239890 containerd[2138]: time="2025-12-16T12:45:22.239682081Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19127323" Dec 16 12:45:22.246008 containerd[2138]: time="2025-12-16T12:45:22.245985545Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:22.250808 containerd[2138]: time="2025-12-16T12:45:22.250782577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:22.251295 containerd[2138]: time="2025-12-16T12:45:22.251270553Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 1.192168054s" Dec 16 12:45:22.251352 containerd[2138]: time="2025-12-16T12:45:22.251298002Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\"" Dec 16 12:45:22.252231 
containerd[2138]: time="2025-12-16T12:45:22.252210887Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Dec 16 12:45:23.071220 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Dec 16 12:45:23.248407 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 16 12:45:23.249667 containerd[2138]: time="2025-12-16T12:45:23.248901702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:23.250077 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:45:23.252741 containerd[2138]: time="2025-12-16T12:45:23.252711175Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=14186332" Dec 16 12:45:23.256510 containerd[2138]: time="2025-12-16T12:45:23.256479215Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:23.261714 containerd[2138]: time="2025-12-16T12:45:23.261688556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:23.262233 containerd[2138]: time="2025-12-16T12:45:23.262019510Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 1.009787559s" Dec 16 12:45:23.262233 containerd[2138]: time="2025-12-16T12:45:23.262043023Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\"" Dec 16 12:45:23.262493 containerd[2138]: time="2025-12-16T12:45:23.262469309Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Dec 16 12:45:23.355061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:45:23.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:23.358354 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 16 12:45:23.358395 kernel: audit: type=1130 audit(1765889123.354:314): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:45:23.379399 (kubelet)[2934]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:45:23.404942 kubelet[2934]: E1216 12:45:23.404885 2934 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:45:23.406826 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:45:23.407012 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:45:23.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:45:23.409283 systemd[1]: kubelet.service: Consumed 103ms CPU time, 106.6M memory peak. Dec 16 12:45:23.423223 kernel: audit: type=1131 audit(1765889123.408:315): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:45:24.839923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount991212430.mount: Deactivated successfully. Dec 16 12:45:25.365439 containerd[2138]: time="2025-12-16T12:45:25.365387083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:25.369744 containerd[2138]: time="2025-12-16T12:45:25.369702884Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=22802891" Dec 16 12:45:25.373413 containerd[2138]: time="2025-12-16T12:45:25.373383153Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:25.378027 containerd[2138]: time="2025-12-16T12:45:25.377995035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:25.378466 containerd[2138]: time="2025-12-16T12:45:25.378258708Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 2.115751397s" Dec 16 12:45:25.378466 containerd[2138]: time="2025-12-16T12:45:25.378284460Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\"" Dec 16 12:45:25.378701 containerd[2138]: time="2025-12-16T12:45:25.378668937Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Dec 16 12:45:26.026667 update_engine[2112]: I20251216 12:45:26.026132 2112 update_attempter.cc:509] Updating boot flags... Dec 16 12:45:26.138717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3223554544.mount: Deactivated successfully. 
Dec 16 12:45:27.092030 containerd[2138]: time="2025-12-16T12:45:27.091961785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:27.096025 containerd[2138]: time="2025-12-16T12:45:27.095982569Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=19673352" Dec 16 12:45:27.100364 containerd[2138]: time="2025-12-16T12:45:27.100327731Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:27.106232 containerd[2138]: time="2025-12-16T12:45:27.105767423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:27.106297 containerd[2138]: time="2025-12-16T12:45:27.106272903Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.72757255s" Dec 16 12:45:27.106320 containerd[2138]: time="2025-12-16T12:45:27.106304664Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Dec 16 12:45:27.106731 containerd[2138]: time="2025-12-16T12:45:27.106711549Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Dec 16 12:45:27.740932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1517541474.mount: Deactivated successfully. 
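As a quick sanity check on the pull entries above, the reported size and elapsed time give a rough effective rate; a small sketch using the coredns numbers from the "Pulled image" record (the size containerd reports is the image size, not necessarily the bytes actually transferred):

```python
# Rough pull-rate estimate from the coredns "Pulled image ... size ... in ..." entry above.
size_bytes = 20_392_204     # size reported for the coredns v1.12.1 pull
elapsed_s = 1.72757255      # "in 1.72757255s"
print(f"~{size_bytes / elapsed_s / 1e6:.1f} MB/s")   # ~11.8 MB/s
```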
Dec 16 12:45:27.762291 containerd[2138]: time="2025-12-16T12:45:27.762249837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:27.765576 containerd[2138]: time="2025-12-16T12:45:27.765422018Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Dec 16 12:45:27.768519 containerd[2138]: time="2025-12-16T12:45:27.768494203Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:27.772863 containerd[2138]: time="2025-12-16T12:45:27.772826117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:27.773439 containerd[2138]: time="2025-12-16T12:45:27.773132326Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 665.938618ms" Dec 16 12:45:27.773439 containerd[2138]: time="2025-12-16T12:45:27.773158671Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Dec 16 12:45:27.773742 containerd[2138]: time="2025-12-16T12:45:27.773719633Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Dec 16 12:45:28.479996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2694334077.mount: Deactivated successfully. Dec 16 12:45:31.118826 containerd[2138]: time="2025-12-16T12:45:31.118758834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:31.122096 containerd[2138]: time="2025-12-16T12:45:31.122049391Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=86907416" Dec 16 12:45:31.125635 containerd[2138]: time="2025-12-16T12:45:31.125605132Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:31.130858 containerd[2138]: time="2025-12-16T12:45:31.130826207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:31.131634 containerd[2138]: time="2025-12-16T12:45:31.131217882Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 3.357464288s" Dec 16 12:45:31.131634 containerd[2138]: time="2025-12-16T12:45:31.131242955Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Dec 16 12:45:33.498462 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
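The restart counter above keeps climbing because of the kubelet failure logged earlier: /var/lib/kubelet/config.yaml does not exist yet (on a node like this it is normally written later by kubeadm — an assumption, not something shown in this log). A minimal sketch of the equivalent pre-flight check:

```python
from pathlib import Path

# Mirror the check that fails in the kubelet error above: systemd will keep
# restarting the unit until this file exists.
config = Path("/var/lib/kubelet/config.yaml")   # path taken from the log message
if not config.is_file():
    print(f"kubelet config missing: {config}")
```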
Dec 16 12:45:33.501364 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:45:33.627928 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 12:45:33.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:45:33.627985 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 12:45:33.628244 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:45:33.634471 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:45:33.641506 kernel: audit: type=1130 audit(1765889133.627:316): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:45:33.663024 systemd[1]: Reload requested from client PID 3154 ('systemctl') (unit session-9.scope)... Dec 16 12:45:33.663132 systemd[1]: Reloading... Dec 16 12:45:33.730230 zram_generator::config[3204]: No configuration found. Dec 16 12:45:33.900756 systemd[1]: Reloading finished in 237 ms. Dec 16 12:45:33.920000 audit: BPF prog-id=87 op=LOAD Dec 16 12:45:33.939521 kernel: audit: type=1334 audit(1765889133.920:317): prog-id=87 op=LOAD Dec 16 12:45:33.939593 kernel: audit: type=1334 audit(1765889133.920:318): prog-id=68 op=UNLOAD Dec 16 12:45:33.939609 kernel: audit: type=1334 audit(1765889133.921:319): prog-id=88 op=LOAD Dec 16 12:45:33.939623 kernel: audit: type=1334 audit(1765889133.921:320): prog-id=82 op=UNLOAD Dec 16 12:45:33.939646 kernel: audit: type=1334 audit(1765889133.921:321): prog-id=89 op=LOAD Dec 16 12:45:33.939661 kernel: audit: type=1334 audit(1765889133.921:322): prog-id=90 op=LOAD Dec 16 12:45:33.939675 kernel: audit: type=1334 audit(1765889133.921:323): prog-id=83 op=UNLOAD Dec 16 12:45:33.920000 audit: BPF prog-id=68 op=UNLOAD Dec 16 12:45:33.921000 audit: BPF prog-id=88 op=LOAD Dec 16 12:45:33.921000 audit: BPF prog-id=82 op=UNLOAD Dec 16 12:45:33.921000 audit: BPF prog-id=89 op=LOAD Dec 16 12:45:33.921000 audit: BPF prog-id=90 op=LOAD Dec 16 12:45:33.921000 audit: BPF prog-id=83 op=UNLOAD Dec 16 12:45:33.921000 audit: BPF prog-id=84 op=UNLOAD Dec 16 12:45:33.956248 kernel: audit: type=1334 audit(1765889133.921:324): prog-id=84 op=UNLOAD Dec 16 12:45:33.926000 audit: BPF prog-id=91 op=LOAD Dec 16 12:45:33.960649 kernel: audit: type=1334 audit(1765889133.926:325): prog-id=91 op=LOAD Dec 16 12:45:33.926000 audit: BPF prog-id=75 op=UNLOAD Dec 16 12:45:33.926000 audit: BPF prog-id=92 op=LOAD Dec 16 12:45:33.926000 audit: BPF prog-id=93 op=LOAD Dec 16 12:45:33.931000 audit: BPF prog-id=76 op=UNLOAD Dec 16 12:45:33.931000 audit: BPF prog-id=77 op=UNLOAD Dec 16 12:45:33.934000 audit: BPF prog-id=94 op=LOAD Dec 16 12:45:33.934000 audit: BPF prog-id=72 op=UNLOAD Dec 16 12:45:33.942000 audit: BPF prog-id=95 op=LOAD Dec 16 12:45:33.950000 audit: BPF prog-id=96 op=LOAD Dec 16 12:45:33.956000 audit: BPF prog-id=73 op=UNLOAD Dec 16 12:45:33.956000 audit: BPF prog-id=74 op=UNLOAD Dec 16 12:45:33.959000 audit: BPF prog-id=97 op=LOAD Dec 16 12:45:33.959000 audit: BPF prog-id=67 op=UNLOAD Dec 16 12:45:33.959000 audit: BPF prog-id=98 op=LOAD Dec 16 12:45:33.959000 audit: BPF prog-id=99 op=LOAD Dec 16 12:45:33.959000 audit: BPF prog-id=85 op=UNLOAD Dec 16 12:45:33.959000 audit: BPF prog-id=86 op=UNLOAD Dec 16 12:45:33.961000 
audit: BPF prog-id=100 op=LOAD Dec 16 12:45:33.961000 audit: BPF prog-id=78 op=UNLOAD Dec 16 12:45:33.962000 audit: BPF prog-id=101 op=LOAD Dec 16 12:45:33.962000 audit: BPF prog-id=69 op=UNLOAD Dec 16 12:45:33.962000 audit: BPF prog-id=102 op=LOAD Dec 16 12:45:33.962000 audit: BPF prog-id=103 op=LOAD Dec 16 12:45:33.962000 audit: BPF prog-id=70 op=UNLOAD Dec 16 12:45:33.962000 audit: BPF prog-id=71 op=UNLOAD Dec 16 12:45:33.963000 audit: BPF prog-id=104 op=LOAD Dec 16 12:45:33.963000 audit: BPF prog-id=79 op=UNLOAD Dec 16 12:45:33.963000 audit: BPF prog-id=105 op=LOAD Dec 16 12:45:33.963000 audit: BPF prog-id=106 op=LOAD Dec 16 12:45:33.963000 audit: BPF prog-id=80 op=UNLOAD Dec 16 12:45:33.964000 audit: BPF prog-id=81 op=UNLOAD Dec 16 12:45:34.030659 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 12:45:34.030741 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 12:45:34.031019 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:45:34.031083 systemd[1]: kubelet.service: Consumed 77ms CPU time, 95.1M memory peak. Dec 16 12:45:34.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:45:34.034473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:45:34.397091 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:45:34.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:34.400452 (kubelet)[3270]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:45:34.533664 kubelet[3270]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:45:34.533664 kubelet[3270]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:45:34.534508 kubelet[3270]: I1216 12:45:34.534408 3270 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:45:35.004178 kubelet[3270]: I1216 12:45:35.004138 3270 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 12:45:35.004178 kubelet[3270]: I1216 12:45:35.004167 3270 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:45:35.004178 kubelet[3270]: I1216 12:45:35.004192 3270 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 12:45:35.004178 kubelet[3270]: I1216 12:45:35.004196 3270 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 12:45:35.004409 kubelet[3270]: I1216 12:45:35.004372 3270 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 12:45:35.015194 kubelet[3270]: I1216 12:45:35.015169 3270 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:45:35.015826 kubelet[3270]: E1216 12:45:35.015717 3270 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 12:45:35.018809 kubelet[3270]: I1216 12:45:35.018795 3270 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:45:35.021747 kubelet[3270]: I1216 12:45:35.021271 3270 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Dec 16 12:45:35.021747 kubelet[3270]: I1216 12:45:35.021413 3270 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:45:35.021747 kubelet[3270]: I1216 12:45:35.021431 3270 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515.1.0-a-4ca6cdd03e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:45:35.021747 kubelet[3270]: I1216 12:45:35.021531 3270 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:45:35.021894 kubelet[3270]: I1216 12:45:35.021538 3270 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 12:45:35.021894 kubelet[3270]: I1216 12:45:35.021616 3270 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 12:45:35.031170 kubelet[3270]: I1216 12:45:35.031150 3270 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:45:35.032358 kubelet[3270]: I1216 12:45:35.032341 3270 kubelet.go:475] "Attempting to sync node with API server" 
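The "connection refused" errors above and below all come from the kubelet trying to reach the API server at 10.200.20.37:6443 before anything is listening there; a minimal sketch reproducing that TCP check, with the host and port taken from the log lines:

```python
import socket

# Reproduce the reachability check behind the "dial tcp 10.200.20.37:6443:
# connect: connection refused" messages; host/port come from the log above.
def apiserver_reachable(host: str = "10.200.20.37", port: int = 6443, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(apiserver_reachable())   # False until the control plane is listening
```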
Dec 16 12:45:35.032444 kubelet[3270]: I1216 12:45:35.032435 3270 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:45:35.032810 kubelet[3270]: E1216 12:45:35.032788 3270 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-a-4ca6cdd03e&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 12:45:35.033149 kubelet[3270]: I1216 12:45:35.033138 3270 kubelet.go:387] "Adding apiserver pod source" Dec 16 12:45:35.033544 kubelet[3270]: I1216 12:45:35.033527 3270 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:45:35.034406 kubelet[3270]: I1216 12:45:35.034386 3270 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 12:45:35.034779 kubelet[3270]: I1216 12:45:35.034758 3270 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 12:45:35.034779 kubelet[3270]: I1216 12:45:35.034779 3270 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 12:45:35.034841 kubelet[3270]: W1216 12:45:35.034810 3270 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 12:45:35.034954 kubelet[3270]: E1216 12:45:35.034935 3270 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 12:45:35.037501 kubelet[3270]: I1216 12:45:35.037475 3270 server.go:1262] "Started kubelet" Dec 16 12:45:35.041127 kubelet[3270]: I1216 12:45:35.039570 3270 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:45:35.041127 kubelet[3270]: I1216 12:45:35.039692 3270 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:45:35.041127 kubelet[3270]: I1216 12:45:35.039737 3270 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 12:45:35.041127 kubelet[3270]: I1216 12:45:35.039960 3270 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:45:35.041127 kubelet[3270]: I1216 12:45:35.040451 3270 server.go:310] "Adding debug handlers to kubelet server" Dec 16 12:45:35.043238 kubelet[3270]: I1216 12:45:35.043193 3270 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:45:35.044482 kubelet[3270]: I1216 12:45:35.044458 3270 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:45:35.046289 kubelet[3270]: E1216 12:45:35.042420 3270 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.37:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.37:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4515.1.0-a-4ca6cdd03e.1881b2cfeea11bb0 default 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4515.1.0-a-4ca6cdd03e,UID:ci-4515.1.0-a-4ca6cdd03e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4515.1.0-a-4ca6cdd03e,},FirstTimestamp:2025-12-16 12:45:35.03745528 +0000 UTC m=+0.634466120,LastTimestamp:2025-12-16 12:45:35.03745528 +0000 UTC m=+0.634466120,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515.1.0-a-4ca6cdd03e,}" Dec 16 12:45:35.046436 kubelet[3270]: I1216 12:45:35.046424 3270 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 12:45:35.046659 kubelet[3270]: E1216 12:45:35.046641 3270 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-4ca6cdd03e\" not found" Dec 16 12:45:35.047262 kubelet[3270]: I1216 12:45:35.047249 3270 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 12:45:35.047397 kubelet[3270]: I1216 12:45:35.047387 3270 reconciler.go:29] "Reconciler: start to sync state" Dec 16 12:45:35.048190 kubelet[3270]: E1216 12:45:35.047375 3270 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-a-4ca6cdd03e?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="200ms" Dec 16 12:45:35.048711 kubelet[3270]: I1216 12:45:35.048696 3270 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:45:35.048858 kubelet[3270]: I1216 12:45:35.048841 3270 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:45:35.049437 kubelet[3270]: E1216 12:45:35.049421 3270 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:45:35.049919 kubelet[3270]: E1216 12:45:35.049899 3270 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:45:35.050289 kubelet[3270]: I1216 12:45:35.050261 3270 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:45:35.052000 audit[3287]: NETFILTER_CFG table=mangle:45 family=10 entries=2 op=nft_register_chain pid=3287 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:35.052000 audit[3287]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe78dac00 a2=0 a3=0 items=0 ppid=3270 pid=3287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:35.052000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 12:45:35.055246 kubelet[3270]: I1216 12:45:35.055198 3270 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 16 12:45:35.054000 audit[3289]: NETFILTER_CFG table=mangle:46 family=2 entries=2 op=nft_register_chain pid=3289 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:35.054000 audit[3289]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff17e3f20 a2=0 a3=0 items=0 ppid=3270 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:35.054000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 12:45:35.056000 audit[3290]: NETFILTER_CFG table=mangle:47 family=10 entries=1 op=nft_register_chain pid=3290 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:35.056000 audit[3290]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff035fe20 a2=0 a3=0 items=0 ppid=3270 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:35.056000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 12:45:35.056000 audit[3292]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_chain pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:35.056000 audit[3292]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff4439350 a2=0 a3=0 items=0 ppid=3270 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:35.056000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 12:45:35.059114 kubelet[3270]: I1216 12:45:35.058927 3270 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:45:35.059114 kubelet[3270]: I1216 12:45:35.058943 3270 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:45:35.059114 kubelet[3270]: I1216 12:45:35.058957 3270 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:45:35.058000 audit[3293]: NETFILTER_CFG table=nat:49 family=10 entries=1 op=nft_register_chain pid=3293 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:35.058000 audit[3293]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcc5317e0 a2=0 a3=0 items=0 ppid=3270 pid=3293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:35.058000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 12:45:35.060000 audit[3296]: NETFILTER_CFG table=filter:50 family=2 entries=2 op=nft_register_chain pid=3296 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:35.060000 audit[3296]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe3bc5780 a2=0 a3=0 items=0 ppid=3270 pid=3296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:35.060000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:45:35.060000 audit[3297]: NETFILTER_CFG table=filter:51 family=10 entries=1 op=nft_register_chain pid=3297 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:35.060000 audit[3297]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd066cb70 a2=0 a3=0 items=0 ppid=3270 pid=3297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:35.060000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 12:45:35.062000 audit[3299]: NETFILTER_CFG table=filter:52 family=2 entries=2 op=nft_register_chain pid=3299 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:35.062000 audit[3299]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc1a7ec30 a2=0 a3=0 items=0 ppid=3270 pid=3299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:35.062000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:45:35.066054 kubelet[3270]: I1216 12:45:35.065844 3270 policy_none.go:49] "None policy: Start" Dec 16 12:45:35.066054 kubelet[3270]: I1216 12:45:35.065870 3270 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 12:45:35.066054 kubelet[3270]: I1216 12:45:35.065880 3270 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 12:45:35.070588 kubelet[3270]: I1216 12:45:35.070574 3270 policy_none.go:47] "Start" Dec 16 12:45:35.073943 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 12:45:35.084155 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 12:45:35.086000 audit[3303]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=3303 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:35.086000 audit[3303]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffe0f4ffa0 a2=0 a3=0 items=0 ppid=3270 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:35.086000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Dec 16 12:45:35.089044 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 12:45:35.089651 kubelet[3270]: I1216 12:45:35.089623 3270 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Dec 16 12:45:35.089651 kubelet[3270]: I1216 12:45:35.089651 3270 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 12:45:35.089717 kubelet[3270]: I1216 12:45:35.089678 3270 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 12:45:35.089745 kubelet[3270]: E1216 12:45:35.089709 3270 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:45:35.090692 kubelet[3270]: E1216 12:45:35.090658 3270 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:45:35.090000 audit[3304]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3304 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:35.090000 audit[3304]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd6b5b8d0 a2=0 a3=0 items=0 ppid=3270 pid=3304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:35.090000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 12:45:35.091000 audit[3305]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3305 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:35.091000 audit[3305]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffac76e60 a2=0 a3=0 items=0 ppid=3270 pid=3305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:35.091000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 12:45:35.092000 audit[3306]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_chain pid=3306 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:35.092000 audit[3306]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe92a2df0 a2=0 a3=0 items=0 ppid=3270 pid=3306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:35.092000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 12:45:35.094932 kubelet[3270]: E1216 12:45:35.094915 3270 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:45:35.095467 kubelet[3270]: I1216 12:45:35.095390 3270 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:45:35.095467 kubelet[3270]: I1216 12:45:35.095403 3270 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:45:35.097443 kubelet[3270]: I1216 12:45:35.097417 3270 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:45:35.097732 kubelet[3270]: E1216 12:45:35.097718 3270 eviction_manager.go:267] "eviction 
manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 12:45:35.097977 kubelet[3270]: E1216 12:45:35.097962 3270 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4515.1.0-a-4ca6cdd03e\" not found" Dec 16 12:45:35.196803 kubelet[3270]: I1216 12:45:35.196727 3270 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:35.197218 kubelet[3270]: E1216 12:45:35.197190 3270 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:35.203019 systemd[1]: Created slice kubepods-burstable-pod636732ac188bd5f3e998635a318841c7.slice - libcontainer container kubepods-burstable-pod636732ac188bd5f3e998635a318841c7.slice. Dec 16 12:45:35.209045 kubelet[3270]: E1216 12:45:35.208767 3270 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-4ca6cdd03e\" not found" node="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:35.212244 systemd[1]: Created slice kubepods-burstable-pod05d7d983eec0103a607f5d846a066a28.slice - libcontainer container kubepods-burstable-pod05d7d983eec0103a607f5d846a066a28.slice. Dec 16 12:45:35.213763 kubelet[3270]: E1216 12:45:35.213747 3270 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-4ca6cdd03e\" not found" node="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:35.228348 systemd[1]: Created slice kubepods-burstable-pod5983d43d303ffc91919510ec80de2669.slice - libcontainer container kubepods-burstable-pod5983d43d303ffc91919510ec80de2669.slice. 
Dec 16 12:45:35.229863 kubelet[3270]: E1216 12:45:35.229734 3270 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-4ca6cdd03e\" not found" node="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:35.248293 kubelet[3270]: I1216 12:45:35.248271 3270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/636732ac188bd5f3e998635a318841c7-ca-certs\") pod \"kube-apiserver-ci-4515.1.0-a-4ca6cdd03e\" (UID: \"636732ac188bd5f3e998635a318841c7\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:35.248684 kubelet[3270]: E1216 12:45:35.248664 3270 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-a-4ca6cdd03e?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="400ms" Dec 16 12:45:35.350132 kubelet[3270]: I1216 12:45:35.349044 3270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/636732ac188bd5f3e998635a318841c7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515.1.0-a-4ca6cdd03e\" (UID: \"636732ac188bd5f3e998635a318841c7\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:35.350132 kubelet[3270]: I1216 12:45:35.349079 3270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/05d7d983eec0103a607f5d846a066a28-flexvolume-dir\") pod \"kube-controller-manager-ci-4515.1.0-a-4ca6cdd03e\" (UID: \"05d7d983eec0103a607f5d846a066a28\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:35.350615 kubelet[3270]: I1216 12:45:35.350392 3270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/05d7d983eec0103a607f5d846a066a28-kubeconfig\") pod \"kube-controller-manager-ci-4515.1.0-a-4ca6cdd03e\" (UID: \"05d7d983eec0103a607f5d846a066a28\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:35.350615 kubelet[3270]: I1216 12:45:35.350423 3270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/05d7d983eec0103a607f5d846a066a28-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515.1.0-a-4ca6cdd03e\" (UID: \"05d7d983eec0103a607f5d846a066a28\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:35.351258 kubelet[3270]: I1216 12:45:35.350806 3270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5983d43d303ffc91919510ec80de2669-kubeconfig\") pod \"kube-scheduler-ci-4515.1.0-a-4ca6cdd03e\" (UID: \"5983d43d303ffc91919510ec80de2669\") " pod="kube-system/kube-scheduler-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:35.351258 kubelet[3270]: I1216 12:45:35.350853 3270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/636732ac188bd5f3e998635a318841c7-k8s-certs\") pod \"kube-apiserver-ci-4515.1.0-a-4ca6cdd03e\" (UID: \"636732ac188bd5f3e998635a318841c7\") " 
pod="kube-system/kube-apiserver-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:35.351258 kubelet[3270]: I1216 12:45:35.350864 3270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/05d7d983eec0103a607f5d846a066a28-ca-certs\") pod \"kube-controller-manager-ci-4515.1.0-a-4ca6cdd03e\" (UID: \"05d7d983eec0103a607f5d846a066a28\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:35.351258 kubelet[3270]: I1216 12:45:35.350877 3270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/05d7d983eec0103a607f5d846a066a28-k8s-certs\") pod \"kube-controller-manager-ci-4515.1.0-a-4ca6cdd03e\" (UID: \"05d7d983eec0103a607f5d846a066a28\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:35.399542 kubelet[3270]: I1216 12:45:35.399528 3270 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:35.399930 kubelet[3270]: E1216 12:45:35.399904 3270 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:35.516129 containerd[2138]: time="2025-12-16T12:45:35.516083400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515.1.0-a-4ca6cdd03e,Uid:636732ac188bd5f3e998635a318841c7,Namespace:kube-system,Attempt:0,}" Dec 16 12:45:35.521998 containerd[2138]: time="2025-12-16T12:45:35.521937229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515.1.0-a-4ca6cdd03e,Uid:05d7d983eec0103a607f5d846a066a28,Namespace:kube-system,Attempt:0,}" Dec 16 12:45:35.536889 containerd[2138]: time="2025-12-16T12:45:35.536842234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515.1.0-a-4ca6cdd03e,Uid:5983d43d303ffc91919510ec80de2669,Namespace:kube-system,Attempt:0,}" Dec 16 12:45:35.649770 kubelet[3270]: E1216 12:45:35.649679 3270 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-a-4ca6cdd03e?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="800ms" Dec 16 12:45:35.801514 kubelet[3270]: I1216 12:45:35.801487 3270 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:35.801816 kubelet[3270]: E1216 12:45:35.801791 3270 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:36.092487 kubelet[3270]: E1216 12:45:36.092414 3270 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-a-4ca6cdd03e&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 12:45:36.141545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1524080414.mount: Deactivated successfully. 
Dec 16 12:45:36.172288 containerd[2138]: time="2025-12-16T12:45:36.172242112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:45:36.182783 containerd[2138]: time="2025-12-16T12:45:36.182690455Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 12:45:36.187830 containerd[2138]: time="2025-12-16T12:45:36.187406388Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:45:36.191057 containerd[2138]: time="2025-12-16T12:45:36.191020706Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:45:36.201084 containerd[2138]: time="2025-12-16T12:45:36.201032974Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 12:45:36.206293 containerd[2138]: time="2025-12-16T12:45:36.206263201Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:45:36.215373 containerd[2138]: time="2025-12-16T12:45:36.215335666Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 12:45:36.222431 containerd[2138]: time="2025-12-16T12:45:36.222369008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:45:36.222985 containerd[2138]: time="2025-12-16T12:45:36.222955929Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 695.720222ms" Dec 16 12:45:36.234532 containerd[2138]: time="2025-12-16T12:45:36.234501191Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 681.524885ms" Dec 16 12:45:36.239204 containerd[2138]: time="2025-12-16T12:45:36.239174219Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 697.341996ms" Dec 16 12:45:36.244416 kubelet[3270]: E1216 12:45:36.244378 3270 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:45:36.291878 containerd[2138]: time="2025-12-16T12:45:36.291840444Z" level=info msg="connecting to shim 32063e8b6c55825198409de86d8c8b23ef3bbe75b8bf07533c25c12802b937d2" address="unix:///run/containerd/s/e4d6c4a0187bdae20a8980f1a9093f95af83226c508b44056dc57d2475700bcf" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:45:36.310352 systemd[1]: Started cri-containerd-32063e8b6c55825198409de86d8c8b23ef3bbe75b8bf07533c25c12802b937d2.scope - libcontainer container 32063e8b6c55825198409de86d8c8b23ef3bbe75b8bf07533c25c12802b937d2. Dec 16 12:45:36.324569 containerd[2138]: time="2025-12-16T12:45:36.324505711Z" level=info msg="connecting to shim 98928f17e8561b6ca2446f3611fbe642e3025dd73596400179530f04582bbab5" address="unix:///run/containerd/s/ba862362df079bf3b1ecb7434033d62156f0a77eae8edb67dde093d86411098c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:45:36.325000 audit: BPF prog-id=107 op=LOAD Dec 16 12:45:36.327290 containerd[2138]: time="2025-12-16T12:45:36.327258797Z" level=info msg="connecting to shim 8086b4783691397333a0c0dfb76ff57f827467207ade7b0f96fbb01d7da8dc6e" address="unix:///run/containerd/s/a9a93e061cf781ffc8a21637fb81844ef37698356181a9ee38915f20e5692b3c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:45:36.326000 audit: BPF prog-id=108 op=LOAD Dec 16 12:45:36.326000 audit[3330]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3318 pid=3330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332303633653862366335353832353139383430396465383664386338 Dec 16 12:45:36.326000 audit: BPF prog-id=108 op=UNLOAD Dec 16 12:45:36.326000 audit[3330]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3318 pid=3330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332303633653862366335353832353139383430396465383664386338 Dec 16 12:45:36.326000 audit: BPF prog-id=109 op=LOAD Dec 16 12:45:36.326000 audit[3330]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3318 pid=3330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332303633653862366335353832353139383430396465383664386338 Dec 16 12:45:36.327000 audit: BPF prog-id=110 op=LOAD Dec 16 12:45:36.327000 audit[3330]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3318 
pid=3330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332303633653862366335353832353139383430396465383664386338 Dec 16 12:45:36.327000 audit: BPF prog-id=110 op=UNLOAD Dec 16 12:45:36.327000 audit[3330]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3318 pid=3330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332303633653862366335353832353139383430396465383664386338 Dec 16 12:45:36.327000 audit: BPF prog-id=109 op=UNLOAD Dec 16 12:45:36.327000 audit[3330]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3318 pid=3330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332303633653862366335353832353139383430396465383664386338 Dec 16 12:45:36.327000 audit: BPF prog-id=111 op=LOAD Dec 16 12:45:36.327000 audit[3330]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3318 pid=3330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332303633653862366335353832353139383430396465383664386338 Dec 16 12:45:36.353371 systemd[1]: Started cri-containerd-8086b4783691397333a0c0dfb76ff57f827467207ade7b0f96fbb01d7da8dc6e.scope - libcontainer container 8086b4783691397333a0c0dfb76ff57f827467207ade7b0f96fbb01d7da8dc6e. Dec 16 12:45:36.358731 systemd[1]: Started cri-containerd-98928f17e8561b6ca2446f3611fbe642e3025dd73596400179530f04582bbab5.scope - libcontainer container 98928f17e8561b6ca2446f3611fbe642e3025dd73596400179530f04582bbab5. 
Dec 16 12:45:36.365000 audit: BPF prog-id=112 op=LOAD Dec 16 12:45:36.366000 audit: BPF prog-id=113 op=LOAD Dec 16 12:45:36.366000 audit[3389]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3369 pid=3389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.366000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830383662343738333639313339373333336130633064666237366666 Dec 16 12:45:36.366000 audit: BPF prog-id=113 op=UNLOAD Dec 16 12:45:36.366000 audit[3389]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3369 pid=3389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.366000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830383662343738333639313339373333336130633064666237366666 Dec 16 12:45:36.366000 audit: BPF prog-id=114 op=LOAD Dec 16 12:45:36.366000 audit[3389]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3369 pid=3389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.366000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830383662343738333639313339373333336130633064666237366666 Dec 16 12:45:36.366000 audit: BPF prog-id=115 op=LOAD Dec 16 12:45:36.366000 audit[3389]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3369 pid=3389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.366000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830383662343738333639313339373333336130633064666237366666 Dec 16 12:45:36.366000 audit: BPF prog-id=115 op=UNLOAD Dec 16 12:45:36.366000 audit[3389]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3369 pid=3389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.366000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830383662343738333639313339373333336130633064666237366666 Dec 16 12:45:36.366000 audit: BPF prog-id=114 op=UNLOAD Dec 16 12:45:36.366000 audit[3389]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3369 pid=3389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.366000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830383662343738333639313339373333336130633064666237366666 Dec 16 12:45:36.366000 audit: BPF prog-id=116 op=LOAD Dec 16 12:45:36.366000 audit[3389]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3369 pid=3389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.366000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830383662343738333639313339373333336130633064666237366666 Dec 16 12:45:36.369466 containerd[2138]: time="2025-12-16T12:45:36.368371879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515.1.0-a-4ca6cdd03e,Uid:636732ac188bd5f3e998635a318841c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"32063e8b6c55825198409de86d8c8b23ef3bbe75b8bf07533c25c12802b937d2\"" Dec 16 12:45:36.373000 audit: BPF prog-id=117 op=LOAD Dec 16 12:45:36.374000 audit: BPF prog-id=118 op=LOAD Dec 16 12:45:36.374000 audit[3391]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3362 pid=3391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.374000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938393238663137653835363162366361323434366633363131666265 Dec 16 12:45:36.374000 audit: BPF prog-id=118 op=UNLOAD Dec 16 12:45:36.374000 audit[3391]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3362 pid=3391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.374000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938393238663137653835363162366361323434366633363131666265 Dec 16 12:45:36.374000 audit: BPF prog-id=119 op=LOAD Dec 16 12:45:36.374000 audit[3391]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3362 pid=3391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.374000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938393238663137653835363162366361323434366633363131666265 Dec 16 12:45:36.374000 audit: BPF prog-id=120 op=LOAD Dec 16 12:45:36.374000 audit[3391]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3362 pid=3391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.374000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938393238663137653835363162366361323434366633363131666265 Dec 16 12:45:36.374000 audit: BPF prog-id=120 op=UNLOAD Dec 16 12:45:36.374000 audit[3391]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3362 pid=3391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.374000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938393238663137653835363162366361323434366633363131666265 Dec 16 12:45:36.374000 audit: BPF prog-id=119 op=UNLOAD Dec 16 12:45:36.374000 audit[3391]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3362 pid=3391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.374000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938393238663137653835363162366361323434366633363131666265 Dec 16 12:45:36.374000 audit: BPF prog-id=121 op=LOAD Dec 16 12:45:36.374000 audit[3391]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3362 pid=3391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.374000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938393238663137653835363162366361323434366633363131666265 Dec 16 12:45:36.381451 containerd[2138]: time="2025-12-16T12:45:36.381394279Z" level=info msg="CreateContainer within sandbox \"32063e8b6c55825198409de86d8c8b23ef3bbe75b8bf07533c25c12802b937d2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 12:45:36.402054 kubelet[3270]: E1216 12:45:36.402017 3270 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 12:45:36.450493 kubelet[3270]: E1216 12:45:36.450444 3270 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-a-4ca6cdd03e?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="1.6s" Dec 16 12:45:36.465069 kubelet[3270]: E1216 12:45:36.465024 3270 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:45:36.603488 kubelet[3270]: I1216 12:45:36.603373 3270 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:36.604210 kubelet[3270]: E1216 12:45:36.604176 3270 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:37.031605 kubelet[3270]: E1216 12:45:37.031537 3270 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 12:45:37.176848 containerd[2138]: time="2025-12-16T12:45:37.176802713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515.1.0-a-4ca6cdd03e,Uid:05d7d983eec0103a607f5d846a066a28,Namespace:kube-system,Attempt:0,} returns sandbox id \"8086b4783691397333a0c0dfb76ff57f827467207ade7b0f96fbb01d7da8dc6e\"" Dec 16 12:45:37.231815 containerd[2138]: time="2025-12-16T12:45:37.231701442Z" level=info msg="CreateContainer within sandbox \"8086b4783691397333a0c0dfb76ff57f827467207ade7b0f96fbb01d7da8dc6e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 12:45:37.232845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1481595421.mount: Deactivated successfully. 
Dec 16 12:45:37.235718 containerd[2138]: time="2025-12-16T12:45:37.234788428Z" level=info msg="Container 5bd5e68abfcf59d53fa7b6d95f0ab93f5bf87aea328c5bcf52e581901deb6d09: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:45:37.236590 containerd[2138]: time="2025-12-16T12:45:37.236561512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515.1.0-a-4ca6cdd03e,Uid:5983d43d303ffc91919510ec80de2669,Namespace:kube-system,Attempt:0,} returns sandbox id \"98928f17e8561b6ca2446f3611fbe642e3025dd73596400179530f04582bbab5\"" Dec 16 12:45:37.243832 containerd[2138]: time="2025-12-16T12:45:37.243803763Z" level=info msg="CreateContainer within sandbox \"98928f17e8561b6ca2446f3611fbe642e3025dd73596400179530f04582bbab5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 12:45:37.259069 containerd[2138]: time="2025-12-16T12:45:37.259037064Z" level=info msg="CreateContainer within sandbox \"32063e8b6c55825198409de86d8c8b23ef3bbe75b8bf07533c25c12802b937d2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5bd5e68abfcf59d53fa7b6d95f0ab93f5bf87aea328c5bcf52e581901deb6d09\"" Dec 16 12:45:37.259656 containerd[2138]: time="2025-12-16T12:45:37.259559975Z" level=info msg="StartContainer for \"5bd5e68abfcf59d53fa7b6d95f0ab93f5bf87aea328c5bcf52e581901deb6d09\"" Dec 16 12:45:37.260549 containerd[2138]: time="2025-12-16T12:45:37.260527891Z" level=info msg="connecting to shim 5bd5e68abfcf59d53fa7b6d95f0ab93f5bf87aea328c5bcf52e581901deb6d09" address="unix:///run/containerd/s/e4d6c4a0187bdae20a8980f1a9093f95af83226c508b44056dc57d2475700bcf" protocol=ttrpc version=3 Dec 16 12:45:37.274846 containerd[2138]: time="2025-12-16T12:45:37.274822588Z" level=info msg="Container 591e0586d22508f704448f4b0dcb5046d7a9ae15ba1f39cc24b899e977716388: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:45:37.279409 systemd[1]: Started cri-containerd-5bd5e68abfcf59d53fa7b6d95f0ab93f5bf87aea328c5bcf52e581901deb6d09.scope - libcontainer container 5bd5e68abfcf59d53fa7b6d95f0ab93f5bf87aea328c5bcf52e581901deb6d09. 
Dec 16 12:45:37.287000 audit: BPF prog-id=122 op=LOAD Dec 16 12:45:37.288000 audit: BPF prog-id=123 op=LOAD Dec 16 12:45:37.288000 audit[3454]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3318 pid=3454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:37.288000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562643565363861626663663539643533666137623664393566306162 Dec 16 12:45:37.288000 audit: BPF prog-id=123 op=UNLOAD Dec 16 12:45:37.288000 audit[3454]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3318 pid=3454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:37.288000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562643565363861626663663539643533666137623664393566306162 Dec 16 12:45:37.288000 audit: BPF prog-id=124 op=LOAD Dec 16 12:45:37.288000 audit[3454]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3318 pid=3454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:37.288000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562643565363861626663663539643533666137623664393566306162 Dec 16 12:45:37.288000 audit: BPF prog-id=125 op=LOAD Dec 16 12:45:37.288000 audit[3454]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3318 pid=3454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:37.288000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562643565363861626663663539643533666137623664393566306162 Dec 16 12:45:37.288000 audit: BPF prog-id=125 op=UNLOAD Dec 16 12:45:37.288000 audit[3454]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3318 pid=3454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:37.288000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562643565363861626663663539643533666137623664393566306162 Dec 16 12:45:37.288000 audit: BPF prog-id=124 op=UNLOAD Dec 16 12:45:37.288000 audit[3454]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3318 pid=3454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:37.288000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562643565363861626663663539643533666137623664393566306162 Dec 16 12:45:37.288000 audit: BPF prog-id=126 op=LOAD Dec 16 12:45:37.288000 audit[3454]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3318 pid=3454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:37.288000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562643565363861626663663539643533666137623664393566306162 Dec 16 12:45:37.295528 containerd[2138]: time="2025-12-16T12:45:37.295495208Z" level=info msg="Container 2853be4a3e6c2491e05783b7e1e25e423fab70c70720430f79de9ae71955c0c5: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:45:37.306273 containerd[2138]: time="2025-12-16T12:45:37.306240825Z" level=info msg="CreateContainer within sandbox \"8086b4783691397333a0c0dfb76ff57f827467207ade7b0f96fbb01d7da8dc6e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"591e0586d22508f704448f4b0dcb5046d7a9ae15ba1f39cc24b899e977716388\"" Dec 16 12:45:37.306959 containerd[2138]: time="2025-12-16T12:45:37.306916181Z" level=info msg="StartContainer for \"591e0586d22508f704448f4b0dcb5046d7a9ae15ba1f39cc24b899e977716388\"" Dec 16 12:45:37.309566 containerd[2138]: time="2025-12-16T12:45:37.309536961Z" level=info msg="connecting to shim 591e0586d22508f704448f4b0dcb5046d7a9ae15ba1f39cc24b899e977716388" address="unix:///run/containerd/s/a9a93e061cf781ffc8a21637fb81844ef37698356181a9ee38915f20e5692b3c" protocol=ttrpc version=3 Dec 16 12:45:37.323976 containerd[2138]: time="2025-12-16T12:45:37.323943934Z" level=info msg="CreateContainer within sandbox \"98928f17e8561b6ca2446f3611fbe642e3025dd73596400179530f04582bbab5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2853be4a3e6c2491e05783b7e1e25e423fab70c70720430f79de9ae71955c0c5\"" Dec 16 12:45:37.325400 containerd[2138]: time="2025-12-16T12:45:37.325377359Z" level=info msg="StartContainer for \"2853be4a3e6c2491e05783b7e1e25e423fab70c70720430f79de9ae71955c0c5\"" Dec 16 12:45:37.326064 containerd[2138]: time="2025-12-16T12:45:37.326037955Z" level=info msg="connecting to shim 2853be4a3e6c2491e05783b7e1e25e423fab70c70720430f79de9ae71955c0c5" address="unix:///run/containerd/s/ba862362df079bf3b1ecb7434033d62156f0a77eae8edb67dde093d86411098c" protocol=ttrpc version=3 Dec 16 12:45:37.328914 systemd[1]: Started cri-containerd-591e0586d22508f704448f4b0dcb5046d7a9ae15ba1f39cc24b899e977716388.scope - libcontainer container 591e0586d22508f704448f4b0dcb5046d7a9ae15ba1f39cc24b899e977716388. 
Dec 16 12:45:37.331442 containerd[2138]: time="2025-12-16T12:45:37.331384951Z" level=info msg="StartContainer for \"5bd5e68abfcf59d53fa7b6d95f0ab93f5bf87aea328c5bcf52e581901deb6d09\" returns successfully" Dec 16 12:45:37.348628 systemd[1]: Started cri-containerd-2853be4a3e6c2491e05783b7e1e25e423fab70c70720430f79de9ae71955c0c5.scope - libcontainer container 2853be4a3e6c2491e05783b7e1e25e423fab70c70720430f79de9ae71955c0c5. Dec 16 12:45:37.359000 audit: BPF prog-id=127 op=LOAD Dec 16 12:45:37.360000 audit: BPF prog-id=128 op=LOAD Dec 16 12:45:37.360000 audit[3479]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe180 a2=98 a3=0 items=0 ppid=3369 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:37.360000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539316530353836643232353038663730343434386634623064636235 Dec 16 12:45:37.360000 audit: BPF prog-id=128 op=UNLOAD Dec 16 12:45:37.360000 audit[3479]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3369 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:37.360000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539316530353836643232353038663730343434386634623064636235 Dec 16 12:45:37.361000 audit: BPF prog-id=129 op=LOAD Dec 16 12:45:37.361000 audit[3479]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=3369 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:37.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539316530353836643232353038663730343434386634623064636235 Dec 16 12:45:37.361000 audit: BPF prog-id=130 op=LOAD Dec 16 12:45:37.361000 audit[3479]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=3369 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:37.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539316530353836643232353038663730343434386634623064636235 Dec 16 12:45:37.361000 audit: BPF prog-id=130 op=UNLOAD Dec 16 12:45:37.361000 audit[3479]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3369 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:37.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539316530353836643232353038663730343434386634623064636235 Dec 16 12:45:37.361000 audit: BPF prog-id=129 op=UNLOAD Dec 16 12:45:37.361000 audit[3479]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3369 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:37.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539316530353836643232353038663730343434386634623064636235 Dec 16 12:45:37.361000 audit: BPF prog-id=131 op=LOAD Dec 16 12:45:37.361000 audit[3479]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=3369 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:37.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539316530353836643232353038663730343434386634623064636235 Dec 16 12:45:37.364000 audit: BPF prog-id=132 op=LOAD Dec 16 12:45:37.364000 audit: BPF prog-id=133 op=LOAD Dec 16 12:45:37.364000 audit[3497]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=3362 pid=3497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:37.364000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238353362653461336536633234393165303537383362376531653235 Dec 16 12:45:37.364000 audit: BPF prog-id=133 op=UNLOAD Dec 16 12:45:37.364000 audit[3497]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3362 pid=3497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:37.364000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238353362653461336536633234393165303537383362376531653235 Dec 16 12:45:37.365000 audit: BPF prog-id=134 op=LOAD Dec 16 12:45:37.365000 audit[3497]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3362 pid=3497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:37.365000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238353362653461336536633234393165303537383362376531653235 Dec 16 12:45:37.365000 audit: BPF prog-id=135 op=LOAD Dec 16 12:45:37.365000 audit[3497]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=3362 pid=3497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:37.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238353362653461336536633234393165303537383362376531653235 Dec 16 12:45:37.365000 audit: BPF prog-id=135 op=UNLOAD Dec 16 12:45:37.365000 audit[3497]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3362 pid=3497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:37.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238353362653461336536633234393165303537383362376531653235 Dec 16 12:45:37.365000 audit: BPF prog-id=134 op=UNLOAD Dec 16 12:45:37.365000 audit[3497]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3362 pid=3497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:37.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238353362653461336536633234393165303537383362376531653235 Dec 16 12:45:37.365000 audit: BPF prog-id=136 op=LOAD Dec 16 12:45:37.365000 audit[3497]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=3362 pid=3497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:37.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238353362653461336536633234393165303537383362376531653235 Dec 16 12:45:37.400374 containerd[2138]: time="2025-12-16T12:45:37.400190750Z" level=info msg="StartContainer for \"2853be4a3e6c2491e05783b7e1e25e423fab70c70720430f79de9ae71955c0c5\" returns successfully" Dec 16 12:45:37.405007 containerd[2138]: time="2025-12-16T12:45:37.404942577Z" level=info msg="StartContainer for \"591e0586d22508f704448f4b0dcb5046d7a9ae15ba1f39cc24b899e977716388\" returns successfully" Dec 16 12:45:38.105583 kubelet[3270]: E1216 12:45:38.105181 3270 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the 
cluster" err="node \"ci-4515.1.0-a-4ca6cdd03e\" not found" node="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:38.110933 kubelet[3270]: E1216 12:45:38.110894 3270 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-4ca6cdd03e\" not found" node="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:38.115746 kubelet[3270]: E1216 12:45:38.115709 3270 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-4ca6cdd03e\" not found" node="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:38.206628 kubelet[3270]: I1216 12:45:38.206599 3270 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:38.846180 kubelet[3270]: E1216 12:45:38.846145 3270 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4515.1.0-a-4ca6cdd03e\" not found" node="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:39.013079 kubelet[3270]: I1216 12:45:39.013037 3270 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:39.037057 kubelet[3270]: I1216 12:45:39.037030 3270 apiserver.go:52] "Watching apiserver" Dec 16 12:45:39.047835 kubelet[3270]: I1216 12:45:39.047779 3270 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:39.047835 kubelet[3270]: I1216 12:45:39.047787 3270 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 12:45:39.065217 kubelet[3270]: E1216 12:45:39.065149 3270 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515.1.0-a-4ca6cdd03e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:39.065452 kubelet[3270]: I1216 12:45:39.065386 3270 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:39.067544 kubelet[3270]: E1216 12:45:39.067513 3270 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4515.1.0-a-4ca6cdd03e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:39.067544 kubelet[3270]: I1216 12:45:39.067533 3270 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:39.068858 kubelet[3270]: E1216 12:45:39.068811 3270 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515.1.0-a-4ca6cdd03e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:39.116288 kubelet[3270]: I1216 12:45:39.115778 3270 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:39.116288 kubelet[3270]: I1216 12:45:39.115853 3270 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:39.117237 kubelet[3270]: E1216 12:45:39.117176 3270 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515.1.0-a-4ca6cdd03e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:39.118073 kubelet[3270]: E1216 
12:45:39.118051 3270 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515.1.0-a-4ca6cdd03e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:39.434737 kubelet[3270]: I1216 12:45:39.434538 3270 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:39.436095 kubelet[3270]: E1216 12:45:39.436066 3270 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4515.1.0-a-4ca6cdd03e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:40.117289 kubelet[3270]: I1216 12:45:40.117232 3270 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:40.119616 kubelet[3270]: I1216 12:45:40.117863 3270 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:40.130752 kubelet[3270]: I1216 12:45:40.130728 3270 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:45:40.140896 kubelet[3270]: I1216 12:45:40.140875 3270 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:45:41.465917 systemd[1]: Reload requested from client PID 3554 ('systemctl') (unit session-9.scope)... Dec 16 12:45:41.465934 systemd[1]: Reloading... Dec 16 12:45:41.551244 zram_generator::config[3604]: No configuration found. Dec 16 12:45:41.713019 systemd[1]: Reloading finished in 246 ms. Dec 16 12:45:41.747527 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:45:41.761850 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 12:45:41.762090 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:45:41.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:41.765158 kernel: kauditd_printk_skb: 201 callbacks suppressed Dec 16 12:45:41.765219 kernel: audit: type=1131 audit(1765889141.760:419): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:41.768247 systemd[1]: kubelet.service: Consumed 793ms CPU time, 121.4M memory peak. Dec 16 12:45:41.771227 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
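
The SERVICE_STOP/SERVICE_START and BPF audit records around this kubelet restart carry their event time as fractional Unix epoch seconds plus a per-boot serial, e.g. audit(1765889141.760:419), and the kernel's type=1131 line at 12:45:41.765 re-prints the same stamp. A minimal sketch (Python 3, standard library only, names are illustrative) for converting such a stamp back into the wall-clock time the journal shows alongside it:

    from datetime import datetime, timezone

    def decode_audit_stamp(stamp: str):
        """Split 'audit(1765889141.760:419)' into (UTC timestamp, serial number)."""
        inner = stamp[stamp.index("(") + 1 : stamp.index(")")]
        epoch, serial = inner.split(":")
        return datetime.fromtimestamp(float(epoch), tz=timezone.utc), int(serial)

    when, serial = decode_audit_stamp("audit(1765889141.760:419)")
    print(when.isoformat(), serial)  # 2025-12-16T12:45:41.760000+00:00 419
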
Dec 16 12:45:41.783000 audit: BPF prog-id=137 op=LOAD Dec 16 12:45:41.789288 kernel: audit: type=1334 audit(1765889141.783:420): prog-id=137 op=LOAD Dec 16 12:45:41.789000 audit: BPF prog-id=94 op=UNLOAD Dec 16 12:45:41.789000 audit: BPF prog-id=138 op=LOAD Dec 16 12:45:41.802518 kernel: audit: type=1334 audit(1765889141.789:421): prog-id=94 op=UNLOAD Dec 16 12:45:41.802573 kernel: audit: type=1334 audit(1765889141.789:422): prog-id=138 op=LOAD Dec 16 12:45:41.789000 audit: BPF prog-id=139 op=LOAD Dec 16 12:45:41.806600 kernel: audit: type=1334 audit(1765889141.789:423): prog-id=139 op=LOAD Dec 16 12:45:41.789000 audit: BPF prog-id=95 op=UNLOAD Dec 16 12:45:41.811000 kernel: audit: type=1334 audit(1765889141.789:424): prog-id=95 op=UNLOAD Dec 16 12:45:41.789000 audit: BPF prog-id=96 op=UNLOAD Dec 16 12:45:41.815518 kernel: audit: type=1334 audit(1765889141.789:425): prog-id=96 op=UNLOAD Dec 16 12:45:41.789000 audit: BPF prog-id=140 op=LOAD Dec 16 12:45:41.819883 kernel: audit: type=1334 audit(1765889141.789:426): prog-id=140 op=LOAD Dec 16 12:45:41.789000 audit: BPF prog-id=97 op=UNLOAD Dec 16 12:45:41.824384 kernel: audit: type=1334 audit(1765889141.789:427): prog-id=97 op=UNLOAD Dec 16 12:45:41.794000 audit: BPF prog-id=141 op=LOAD Dec 16 12:45:41.828739 kernel: audit: type=1334 audit(1765889141.794:428): prog-id=141 op=LOAD Dec 16 12:45:41.794000 audit: BPF prog-id=88 op=UNLOAD Dec 16 12:45:41.794000 audit: BPF prog-id=142 op=LOAD Dec 16 12:45:41.794000 audit: BPF prog-id=143 op=LOAD Dec 16 12:45:41.794000 audit: BPF prog-id=89 op=UNLOAD Dec 16 12:45:41.794000 audit: BPF prog-id=90 op=UNLOAD Dec 16 12:45:41.801000 audit: BPF prog-id=144 op=LOAD Dec 16 12:45:41.801000 audit: BPF prog-id=145 op=LOAD Dec 16 12:45:41.801000 audit: BPF prog-id=98 op=UNLOAD Dec 16 12:45:41.801000 audit: BPF prog-id=99 op=UNLOAD Dec 16 12:45:41.809000 audit: BPF prog-id=146 op=LOAD Dec 16 12:45:41.809000 audit: BPF prog-id=101 op=UNLOAD Dec 16 12:45:41.814000 audit: BPF prog-id=147 op=LOAD Dec 16 12:45:41.814000 audit: BPF prog-id=148 op=LOAD Dec 16 12:45:41.814000 audit: BPF prog-id=102 op=UNLOAD Dec 16 12:45:41.814000 audit: BPF prog-id=103 op=UNLOAD Dec 16 12:45:41.823000 audit: BPF prog-id=149 op=LOAD Dec 16 12:45:41.823000 audit: BPF prog-id=87 op=UNLOAD Dec 16 12:45:41.827000 audit: BPF prog-id=150 op=LOAD Dec 16 12:45:41.827000 audit: BPF prog-id=104 op=UNLOAD Dec 16 12:45:41.827000 audit: BPF prog-id=151 op=LOAD Dec 16 12:45:41.827000 audit: BPF prog-id=152 op=LOAD Dec 16 12:45:41.827000 audit: BPF prog-id=105 op=UNLOAD Dec 16 12:45:41.827000 audit: BPF prog-id=106 op=UNLOAD Dec 16 12:45:41.828000 audit: BPF prog-id=153 op=LOAD Dec 16 12:45:41.828000 audit: BPF prog-id=100 op=UNLOAD Dec 16 12:45:41.828000 audit: BPF prog-id=154 op=LOAD Dec 16 12:45:41.828000 audit: BPF prog-id=91 op=UNLOAD Dec 16 12:45:41.828000 audit: BPF prog-id=155 op=LOAD Dec 16 12:45:41.828000 audit: BPF prog-id=156 op=LOAD Dec 16 12:45:41.828000 audit: BPF prog-id=92 op=UNLOAD Dec 16 12:45:41.828000 audit: BPF prog-id=93 op=UNLOAD Dec 16 12:45:41.917828 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:45:41.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:45:41.924429 (kubelet)[3668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:45:41.953184 kubelet[3668]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:45:41.953184 kubelet[3668]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:45:41.953184 kubelet[3668]: I1216 12:45:41.953254 3668 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:45:41.958328 kubelet[3668]: I1216 12:45:41.958197 3668 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 12:45:41.958328 kubelet[3668]: I1216 12:45:41.958324 3668 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:45:41.958429 kubelet[3668]: I1216 12:45:41.958353 3668 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 12:45:41.958429 kubelet[3668]: I1216 12:45:41.958359 3668 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 12:45:41.958558 kubelet[3668]: I1216 12:45:41.958539 3668 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 12:45:41.959512 kubelet[3668]: I1216 12:45:41.959489 3668 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 12:45:41.961084 kubelet[3668]: I1216 12:45:41.961059 3668 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:45:41.965236 kubelet[3668]: I1216 12:45:41.964526 3668 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:45:41.966807 kubelet[3668]: I1216 12:45:41.966790 3668 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 16 12:45:41.966953 kubelet[3668]: I1216 12:45:41.966933 3668 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:45:41.967056 kubelet[3668]: I1216 12:45:41.966951 3668 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515.1.0-a-4ca6cdd03e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:45:41.967056 kubelet[3668]: I1216 12:45:41.967055 3668 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:45:41.967136 kubelet[3668]: I1216 12:45:41.967062 3668 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 12:45:41.967136 kubelet[3668]: I1216 12:45:41.967079 3668 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 12:45:41.967651 kubelet[3668]: I1216 12:45:41.967635 3668 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:45:41.967754 kubelet[3668]: I1216 12:45:41.967741 3668 kubelet.go:475] "Attempting to sync node with API server" Dec 16 12:45:41.967754 kubelet[3668]: I1216 12:45:41.967753 3668 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:45:41.967795 kubelet[3668]: I1216 12:45:41.967770 3668 kubelet.go:387] "Adding apiserver pod source" Dec 16 12:45:41.967795 kubelet[3668]: I1216 12:45:41.967782 3668 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:45:41.970531 kubelet[3668]: I1216 12:45:41.970485 3668 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 12:45:41.971063 kubelet[3668]: I1216 12:45:41.971050 3668 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 12:45:41.971225 kubelet[3668]: I1216 12:45:41.971145 3668 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 
12:45:41.973261 kubelet[3668]: I1216 12:45:41.973247 3668 server.go:1262] "Started kubelet" Dec 16 12:45:41.975124 kubelet[3668]: I1216 12:45:41.975104 3668 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:45:41.976036 kubelet[3668]: I1216 12:45:41.975174 3668 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:45:41.976728 kubelet[3668]: I1216 12:45:41.976568 3668 server.go:310] "Adding debug handlers to kubelet server" Dec 16 12:45:41.979088 kubelet[3668]: I1216 12:45:41.979063 3668 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 12:45:41.979153 kubelet[3668]: E1216 12:45:41.979146 3668 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-4ca6cdd03e\" not found" Dec 16 12:45:41.979645 kubelet[3668]: I1216 12:45:41.975215 3668 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:45:41.979645 kubelet[3668]: I1216 12:45:41.979467 3668 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 12:45:41.979645 kubelet[3668]: I1216 12:45:41.979608 3668 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:45:41.979731 kubelet[3668]: I1216 12:45:41.979652 3668 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 12:45:41.979751 kubelet[3668]: I1216 12:45:41.979734 3668 reconciler.go:29] "Reconciler: start to sync state" Dec 16 12:45:41.981820 kubelet[3668]: I1216 12:45:41.981780 3668 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:45:41.984906 kubelet[3668]: I1216 12:45:41.984887 3668 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:45:41.985104 kubelet[3668]: I1216 12:45:41.985086 3668 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:45:41.999703 kubelet[3668]: I1216 12:45:41.999579 3668 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 16 12:45:42.001373 kubelet[3668]: I1216 12:45:42.001265 3668 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Dec 16 12:45:42.001556 kubelet[3668]: I1216 12:45:42.001538 3668 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 12:45:42.001746 kubelet[3668]: I1216 12:45:42.001655 3668 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 12:45:42.001811 kubelet[3668]: E1216 12:45:42.001693 3668 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:45:42.006671 kubelet[3668]: I1216 12:45:42.006642 3668 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:45:42.010461 kubelet[3668]: E1216 12:45:42.010353 3668 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:45:42.050318 kubelet[3668]: I1216 12:45:42.050198 3668 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:45:42.051130 kubelet[3668]: I1216 12:45:42.050363 3668 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:45:42.051130 kubelet[3668]: I1216 12:45:42.050386 3668 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:45:42.051130 kubelet[3668]: I1216 12:45:42.050488 3668 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 12:45:42.051130 kubelet[3668]: I1216 12:45:42.050495 3668 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 12:45:42.051130 kubelet[3668]: I1216 12:45:42.050508 3668 policy_none.go:49] "None policy: Start" Dec 16 12:45:42.051130 kubelet[3668]: I1216 12:45:42.050515 3668 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 12:45:42.051130 kubelet[3668]: I1216 12:45:42.050522 3668 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 12:45:42.051130 kubelet[3668]: I1216 12:45:42.050588 3668 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Dec 16 12:45:42.051130 kubelet[3668]: I1216 12:45:42.050594 3668 policy_none.go:47] "Start" Dec 16 12:45:42.057567 kubelet[3668]: E1216 12:45:42.057549 3668 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:45:42.058458 kubelet[3668]: I1216 12:45:42.058443 3668 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:45:42.058682 kubelet[3668]: I1216 12:45:42.058655 3668 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:45:42.059013 kubelet[3668]: I1216 12:45:42.058999 3668 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:45:42.061087 kubelet[3668]: E1216 12:45:42.060971 3668 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 12:45:42.102465 kubelet[3668]: I1216 12:45:42.102437 3668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:42.102793 kubelet[3668]: I1216 12:45:42.102770 3668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:42.103116 kubelet[3668]: I1216 12:45:42.103018 3668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:42.117811 kubelet[3668]: I1216 12:45:42.117778 3668 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:45:42.118085 kubelet[3668]: I1216 12:45:42.117988 3668 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:45:42.118139 kubelet[3668]: E1216 12:45:42.118104 3668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515.1.0-a-4ca6cdd03e\" already exists" pod="kube-system/kube-apiserver-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:42.118250 kubelet[3668]: E1216 12:45:42.118053 3668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515.1.0-a-4ca6cdd03e\" already exists" pod="kube-system/kube-scheduler-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:42.118250 kubelet[3668]: I1216 12:45:42.118021 3668 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:45:42.164287 kubelet[3668]: I1216 12:45:42.164239 3668 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:42.181311 kubelet[3668]: I1216 12:45:42.181134 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/636732ac188bd5f3e998635a318841c7-ca-certs\") pod \"kube-apiserver-ci-4515.1.0-a-4ca6cdd03e\" (UID: \"636732ac188bd5f3e998635a318841c7\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:42.181311 kubelet[3668]: I1216 12:45:42.181167 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/636732ac188bd5f3e998635a318841c7-k8s-certs\") pod \"kube-apiserver-ci-4515.1.0-a-4ca6cdd03e\" (UID: \"636732ac188bd5f3e998635a318841c7\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:42.181311 kubelet[3668]: I1216 12:45:42.181181 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/636732ac188bd5f3e998635a318841c7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515.1.0-a-4ca6cdd03e\" (UID: \"636732ac188bd5f3e998635a318841c7\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:42.181311 kubelet[3668]: I1216 12:45:42.181194 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/05d7d983eec0103a607f5d846a066a28-ca-certs\") pod \"kube-controller-manager-ci-4515.1.0-a-4ca6cdd03e\" (UID: 
\"05d7d983eec0103a607f5d846a066a28\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:42.181311 kubelet[3668]: I1216 12:45:42.181221 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/05d7d983eec0103a607f5d846a066a28-kubeconfig\") pod \"kube-controller-manager-ci-4515.1.0-a-4ca6cdd03e\" (UID: \"05d7d983eec0103a607f5d846a066a28\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:42.181472 kubelet[3668]: I1216 12:45:42.181245 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/05d7d983eec0103a607f5d846a066a28-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515.1.0-a-4ca6cdd03e\" (UID: \"05d7d983eec0103a607f5d846a066a28\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:42.181472 kubelet[3668]: I1216 12:45:42.181254 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5983d43d303ffc91919510ec80de2669-kubeconfig\") pod \"kube-scheduler-ci-4515.1.0-a-4ca6cdd03e\" (UID: \"5983d43d303ffc91919510ec80de2669\") " pod="kube-system/kube-scheduler-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:42.181472 kubelet[3668]: I1216 12:45:42.181296 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/05d7d983eec0103a607f5d846a066a28-flexvolume-dir\") pod \"kube-controller-manager-ci-4515.1.0-a-4ca6cdd03e\" (UID: \"05d7d983eec0103a607f5d846a066a28\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:42.181472 kubelet[3668]: I1216 12:45:42.181318 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/05d7d983eec0103a607f5d846a066a28-k8s-certs\") pod \"kube-controller-manager-ci-4515.1.0-a-4ca6cdd03e\" (UID: \"05d7d983eec0103a607f5d846a066a28\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:42.187970 kubelet[3668]: I1216 12:45:42.187633 3668 kubelet_node_status.go:124] "Node was previously registered" node="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:42.187970 kubelet[3668]: I1216 12:45:42.187707 3668 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:42.971138 kubelet[3668]: I1216 12:45:42.970036 3668 apiserver.go:52] "Watching apiserver" Dec 16 12:45:42.980658 kubelet[3668]: I1216 12:45:42.980619 3668 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 12:45:43.034223 kubelet[3668]: I1216 12:45:43.034165 3668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:43.045699 kubelet[3668]: I1216 12:45:43.045523 3668 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:45:43.045699 kubelet[3668]: E1216 12:45:43.045571 3668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515.1.0-a-4ca6cdd03e\" already exists" pod="kube-system/kube-scheduler-ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:45:43.069392 kubelet[3668]: 
I1216 12:45:43.069336 3668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4515.1.0-a-4ca6cdd03e" podStartSLOduration=3.069323734 podStartE2EDuration="3.069323734s" podCreationTimestamp="2025-12-16 12:45:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:45:43.057064304 +0000 UTC m=+1.130200329" watchObservedRunningTime="2025-12-16 12:45:43.069323734 +0000 UTC m=+1.142459759" Dec 16 12:45:43.070125 kubelet[3668]: I1216 12:45:43.070093 3668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-4ca6cdd03e" podStartSLOduration=1.070085548 podStartE2EDuration="1.070085548s" podCreationTimestamp="2025-12-16 12:45:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:45:43.069305381 +0000 UTC m=+1.142441430" watchObservedRunningTime="2025-12-16 12:45:43.070085548 +0000 UTC m=+1.143221581" Dec 16 12:45:43.078370 kubelet[3668]: I1216 12:45:43.078244 3668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4515.1.0-a-4ca6cdd03e" podStartSLOduration=3.078234033 podStartE2EDuration="3.078234033s" podCreationTimestamp="2025-12-16 12:45:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:45:43.078089701 +0000 UTC m=+1.151225726" watchObservedRunningTime="2025-12-16 12:45:43.078234033 +0000 UTC m=+1.151370058" Dec 16 12:45:47.424775 kubelet[3668]: I1216 12:45:47.424739 3668 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 12:45:47.425362 kubelet[3668]: I1216 12:45:47.425181 3668 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 12:45:47.425410 containerd[2138]: time="2025-12-16T12:45:47.425006692Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 12:45:48.173586 systemd[1]: Created slice kubepods-besteffort-pod7f5454c2_0e4f_40f5_8f1e_2b319a6de6e2.slice - libcontainer container kubepods-besteffort-pod7f5454c2_0e4f_40f5_8f1e_2b319a6de6e2.slice. 
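
The pod_startup_latency_tracker entries above report podStartSLOduration together with the raw timestamps it is derived from; for these static pods no image pull is recorded (firstStartedPulling/lastFinishedPulling are the zero time), so the SLO duration coincides with the end-to-end duration, i.e. observedRunningTime minus podCreationTimestamp. A quick check of the kube-apiserver figure, with the values copied from the log (Python 3, microseconds rounded):

    from datetime import datetime, timezone

    created = datetime(2025, 12, 16, 12, 45, 40, tzinfo=timezone.utc)
    observed_running = datetime(2025, 12, 16, 12, 45, 43, 69324, tzinfo=timezone.utc)

    # Matches podStartSLOduration=3.069323734s to microsecond precision
    print((observed_running - created).total_seconds())  # 3.069324
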
Dec 16 12:45:48.211944 kubelet[3668]: I1216 12:45:48.211854 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f5454c2-0e4f-40f5-8f1e-2b319a6de6e2-lib-modules\") pod \"kube-proxy-qzx9j\" (UID: \"7f5454c2-0e4f-40f5-8f1e-2b319a6de6e2\") " pod="kube-system/kube-proxy-qzx9j" Dec 16 12:45:48.211944 kubelet[3668]: I1216 12:45:48.211881 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7f5454c2-0e4f-40f5-8f1e-2b319a6de6e2-kube-proxy\") pod \"kube-proxy-qzx9j\" (UID: \"7f5454c2-0e4f-40f5-8f1e-2b319a6de6e2\") " pod="kube-system/kube-proxy-qzx9j" Dec 16 12:45:48.211944 kubelet[3668]: I1216 12:45:48.211892 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f5454c2-0e4f-40f5-8f1e-2b319a6de6e2-xtables-lock\") pod \"kube-proxy-qzx9j\" (UID: \"7f5454c2-0e4f-40f5-8f1e-2b319a6de6e2\") " pod="kube-system/kube-proxy-qzx9j" Dec 16 12:45:48.211944 kubelet[3668]: I1216 12:45:48.211902 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5vgx\" (UniqueName: \"kubernetes.io/projected/7f5454c2-0e4f-40f5-8f1e-2b319a6de6e2-kube-api-access-b5vgx\") pod \"kube-proxy-qzx9j\" (UID: \"7f5454c2-0e4f-40f5-8f1e-2b319a6de6e2\") " pod="kube-system/kube-proxy-qzx9j" Dec 16 12:45:48.317007 kubelet[3668]: E1216 12:45:48.316797 3668 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 16 12:45:48.317007 kubelet[3668]: E1216 12:45:48.316823 3668 projected.go:196] Error preparing data for projected volume kube-api-access-b5vgx for pod kube-system/kube-proxy-qzx9j: configmap "kube-root-ca.crt" not found Dec 16 12:45:48.317007 kubelet[3668]: E1216 12:45:48.316872 3668 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f5454c2-0e4f-40f5-8f1e-2b319a6de6e2-kube-api-access-b5vgx podName:7f5454c2-0e4f-40f5-8f1e-2b319a6de6e2 nodeName:}" failed. No retries permitted until 2025-12-16 12:45:48.81685609 +0000 UTC m=+6.889992115 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-b5vgx" (UniqueName: "kubernetes.io/projected/7f5454c2-0e4f-40f5-8f1e-2b319a6de6e2-kube-api-access-b5vgx") pod "kube-proxy-qzx9j" (UID: "7f5454c2-0e4f-40f5-8f1e-2b319a6de6e2") : configmap "kube-root-ca.crt" not found Dec 16 12:45:48.671719 systemd[1]: Created slice kubepods-besteffort-podae439c11_21a4_42e9_9a45_c846a6d0e877.slice - libcontainer container kubepods-besteffort-podae439c11_21a4_42e9_9a45_c846a6d0e877.slice. 
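
The projected-volume failure above is not fatal: kube-api-access-b5vgx cannot be set up until the kube-root-ca.crt ConfigMap exists, so the operation executor parks the mount and schedules a retry, with the first durationBeforeRetry of 500ms visible in the record (failed at 12:45:48.316, no retry permitted before 12:45:48.816). A sketch of the retry-with-backoff pattern this reflects; the 500ms initial delay is taken from the log, while the doubling factor and cap below are assumptions for illustration, not constants read from the kubelet source:

    # Illustrative only: initial delay from the log; growth factor and cap assumed.
    def backoff_delays(initial=0.5, factor=2.0, cap=120.0, attempts=6):
        delay = initial
        for _ in range(attempts):
            yield delay
            delay = min(delay * factor, cap)

    print(list(backoff_delays()))  # [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
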
Dec 16 12:45:48.714612 kubelet[3668]: I1216 12:45:48.714581 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klddp\" (UniqueName: \"kubernetes.io/projected/ae439c11-21a4-42e9-9a45-c846a6d0e877-kube-api-access-klddp\") pod \"tigera-operator-65cdcdfd6d-4hpvq\" (UID: \"ae439c11-21a4-42e9-9a45-c846a6d0e877\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-4hpvq" Dec 16 12:45:48.714996 kubelet[3668]: I1216 12:45:48.714939 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ae439c11-21a4-42e9-9a45-c846a6d0e877-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-4hpvq\" (UID: \"ae439c11-21a4-42e9-9a45-c846a6d0e877\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-4hpvq" Dec 16 12:45:48.980064 containerd[2138]: time="2025-12-16T12:45:48.979931180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-4hpvq,Uid:ae439c11-21a4-42e9-9a45-c846a6d0e877,Namespace:tigera-operator,Attempt:0,}" Dec 16 12:45:49.024413 containerd[2138]: time="2025-12-16T12:45:49.024342844Z" level=info msg="connecting to shim 2a4e734fdf489c16c08ebe6ed4a7ccbe0c6434b465274be8ed3d6209d1fc8089" address="unix:///run/containerd/s/817bab2214bda156ff5cb80450eb7b9f1b9ad5ee667edea21da18955bc034aaf" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:45:49.046683 systemd[1]: Started cri-containerd-2a4e734fdf489c16c08ebe6ed4a7ccbe0c6434b465274be8ed3d6209d1fc8089.scope - libcontainer container 2a4e734fdf489c16c08ebe6ed4a7ccbe0c6434b465274be8ed3d6209d1fc8089. Dec 16 12:45:49.056000 audit: BPF prog-id=157 op=LOAD Dec 16 12:45:49.061044 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 16 12:45:49.061171 kernel: audit: type=1334 audit(1765889149.056:461): prog-id=157 op=LOAD Dec 16 12:45:49.064000 audit: BPF prog-id=158 op=LOAD Dec 16 12:45:49.070193 kernel: audit: type=1334 audit(1765889149.064:462): prog-id=158 op=LOAD Dec 16 12:45:49.064000 audit[3739]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=3728 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.088517 kernel: audit: type=1300 audit(1765889149.064:462): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=3728 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.088596 kernel: audit: type=1327 audit(1765889149.064:462): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261346537333466646634383963313663303865626536656434613763 Dec 16 12:45:49.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261346537333466646634383963313663303865626536656434613763 Dec 16 12:45:49.064000 audit: BPF prog-id=158 op=UNLOAD Dec 16 12:45:49.107106 kernel: audit: type=1334 audit(1765889149.064:463): prog-id=158 op=UNLOAD Dec 16 12:45:49.122701 kernel: audit: type=1300 
audit(1765889149.064:463): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3728 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.064000 audit[3739]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3728 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261346537333466646634383963313663303865626536656434613763 Dec 16 12:45:49.138351 kernel: audit: type=1327 audit(1765889149.064:463): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261346537333466646634383963313663303865626536656434613763 Dec 16 12:45:49.065000 audit: BPF prog-id=159 op=LOAD Dec 16 12:45:49.140767 containerd[2138]: time="2025-12-16T12:45:49.140347742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qzx9j,Uid:7f5454c2-0e4f-40f5-8f1e-2b319a6de6e2,Namespace:kube-system,Attempt:0,}" Dec 16 12:45:49.142931 kernel: audit: type=1334 audit(1765889149.065:464): prog-id=159 op=LOAD Dec 16 12:45:49.065000 audit[3739]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=3728 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.159801 kernel: audit: type=1300 audit(1765889149.065:464): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=3728 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261346537333466646634383963313663303865626536656434613763 Dec 16 12:45:49.175810 kernel: audit: type=1327 audit(1765889149.065:464): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261346537333466646634383963313663303865626536656434613763 Dec 16 12:45:49.068000 audit: BPF prog-id=160 op=LOAD Dec 16 12:45:49.068000 audit[3739]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=3728 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.068000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261346537333466646634383963313663303865626536656434613763 Dec 16 12:45:49.068000 audit: BPF prog-id=160 op=UNLOAD Dec 16 12:45:49.068000 audit[3739]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3728 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261346537333466646634383963313663303865626536656434613763 Dec 16 12:45:49.068000 audit: BPF prog-id=159 op=UNLOAD Dec 16 12:45:49.068000 audit[3739]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3728 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261346537333466646634383963313663303865626536656434613763 Dec 16 12:45:49.069000 audit: BPF prog-id=161 op=LOAD Dec 16 12:45:49.069000 audit[3739]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=3728 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.069000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261346537333466646634383963313663303865626536656434613763 Dec 16 12:45:49.246930 containerd[2138]: time="2025-12-16T12:45:49.246834023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-4hpvq,Uid:ae439c11-21a4-42e9-9a45-c846a6d0e877,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2a4e734fdf489c16c08ebe6ed4a7ccbe0c6434b465274be8ed3d6209d1fc8089\"" Dec 16 12:45:49.248711 containerd[2138]: time="2025-12-16T12:45:49.248672191Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 12:45:49.285805 containerd[2138]: time="2025-12-16T12:45:49.285737343Z" level=info msg="connecting to shim 1099ffa86afafa58c06d1673b0ad54b23bf5f46761dde0c19a39f0d23b55aa73" address="unix:///run/containerd/s/84f77726fd650c12acf4db1e992aa09af050812b7ab5b378ef434d191a0914e9" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:45:49.311338 systemd[1]: Started cri-containerd-1099ffa86afafa58c06d1673b0ad54b23bf5f46761dde0c19a39f0d23b55aa73.scope - libcontainer container 1099ffa86afafa58c06d1673b0ad54b23bf5f46761dde0c19a39f0d23b55aa73. 
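
The audit PROCTITLE fields in these records are the process's argv, hex-encoded with NUL separators and capped in length, which is why the long container IDs at the end come out truncated. Decoding them shows the runc invocations the containerd shim is making, roughly runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/… A minimal decoder (Python 3; the sample below is a prefix of the blob above):

    def decode_proctitle(hexstr: str) -> str:
        """Turn an audit PROCTITLE hex blob into the space-joined argv."""
        return bytes.fromhex(hexstr).replace(b"\x00", b" ").decode(errors="replace")

    print(decode_proctitle(
        "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
    ))  # runc --root /run/containerd/runc/k8s.io
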
Dec 16 12:45:49.316000 audit: BPF prog-id=162 op=LOAD Dec 16 12:45:49.316000 audit: BPF prog-id=163 op=LOAD Dec 16 12:45:49.316000 audit[3785]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3774 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130393966666138366166616661353863303664313637336230616435 Dec 16 12:45:49.317000 audit: BPF prog-id=163 op=UNLOAD Dec 16 12:45:49.317000 audit[3785]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3774 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130393966666138366166616661353863303664313637336230616435 Dec 16 12:45:49.317000 audit: BPF prog-id=164 op=LOAD Dec 16 12:45:49.317000 audit[3785]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3774 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130393966666138366166616661353863303664313637336230616435 Dec 16 12:45:49.317000 audit: BPF prog-id=165 op=LOAD Dec 16 12:45:49.317000 audit[3785]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3774 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130393966666138366166616661353863303664313637336230616435 Dec 16 12:45:49.317000 audit: BPF prog-id=165 op=UNLOAD Dec 16 12:45:49.317000 audit[3785]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3774 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130393966666138366166616661353863303664313637336230616435 Dec 16 12:45:49.317000 audit: BPF prog-id=164 op=UNLOAD Dec 16 12:45:49.317000 audit[3785]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3774 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130393966666138366166616661353863303664313637336230616435 Dec 16 12:45:49.317000 audit: BPF prog-id=166 op=LOAD Dec 16 12:45:49.317000 audit[3785]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3774 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130393966666138366166616661353863303664313637336230616435 Dec 16 12:45:49.334785 containerd[2138]: time="2025-12-16T12:45:49.334745346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qzx9j,Uid:7f5454c2-0e4f-40f5-8f1e-2b319a6de6e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"1099ffa86afafa58c06d1673b0ad54b23bf5f46761dde0c19a39f0d23b55aa73\"" Dec 16 12:45:49.345111 containerd[2138]: time="2025-12-16T12:45:49.345082501Z" level=info msg="CreateContainer within sandbox \"1099ffa86afafa58c06d1673b0ad54b23bf5f46761dde0c19a39f0d23b55aa73\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 12:45:49.366550 containerd[2138]: time="2025-12-16T12:45:49.366521945Z" level=info msg="Container 852135937894390748920696eccb36de2805781653f967ae94a399c7ea2b2d11: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:45:49.386582 containerd[2138]: time="2025-12-16T12:45:49.386547931Z" level=info msg="CreateContainer within sandbox \"1099ffa86afafa58c06d1673b0ad54b23bf5f46761dde0c19a39f0d23b55aa73\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"852135937894390748920696eccb36de2805781653f967ae94a399c7ea2b2d11\"" Dec 16 12:45:49.387098 containerd[2138]: time="2025-12-16T12:45:49.387059602Z" level=info msg="StartContainer for \"852135937894390748920696eccb36de2805781653f967ae94a399c7ea2b2d11\"" Dec 16 12:45:49.388285 containerd[2138]: time="2025-12-16T12:45:49.388243894Z" level=info msg="connecting to shim 852135937894390748920696eccb36de2805781653f967ae94a399c7ea2b2d11" address="unix:///run/containerd/s/84f77726fd650c12acf4db1e992aa09af050812b7ab5b378ef434d191a0914e9" protocol=ttrpc version=3 Dec 16 12:45:49.400357 systemd[1]: Started cri-containerd-852135937894390748920696eccb36de2805781653f967ae94a399c7ea2b2d11.scope - libcontainer container 852135937894390748920696eccb36de2805781653f967ae94a399c7ea2b2d11. 
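
The NETFILTER_CFG records that follow are kube-proxy laying down its bootstrap chains: KUBE-PROXY-CANARY in mangle, nat and filter for both IPv4 (family=2, AF_INET) and IPv6 (family=10, AF_INET6), followed by KUBE-EXTERNAL-SERVICES, KUBE-NODEPORTS, KUBE-SERVICES and KUBE-FORWARD in the filter table; their PROCTITLE blobs decode to xtables-nft commands such as iptables -w 5 -N KUBE-PROXY-CANARY -t mangle. A small sketch (hypothetical helper, sample line abbreviated from the log) for pulling the key=value fields out of one of these records:

    import re

    def parse_audit_fields(record: str) -> dict:
        """Extract key=value pairs from a single audit record line."""
        return dict(re.findall(r'(\w+)=(\S+)', record))

    sample = 'table=mangle:57 family=2 entries=1 op=nft_register_chain pid=3876 comm="iptables"'
    fields = parse_audit_fields(sample)
    family = {"2": "IPv4", "10": "IPv6"}[fields["family"]]
    print(fields["op"], fields["table"], family)  # nft_register_chain mangle:57 IPv4
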
Dec 16 12:45:49.440000 audit: BPF prog-id=167 op=LOAD Dec 16 12:45:49.440000 audit[3810]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3774 pid=3810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835323133353933373839343339303734383932303639366563636233 Dec 16 12:45:49.440000 audit: BPF prog-id=168 op=LOAD Dec 16 12:45:49.440000 audit[3810]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3774 pid=3810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835323133353933373839343339303734383932303639366563636233 Dec 16 12:45:49.440000 audit: BPF prog-id=168 op=UNLOAD Dec 16 12:45:49.440000 audit[3810]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3774 pid=3810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835323133353933373839343339303734383932303639366563636233 Dec 16 12:45:49.441000 audit: BPF prog-id=167 op=UNLOAD Dec 16 12:45:49.441000 audit[3810]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3774 pid=3810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.441000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835323133353933373839343339303734383932303639366563636233 Dec 16 12:45:49.441000 audit: BPF prog-id=169 op=LOAD Dec 16 12:45:49.441000 audit[3810]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3774 pid=3810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.441000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835323133353933373839343339303734383932303639366563636233 Dec 16 12:45:49.458908 containerd[2138]: time="2025-12-16T12:45:49.458878452Z" level=info msg="StartContainer for 
\"852135937894390748920696eccb36de2805781653f967ae94a399c7ea2b2d11\" returns successfully" Dec 16 12:45:49.633000 audit[3876]: NETFILTER_CFG table=mangle:57 family=2 entries=1 op=nft_register_chain pid=3876 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:49.633000 audit[3876]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffde002c20 a2=0 a3=1 items=0 ppid=3822 pid=3876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.633000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 12:45:49.634000 audit[3877]: NETFILTER_CFG table=mangle:58 family=10 entries=1 op=nft_register_chain pid=3877 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:49.634000 audit[3877]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd80c7030 a2=0 a3=1 items=0 ppid=3822 pid=3877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.634000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 12:45:49.635000 audit[3878]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_chain pid=3878 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:49.635000 audit[3878]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffffeb0790 a2=0 a3=1 items=0 ppid=3822 pid=3878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.635000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 12:45:49.635000 audit[3879]: NETFILTER_CFG table=nat:60 family=10 entries=1 op=nft_register_chain pid=3879 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:49.635000 audit[3879]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe7bb8930 a2=0 a3=1 items=0 ppid=3822 pid=3879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.635000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 12:45:49.636000 audit[3880]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_chain pid=3880 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:49.636000 audit[3880]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe313d4e0 a2=0 a3=1 items=0 ppid=3822 pid=3880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.636000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 12:45:49.637000 audit[3881]: NETFILTER_CFG table=filter:62 family=10 entries=1 op=nft_register_chain pid=3881 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:49.637000 audit[3881]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=104 a0=3 a1=ffffe8804530 a2=0 a3=1 items=0 ppid=3822 pid=3881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.637000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 12:45:49.741000 audit[3884]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3884 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:49.741000 audit[3884]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc5d2d8b0 a2=0 a3=1 items=0 ppid=3822 pid=3884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.741000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 12:45:49.743000 audit[3886]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3886 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:49.743000 audit[3886]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff0b7dc30 a2=0 a3=1 items=0 ppid=3822 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.743000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Dec 16 12:45:49.746000 audit[3889]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_rule pid=3889 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:49.746000 audit[3889]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff2e78d50 a2=0 a3=1 items=0 ppid=3822 pid=3889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.746000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Dec 16 12:45:49.747000 audit[3890]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_chain pid=3890 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:49.747000 audit[3890]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc0d6d6f0 a2=0 a3=1 items=0 ppid=3822 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.747000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 12:45:49.749000 audit[3892]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3892 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:49.749000 
audit[3892]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcf0e73e0 a2=0 a3=1 items=0 ppid=3822 pid=3892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.749000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 12:45:49.750000 audit[3893]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3893 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:49.750000 audit[3893]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc754ad80 a2=0 a3=1 items=0 ppid=3822 pid=3893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.750000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 12:45:49.752000 audit[3895]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3895 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:49.752000 audit[3895]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd1464a60 a2=0 a3=1 items=0 ppid=3822 pid=3895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.752000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:45:49.755000 audit[3898]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule pid=3898 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:49.755000 audit[3898]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffffc2bcc50 a2=0 a3=1 items=0 ppid=3822 pid=3898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.755000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:45:49.756000 audit[3899]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_chain pid=3899 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:49.756000 audit[3899]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff32bc650 a2=0 a3=1 items=0 ppid=3822 pid=3899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.756000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 12:45:49.758000 audit[3901]: NETFILTER_CFG 
table=filter:72 family=2 entries=1 op=nft_register_rule pid=3901 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:49.758000 audit[3901]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc00164d0 a2=0 a3=1 items=0 ppid=3822 pid=3901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.758000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 12:45:49.759000 audit[3902]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=3902 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:49.759000 audit[3902]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc0546140 a2=0 a3=1 items=0 ppid=3822 pid=3902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.759000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 12:45:49.761000 audit[3904]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=3904 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:49.761000 audit[3904]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe8314680 a2=0 a3=1 items=0 ppid=3822 pid=3904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.761000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Dec 16 12:45:49.764000 audit[3907]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=3907 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:49.764000 audit[3907]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffca951ca0 a2=0 a3=1 items=0 ppid=3822 pid=3907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.764000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Dec 16 12:45:49.767000 audit[3910]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=3910 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:49.767000 audit[3910]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff67a8130 a2=0 a3=1 items=0 ppid=3822 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.767000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Dec 16 12:45:49.768000 audit[3911]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3911 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:49.768000 audit[3911]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd030fa50 a2=0 a3=1 items=0 ppid=3822 pid=3911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.768000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 12:45:49.770000 audit[3913]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3913 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:49.770000 audit[3913]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff72c85b0 a2=0 a3=1 items=0 ppid=3822 pid=3913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.770000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:45:49.773000 audit[3916]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_rule pid=3916 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:49.773000 audit[3916]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffea2957d0 a2=0 a3=1 items=0 ppid=3822 pid=3916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.773000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:45:49.774000 audit[3917]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_chain pid=3917 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:49.774000 audit[3917]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe30abbb0 a2=0 a3=1 items=0 ppid=3822 pid=3917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.774000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 12:45:49.776000 audit[3919]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=3919 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:49.776000 audit[3919]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffef3b9270 a2=0 a3=1 items=0 ppid=3822 pid=3919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:45:49.776000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 12:45:49.840000 audit[3925]: NETFILTER_CFG table=filter:82 family=2 entries=8 op=nft_register_rule pid=3925 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:45:49.840000 audit[3925]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff5598db0 a2=0 a3=1 items=0 ppid=3822 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.840000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:45:49.849000 audit[3925]: NETFILTER_CFG table=nat:83 family=2 entries=14 op=nft_register_chain pid=3925 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:45:49.849000 audit[3925]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=fffff5598db0 a2=0 a3=1 items=0 ppid=3822 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.849000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:45:49.850000 audit[3930]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3930 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:49.850000 audit[3930]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe6871cc0 a2=0 a3=1 items=0 ppid=3822 pid=3930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.850000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 12:45:49.852000 audit[3932]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=3932 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:49.852000 audit[3932]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffe0d5f4c0 a2=0 a3=1 items=0 ppid=3822 pid=3932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.852000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Dec 16 12:45:49.856000 audit[3935]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=3935 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:49.856000 audit[3935]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffffbc78290 a2=0 a3=1 items=0 ppid=3822 pid=3935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 12:45:49.856000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Dec 16 12:45:49.857000 audit[3936]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=3936 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:49.857000 audit[3936]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff6180c10 a2=0 a3=1 items=0 ppid=3822 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.857000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 12:45:49.859000 audit[3938]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=3938 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:49.859000 audit[3938]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe1fbb970 a2=0 a3=1 items=0 ppid=3822 pid=3938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.859000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 12:45:49.860000 audit[3939]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3939 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:49.860000 audit[3939]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff298d250 a2=0 a3=1 items=0 ppid=3822 pid=3939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.860000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 12:45:49.863000 audit[3941]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3941 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:49.863000 audit[3941]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffce5e80d0 a2=0 a3=1 items=0 ppid=3822 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.863000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:45:49.865000 audit[3944]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=3944 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:49.865000 audit[3944]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffe61c24c0 a2=0 a3=1 items=0 ppid=3822 pid=3944 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.865000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:45:49.866000 audit[3945]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=3945 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:49.866000 audit[3945]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff4a25cd0 a2=0 a3=1 items=0 ppid=3822 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.866000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 12:45:49.868000 audit[3947]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3947 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:49.868000 audit[3947]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc206cb40 a2=0 a3=1 items=0 ppid=3822 pid=3947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.868000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 12:45:49.869000 audit[3948]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=3948 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:49.869000 audit[3948]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd6c0b060 a2=0 a3=1 items=0 ppid=3822 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.869000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 12:45:49.871000 audit[3950]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=3950 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:49.871000 audit[3950]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe5362360 a2=0 a3=1 items=0 ppid=3822 pid=3950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.871000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Dec 16 12:45:49.874000 audit[3953]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=3953 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:49.874000 
audit[3953]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe9b2ec40 a2=0 a3=1 items=0 ppid=3822 pid=3953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.874000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Dec 16 12:45:49.876000 audit[3956]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=3956 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:49.876000 audit[3956]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffeba4db70 a2=0 a3=1 items=0 ppid=3822 pid=3956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.876000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Dec 16 12:45:49.877000 audit[3957]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3957 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:49.877000 audit[3957]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd7188a40 a2=0 a3=1 items=0 ppid=3822 pid=3957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.877000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 12:45:49.879000 audit[3959]: NETFILTER_CFG table=nat:99 family=10 entries=1 op=nft_register_rule pid=3959 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:49.879000 audit[3959]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff6d01910 a2=0 a3=1 items=0 ppid=3822 pid=3959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.879000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:45:49.882000 audit[3962]: NETFILTER_CFG table=nat:100 family=10 entries=1 op=nft_register_rule pid=3962 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:49.882000 audit[3962]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd57a40b0 a2=0 a3=1 items=0 ppid=3822 pid=3962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.882000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:45:49.883000 audit[3963]: NETFILTER_CFG table=nat:101 family=10 entries=1 op=nft_register_chain pid=3963 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:49.883000 audit[3963]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe5505fd0 a2=0 a3=1 items=0 ppid=3822 pid=3963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.883000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 12:45:49.885000 audit[3965]: NETFILTER_CFG table=nat:102 family=10 entries=2 op=nft_register_chain pid=3965 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:49.885000 audit[3965]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffffc541450 a2=0 a3=1 items=0 ppid=3822 pid=3965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.885000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 12:45:49.886000 audit[3966]: NETFILTER_CFG table=filter:103 family=10 entries=1 op=nft_register_chain pid=3966 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:49.886000 audit[3966]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc1d7b0f0 a2=0 a3=1 items=0 ppid=3822 pid=3966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.886000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 12:45:49.888000 audit[3968]: NETFILTER_CFG table=filter:104 family=10 entries=1 op=nft_register_rule pid=3968 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:49.888000 audit[3968]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe22ecd60 a2=0 a3=1 items=0 ppid=3822 pid=3968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.888000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:45:49.891000 audit[3971]: NETFILTER_CFG table=filter:105 family=10 entries=1 op=nft_register_rule pid=3971 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:49.891000 audit[3971]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff22b17f0 a2=0 a3=1 items=0 ppid=3822 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.891000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:45:49.893000 audit[3973]: NETFILTER_CFG table=filter:106 family=10 entries=3 op=nft_register_rule pid=3973 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 12:45:49.893000 audit[3973]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffcad6b980 a2=0 a3=1 items=0 ppid=3822 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.893000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:45:49.894000 audit[3973]: NETFILTER_CFG table=nat:107 family=10 entries=7 op=nft_register_chain pid=3973 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 12:45:49.894000 audit[3973]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffcad6b980 a2=0 a3=1 items=0 ppid=3822 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.894000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:45:51.073826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3291379067.mount: Deactivated successfully. Dec 16 12:45:51.361946 kubelet[3668]: I1216 12:45:51.361372 3668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qzx9j" podStartSLOduration=3.361358957 podStartE2EDuration="3.361358957s" podCreationTimestamp="2025-12-16 12:45:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:45:50.061801272 +0000 UTC m=+8.134937297" watchObservedRunningTime="2025-12-16 12:45:51.361358957 +0000 UTC m=+9.434494990" Dec 16 12:45:51.636645 containerd[2138]: time="2025-12-16T12:45:51.636318501Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:51.643256 containerd[2138]: time="2025-12-16T12:45:51.643217095Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Dec 16 12:45:51.651192 containerd[2138]: time="2025-12-16T12:45:51.650497484Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:51.654521 containerd[2138]: time="2025-12-16T12:45:51.654491478Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:51.654996 containerd[2138]: time="2025-12-16T12:45:51.654892546Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.406193083s" Dec 16 12:45:51.654996 containerd[2138]: 
time="2025-12-16T12:45:51.654916499Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 16 12:45:51.662651 containerd[2138]: time="2025-12-16T12:45:51.662622341Z" level=info msg="CreateContainer within sandbox \"2a4e734fdf489c16c08ebe6ed4a7ccbe0c6434b465274be8ed3d6209d1fc8089\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 12:45:51.678970 containerd[2138]: time="2025-12-16T12:45:51.678625924Z" level=info msg="Container e497273531a76a3b461f2aff6c98c580ca82a244eaa4780b118f93282958dc43: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:45:51.699666 containerd[2138]: time="2025-12-16T12:45:51.699631331Z" level=info msg="CreateContainer within sandbox \"2a4e734fdf489c16c08ebe6ed4a7ccbe0c6434b465274be8ed3d6209d1fc8089\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e497273531a76a3b461f2aff6c98c580ca82a244eaa4780b118f93282958dc43\"" Dec 16 12:45:51.700133 containerd[2138]: time="2025-12-16T12:45:51.700106114Z" level=info msg="StartContainer for \"e497273531a76a3b461f2aff6c98c580ca82a244eaa4780b118f93282958dc43\"" Dec 16 12:45:51.701594 containerd[2138]: time="2025-12-16T12:45:51.701568598Z" level=info msg="connecting to shim e497273531a76a3b461f2aff6c98c580ca82a244eaa4780b118f93282958dc43" address="unix:///run/containerd/s/817bab2214bda156ff5cb80450eb7b9f1b9ad5ee667edea21da18955bc034aaf" protocol=ttrpc version=3 Dec 16 12:45:51.719357 systemd[1]: Started cri-containerd-e497273531a76a3b461f2aff6c98c580ca82a244eaa4780b118f93282958dc43.scope - libcontainer container e497273531a76a3b461f2aff6c98c580ca82a244eaa4780b118f93282958dc43. Dec 16 12:45:51.725000 audit: BPF prog-id=170 op=LOAD Dec 16 12:45:51.725000 audit: BPF prog-id=171 op=LOAD Dec 16 12:45:51.725000 audit[3982]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3728 pid=3982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:51.725000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534393732373335333161373661336234363166326166663663393863 Dec 16 12:45:51.725000 audit: BPF prog-id=171 op=UNLOAD Dec 16 12:45:51.725000 audit[3982]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3728 pid=3982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:51.725000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534393732373335333161373661336234363166326166663663393863 Dec 16 12:45:51.726000 audit: BPF prog-id=172 op=LOAD Dec 16 12:45:51.726000 audit[3982]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3728 pid=3982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:51.726000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534393732373335333161373661336234363166326166663663393863 Dec 16 12:45:51.726000 audit: BPF prog-id=173 op=LOAD Dec 16 12:45:51.726000 audit[3982]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3728 pid=3982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:51.726000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534393732373335333161373661336234363166326166663663393863 Dec 16 12:45:51.726000 audit: BPF prog-id=173 op=UNLOAD Dec 16 12:45:51.726000 audit[3982]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3728 pid=3982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:51.726000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534393732373335333161373661336234363166326166663663393863 Dec 16 12:45:51.726000 audit: BPF prog-id=172 op=UNLOAD Dec 16 12:45:51.726000 audit[3982]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3728 pid=3982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:51.726000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534393732373335333161373661336234363166326166663663393863 Dec 16 12:45:51.726000 audit: BPF prog-id=174 op=LOAD Dec 16 12:45:51.726000 audit[3982]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3728 pid=3982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:51.726000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534393732373335333161373661336234363166326166663663393863 Dec 16 12:45:51.745917 containerd[2138]: time="2025-12-16T12:45:51.745888699Z" level=info msg="StartContainer for \"e497273531a76a3b461f2aff6c98c580ca82a244eaa4780b118f93282958dc43\" returns successfully" Dec 16 12:45:52.083804 kubelet[3668]: I1216 12:45:52.083607 3668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-4hpvq" podStartSLOduration=1.676512603 podStartE2EDuration="4.083590784s" podCreationTimestamp="2025-12-16 12:45:48 +0000 UTC" firstStartedPulling="2025-12-16 
12:45:49.24844712 +0000 UTC m=+7.321583145" lastFinishedPulling="2025-12-16 12:45:51.655525301 +0000 UTC m=+9.728661326" observedRunningTime="2025-12-16 12:45:52.082574113 +0000 UTC m=+10.155710138" watchObservedRunningTime="2025-12-16 12:45:52.083590784 +0000 UTC m=+10.156726809" Dec 16 12:45:56.936156 sudo[2633]: pam_unix(sudo:session): session closed for user root Dec 16 12:45:56.934000 audit[2633]: USER_END pid=2633 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:45:56.940122 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 16 12:45:56.940233 kernel: audit: type=1106 audit(1765889156.934:541): pid=2633 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:45:56.935000 audit[2633]: CRED_DISP pid=2633 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:45:56.971527 kernel: audit: type=1104 audit(1765889156.935:542): pid=2633 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:45:57.026825 sshd[2632]: Connection closed by 10.200.16.10 port 41036 Dec 16 12:45:57.027466 sshd-session[2626]: pam_unix(sshd:session): session closed for user core Dec 16 12:45:57.028000 audit[2626]: USER_END pid=2626 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:45:57.028000 audit[2626]: CRED_DISP pid=2626 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:45:57.069690 kernel: audit: type=1106 audit(1765889157.028:543): pid=2626 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:45:57.069761 kernel: audit: type=1104 audit(1765889157.028:544): pid=2626 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:45:57.053753 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 12:45:57.053998 systemd[1]: session-9.scope: Consumed 3.443s CPU time, 220.3M memory peak. Dec 16 12:45:57.057301 systemd[1]: sshd@6-10.200.20.37:22-10.200.16.10:41036.service: Deactivated successfully. Dec 16 12:45:57.070926 systemd-logind[2110]: Session 9 logged out. Waiting for processes to exit. Dec 16 12:45:57.072649 systemd-logind[2110]: Removed session 9. 
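The audit PROCTITLE records throughout this log carry the invoking command line as hex-encoded bytes with NUL separators (the contents of /proc/<pid>/cmdline at the time of the syscall). A minimal sketch, assuming Python, for turning one of those hex strings back into a readable command; decode_proctitle is a hypothetical helper written for illustration, and the sample value is copied from one of the NETFILTER_CFG entries above, not from any tool shipped with the system:

    # Decode an audit PROCTITLE hex string (NUL-separated argv) into a readable command line.
    def decode_proctitle(hex_string: str) -> str:
        raw = bytes.fromhex(hex_string)                     # hex digits -> raw bytes
        args = raw.split(b"\x00")                           # argv entries are NUL-separated
        return " ".join(a.decode("utf-8", errors="replace") for a in args if a)

    # Sample taken from a KUBE-PROXY-CANARY chain registration above:
    sample = "69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"
    print(decode_proctitle(sample))
    # -> iptables -w 5 -N KUBE-PROXY-CANARY -t mangle

Note that the kernel caps the PROCTITLE field, so some of the longer records above are truncated mid-argument; the decoder will simply show the truncated tail as-is.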
Dec 16 12:45:57.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.37:22-10.200.16.10:41036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:57.091227 kernel: audit: type=1131 audit(1765889157.056:545): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.37:22-10.200.16.10:41036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:58.598000 audit[4056]: NETFILTER_CFG table=filter:108 family=2 entries=15 op=nft_register_rule pid=4056 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:45:58.598000 audit[4056]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffcada7610 a2=0 a3=1 items=0 ppid=3822 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:58.610278 kernel: audit: type=1325 audit(1765889158.598:546): table=filter:108 family=2 entries=15 op=nft_register_rule pid=4056 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:45:58.598000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:45:58.640878 kernel: audit: type=1300 audit(1765889158.598:546): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffcada7610 a2=0 a3=1 items=0 ppid=3822 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:58.640943 kernel: audit: type=1327 audit(1765889158.598:546): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:45:58.632000 audit[4056]: NETFILTER_CFG table=nat:109 family=2 entries=12 op=nft_register_rule pid=4056 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:45:58.650427 kernel: audit: type=1325 audit(1765889158.632:547): table=nat:109 family=2 entries=12 op=nft_register_rule pid=4056 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:45:58.632000 audit[4056]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcada7610 a2=0 a3=1 items=0 ppid=3822 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:58.668900 kernel: audit: type=1300 audit(1765889158.632:547): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcada7610 a2=0 a3=1 items=0 ppid=3822 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:58.632000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:45:59.690000 audit[4058]: NETFILTER_CFG table=filter:110 family=2 entries=16 op=nft_register_rule pid=4058 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:45:59.690000 audit[4058]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffcbe2f400 a2=0 a3=1 items=0 ppid=3822 pid=4058 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:59.690000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:45:59.695000 audit[4058]: NETFILTER_CFG table=nat:111 family=2 entries=12 op=nft_register_rule pid=4058 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:45:59.695000 audit[4058]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcbe2f400 a2=0 a3=1 items=0 ppid=3822 pid=4058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:59.695000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:02.905236 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 12:46:02.905346 kernel: audit: type=1325 audit(1765889162.899:550): table=filter:112 family=2 entries=17 op=nft_register_rule pid=4060 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:02.899000 audit[4060]: NETFILTER_CFG table=filter:112 family=2 entries=17 op=nft_register_rule pid=4060 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:02.899000 audit[4060]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffd8b3b090 a2=0 a3=1 items=0 ppid=3822 pid=4060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:02.930190 kernel: audit: type=1300 audit(1765889162.899:550): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffd8b3b090 a2=0 a3=1 items=0 ppid=3822 pid=4060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:02.899000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:02.939925 kernel: audit: type=1327 audit(1765889162.899:550): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:02.925000 audit[4060]: NETFILTER_CFG table=nat:113 family=2 entries=12 op=nft_register_rule pid=4060 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:02.949384 kernel: audit: type=1325 audit(1765889162.925:551): table=nat:113 family=2 entries=12 op=nft_register_rule pid=4060 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:02.925000 audit[4060]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd8b3b090 a2=0 a3=1 items=0 ppid=3822 pid=4060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:02.968274 kernel: audit: type=1300 audit(1765889162.925:551): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd8b3b090 a2=0 a3=1 items=0 ppid=3822 pid=4060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 12:46:02.925000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:02.976940 kernel: audit: type=1327 audit(1765889162.925:551): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:03.960000 audit[4062]: NETFILTER_CFG table=filter:114 family=2 entries=19 op=nft_register_rule pid=4062 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:03.960000 audit[4062]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc7a35fd0 a2=0 a3=1 items=0 ppid=3822 pid=4062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:03.989003 kernel: audit: type=1325 audit(1765889163.960:552): table=filter:114 family=2 entries=19 op=nft_register_rule pid=4062 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:03.989081 kernel: audit: type=1300 audit(1765889163.960:552): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc7a35fd0 a2=0 a3=1 items=0 ppid=3822 pid=4062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:03.960000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:03.997866 kernel: audit: type=1327 audit(1765889163.960:552): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:03.989000 audit[4062]: NETFILTER_CFG table=nat:115 family=2 entries=12 op=nft_register_rule pid=4062 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:04.007343 kernel: audit: type=1325 audit(1765889163.989:553): table=nat:115 family=2 entries=12 op=nft_register_rule pid=4062 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:03.989000 audit[4062]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc7a35fd0 a2=0 a3=1 items=0 ppid=3822 pid=4062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:03.989000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:05.637000 audit[4064]: NETFILTER_CFG table=filter:116 family=2 entries=21 op=nft_register_rule pid=4064 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:05.637000 audit[4064]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffffc2abad0 a2=0 a3=1 items=0 ppid=3822 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:05.637000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:05.647000 audit[4064]: NETFILTER_CFG table=nat:117 family=2 entries=12 op=nft_register_rule pid=4064 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:05.647000 audit[4064]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffc2abad0 a2=0 a3=1 items=0 
ppid=3822 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:05.647000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:05.690952 systemd[1]: Created slice kubepods-besteffort-pod9b1d6f6b_f093_4451_860f_d36d7bb566ba.slice - libcontainer container kubepods-besteffort-pod9b1d6f6b_f093_4451_860f_d36d7bb566ba.slice. Dec 16 12:46:05.731819 kubelet[3668]: I1216 12:46:05.731784 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b1d6f6b-f093-4451-860f-d36d7bb566ba-tigera-ca-bundle\") pod \"calico-typha-799fc5847-pcfwk\" (UID: \"9b1d6f6b-f093-4451-860f-d36d7bb566ba\") " pod="calico-system/calico-typha-799fc5847-pcfwk" Dec 16 12:46:05.731819 kubelet[3668]: I1216 12:46:05.731820 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgf4q\" (UniqueName: \"kubernetes.io/projected/9b1d6f6b-f093-4451-860f-d36d7bb566ba-kube-api-access-dgf4q\") pod \"calico-typha-799fc5847-pcfwk\" (UID: \"9b1d6f6b-f093-4451-860f-d36d7bb566ba\") " pod="calico-system/calico-typha-799fc5847-pcfwk" Dec 16 12:46:05.732129 kubelet[3668]: I1216 12:46:05.731835 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9b1d6f6b-f093-4451-860f-d36d7bb566ba-typha-certs\") pod \"calico-typha-799fc5847-pcfwk\" (UID: \"9b1d6f6b-f093-4451-860f-d36d7bb566ba\") " pod="calico-system/calico-typha-799fc5847-pcfwk" Dec 16 12:46:05.916080 systemd[1]: Created slice kubepods-besteffort-pod97d83477_c41f_4aa0_a3d2_d3dd29a4a225.slice - libcontainer container kubepods-besteffort-pod97d83477_c41f_4aa0_a3d2_d3dd29a4a225.slice. 
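The kubelet reconciler_common.go lines above report each volume of the calico-typha pod being verified as attached, and the calico-node entries that follow use the same format. A small sketch, assuming Python, that pulls (volume, pod) pairs out of such lines for a quick inventory; extract_volume_mounts is a hypothetical helper, and the regex is keyed to the backslash-escaped quoting as it appears in this particular dump (raw journalctl output may escape differently):

    import re
    import sys

    # Matches: volume \"<name>\" ... pod="<namespace>/<pod>" within one kubelet entry.
    # Non-greedy .*? stops at the first pod="..." after the volume name.
    PAIR_RE = re.compile(r'volume \\"([^"\\]+)\\".*?pod="([^"]+)"', re.DOTALL)

    def extract_volume_mounts(log_text: str):
        """Yield (volume, pod) pairs from VerifyControllerAttachedVolume log entries."""
        for volume, pod in PAIR_RE.findall(log_text):
            yield volume, pod

    if __name__ == "__main__":
        for volume, pod in extract_volume_mounts(sys.stdin.read()):
            print(f"{pod}\t{volume}")

Run against this log it would list, for example, tigera-ca-bundle, kube-api-access-dgf4q and typha-certs for calico-system/calico-typha-799fc5847-pcfwk, followed by the host-path and secret volumes of calico-node-rnrdj.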
Dec 16 12:46:05.933115 kubelet[3668]: I1216 12:46:05.933084 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/97d83477-c41f-4aa0-a3d2-d3dd29a4a225-cni-log-dir\") pod \"calico-node-rnrdj\" (UID: \"97d83477-c41f-4aa0-a3d2-d3dd29a4a225\") " pod="calico-system/calico-node-rnrdj" Dec 16 12:46:05.933115 kubelet[3668]: I1216 12:46:05.933116 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/97d83477-c41f-4aa0-a3d2-d3dd29a4a225-cni-net-dir\") pod \"calico-node-rnrdj\" (UID: \"97d83477-c41f-4aa0-a3d2-d3dd29a4a225\") " pod="calico-system/calico-node-rnrdj" Dec 16 12:46:05.933326 kubelet[3668]: I1216 12:46:05.933129 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/97d83477-c41f-4aa0-a3d2-d3dd29a4a225-cni-bin-dir\") pod \"calico-node-rnrdj\" (UID: \"97d83477-c41f-4aa0-a3d2-d3dd29a4a225\") " pod="calico-system/calico-node-rnrdj" Dec 16 12:46:05.933326 kubelet[3668]: I1216 12:46:05.933138 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97d83477-c41f-4aa0-a3d2-d3dd29a4a225-lib-modules\") pod \"calico-node-rnrdj\" (UID: \"97d83477-c41f-4aa0-a3d2-d3dd29a4a225\") " pod="calico-system/calico-node-rnrdj" Dec 16 12:46:05.933326 kubelet[3668]: I1216 12:46:05.933188 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/97d83477-c41f-4aa0-a3d2-d3dd29a4a225-node-certs\") pod \"calico-node-rnrdj\" (UID: \"97d83477-c41f-4aa0-a3d2-d3dd29a4a225\") " pod="calico-system/calico-node-rnrdj" Dec 16 12:46:05.933420 kubelet[3668]: I1216 12:46:05.933404 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/97d83477-c41f-4aa0-a3d2-d3dd29a4a225-policysync\") pod \"calico-node-rnrdj\" (UID: \"97d83477-c41f-4aa0-a3d2-d3dd29a4a225\") " pod="calico-system/calico-node-rnrdj" Dec 16 12:46:05.933451 kubelet[3668]: I1216 12:46:05.933436 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/97d83477-c41f-4aa0-a3d2-d3dd29a4a225-flexvol-driver-host\") pod \"calico-node-rnrdj\" (UID: \"97d83477-c41f-4aa0-a3d2-d3dd29a4a225\") " pod="calico-system/calico-node-rnrdj" Dec 16 12:46:05.933577 kubelet[3668]: I1216 12:46:05.933456 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/97d83477-c41f-4aa0-a3d2-d3dd29a4a225-var-run-calico\") pod \"calico-node-rnrdj\" (UID: \"97d83477-c41f-4aa0-a3d2-d3dd29a4a225\") " pod="calico-system/calico-node-rnrdj" Dec 16 12:46:05.933577 kubelet[3668]: I1216 12:46:05.933474 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/97d83477-c41f-4aa0-a3d2-d3dd29a4a225-var-lib-calico\") pod \"calico-node-rnrdj\" (UID: \"97d83477-c41f-4aa0-a3d2-d3dd29a4a225\") " pod="calico-system/calico-node-rnrdj" Dec 16 12:46:05.933577 kubelet[3668]: I1216 12:46:05.933510 3668 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/97d83477-c41f-4aa0-a3d2-d3dd29a4a225-xtables-lock\") pod \"calico-node-rnrdj\" (UID: \"97d83477-c41f-4aa0-a3d2-d3dd29a4a225\") " pod="calico-system/calico-node-rnrdj" Dec 16 12:46:05.933577 kubelet[3668]: I1216 12:46:05.933566 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97d83477-c41f-4aa0-a3d2-d3dd29a4a225-tigera-ca-bundle\") pod \"calico-node-rnrdj\" (UID: \"97d83477-c41f-4aa0-a3d2-d3dd29a4a225\") " pod="calico-system/calico-node-rnrdj" Dec 16 12:46:05.933650 kubelet[3668]: I1216 12:46:05.933582 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2t9q\" (UniqueName: \"kubernetes.io/projected/97d83477-c41f-4aa0-a3d2-d3dd29a4a225-kube-api-access-b2t9q\") pod \"calico-node-rnrdj\" (UID: \"97d83477-c41f-4aa0-a3d2-d3dd29a4a225\") " pod="calico-system/calico-node-rnrdj" Dec 16 12:46:06.000933 containerd[2138]: time="2025-12-16T12:46:06.000754772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-799fc5847-pcfwk,Uid:9b1d6f6b-f093-4451-860f-d36d7bb566ba,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:06.041228 kubelet[3668]: E1216 12:46:06.041189 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.041411 kubelet[3668]: W1216 12:46:06.041313 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.041411 kubelet[3668]: E1216 12:46:06.041337 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.041653 kubelet[3668]: E1216 12:46:06.041561 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.041653 kubelet[3668]: W1216 12:46:06.041570 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.041653 kubelet[3668]: E1216 12:46:06.041580 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.043031 kubelet[3668]: E1216 12:46:06.043006 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.043031 kubelet[3668]: W1216 12:46:06.043024 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.043105 kubelet[3668]: E1216 12:46:06.043037 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:06.043891 kubelet[3668]: E1216 12:46:06.043340 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.043891 kubelet[3668]: W1216 12:46:06.043354 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.043891 kubelet[3668]: E1216 12:46:06.043365 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.043891 kubelet[3668]: E1216 12:46:06.043743 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.043891 kubelet[3668]: W1216 12:46:06.043753 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.043891 kubelet[3668]: E1216 12:46:06.043775 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.044144 kubelet[3668]: E1216 12:46:06.044122 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.044144 kubelet[3668]: W1216 12:46:06.044137 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.044192 kubelet[3668]: E1216 12:46:06.044152 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.044456 kubelet[3668]: E1216 12:46:06.044437 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.044579 kubelet[3668]: W1216 12:46:06.044452 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.044606 kubelet[3668]: E1216 12:46:06.044582 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.044980 kubelet[3668]: E1216 12:46:06.044960 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.044980 kubelet[3668]: W1216 12:46:06.044977 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.045024 kubelet[3668]: E1216 12:46:06.044987 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:06.045415 kubelet[3668]: E1216 12:46:06.045394 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.045415 kubelet[3668]: W1216 12:46:06.045409 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.045464 kubelet[3668]: E1216 12:46:06.045419 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.052852 kubelet[3668]: E1216 12:46:06.051547 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.052852 kubelet[3668]: W1216 12:46:06.051563 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.052852 kubelet[3668]: E1216 12:46:06.051575 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.063138 containerd[2138]: time="2025-12-16T12:46:06.062296213Z" level=info msg="connecting to shim 2edf82f7661d7e714850834e6024f5352d270bdc0ad5bb3f3c410de8806467a0" address="unix:///run/containerd/s/a0ba8f74d61ef9e17e9a40bea0963be44ef47f186ddf1003d7d8fd0e1573ff0b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:06.084374 systemd[1]: Started cri-containerd-2edf82f7661d7e714850834e6024f5352d270bdc0ad5bb3f3c410de8806467a0.scope - libcontainer container 2edf82f7661d7e714850834e6024f5352d270bdc0ad5bb3f3c410de8806467a0. 
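Note: the repeated kubelet driver-call failures above (and continuing below) come from the FlexVolume plugin prober. kubelet executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and expects a JSON status object on stdout; the binary is not present yet (the flexvol-driver-host host path mounted for calico-node-rnrdj above is presumably populated later by Calico's flexvol-driver init container), so stdout is empty and unmarshalling fails with "unexpected end of JSON input". A minimal sketch, in Python purely for illustration, of the init response a FlexVolume driver is expected to print; the capability values shown are an assumption, not Calico's actual driver:

```python
#!/usr/bin/env python3
# Illustrative FlexVolume driver stub: kubelet runs "<driver> init" and parses
# the JSON printed on stdout. Empty stdout is what produces the
# "unexpected end of JSON input" errors seen in the log above.
import json
import sys

def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # "attach": False tells kubelet this driver has no attach/detach phase
        # (capability flag shown here as an assumption for the sketch).
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    print(json.dumps({"status": "Not supported",
                      "message": f"operation {op!r} not implemented"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())
```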
Dec 16 12:46:06.092000 audit: BPF prog-id=175 op=LOAD Dec 16 12:46:06.092000 audit: BPF prog-id=176 op=LOAD Dec 16 12:46:06.092000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=4088 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:06.092000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265646638326637363631643765373134383530383334653630323466 Dec 16 12:46:06.092000 audit: BPF prog-id=176 op=UNLOAD Dec 16 12:46:06.092000 audit[4098]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4088 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:06.092000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265646638326637363631643765373134383530383334653630323466 Dec 16 12:46:06.093000 audit: BPF prog-id=177 op=LOAD Dec 16 12:46:06.093000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=4088 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:06.093000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265646638326637363631643765373134383530383334653630323466 Dec 16 12:46:06.093000 audit: BPF prog-id=178 op=LOAD Dec 16 12:46:06.093000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=4088 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:06.093000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265646638326637363631643765373134383530383334653630323466 Dec 16 12:46:06.093000 audit: BPF prog-id=178 op=UNLOAD Dec 16 12:46:06.093000 audit[4098]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4088 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:06.093000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265646638326637363631643765373134383530383334653630323466 Dec 16 12:46:06.093000 audit: BPF prog-id=177 op=UNLOAD Dec 16 12:46:06.093000 audit[4098]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4088 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:06.093000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265646638326637363631643765373134383530383334653630323466 Dec 16 12:46:06.093000 audit: BPF prog-id=179 op=LOAD Dec 16 12:46:06.093000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=4088 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:06.093000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265646638326637363631643765373134383530383334653630323466 Dec 16 12:46:06.124191 containerd[2138]: time="2025-12-16T12:46:06.124154247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-799fc5847-pcfwk,Uid:9b1d6f6b-f093-4451-860f-d36d7bb566ba,Namespace:calico-system,Attempt:0,} returns sandbox id \"2edf82f7661d7e714850834e6024f5352d270bdc0ad5bb3f3c410de8806467a0\"" Dec 16 12:46:06.128175 containerd[2138]: time="2025-12-16T12:46:06.127982277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 12:46:06.132764 kubelet[3668]: E1216 12:46:06.132731 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xwwbh" podUID="5bbc1d74-de1f-40b8-bd99-2346a3e2bafe" Dec 16 12:46:06.212173 kubelet[3668]: E1216 12:46:06.212004 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.212849 kubelet[3668]: W1216 12:46:06.212780 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.212849 kubelet[3668]: E1216 12:46:06.212807 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.213989 kubelet[3668]: E1216 12:46:06.213374 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.213989 kubelet[3668]: W1216 12:46:06.213388 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.213989 kubelet[3668]: E1216 12:46:06.213429 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:06.214157 kubelet[3668]: E1216 12:46:06.214145 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.214360 kubelet[3668]: W1216 12:46:06.214345 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.214653 kubelet[3668]: E1216 12:46:06.214510 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.215157 kubelet[3668]: E1216 12:46:06.215121 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.215310 kubelet[3668]: W1216 12:46:06.215229 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.215310 kubelet[3668]: E1216 12:46:06.215244 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.215536 kubelet[3668]: E1216 12:46:06.215502 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.215536 kubelet[3668]: W1216 12:46:06.215513 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.215536 kubelet[3668]: E1216 12:46:06.215523 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.215780 kubelet[3668]: E1216 12:46:06.215767 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.215910 kubelet[3668]: W1216 12:46:06.215843 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.215910 kubelet[3668]: E1216 12:46:06.215857 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.216128 kubelet[3668]: E1216 12:46:06.216061 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.216128 kubelet[3668]: W1216 12:46:06.216069 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.216128 kubelet[3668]: E1216 12:46:06.216077 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:06.216381 kubelet[3668]: E1216 12:46:06.216323 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.216381 kubelet[3668]: W1216 12:46:06.216332 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.216381 kubelet[3668]: E1216 12:46:06.216342 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.216586 kubelet[3668]: E1216 12:46:06.216577 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.216726 kubelet[3668]: W1216 12:46:06.216634 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.216726 kubelet[3668]: E1216 12:46:06.216647 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.216935 kubelet[3668]: E1216 12:46:06.216873 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.216935 kubelet[3668]: W1216 12:46:06.216882 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.216935 kubelet[3668]: E1216 12:46:06.216890 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.217297 kubelet[3668]: E1216 12:46:06.217224 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.217297 kubelet[3668]: W1216 12:46:06.217235 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.217297 kubelet[3668]: E1216 12:46:06.217245 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.217550 kubelet[3668]: E1216 12:46:06.217486 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.217550 kubelet[3668]: W1216 12:46:06.217496 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.217550 kubelet[3668]: E1216 12:46:06.217505 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:06.217833 kubelet[3668]: E1216 12:46:06.217821 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.218010 kubelet[3668]: W1216 12:46:06.217872 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.218010 kubelet[3668]: E1216 12:46:06.217886 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.218253 kubelet[3668]: E1216 12:46:06.218241 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.218343 kubelet[3668]: W1216 12:46:06.218331 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.218553 kubelet[3668]: E1216 12:46:06.218539 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.218759 kubelet[3668]: E1216 12:46:06.218749 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.218955 kubelet[3668]: W1216 12:46:06.218853 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.218955 kubelet[3668]: E1216 12:46:06.218869 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.219119 kubelet[3668]: E1216 12:46:06.219110 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.219214 kubelet[3668]: W1216 12:46:06.219189 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.220750 kubelet[3668]: E1216 12:46:06.220731 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.221115 kubelet[3668]: E1216 12:46:06.221018 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.221115 kubelet[3668]: W1216 12:46:06.221030 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.221115 kubelet[3668]: E1216 12:46:06.221041 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:06.221278 kubelet[3668]: E1216 12:46:06.221267 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.221343 kubelet[3668]: W1216 12:46:06.221332 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.221475 kubelet[3668]: E1216 12:46:06.221390 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.221570 kubelet[3668]: E1216 12:46:06.221561 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.221707 kubelet[3668]: W1216 12:46:06.221618 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.221707 kubelet[3668]: E1216 12:46:06.221632 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.221828 kubelet[3668]: E1216 12:46:06.221817 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.222139 kubelet[3668]: W1216 12:46:06.221873 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.222139 kubelet[3668]: E1216 12:46:06.221889 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.228348 containerd[2138]: time="2025-12-16T12:46:06.228277231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rnrdj,Uid:97d83477-c41f-4aa0-a3d2-d3dd29a4a225,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:06.235794 kubelet[3668]: E1216 12:46:06.235773 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.235794 kubelet[3668]: W1216 12:46:06.235788 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.235794 kubelet[3668]: E1216 12:46:06.235800 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:06.238653 kubelet[3668]: I1216 12:46:06.235820 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsplf\" (UniqueName: \"kubernetes.io/projected/5bbc1d74-de1f-40b8-bd99-2346a3e2bafe-kube-api-access-lsplf\") pod \"csi-node-driver-xwwbh\" (UID: \"5bbc1d74-de1f-40b8-bd99-2346a3e2bafe\") " pod="calico-system/csi-node-driver-xwwbh" Dec 16 12:46:06.238653 kubelet[3668]: E1216 12:46:06.235968 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.238653 kubelet[3668]: W1216 12:46:06.235975 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.238653 kubelet[3668]: E1216 12:46:06.235984 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.238653 kubelet[3668]: I1216 12:46:06.235994 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5bbc1d74-de1f-40b8-bd99-2346a3e2bafe-registration-dir\") pod \"csi-node-driver-xwwbh\" (UID: \"5bbc1d74-de1f-40b8-bd99-2346a3e2bafe\") " pod="calico-system/csi-node-driver-xwwbh" Dec 16 12:46:06.238653 kubelet[3668]: E1216 12:46:06.236254 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.238653 kubelet[3668]: W1216 12:46:06.236361 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.238653 kubelet[3668]: E1216 12:46:06.236374 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.238653 kubelet[3668]: E1216 12:46:06.236668 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.238787 kubelet[3668]: W1216 12:46:06.236678 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.238787 kubelet[3668]: E1216 12:46:06.236689 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.238787 kubelet[3668]: E1216 12:46:06.237076 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.238787 kubelet[3668]: W1216 12:46:06.237088 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.238787 kubelet[3668]: E1216 12:46:06.237099 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:06.238787 kubelet[3668]: I1216 12:46:06.237120 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5bbc1d74-de1f-40b8-bd99-2346a3e2bafe-kubelet-dir\") pod \"csi-node-driver-xwwbh\" (UID: \"5bbc1d74-de1f-40b8-bd99-2346a3e2bafe\") " pod="calico-system/csi-node-driver-xwwbh" Dec 16 12:46:06.238787 kubelet[3668]: E1216 12:46:06.237249 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.238787 kubelet[3668]: W1216 12:46:06.237276 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.238787 kubelet[3668]: E1216 12:46:06.237285 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.238914 kubelet[3668]: E1216 12:46:06.237385 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.238914 kubelet[3668]: W1216 12:46:06.237390 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.238914 kubelet[3668]: E1216 12:46:06.237397 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.238914 kubelet[3668]: E1216 12:46:06.237538 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.238914 kubelet[3668]: W1216 12:46:06.237544 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.238914 kubelet[3668]: E1216 12:46:06.237551 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.238914 kubelet[3668]: I1216 12:46:06.237567 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5bbc1d74-de1f-40b8-bd99-2346a3e2bafe-socket-dir\") pod \"csi-node-driver-xwwbh\" (UID: \"5bbc1d74-de1f-40b8-bd99-2346a3e2bafe\") " pod="calico-system/csi-node-driver-xwwbh" Dec 16 12:46:06.238914 kubelet[3668]: E1216 12:46:06.238005 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.238914 kubelet[3668]: W1216 12:46:06.238017 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.239038 kubelet[3668]: E1216 12:46:06.238032 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:06.239038 kubelet[3668]: E1216 12:46:06.238270 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.239038 kubelet[3668]: W1216 12:46:06.238280 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.239038 kubelet[3668]: E1216 12:46:06.238376 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.239735 kubelet[3668]: E1216 12:46:06.239688 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.239735 kubelet[3668]: W1216 12:46:06.239702 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.240013 kubelet[3668]: E1216 12:46:06.239889 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.240013 kubelet[3668]: I1216 12:46:06.239918 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5bbc1d74-de1f-40b8-bd99-2346a3e2bafe-varrun\") pod \"csi-node-driver-xwwbh\" (UID: \"5bbc1d74-de1f-40b8-bd99-2346a3e2bafe\") " pod="calico-system/csi-node-driver-xwwbh" Dec 16 12:46:06.240286 kubelet[3668]: E1216 12:46:06.240224 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.240286 kubelet[3668]: W1216 12:46:06.240250 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.240286 kubelet[3668]: E1216 12:46:06.240262 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.240765 kubelet[3668]: E1216 12:46:06.240747 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.240765 kubelet[3668]: W1216 12:46:06.240762 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.240840 kubelet[3668]: E1216 12:46:06.240772 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:06.241023 kubelet[3668]: E1216 12:46:06.241005 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.241023 kubelet[3668]: W1216 12:46:06.241020 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.241078 kubelet[3668]: E1216 12:46:06.241030 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.242185 kubelet[3668]: E1216 12:46:06.241494 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.242185 kubelet[3668]: W1216 12:46:06.241506 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.242185 kubelet[3668]: E1216 12:46:06.241518 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.282547 containerd[2138]: time="2025-12-16T12:46:06.282520022Z" level=info msg="connecting to shim 48839d914cea11ce81c8b130c1b347aaf158f0b8ed0842f34c80e8eaa2b39c2d" address="unix:///run/containerd/s/50b57ddfde5f821b70a196a9e3fa83449ed82c573f826b44923f785af469a03a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:06.304354 systemd[1]: Started cri-containerd-48839d914cea11ce81c8b130c1b347aaf158f0b8ed0842f34c80e8eaa2b39c2d.scope - libcontainer container 48839d914cea11ce81c8b130c1b347aaf158f0b8ed0842f34c80e8eaa2b39c2d. 
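Note: each cri-containerd scope start above and below is followed by audit "BPF prog-id=... op=LOAD/UNLOAD" records plus SYSCALL records emitted while runc sets up the container (commonly the cgroup device-filter program). arch=c00000b7 is AUDIT_ARCH_AARCH64, and the syscall numbers seen in these records follow the asm-generic table used on AArch64: 280 is bpf(2), 57 is close(2), 211 is sendmsg(2). A small Python sketch that annotates such a record; the mapping covers only the three syscall numbers appearing in this log and anything else is reported as unknown:

```python
# Annotate the syscall= field in an audit SYSCALL record (AArch64, arch=c00000b7).
AARCH64_SYSCALLS = {57: "close", 211: "sendmsg", 280: "bpf"}  # only the numbers seen in this log

def annotate(record: str) -> str:
    fields = dict(f.split("=", 1) for f in record.split() if "=" in f)
    nr = int(fields.get("syscall", -1))
    return f"syscall {nr} -> {AARCH64_SYSCALLS.get(nr, 'unknown')}"

print(annotate("arch=c00000b7 syscall=280 success=yes exit=21 a0=5"))
# -> syscall 280 -> bpf
```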
Dec 16 12:46:06.312000 audit: BPF prog-id=180 op=LOAD Dec 16 12:46:06.312000 audit: BPF prog-id=181 op=LOAD Dec 16 12:46:06.312000 audit[4190]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4178 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:06.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438383339643931346365613131636538316338623133306331623334 Dec 16 12:46:06.312000 audit: BPF prog-id=181 op=UNLOAD Dec 16 12:46:06.312000 audit[4190]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4178 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:06.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438383339643931346365613131636538316338623133306331623334 Dec 16 12:46:06.313000 audit: BPF prog-id=182 op=LOAD Dec 16 12:46:06.313000 audit[4190]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4178 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:06.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438383339643931346365613131636538316338623133306331623334 Dec 16 12:46:06.313000 audit: BPF prog-id=183 op=LOAD Dec 16 12:46:06.313000 audit[4190]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4178 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:06.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438383339643931346365613131636538316338623133306331623334 Dec 16 12:46:06.313000 audit: BPF prog-id=183 op=UNLOAD Dec 16 12:46:06.313000 audit[4190]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4178 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:06.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438383339643931346365613131636538316338623133306331623334 Dec 16 12:46:06.313000 audit: BPF prog-id=182 op=UNLOAD Dec 16 12:46:06.313000 audit[4190]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4178 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:06.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438383339643931346365613131636538316338623133306331623334 Dec 16 12:46:06.313000 audit: BPF prog-id=184 op=LOAD Dec 16 12:46:06.313000 audit[4190]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4178 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:06.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438383339643931346365613131636538316338623133306331623334 Dec 16 12:46:06.330478 containerd[2138]: time="2025-12-16T12:46:06.330448695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rnrdj,Uid:97d83477-c41f-4aa0-a3d2-d3dd29a4a225,Namespace:calico-system,Attempt:0,} returns sandbox id \"48839d914cea11ce81c8b130c1b347aaf158f0b8ed0842f34c80e8eaa2b39c2d\"" Dec 16 12:46:06.341074 kubelet[3668]: E1216 12:46:06.341054 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.341074 kubelet[3668]: W1216 12:46:06.341069 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.341184 kubelet[3668]: E1216 12:46:06.341088 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.341275 kubelet[3668]: E1216 12:46:06.341260 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.341275 kubelet[3668]: W1216 12:46:06.341270 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.341329 kubelet[3668]: E1216 12:46:06.341279 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:06.341454 kubelet[3668]: E1216 12:46:06.341437 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.341454 kubelet[3668]: W1216 12:46:06.341448 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.341505 kubelet[3668]: E1216 12:46:06.341456 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.341596 kubelet[3668]: E1216 12:46:06.341586 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.341596 kubelet[3668]: W1216 12:46:06.341595 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.341635 kubelet[3668]: E1216 12:46:06.341601 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.341774 kubelet[3668]: E1216 12:46:06.341760 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.341774 kubelet[3668]: W1216 12:46:06.341770 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.341819 kubelet[3668]: E1216 12:46:06.341777 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.341944 kubelet[3668]: E1216 12:46:06.341928 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.341977 kubelet[3668]: W1216 12:46:06.341946 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.341977 kubelet[3668]: E1216 12:46:06.341954 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.342083 kubelet[3668]: E1216 12:46:06.342071 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.342083 kubelet[3668]: W1216 12:46:06.342079 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.342142 kubelet[3668]: E1216 12:46:06.342085 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:06.342227 kubelet[3668]: E1216 12:46:06.342195 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.342227 kubelet[3668]: W1216 12:46:06.342223 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.342410 kubelet[3668]: E1216 12:46:06.342229 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.342498 kubelet[3668]: E1216 12:46:06.342485 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.342543 kubelet[3668]: W1216 12:46:06.342532 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.342593 kubelet[3668]: E1216 12:46:06.342583 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.342869 kubelet[3668]: E1216 12:46:06.342768 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.342869 kubelet[3668]: W1216 12:46:06.342777 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.342869 kubelet[3668]: E1216 12:46:06.342786 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.343006 kubelet[3668]: E1216 12:46:06.342995 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.343141 kubelet[3668]: W1216 12:46:06.343039 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.343141 kubelet[3668]: E1216 12:46:06.343053 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.343285 kubelet[3668]: E1216 12:46:06.343272 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.343438 kubelet[3668]: W1216 12:46:06.343327 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.343438 kubelet[3668]: E1216 12:46:06.343342 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:06.343554 kubelet[3668]: E1216 12:46:06.343543 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.343613 kubelet[3668]: W1216 12:46:06.343603 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.343780 kubelet[3668]: E1216 12:46:06.343659 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.343871 kubelet[3668]: E1216 12:46:06.343860 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.343923 kubelet[3668]: W1216 12:46:06.343913 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.343965 kubelet[3668]: E1216 12:46:06.343958 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.344262 kubelet[3668]: E1216 12:46:06.344160 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.344262 kubelet[3668]: W1216 12:46:06.344172 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.344262 kubelet[3668]: E1216 12:46:06.344180 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.344464 kubelet[3668]: E1216 12:46:06.344454 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.344601 kubelet[3668]: W1216 12:46:06.344507 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.344601 kubelet[3668]: E1216 12:46:06.344521 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.344713 kubelet[3668]: E1216 12:46:06.344703 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.344779 kubelet[3668]: W1216 12:46:06.344769 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.344955 kubelet[3668]: E1216 12:46:06.344857 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:06.345053 kubelet[3668]: E1216 12:46:06.345043 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.345102 kubelet[3668]: W1216 12:46:06.345092 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.345140 kubelet[3668]: E1216 12:46:06.345131 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.345395 kubelet[3668]: E1216 12:46:06.345382 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.345476 kubelet[3668]: W1216 12:46:06.345464 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.345522 kubelet[3668]: E1216 12:46:06.345511 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.345750 kubelet[3668]: E1216 12:46:06.345698 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.345750 kubelet[3668]: W1216 12:46:06.345709 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.345750 kubelet[3668]: E1216 12:46:06.345717 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.345869 kubelet[3668]: E1216 12:46:06.345846 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.345869 kubelet[3668]: W1216 12:46:06.345862 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.345912 kubelet[3668]: E1216 12:46:06.345872 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.346058 kubelet[3668]: E1216 12:46:06.346045 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.346058 kubelet[3668]: W1216 12:46:06.346055 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.346103 kubelet[3668]: E1216 12:46:06.346063 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:06.346239 kubelet[3668]: E1216 12:46:06.346188 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.346239 kubelet[3668]: W1216 12:46:06.346197 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.346299 kubelet[3668]: E1216 12:46:06.346243 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.346387 kubelet[3668]: E1216 12:46:06.346375 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.346387 kubelet[3668]: W1216 12:46:06.346384 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.346430 kubelet[3668]: E1216 12:46:06.346390 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.347237 kubelet[3668]: E1216 12:46:06.347179 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.347386 kubelet[3668]: W1216 12:46:06.347195 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.347417 kubelet[3668]: E1216 12:46:06.347386 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:06.352820 kubelet[3668]: E1216 12:46:06.352799 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:06.352820 kubelet[3668]: W1216 12:46:06.352814 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:06.352820 kubelet[3668]: E1216 12:46:06.352825 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:06.657000 audit[4244]: NETFILTER_CFG table=filter:118 family=2 entries=22 op=nft_register_rule pid=4244 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:06.657000 audit[4244]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffecd2ff80 a2=0 a3=1 items=0 ppid=3822 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:06.657000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:06.661000 audit[4244]: NETFILTER_CFG table=nat:119 family=2 entries=12 op=nft_register_rule pid=4244 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:06.661000 audit[4244]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffecd2ff80 a2=0 a3=1 items=0 ppid=3822 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:06.661000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:07.476909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2138413347.mount: Deactivated successfully. Dec 16 12:46:07.880184 containerd[2138]: time="2025-12-16T12:46:07.879672689Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:07.884975 containerd[2138]: time="2025-12-16T12:46:07.884929376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33086690" Dec 16 12:46:07.888241 containerd[2138]: time="2025-12-16T12:46:07.888215078Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:07.892146 containerd[2138]: time="2025-12-16T12:46:07.892107310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:07.892606 containerd[2138]: time="2025-12-16T12:46:07.892573131Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.764542517s" Dec 16 12:46:07.892683 containerd[2138]: time="2025-12-16T12:46:07.892670166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 16 12:46:07.895225 containerd[2138]: time="2025-12-16T12:46:07.895192087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 12:46:07.914910 containerd[2138]: time="2025-12-16T12:46:07.914272883Z" level=info msg="CreateContainer within sandbox \"2edf82f7661d7e714850834e6024f5352d270bdc0ad5bb3f3c410de8806467a0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 
12:46:07.941547 containerd[2138]: time="2025-12-16T12:46:07.941520450Z" level=info msg="Container 564038c82b7f48f2fbdeb9205575291af23c719abbe78c3582341b2827985fa5: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:46:07.944469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3161772059.mount: Deactivated successfully. Dec 16 12:46:07.962904 containerd[2138]: time="2025-12-16T12:46:07.962843191Z" level=info msg="CreateContainer within sandbox \"2edf82f7661d7e714850834e6024f5352d270bdc0ad5bb3f3c410de8806467a0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"564038c82b7f48f2fbdeb9205575291af23c719abbe78c3582341b2827985fa5\"" Dec 16 12:46:07.963463 containerd[2138]: time="2025-12-16T12:46:07.963431072Z" level=info msg="StartContainer for \"564038c82b7f48f2fbdeb9205575291af23c719abbe78c3582341b2827985fa5\"" Dec 16 12:46:07.964211 containerd[2138]: time="2025-12-16T12:46:07.964169789Z" level=info msg="connecting to shim 564038c82b7f48f2fbdeb9205575291af23c719abbe78c3582341b2827985fa5" address="unix:///run/containerd/s/a0ba8f74d61ef9e17e9a40bea0963be44ef47f186ddf1003d7d8fd0e1573ff0b" protocol=ttrpc version=3 Dec 16 12:46:07.983346 systemd[1]: Started cri-containerd-564038c82b7f48f2fbdeb9205575291af23c719abbe78c3582341b2827985fa5.scope - libcontainer container 564038c82b7f48f2fbdeb9205575291af23c719abbe78c3582341b2827985fa5. Dec 16 12:46:07.991000 audit: BPF prog-id=185 op=LOAD Dec 16 12:46:07.995769 kernel: kauditd_printk_skb: 58 callbacks suppressed Dec 16 12:46:07.995815 kernel: audit: type=1334 audit(1765889167.991:574): prog-id=185 op=LOAD Dec 16 12:46:07.999000 audit: BPF prog-id=186 op=LOAD Dec 16 12:46:08.006492 kernel: audit: type=1334 audit(1765889167.999:575): prog-id=186 op=LOAD Dec 16 12:46:08.006561 kernel: audit: type=1300 audit(1765889167.999:575): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe180 a2=98 a3=0 items=0 ppid=4088 pid=4255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:07.999000 audit[4255]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe180 a2=98 a3=0 items=0 ppid=4088 pid=4255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:08.006656 kubelet[3668]: E1216 12:46:08.005288 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xwwbh" podUID="5bbc1d74-de1f-40b8-bd99-2346a3e2bafe" Dec 16 12:46:07.999000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536343033386338326237663438663266626465623932303535373532 Dec 16 12:46:08.039452 kernel: audit: type=1327 audit(1765889167.999:575): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536343033386338326237663438663266626465623932303535373532 Dec 16 12:46:07.999000 audit: BPF prog-id=186 op=UNLOAD Dec 16 12:46:08.046584 kernel: 
audit: type=1334 audit(1765889167.999:576): prog-id=186 op=UNLOAD Dec 16 12:46:07.999000 audit[4255]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4088 pid=4255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:08.067297 kernel: audit: type=1300 audit(1765889167.999:576): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4088 pid=4255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:07.999000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536343033386338326237663438663266626465623932303535373532 Dec 16 12:46:08.084469 kernel: audit: type=1327 audit(1765889167.999:576): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536343033386338326237663438663266626465623932303535373532 Dec 16 12:46:07.999000 audit: BPF prog-id=187 op=LOAD Dec 16 12:46:08.089484 kernel: audit: type=1334 audit(1765889167.999:577): prog-id=187 op=LOAD Dec 16 12:46:07.999000 audit[4255]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=4088 pid=4255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:08.105704 kernel: audit: type=1300 audit(1765889167.999:577): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=4088 pid=4255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:07.999000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536343033386338326237663438663266626465623932303535373532 Dec 16 12:46:08.122304 kernel: audit: type=1327 audit(1765889167.999:577): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536343033386338326237663438663266626465623932303535373532 Dec 16 12:46:08.020000 audit: BPF prog-id=188 op=LOAD Dec 16 12:46:08.020000 audit[4255]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=4088 pid=4255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:08.020000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536343033386338326237663438663266626465623932303535373532 Dec 16 12:46:08.022000 
audit: BPF prog-id=188 op=UNLOAD Dec 16 12:46:08.022000 audit[4255]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4088 pid=4255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:08.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536343033386338326237663438663266626465623932303535373532 Dec 16 12:46:08.022000 audit: BPF prog-id=187 op=UNLOAD Dec 16 12:46:08.022000 audit[4255]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4088 pid=4255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:08.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536343033386338326237663438663266626465623932303535373532 Dec 16 12:46:08.022000 audit: BPF prog-id=189 op=LOAD Dec 16 12:46:08.022000 audit[4255]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=4088 pid=4255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:08.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536343033386338326237663438663266626465623932303535373532 Dec 16 12:46:08.148506 containerd[2138]: time="2025-12-16T12:46:08.148414476Z" level=info msg="StartContainer for \"564038c82b7f48f2fbdeb9205575291af23c719abbe78c3582341b2827985fa5\" returns successfully" Dec 16 12:46:09.038227 containerd[2138]: time="2025-12-16T12:46:09.038083740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:09.041590 containerd[2138]: time="2025-12-16T12:46:09.041543755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:09.045430 containerd[2138]: time="2025-12-16T12:46:09.045390957Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:09.053321 containerd[2138]: time="2025-12-16T12:46:09.052906485Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:09.053321 containerd[2138]: time="2025-12-16T12:46:09.053217974Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.157972149s" Dec 16 12:46:09.053321 containerd[2138]: time="2025-12-16T12:46:09.053243319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 16 12:46:09.061021 containerd[2138]: time="2025-12-16T12:46:09.060978733Z" level=info msg="CreateContainer within sandbox \"48839d914cea11ce81c8b130c1b347aaf158f0b8ed0842f34c80e8eaa2b39c2d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 12:46:09.090494 containerd[2138]: time="2025-12-16T12:46:09.089314665Z" level=info msg="Container 702f0d85cb0dba8bd6e1af198b616325b67abc274efef62c62a251486a20d9b2: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:46:09.111733 containerd[2138]: time="2025-12-16T12:46:09.111649161Z" level=info msg="CreateContainer within sandbox \"48839d914cea11ce81c8b130c1b347aaf158f0b8ed0842f34c80e8eaa2b39c2d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"702f0d85cb0dba8bd6e1af198b616325b67abc274efef62c62a251486a20d9b2\"" Dec 16 12:46:09.112606 containerd[2138]: time="2025-12-16T12:46:09.112355358Z" level=info msg="StartContainer for \"702f0d85cb0dba8bd6e1af198b616325b67abc274efef62c62a251486a20d9b2\"" Dec 16 12:46:09.113482 containerd[2138]: time="2025-12-16T12:46:09.113455751Z" level=info msg="connecting to shim 702f0d85cb0dba8bd6e1af198b616325b67abc274efef62c62a251486a20d9b2" address="unix:///run/containerd/s/50b57ddfde5f821b70a196a9e3fa83449ed82c573f826b44923f785af469a03a" protocol=ttrpc version=3 Dec 16 12:46:09.128343 systemd[1]: Started cri-containerd-702f0d85cb0dba8bd6e1af198b616325b67abc274efef62c62a251486a20d9b2.scope - libcontainer container 702f0d85cb0dba8bd6e1af198b616325b67abc274efef62c62a251486a20d9b2. Dec 16 12:46:09.147437 kubelet[3668]: E1216 12:46:09.147368 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:09.147437 kubelet[3668]: W1216 12:46:09.147386 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:09.147437 kubelet[3668]: E1216 12:46:09.147403 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:09.147935 kubelet[3668]: E1216 12:46:09.147852 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:09.147935 kubelet[3668]: W1216 12:46:09.147864 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:09.147935 kubelet[3668]: E1216 12:46:09.147898 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:09.148162 kubelet[3668]: E1216 12:46:09.148151 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:09.148258 kubelet[3668]: W1216 12:46:09.148246 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:09.148310 kubelet[3668]: E1216 12:46:09.148299 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:09.148554 kubelet[3668]: E1216 12:46:09.148506 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:09.148554 kubelet[3668]: W1216 12:46:09.148516 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:09.148554 kubelet[3668]: E1216 12:46:09.148524 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:09.150243 kubelet[3668]: I1216 12:46:09.149637 3668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-799fc5847-pcfwk" podStartSLOduration=2.383388053 podStartE2EDuration="4.149628532s" podCreationTimestamp="2025-12-16 12:46:05 +0000 UTC" firstStartedPulling="2025-12-16 12:46:06.127211798 +0000 UTC m=+24.200347831" lastFinishedPulling="2025-12-16 12:46:07.893452277 +0000 UTC m=+25.966588310" observedRunningTime="2025-12-16 12:46:09.149461343 +0000 UTC m=+27.222597376" watchObservedRunningTime="2025-12-16 12:46:09.149628532 +0000 UTC m=+27.222764565" Dec 16 12:46:09.150699 kubelet[3668]: E1216 12:46:09.150275 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:09.150699 kubelet[3668]: W1216 12:46:09.150549 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:09.150699 kubelet[3668]: E1216 12:46:09.150568 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:09.151335 kubelet[3668]: E1216 12:46:09.151280 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:09.151672 kubelet[3668]: W1216 12:46:09.151599 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:09.151672 kubelet[3668]: E1216 12:46:09.151626 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:09.152590 kubelet[3668]: E1216 12:46:09.152261 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:09.152590 kubelet[3668]: W1216 12:46:09.152274 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:09.152590 kubelet[3668]: E1216 12:46:09.152285 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:09.153065 kubelet[3668]: E1216 12:46:09.153018 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:09.153065 kubelet[3668]: W1216 12:46:09.153031 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:09.153065 kubelet[3668]: E1216 12:46:09.153044 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:09.153741 kubelet[3668]: E1216 12:46:09.153625 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:09.153741 kubelet[3668]: W1216 12:46:09.153647 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:09.153741 kubelet[3668]: E1216 12:46:09.153658 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:09.154560 kubelet[3668]: E1216 12:46:09.154475 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:09.154796 kubelet[3668]: W1216 12:46:09.154757 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:09.154999 kubelet[3668]: E1216 12:46:09.154905 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:09.155881 kubelet[3668]: E1216 12:46:09.155866 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:09.156230 kubelet[3668]: W1216 12:46:09.156060 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:09.156387 kubelet[3668]: E1216 12:46:09.156118 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:09.157322 kubelet[3668]: E1216 12:46:09.157261 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:09.157322 kubelet[3668]: W1216 12:46:09.157276 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:09.157322 kubelet[3668]: E1216 12:46:09.157286 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:09.158616 kubelet[3668]: E1216 12:46:09.158548 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:09.158616 kubelet[3668]: W1216 12:46:09.158561 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:09.158616 kubelet[3668]: E1216 12:46:09.158571 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:09.158981 kubelet[3668]: E1216 12:46:09.158939 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:09.159124 kubelet[3668]: W1216 12:46:09.159022 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:09.159124 kubelet[3668]: E1216 12:46:09.159036 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:09.159478 kubelet[3668]: E1216 12:46:09.159456 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:09.159610 kubelet[3668]: W1216 12:46:09.159469 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:09.159610 kubelet[3668]: E1216 12:46:09.159568 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:09.173000 audit: BPF prog-id=190 op=LOAD Dec 16 12:46:09.173000 audit[4295]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4178 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:09.173000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730326630643835636230646261386264366531616631393862363136 Dec 16 12:46:09.173000 audit: BPF prog-id=191 op=LOAD Dec 16 12:46:09.173000 audit[4295]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4178 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:09.173000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730326630643835636230646261386264366531616631393862363136 Dec 16 12:46:09.173000 audit: BPF prog-id=191 op=UNLOAD Dec 16 12:46:09.173000 audit[4295]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4178 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:09.173000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730326630643835636230646261386264366531616631393862363136 Dec 16 12:46:09.173000 audit: BPF prog-id=190 op=UNLOAD Dec 16 12:46:09.173000 audit[4295]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4178 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:09.173000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730326630643835636230646261386264366531616631393862363136 Dec 16 12:46:09.173000 audit: BPF prog-id=192 op=LOAD Dec 16 12:46:09.173000 audit[4295]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4178 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:09.173000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730326630643835636230646261386264366531616631393862363136 Dec 16 12:46:09.176642 kubelet[3668]: E1216 12:46:09.176601 3668 driver-call.go:262] Failed to unmarshal output for 
command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:09.176743 kubelet[3668]: W1216 12:46:09.176652 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:09.176743 kubelet[3668]: E1216 12:46:09.176666 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:09.176937 kubelet[3668]: E1216 12:46:09.176918 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:09.176937 kubelet[3668]: W1216 12:46:09.176931 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:09.176937 kubelet[3668]: E1216 12:46:09.176940 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:09.177250 kubelet[3668]: E1216 12:46:09.177232 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:09.177250 kubelet[3668]: W1216 12:46:09.177246 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:09.177329 kubelet[3668]: E1216 12:46:09.177283 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:09.177667 kubelet[3668]: E1216 12:46:09.177620 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:09.177667 kubelet[3668]: W1216 12:46:09.177635 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:09.177667 kubelet[3668]: E1216 12:46:09.177644 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:09.178185 kubelet[3668]: E1216 12:46:09.177894 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:09.178185 kubelet[3668]: W1216 12:46:09.177905 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:09.178185 kubelet[3668]: E1216 12:46:09.177915 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:09.178185 kubelet[3668]: E1216 12:46:09.178028 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:09.178185 kubelet[3668]: W1216 12:46:09.178034 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:09.178185 kubelet[3668]: E1216 12:46:09.178040 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:09.178185 kubelet[3668]: E1216 12:46:09.178169 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:09.178185 kubelet[3668]: W1216 12:46:09.178174 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:09.178185 kubelet[3668]: E1216 12:46:09.178189 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:09.178535 kubelet[3668]: E1216 12:46:09.178484 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:09.178535 kubelet[3668]: W1216 12:46:09.178497 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:09.178535 kubelet[3668]: E1216 12:46:09.178504 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:09.178790 kubelet[3668]: E1216 12:46:09.178772 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:09.178790 kubelet[3668]: W1216 12:46:09.178786 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:09.178916 kubelet[3668]: E1216 12:46:09.178796 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:09.179141 kubelet[3668]: E1216 12:46:09.179122 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:09.179141 kubelet[3668]: W1216 12:46:09.179138 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:09.179230 kubelet[3668]: E1216 12:46:09.179149 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:09.179627 kubelet[3668]: E1216 12:46:09.179609 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:09.179842 kubelet[3668]: W1216 12:46:09.179772 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:09.179842 kubelet[3668]: E1216 12:46:09.179791 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:09.181071 kubelet[3668]: E1216 12:46:09.181052 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:09.181071 kubelet[3668]: W1216 12:46:09.181067 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:09.181283 kubelet[3668]: E1216 12:46:09.181079 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:09.181283 kubelet[3668]: E1216 12:46:09.181264 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:09.181283 kubelet[3668]: W1216 12:46:09.181271 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:09.181352 kubelet[3668]: E1216 12:46:09.181281 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:09.181640 kubelet[3668]: E1216 12:46:09.181575 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:09.181640 kubelet[3668]: W1216 12:46:09.181600 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:09.181640 kubelet[3668]: E1216 12:46:09.181612 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:09.182114 kubelet[3668]: E1216 12:46:09.182019 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:09.182114 kubelet[3668]: W1216 12:46:09.182032 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:09.182114 kubelet[3668]: E1216 12:46:09.182043 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:09.182376 kubelet[3668]: E1216 12:46:09.182301 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:09.182376 kubelet[3668]: W1216 12:46:09.182310 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:09.182376 kubelet[3668]: E1216 12:46:09.182319 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:09.184265 kubelet[3668]: E1216 12:46:09.184242 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:09.184265 kubelet[3668]: W1216 12:46:09.184253 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:09.184265 kubelet[3668]: E1216 12:46:09.184263 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:09.185011 kubelet[3668]: E1216 12:46:09.184942 3668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:09.185011 kubelet[3668]: W1216 12:46:09.184955 3668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:09.185011 kubelet[3668]: E1216 12:46:09.184966 3668 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:09.203860 systemd[1]: cri-containerd-702f0d85cb0dba8bd6e1af198b616325b67abc274efef62c62a251486a20d9b2.scope: Deactivated successfully. Dec 16 12:46:09.207000 audit: BPF prog-id=192 op=UNLOAD Dec 16 12:46:09.213954 containerd[2138]: time="2025-12-16T12:46:09.213850467Z" level=info msg="received container exit event container_id:\"702f0d85cb0dba8bd6e1af198b616325b67abc274efef62c62a251486a20d9b2\" id:\"702f0d85cb0dba8bd6e1af198b616325b67abc274efef62c62a251486a20d9b2\" pid:4307 exited_at:{seconds:1765889169 nanos:207426540}" Dec 16 12:46:09.220395 containerd[2138]: time="2025-12-16T12:46:09.220372854Z" level=info msg="StartContainer for \"702f0d85cb0dba8bd6e1af198b616325b67abc274efef62c62a251486a20d9b2\" returns successfully" Dec 16 12:46:09.230890 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-702f0d85cb0dba8bd6e1af198b616325b67abc274efef62c62a251486a20d9b2-rootfs.mount: Deactivated successfully. 
Dec 16 12:46:10.003233 kubelet[3668]: E1216 12:46:10.003114 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xwwbh" podUID="5bbc1d74-de1f-40b8-bd99-2346a3e2bafe" Dec 16 12:46:10.140910 kubelet[3668]: I1216 12:46:10.140258 3668 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:46:11.145009 containerd[2138]: time="2025-12-16T12:46:11.144652988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 12:46:12.003208 kubelet[3668]: E1216 12:46:12.002640 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xwwbh" podUID="5bbc1d74-de1f-40b8-bd99-2346a3e2bafe" Dec 16 12:46:13.394477 containerd[2138]: time="2025-12-16T12:46:13.394429636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:13.397897 containerd[2138]: time="2025-12-16T12:46:13.397760271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Dec 16 12:46:13.401237 containerd[2138]: time="2025-12-16T12:46:13.401212694Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:13.405869 containerd[2138]: time="2025-12-16T12:46:13.405825991Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:13.406521 containerd[2138]: time="2025-12-16T12:46:13.406175458Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.261485172s" Dec 16 12:46:13.406521 containerd[2138]: time="2025-12-16T12:46:13.406215891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 16 12:46:13.415265 containerd[2138]: time="2025-12-16T12:46:13.414481385Z" level=info msg="CreateContainer within sandbox \"48839d914cea11ce81c8b130c1b347aaf158f0b8ed0842f34c80e8eaa2b39c2d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 12:46:13.444198 containerd[2138]: time="2025-12-16T12:46:13.444170229Z" level=info msg="Container faa6217e01531f8dc3e48eba40c55f58839ea8cbd6676e343285ecb977f4280c: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:46:13.478901 containerd[2138]: time="2025-12-16T12:46:13.478865669Z" level=info msg="CreateContainer within sandbox \"48839d914cea11ce81c8b130c1b347aaf158f0b8ed0842f34c80e8eaa2b39c2d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"faa6217e01531f8dc3e48eba40c55f58839ea8cbd6676e343285ecb977f4280c\"" Dec 16 12:46:13.479557 containerd[2138]: 
time="2025-12-16T12:46:13.479426526Z" level=info msg="StartContainer for \"faa6217e01531f8dc3e48eba40c55f58839ea8cbd6676e343285ecb977f4280c\"" Dec 16 12:46:13.480911 containerd[2138]: time="2025-12-16T12:46:13.480888705Z" level=info msg="connecting to shim faa6217e01531f8dc3e48eba40c55f58839ea8cbd6676e343285ecb977f4280c" address="unix:///run/containerd/s/50b57ddfde5f821b70a196a9e3fa83449ed82c573f826b44923f785af469a03a" protocol=ttrpc version=3 Dec 16 12:46:13.503348 systemd[1]: Started cri-containerd-faa6217e01531f8dc3e48eba40c55f58839ea8cbd6676e343285ecb977f4280c.scope - libcontainer container faa6217e01531f8dc3e48eba40c55f58839ea8cbd6676e343285ecb977f4280c. Dec 16 12:46:13.560222 kernel: kauditd_printk_skb: 28 callbacks suppressed Dec 16 12:46:13.560326 kernel: audit: type=1334 audit(1765889173.555:588): prog-id=193 op=LOAD Dec 16 12:46:13.555000 audit: BPF prog-id=193 op=LOAD Dec 16 12:46:13.555000 audit[4395]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=4178 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:13.579772 kernel: audit: type=1300 audit(1765889173.555:588): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=4178 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:13.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661613632313765303135333166386463336534386562613430633535 Dec 16 12:46:13.596005 kernel: audit: type=1327 audit(1765889173.555:588): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661613632313765303135333166386463336534386562613430633535 Dec 16 12:46:13.555000 audit: BPF prog-id=194 op=LOAD Dec 16 12:46:13.600503 kernel: audit: type=1334 audit(1765889173.555:589): prog-id=194 op=LOAD Dec 16 12:46:13.555000 audit[4395]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=4178 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:13.617018 kernel: audit: type=1300 audit(1765889173.555:589): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=4178 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:13.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661613632313765303135333166386463336534386562613430633535 Dec 16 12:46:13.555000 audit: BPF prog-id=194 op=UNLOAD Dec 16 12:46:13.639077 kernel: audit: type=1327 audit(1765889173.555:589): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661613632313765303135333166386463336534386562613430633535 Dec 16 12:46:13.639135 kernel: audit: type=1334 audit(1765889173.555:590): prog-id=194 op=UNLOAD Dec 16 12:46:13.555000 audit[4395]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4178 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:13.655231 kernel: audit: type=1300 audit(1765889173.555:590): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4178 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:13.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661613632313765303135333166386463336534386562613430633535 Dec 16 12:46:13.672912 kernel: audit: type=1327 audit(1765889173.555:590): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661613632313765303135333166386463336534386562613430633535 Dec 16 12:46:13.555000 audit: BPF prog-id=193 op=UNLOAD Dec 16 12:46:13.677878 kernel: audit: type=1334 audit(1765889173.555:591): prog-id=193 op=UNLOAD Dec 16 12:46:13.555000 audit[4395]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4178 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:13.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661613632313765303135333166386463336534386562613430633535 Dec 16 12:46:13.555000 audit: BPF prog-id=195 op=LOAD Dec 16 12:46:13.555000 audit[4395]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=4178 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:13.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661613632313765303135333166386463336534386562613430633535 Dec 16 12:46:13.687417 containerd[2138]: time="2025-12-16T12:46:13.687334219Z" level=info msg="StartContainer for \"faa6217e01531f8dc3e48eba40c55f58839ea8cbd6676e343285ecb977f4280c\" returns successfully" Dec 16 12:46:14.002776 kubelet[3668]: E1216 12:46:14.002663 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xwwbh" podUID="5bbc1d74-de1f-40b8-bd99-2346a3e2bafe" Dec 16 12:46:14.861165 containerd[2138]: time="2025-12-16T12:46:14.861119998Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:46:14.863869 systemd[1]: cri-containerd-faa6217e01531f8dc3e48eba40c55f58839ea8cbd6676e343285ecb977f4280c.scope: Deactivated successfully. Dec 16 12:46:14.864394 systemd[1]: cri-containerd-faa6217e01531f8dc3e48eba40c55f58839ea8cbd6676e343285ecb977f4280c.scope: Consumed 305ms CPU time, 190.7M memory peak, 165.9M written to disk. Dec 16 12:46:14.866372 containerd[2138]: time="2025-12-16T12:46:14.866332073Z" level=info msg="received container exit event container_id:\"faa6217e01531f8dc3e48eba40c55f58839ea8cbd6676e343285ecb977f4280c\" id:\"faa6217e01531f8dc3e48eba40c55f58839ea8cbd6676e343285ecb977f4280c\" pid:4408 exited_at:{seconds:1765889174 nanos:866073626}" Dec 16 12:46:14.866000 audit: BPF prog-id=195 op=UNLOAD Dec 16 12:46:14.882997 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-faa6217e01531f8dc3e48eba40c55f58839ea8cbd6676e343285ecb977f4280c-rootfs.mount: Deactivated successfully. Dec 16 12:46:14.935505 kubelet[3668]: I1216 12:46:14.935278 3668 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Dec 16 12:46:15.784882 systemd[1]: Created slice kubepods-besteffort-pod5bbc1d74_de1f_40b8_bd99_2346a3e2bafe.slice - libcontainer container kubepods-besteffort-pod5bbc1d74_de1f_40b8_bd99_2346a3e2bafe.slice. Dec 16 12:46:15.791927 systemd[1]: Created slice kubepods-besteffort-podafc8cbd3_cce9_4afd_951f_828ed80d9307.slice - libcontainer container kubepods-besteffort-podafc8cbd3_cce9_4afd_951f_828ed80d9307.slice. Dec 16 12:46:15.797492 containerd[2138]: time="2025-12-16T12:46:15.797461902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xwwbh,Uid:5bbc1d74-de1f-40b8-bd99-2346a3e2bafe,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:15.799018 systemd[1]: Created slice kubepods-burstable-pod1d9ba31c_b8d5_4a2c_a6bb_ca7d237b3d76.slice - libcontainer container kubepods-burstable-pod1d9ba31c_b8d5_4a2c_a6bb_ca7d237b3d76.slice. Dec 16 12:46:15.816782 systemd[1]: Created slice kubepods-besteffort-pod156f2d35_55d2_4c5a_af35_bdad37a7ceaf.slice - libcontainer container kubepods-besteffort-pod156f2d35_55d2_4c5a_af35_bdad37a7ceaf.slice. 
Dec 16 12:46:15.820902 kubelet[3668]: I1216 12:46:15.820847 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afc8cbd3-cce9-4afd-951f-828ed80d9307-tigera-ca-bundle\") pod \"calico-kube-controllers-5449d854d8-xfsgn\" (UID: \"afc8cbd3-cce9-4afd-951f-828ed80d9307\") " pod="calico-system/calico-kube-controllers-5449d854d8-xfsgn" Dec 16 12:46:15.821801 kubelet[3668]: I1216 12:46:15.821319 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d9ba31c-b8d5-4a2c-a6bb-ca7d237b3d76-config-volume\") pod \"coredns-66bc5c9577-gvb6w\" (UID: \"1d9ba31c-b8d5-4a2c-a6bb-ca7d237b3d76\") " pod="kube-system/coredns-66bc5c9577-gvb6w" Dec 16 12:46:15.821801 kubelet[3668]: I1216 12:46:15.821359 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsrv9\" (UniqueName: \"kubernetes.io/projected/1d9ba31c-b8d5-4a2c-a6bb-ca7d237b3d76-kube-api-access-rsrv9\") pod \"coredns-66bc5c9577-gvb6w\" (UID: \"1d9ba31c-b8d5-4a2c-a6bb-ca7d237b3d76\") " pod="kube-system/coredns-66bc5c9577-gvb6w" Dec 16 12:46:15.821801 kubelet[3668]: I1216 12:46:15.821642 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbh4q\" (UniqueName: \"kubernetes.io/projected/afc8cbd3-cce9-4afd-951f-828ed80d9307-kube-api-access-mbh4q\") pod \"calico-kube-controllers-5449d854d8-xfsgn\" (UID: \"afc8cbd3-cce9-4afd-951f-828ed80d9307\") " pod="calico-system/calico-kube-controllers-5449d854d8-xfsgn" Dec 16 12:46:15.822131 kubelet[3668]: I1216 12:46:15.821956 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/156f2d35-55d2-4c5a-af35-bdad37a7ceaf-whisker-backend-key-pair\") pod \"whisker-65558bbd85-k52xv\" (UID: \"156f2d35-55d2-4c5a-af35-bdad37a7ceaf\") " pod="calico-system/whisker-65558bbd85-k52xv" Dec 16 12:46:15.822131 kubelet[3668]: I1216 12:46:15.821992 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/156f2d35-55d2-4c5a-af35-bdad37a7ceaf-whisker-ca-bundle\") pod \"whisker-65558bbd85-k52xv\" (UID: \"156f2d35-55d2-4c5a-af35-bdad37a7ceaf\") " pod="calico-system/whisker-65558bbd85-k52xv" Dec 16 12:46:15.822131 kubelet[3668]: I1216 12:46:15.822007 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2g5v\" (UniqueName: \"kubernetes.io/projected/156f2d35-55d2-4c5a-af35-bdad37a7ceaf-kube-api-access-x2g5v\") pod \"whisker-65558bbd85-k52xv\" (UID: \"156f2d35-55d2-4c5a-af35-bdad37a7ceaf\") " pod="calico-system/whisker-65558bbd85-k52xv" Dec 16 12:46:15.832620 systemd[1]: Created slice kubepods-besteffort-pod33f549f5_c190_4fdd_897c_292335e0de6b.slice - libcontainer container kubepods-besteffort-pod33f549f5_c190_4fdd_897c_292335e0de6b.slice. Dec 16 12:46:15.841510 systemd[1]: Created slice kubepods-besteffort-podc0a1bd8d_1ce1_4f04_99f4_1d697ce95b28.slice - libcontainer container kubepods-besteffort-podc0a1bd8d_1ce1_4f04_99f4_1d697ce95b28.slice. Dec 16 12:46:15.852814 systemd[1]: Created slice kubepods-burstable-podf3748f03_9dbe_4ca7_b265_d450d86ecab7.slice - libcontainer container kubepods-burstable-podf3748f03_9dbe_4ca7_b265_d450d86ecab7.slice. 
Dec 16 12:46:15.859383 systemd[1]: Created slice kubepods-besteffort-pod452df744_6814_429d_baf4_38ff85179742.slice - libcontainer container kubepods-besteffort-pod452df744_6814_429d_baf4_38ff85179742.slice. Dec 16 12:46:15.881693 containerd[2138]: time="2025-12-16T12:46:15.881440466Z" level=error msg="Failed to destroy network for sandbox \"5fd92eb9e3b6e97790fb606e42edbfa3650af420c7ebb1c243f7d6ec8f43c64a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:15.883483 systemd[1]: run-netns-cni\x2dd309dc1f\x2d2e9e\x2d621d\x2d52fb\x2d89ce9df71463.mount: Deactivated successfully. Dec 16 12:46:15.894670 containerd[2138]: time="2025-12-16T12:46:15.894550504Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xwwbh,Uid:5bbc1d74-de1f-40b8-bd99-2346a3e2bafe,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fd92eb9e3b6e97790fb606e42edbfa3650af420c7ebb1c243f7d6ec8f43c64a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:15.894934 kubelet[3668]: E1216 12:46:15.894898 3668 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fd92eb9e3b6e97790fb606e42edbfa3650af420c7ebb1c243f7d6ec8f43c64a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:15.894934 kubelet[3668]: E1216 12:46:15.894955 3668 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fd92eb9e3b6e97790fb606e42edbfa3650af420c7ebb1c243f7d6ec8f43c64a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xwwbh" Dec 16 12:46:15.894934 kubelet[3668]: E1216 12:46:15.894969 3668 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fd92eb9e3b6e97790fb606e42edbfa3650af420c7ebb1c243f7d6ec8f43c64a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xwwbh" Dec 16 12:46:15.895393 kubelet[3668]: E1216 12:46:15.895360 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xwwbh_calico-system(5bbc1d74-de1f-40b8-bd99-2346a3e2bafe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xwwbh_calico-system(5bbc1d74-de1f-40b8-bd99-2346a3e2bafe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5fd92eb9e3b6e97790fb606e42edbfa3650af420c7ebb1c243f7d6ec8f43c64a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xwwbh" podUID="5bbc1d74-de1f-40b8-bd99-2346a3e2bafe" Dec 16 12:46:15.922698 
kubelet[3668]: I1216 12:46:15.922665 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/452df744-6814-429d-baf4-38ff85179742-calico-apiserver-certs\") pod \"calico-apiserver-6ff5fc5c78-6d7bc\" (UID: \"452df744-6814-429d-baf4-38ff85179742\") " pod="calico-apiserver/calico-apiserver-6ff5fc5c78-6d7bc" Dec 16 12:46:15.922698 kubelet[3668]: I1216 12:46:15.922698 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txq88\" (UniqueName: \"kubernetes.io/projected/33f549f5-c190-4fdd-897c-292335e0de6b-kube-api-access-txq88\") pod \"calico-apiserver-6ff5fc5c78-zxwl6\" (UID: \"33f549f5-c190-4fdd-897c-292335e0de6b\") " pod="calico-apiserver/calico-apiserver-6ff5fc5c78-zxwl6" Dec 16 12:46:15.922816 kubelet[3668]: I1216 12:46:15.922713 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6sz9\" (UniqueName: \"kubernetes.io/projected/452df744-6814-429d-baf4-38ff85179742-kube-api-access-h6sz9\") pod \"calico-apiserver-6ff5fc5c78-6d7bc\" (UID: \"452df744-6814-429d-baf4-38ff85179742\") " pod="calico-apiserver/calico-apiserver-6ff5fc5c78-6d7bc" Dec 16 12:46:15.922816 kubelet[3668]: I1216 12:46:15.922725 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28-config\") pod \"goldmane-7c778bb748-slfr9\" (UID: \"c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28\") " pod="calico-system/goldmane-7c778bb748-slfr9" Dec 16 12:46:15.922816 kubelet[3668]: I1216 12:46:15.922738 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/33f549f5-c190-4fdd-897c-292335e0de6b-calico-apiserver-certs\") pod \"calico-apiserver-6ff5fc5c78-zxwl6\" (UID: \"33f549f5-c190-4fdd-897c-292335e0de6b\") " pod="calico-apiserver/calico-apiserver-6ff5fc5c78-zxwl6" Dec 16 12:46:15.922816 kubelet[3668]: I1216 12:46:15.922747 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3748f03-9dbe-4ca7-b265-d450d86ecab7-config-volume\") pod \"coredns-66bc5c9577-257m4\" (UID: \"f3748f03-9dbe-4ca7-b265-d450d86ecab7\") " pod="kube-system/coredns-66bc5c9577-257m4" Dec 16 12:46:15.922816 kubelet[3668]: I1216 12:46:15.922756 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-slfr9\" (UID: \"c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28\") " pod="calico-system/goldmane-7c778bb748-slfr9" Dec 16 12:46:15.922907 kubelet[3668]: I1216 12:46:15.922768 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkt8h\" (UniqueName: \"kubernetes.io/projected/f3748f03-9dbe-4ca7-b265-d450d86ecab7-kube-api-access-xkt8h\") pod \"coredns-66bc5c9577-257m4\" (UID: \"f3748f03-9dbe-4ca7-b265-d450d86ecab7\") " pod="kube-system/coredns-66bc5c9577-257m4" Dec 16 12:46:15.922907 kubelet[3668]: I1216 12:46:15.922806 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28-goldmane-key-pair\") pod \"goldmane-7c778bb748-slfr9\" (UID: \"c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28\") " pod="calico-system/goldmane-7c778bb748-slfr9" Dec 16 12:46:15.922907 kubelet[3668]: I1216 12:46:15.922817 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp9md\" (UniqueName: \"kubernetes.io/projected/c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28-kube-api-access-mp9md\") pod \"goldmane-7c778bb748-slfr9\" (UID: \"c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28\") " pod="calico-system/goldmane-7c778bb748-slfr9" Dec 16 12:46:16.104804 containerd[2138]: time="2025-12-16T12:46:16.104701352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5449d854d8-xfsgn,Uid:afc8cbd3-cce9-4afd-951f-828ed80d9307,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:16.115231 containerd[2138]: time="2025-12-16T12:46:16.115136974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gvb6w,Uid:1d9ba31c-b8d5-4a2c-a6bb-ca7d237b3d76,Namespace:kube-system,Attempt:0,}" Dec 16 12:46:16.142241 containerd[2138]: time="2025-12-16T12:46:16.141965717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65558bbd85-k52xv,Uid:156f2d35-55d2-4c5a-af35-bdad37a7ceaf,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:16.147898 containerd[2138]: time="2025-12-16T12:46:16.147875477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ff5fc5c78-zxwl6,Uid:33f549f5-c190-4fdd-897c-292335e0de6b,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:46:16.155421 containerd[2138]: time="2025-12-16T12:46:16.155400045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-slfr9,Uid:c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:16.168485 containerd[2138]: time="2025-12-16T12:46:16.168379295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 12:46:16.173592 containerd[2138]: time="2025-12-16T12:46:16.173568826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-257m4,Uid:f3748f03-9dbe-4ca7-b265-d450d86ecab7,Namespace:kube-system,Attempt:0,}" Dec 16 12:46:16.182056 containerd[2138]: time="2025-12-16T12:46:16.182013349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ff5fc5c78-6d7bc,Uid:452df744-6814-429d-baf4-38ff85179742,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:46:16.184113 containerd[2138]: time="2025-12-16T12:46:16.184075291Z" level=error msg="Failed to destroy network for sandbox \"06575c4329a2936a2c8fe68dd8c95718a9c0b95cf9192b7f05f1f0a7256a4c7d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:16.206336 containerd[2138]: time="2025-12-16T12:46:16.206301104Z" level=error msg="Failed to destroy network for sandbox \"f5bf00c4002e636bad7d8e9c3fae60aa74118f831bffee3b5d93f03e00e184c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:16.229735 containerd[2138]: time="2025-12-16T12:46:16.229655975Z" level=error msg="Failed to destroy network for sandbox \"f67c72468a5b3651ddc3510e1ce5984f42521137ec2181e5caa61fd0713968cb\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:16.242359 containerd[2138]: time="2025-12-16T12:46:16.242265127Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5449d854d8-xfsgn,Uid:afc8cbd3-cce9-4afd-951f-828ed80d9307,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"06575c4329a2936a2c8fe68dd8c95718a9c0b95cf9192b7f05f1f0a7256a4c7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:16.242593 kubelet[3668]: E1216 12:46:16.242559 3668 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06575c4329a2936a2c8fe68dd8c95718a9c0b95cf9192b7f05f1f0a7256a4c7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:16.242741 kubelet[3668]: E1216 12:46:16.242607 3668 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06575c4329a2936a2c8fe68dd8c95718a9c0b95cf9192b7f05f1f0a7256a4c7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5449d854d8-xfsgn" Dec 16 12:46:16.242741 kubelet[3668]: E1216 12:46:16.242622 3668 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06575c4329a2936a2c8fe68dd8c95718a9c0b95cf9192b7f05f1f0a7256a4c7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5449d854d8-xfsgn" Dec 16 12:46:16.242741 kubelet[3668]: E1216 12:46:16.242675 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5449d854d8-xfsgn_calico-system(afc8cbd3-cce9-4afd-951f-828ed80d9307)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5449d854d8-xfsgn_calico-system(afc8cbd3-cce9-4afd-951f-828ed80d9307)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06575c4329a2936a2c8fe68dd8c95718a9c0b95cf9192b7f05f1f0a7256a4c7d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5449d854d8-xfsgn" podUID="afc8cbd3-cce9-4afd-951f-828ed80d9307" Dec 16 12:46:16.261417 containerd[2138]: time="2025-12-16T12:46:16.261343366Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gvb6w,Uid:1d9ba31c-b8d5-4a2c-a6bb-ca7d237b3d76,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5bf00c4002e636bad7d8e9c3fae60aa74118f831bffee3b5d93f03e00e184c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:16.261524 kubelet[3668]: E1216 12:46:16.261496 3668 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5bf00c4002e636bad7d8e9c3fae60aa74118f831bffee3b5d93f03e00e184c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:16.261585 kubelet[3668]: E1216 12:46:16.261533 3668 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5bf00c4002e636bad7d8e9c3fae60aa74118f831bffee3b5d93f03e00e184c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-gvb6w" Dec 16 12:46:16.261585 kubelet[3668]: E1216 12:46:16.261548 3668 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5bf00c4002e636bad7d8e9c3fae60aa74118f831bffee3b5d93f03e00e184c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-gvb6w" Dec 16 12:46:16.261622 kubelet[3668]: E1216 12:46:16.261580 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-gvb6w_kube-system(1d9ba31c-b8d5-4a2c-a6bb-ca7d237b3d76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-gvb6w_kube-system(1d9ba31c-b8d5-4a2c-a6bb-ca7d237b3d76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5bf00c4002e636bad7d8e9c3fae60aa74118f831bffee3b5d93f03e00e184c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-gvb6w" podUID="1d9ba31c-b8d5-4a2c-a6bb-ca7d237b3d76" Dec 16 12:46:16.268536 containerd[2138]: time="2025-12-16T12:46:16.268466627Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65558bbd85-k52xv,Uid:156f2d35-55d2-4c5a-af35-bdad37a7ceaf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f67c72468a5b3651ddc3510e1ce5984f42521137ec2181e5caa61fd0713968cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:16.268734 kubelet[3668]: E1216 12:46:16.268694 3668 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f67c72468a5b3651ddc3510e1ce5984f42521137ec2181e5caa61fd0713968cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:16.268789 kubelet[3668]: E1216 12:46:16.268735 3668 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f67c72468a5b3651ddc3510e1ce5984f42521137ec2181e5caa61fd0713968cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65558bbd85-k52xv" Dec 16 12:46:16.268789 kubelet[3668]: E1216 12:46:16.268749 3668 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f67c72468a5b3651ddc3510e1ce5984f42521137ec2181e5caa61fd0713968cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65558bbd85-k52xv" Dec 16 12:46:16.268789 kubelet[3668]: E1216 12:46:16.268782 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-65558bbd85-k52xv_calico-system(156f2d35-55d2-4c5a-af35-bdad37a7ceaf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-65558bbd85-k52xv_calico-system(156f2d35-55d2-4c5a-af35-bdad37a7ceaf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f67c72468a5b3651ddc3510e1ce5984f42521137ec2181e5caa61fd0713968cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65558bbd85-k52xv" podUID="156f2d35-55d2-4c5a-af35-bdad37a7ceaf" Dec 16 12:46:16.279940 containerd[2138]: time="2025-12-16T12:46:16.279901607Z" level=error msg="Failed to destroy network for sandbox \"383a7af40b8ad5bf01ddcbbf8471fbda8c3c6e0f640e1051c0296f5a9e6c586c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:16.288739 containerd[2138]: time="2025-12-16T12:46:16.288656995Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ff5fc5c78-zxwl6,Uid:33f549f5-c190-4fdd-897c-292335e0de6b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"383a7af40b8ad5bf01ddcbbf8471fbda8c3c6e0f640e1051c0296f5a9e6c586c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:16.288983 kubelet[3668]: E1216 12:46:16.288949 3668 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"383a7af40b8ad5bf01ddcbbf8471fbda8c3c6e0f640e1051c0296f5a9e6c586c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:16.289145 kubelet[3668]: E1216 12:46:16.289087 3668 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"383a7af40b8ad5bf01ddcbbf8471fbda8c3c6e0f640e1051c0296f5a9e6c586c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ff5fc5c78-zxwl6" Dec 16 12:46:16.289344 kubelet[3668]: 
E1216 12:46:16.289228 3668 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"383a7af40b8ad5bf01ddcbbf8471fbda8c3c6e0f640e1051c0296f5a9e6c586c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ff5fc5c78-zxwl6" Dec 16 12:46:16.289344 kubelet[3668]: E1216 12:46:16.289286 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6ff5fc5c78-zxwl6_calico-apiserver(33f549f5-c190-4fdd-897c-292335e0de6b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6ff5fc5c78-zxwl6_calico-apiserver(33f549f5-c190-4fdd-897c-292335e0de6b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"383a7af40b8ad5bf01ddcbbf8471fbda8c3c6e0f640e1051c0296f5a9e6c586c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6ff5fc5c78-zxwl6" podUID="33f549f5-c190-4fdd-897c-292335e0de6b" Dec 16 12:46:16.313855 containerd[2138]: time="2025-12-16T12:46:16.313804192Z" level=error msg="Failed to destroy network for sandbox \"b6921add34684e65fbc0da5f4ac90d97049b701dcde36f11c16ebda696ab20f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:16.316764 containerd[2138]: time="2025-12-16T12:46:16.316732583Z" level=error msg="Failed to destroy network for sandbox \"0a40825cccd6d091e56a637382f1fab57e2e8b545b6f0253f6f05a4f7572681a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:16.320769 containerd[2138]: time="2025-12-16T12:46:16.320697397Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-slfr9,Uid:c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6921add34684e65fbc0da5f4ac90d97049b701dcde36f11c16ebda696ab20f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:16.320966 kubelet[3668]: E1216 12:46:16.320942 3668 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6921add34684e65fbc0da5f4ac90d97049b701dcde36f11c16ebda696ab20f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:16.321097 kubelet[3668]: E1216 12:46:16.321070 3668 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6921add34684e65fbc0da5f4ac90d97049b701dcde36f11c16ebda696ab20f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/goldmane-7c778bb748-slfr9" Dec 16 12:46:16.321447 kubelet[3668]: E1216 12:46:16.321157 3668 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6921add34684e65fbc0da5f4ac90d97049b701dcde36f11c16ebda696ab20f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-slfr9" Dec 16 12:46:16.321447 kubelet[3668]: E1216 12:46:16.321227 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-slfr9_calico-system(c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-slfr9_calico-system(c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6921add34684e65fbc0da5f4ac90d97049b701dcde36f11c16ebda696ab20f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-slfr9" podUID="c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28" Dec 16 12:46:16.323258 containerd[2138]: time="2025-12-16T12:46:16.323229649Z" level=error msg="Failed to destroy network for sandbox \"07f7fb17c0b13a9e7a6faef4a6e28e3ff003c1439390cbe335f6005a3af21cee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:16.337083 containerd[2138]: time="2025-12-16T12:46:16.337028051Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-257m4,Uid:f3748f03-9dbe-4ca7-b265-d450d86ecab7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a40825cccd6d091e56a637382f1fab57e2e8b545b6f0253f6f05a4f7572681a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:16.337234 kubelet[3668]: E1216 12:46:16.337189 3668 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a40825cccd6d091e56a637382f1fab57e2e8b545b6f0253f6f05a4f7572681a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:16.337271 kubelet[3668]: E1216 12:46:16.337235 3668 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a40825cccd6d091e56a637382f1fab57e2e8b545b6f0253f6f05a4f7572681a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-257m4" Dec 16 12:46:16.337271 kubelet[3668]: E1216 12:46:16.337259 3668 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a40825cccd6d091e56a637382f1fab57e2e8b545b6f0253f6f05a4f7572681a\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-257m4" Dec 16 12:46:16.337303 kubelet[3668]: E1216 12:46:16.337291 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-257m4_kube-system(f3748f03-9dbe-4ca7-b265-d450d86ecab7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-257m4_kube-system(f3748f03-9dbe-4ca7-b265-d450d86ecab7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a40825cccd6d091e56a637382f1fab57e2e8b545b6f0253f6f05a4f7572681a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-257m4" podUID="f3748f03-9dbe-4ca7-b265-d450d86ecab7" Dec 16 12:46:16.340325 containerd[2138]: time="2025-12-16T12:46:16.340243363Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ff5fc5c78-6d7bc,Uid:452df744-6814-429d-baf4-38ff85179742,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"07f7fb17c0b13a9e7a6faef4a6e28e3ff003c1439390cbe335f6005a3af21cee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:16.340504 kubelet[3668]: E1216 12:46:16.340480 3668 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07f7fb17c0b13a9e7a6faef4a6e28e3ff003c1439390cbe335f6005a3af21cee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:16.340601 kubelet[3668]: E1216 12:46:16.340586 3668 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07f7fb17c0b13a9e7a6faef4a6e28e3ff003c1439390cbe335f6005a3af21cee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ff5fc5c78-6d7bc" Dec 16 12:46:16.340690 kubelet[3668]: E1216 12:46:16.340655 3668 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07f7fb17c0b13a9e7a6faef4a6e28e3ff003c1439390cbe335f6005a3af21cee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ff5fc5c78-6d7bc" Dec 16 12:46:16.340776 kubelet[3668]: E1216 12:46:16.340761 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6ff5fc5c78-6d7bc_calico-apiserver(452df744-6814-429d-baf4-38ff85179742)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6ff5fc5c78-6d7bc_calico-apiserver(452df744-6814-429d-baf4-38ff85179742)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"07f7fb17c0b13a9e7a6faef4a6e28e3ff003c1439390cbe335f6005a3af21cee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6ff5fc5c78-6d7bc" podUID="452df744-6814-429d-baf4-38ff85179742" Dec 16 12:46:19.905449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1325988931.mount: Deactivated successfully. Dec 16 12:46:20.329976 containerd[2138]: time="2025-12-16T12:46:20.329918227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:20.336229 containerd[2138]: time="2025-12-16T12:46:20.336107770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Dec 16 12:46:20.341948 containerd[2138]: time="2025-12-16T12:46:20.341917478Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:20.347905 containerd[2138]: time="2025-12-16T12:46:20.347857662Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:20.348374 containerd[2138]: time="2025-12-16T12:46:20.348274401Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.179860265s" Dec 16 12:46:20.348374 containerd[2138]: time="2025-12-16T12:46:20.348304578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 16 12:46:20.381442 containerd[2138]: time="2025-12-16T12:46:20.381416921Z" level=info msg="CreateContainer within sandbox \"48839d914cea11ce81c8b130c1b347aaf158f0b8ed0842f34c80e8eaa2b39c2d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 12:46:20.407355 containerd[2138]: time="2025-12-16T12:46:20.407316589Z" level=info msg="Container 42e2bd4c9427608578c5658e09dceeb35119096aee68fdd648fa576ab49dabc9: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:46:20.434089 containerd[2138]: time="2025-12-16T12:46:20.434039991Z" level=info msg="CreateContainer within sandbox \"48839d914cea11ce81c8b130c1b347aaf158f0b8ed0842f34c80e8eaa2b39c2d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"42e2bd4c9427608578c5658e09dceeb35119096aee68fdd648fa576ab49dabc9\"" Dec 16 12:46:20.434856 containerd[2138]: time="2025-12-16T12:46:20.434771244Z" level=info msg="StartContainer for \"42e2bd4c9427608578c5658e09dceeb35119096aee68fdd648fa576ab49dabc9\"" Dec 16 12:46:20.435805 containerd[2138]: time="2025-12-16T12:46:20.435781104Z" level=info msg="connecting to shim 42e2bd4c9427608578c5658e09dceeb35119096aee68fdd648fa576ab49dabc9" address="unix:///run/containerd/s/50b57ddfde5f821b70a196a9e3fa83449ed82c573f826b44923f785af469a03a" protocol=ttrpc version=3 Dec 16 12:46:20.469379 systemd[1]: Started cri-containerd-42e2bd4c9427608578c5658e09dceeb35119096aee68fdd648fa576ab49dabc9.scope - 
libcontainer container 42e2bd4c9427608578c5658e09dceeb35119096aee68fdd648fa576ab49dabc9. Dec 16 12:46:20.533241 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 16 12:46:20.533367 kernel: audit: type=1334 audit(1765889180.528:594): prog-id=196 op=LOAD Dec 16 12:46:20.528000 audit: BPF prog-id=196 op=LOAD Dec 16 12:46:20.528000 audit[4667]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=4178 pid=4667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:20.552139 kernel: audit: type=1300 audit(1765889180.528:594): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=4178 pid=4667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:20.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432653262643463393432373630383537386335363538653039646365 Dec 16 12:46:20.568389 kernel: audit: type=1327 audit(1765889180.528:594): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432653262643463393432373630383537386335363538653039646365 Dec 16 12:46:20.528000 audit: BPF prog-id=197 op=LOAD Dec 16 12:46:20.577152 kernel: audit: type=1334 audit(1765889180.528:595): prog-id=197 op=LOAD Dec 16 12:46:20.577231 kernel: audit: type=1300 audit(1765889180.528:595): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=4178 pid=4667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:20.528000 audit[4667]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=4178 pid=4667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:20.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432653262643463393432373630383537386335363538653039646365 Dec 16 12:46:20.608719 kernel: audit: type=1327 audit(1765889180.528:595): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432653262643463393432373630383537386335363538653039646365 Dec 16 12:46:20.531000 audit: BPF prog-id=197 op=UNLOAD Dec 16 12:46:20.614021 kernel: audit: type=1334 audit(1765889180.531:596): prog-id=197 op=UNLOAD Dec 16 12:46:20.531000 audit[4667]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4178 pid=4667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:20.620804 kubelet[3668]: I1216 12:46:20.620753 3668 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:46:20.630177 kernel: audit: type=1300 audit(1765889180.531:596): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4178 pid=4667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:20.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432653262643463393432373630383537386335363538653039646365 Dec 16 12:46:20.646941 kernel: audit: type=1327 audit(1765889180.531:596): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432653262643463393432373630383537386335363538653039646365 Dec 16 12:46:20.531000 audit: BPF prog-id=196 op=UNLOAD Dec 16 12:46:20.652001 kernel: audit: type=1334 audit(1765889180.531:597): prog-id=196 op=UNLOAD Dec 16 12:46:20.531000 audit[4667]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4178 pid=4667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:20.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432653262643463393432373630383537386335363538653039646365 Dec 16 12:46:20.531000 audit: BPF prog-id=198 op=LOAD Dec 16 12:46:20.531000 audit[4667]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=4178 pid=4667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:20.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432653262643463393432373630383537386335363538653039646365 Dec 16 12:46:20.664000 audit[4686]: NETFILTER_CFG table=filter:120 family=2 entries=21 op=nft_register_rule pid=4686 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:20.664000 audit[4686]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe84e4770 a2=0 a3=1 items=0 ppid=3822 pid=4686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:20.664000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:20.670000 audit[4686]: NETFILTER_CFG table=nat:121 family=2 entries=19 op=nft_register_chain pid=4686 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:20.670000 audit[4686]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=6276 a0=3 a1=ffffe84e4770 a2=0 a3=1 items=0 ppid=3822 pid=4686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:20.670000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:20.676808 containerd[2138]: time="2025-12-16T12:46:20.676719636Z" level=info msg="StartContainer for \"42e2bd4c9427608578c5658e09dceeb35119096aee68fdd648fa576ab49dabc9\" returns successfully" Dec 16 12:46:21.198480 kubelet[3668]: I1216 12:46:21.198256 3668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rnrdj" podStartSLOduration=2.180779125 podStartE2EDuration="16.198239284s" podCreationTimestamp="2025-12-16 12:46:05 +0000 UTC" firstStartedPulling="2025-12-16 12:46:06.331825487 +0000 UTC m=+24.404961512" lastFinishedPulling="2025-12-16 12:46:20.349285638 +0000 UTC m=+38.422421671" observedRunningTime="2025-12-16 12:46:21.196499779 +0000 UTC m=+39.269635812" watchObservedRunningTime="2025-12-16 12:46:21.198239284 +0000 UTC m=+39.271375469" Dec 16 12:46:21.394950 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 12:46:21.395071 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Dec 16 12:46:21.581369 systemd[1]: var-lib-kubelet-pods-156f2d35\x2d55d2\x2d4c5a\x2daf35\x2dbdad37a7ceaf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx2g5v.mount: Deactivated successfully. Dec 16 12:46:21.583333 kubelet[3668]: I1216 12:46:21.583291 3668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/156f2d35-55d2-4c5a-af35-bdad37a7ceaf-kube-api-access-x2g5v" (OuterVolumeSpecName: "kube-api-access-x2g5v") pod "156f2d35-55d2-4c5a-af35-bdad37a7ceaf" (UID: "156f2d35-55d2-4c5a-af35-bdad37a7ceaf"). InnerVolumeSpecName "kube-api-access-x2g5v". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:46:21.583690 kubelet[3668]: I1216 12:46:21.583654 3668 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2g5v\" (UniqueName: \"kubernetes.io/projected/156f2d35-55d2-4c5a-af35-bdad37a7ceaf-kube-api-access-x2g5v\") pod \"156f2d35-55d2-4c5a-af35-bdad37a7ceaf\" (UID: \"156f2d35-55d2-4c5a-af35-bdad37a7ceaf\") " Dec 16 12:46:21.583936 kubelet[3668]: I1216 12:46:21.583785 3668 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/156f2d35-55d2-4c5a-af35-bdad37a7ceaf-whisker-backend-key-pair\") pod \"156f2d35-55d2-4c5a-af35-bdad37a7ceaf\" (UID: \"156f2d35-55d2-4c5a-af35-bdad37a7ceaf\") " Dec 16 12:46:21.583936 kubelet[3668]: I1216 12:46:21.583875 3668 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/156f2d35-55d2-4c5a-af35-bdad37a7ceaf-whisker-ca-bundle\") pod \"156f2d35-55d2-4c5a-af35-bdad37a7ceaf\" (UID: \"156f2d35-55d2-4c5a-af35-bdad37a7ceaf\") " Dec 16 12:46:21.584177 kubelet[3668]: I1216 12:46:21.584143 3668 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x2g5v\" (UniqueName: \"kubernetes.io/projected/156f2d35-55d2-4c5a-af35-bdad37a7ceaf-kube-api-access-x2g5v\") on node \"ci-4515.1.0-a-4ca6cdd03e\" DevicePath \"\"" Dec 16 12:46:21.584491 kubelet[3668]: I1216 12:46:21.584464 3668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/156f2d35-55d2-4c5a-af35-bdad37a7ceaf-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "156f2d35-55d2-4c5a-af35-bdad37a7ceaf" (UID: "156f2d35-55d2-4c5a-af35-bdad37a7ceaf"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:46:21.587358 systemd[1]: var-lib-kubelet-pods-156f2d35\x2d55d2\x2d4c5a\x2daf35\x2dbdad37a7ceaf-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 16 12:46:21.589023 kubelet[3668]: I1216 12:46:21.588997 3668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/156f2d35-55d2-4c5a-af35-bdad37a7ceaf-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "156f2d35-55d2-4c5a-af35-bdad37a7ceaf" (UID: "156f2d35-55d2-4c5a-af35-bdad37a7ceaf"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 12:46:21.685222 kubelet[3668]: I1216 12:46:21.685158 3668 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/156f2d35-55d2-4c5a-af35-bdad37a7ceaf-whisker-backend-key-pair\") on node \"ci-4515.1.0-a-4ca6cdd03e\" DevicePath \"\"" Dec 16 12:46:21.685222 kubelet[3668]: I1216 12:46:21.685181 3668 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/156f2d35-55d2-4c5a-af35-bdad37a7ceaf-whisker-ca-bundle\") on node \"ci-4515.1.0-a-4ca6cdd03e\" DevicePath \"\"" Dec 16 12:46:22.013165 systemd[1]: Removed slice kubepods-besteffort-pod156f2d35_55d2_4c5a_af35_bdad37a7ceaf.slice - libcontainer container kubepods-besteffort-pod156f2d35_55d2_4c5a_af35_bdad37a7ceaf.slice. 
Dec 16 12:46:22.258179 systemd[1]: Created slice kubepods-besteffort-podd2bf104b_aa2c_4645_b1eb_bf5f9ef78c24.slice - libcontainer container kubepods-besteffort-podd2bf104b_aa2c_4645_b1eb_bf5f9ef78c24.slice. Dec 16 12:46:22.388749 kubelet[3668]: I1216 12:46:22.388588 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c69g9\" (UniqueName: \"kubernetes.io/projected/d2bf104b-aa2c-4645-b1eb-bf5f9ef78c24-kube-api-access-c69g9\") pod \"whisker-858764cc7c-zqhnl\" (UID: \"d2bf104b-aa2c-4645-b1eb-bf5f9ef78c24\") " pod="calico-system/whisker-858764cc7c-zqhnl" Dec 16 12:46:22.388984 kubelet[3668]: I1216 12:46:22.388909 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d2bf104b-aa2c-4645-b1eb-bf5f9ef78c24-whisker-backend-key-pair\") pod \"whisker-858764cc7c-zqhnl\" (UID: \"d2bf104b-aa2c-4645-b1eb-bf5f9ef78c24\") " pod="calico-system/whisker-858764cc7c-zqhnl" Dec 16 12:46:22.388984 kubelet[3668]: I1216 12:46:22.388929 3668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2bf104b-aa2c-4645-b1eb-bf5f9ef78c24-whisker-ca-bundle\") pod \"whisker-858764cc7c-zqhnl\" (UID: \"d2bf104b-aa2c-4645-b1eb-bf5f9ef78c24\") " pod="calico-system/whisker-858764cc7c-zqhnl" Dec 16 12:46:22.569190 containerd[2138]: time="2025-12-16T12:46:22.569086251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-858764cc7c-zqhnl,Uid:d2bf104b-aa2c-4645-b1eb-bf5f9ef78c24,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:22.709275 systemd-networkd[1717]: cali528ec29d684: Link UP Dec 16 12:46:22.709466 systemd-networkd[1717]: cali528ec29d684: Gained carrier Dec 16 12:46:22.739286 containerd[2138]: 2025-12-16 12:46:22.594 [INFO][4783] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:46:22.739286 containerd[2138]: 2025-12-16 12:46:22.637 [INFO][4783] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--4ca6cdd03e-k8s-whisker--858764cc7c--zqhnl-eth0 whisker-858764cc7c- calico-system d2bf104b-aa2c-4645-b1eb-bf5f9ef78c24 883 0 2025-12-16 12:46:22 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:858764cc7c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4515.1.0-a-4ca6cdd03e whisker-858764cc7c-zqhnl eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali528ec29d684 [] [] }} ContainerID="c9430c0ccd94dbd7b27f7158266a1e106fa53bca7a3e639afd0feef6a48dc798" Namespace="calico-system" Pod="whisker-858764cc7c-zqhnl" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-whisker--858764cc7c--zqhnl-" Dec 16 12:46:22.739286 containerd[2138]: 2025-12-16 12:46:22.638 [INFO][4783] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c9430c0ccd94dbd7b27f7158266a1e106fa53bca7a3e639afd0feef6a48dc798" Namespace="calico-system" Pod="whisker-858764cc7c-zqhnl" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-whisker--858764cc7c--zqhnl-eth0" Dec 16 12:46:22.739286 containerd[2138]: 2025-12-16 12:46:22.656 [INFO][4794] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c9430c0ccd94dbd7b27f7158266a1e106fa53bca7a3e639afd0feef6a48dc798" 
HandleID="k8s-pod-network.c9430c0ccd94dbd7b27f7158266a1e106fa53bca7a3e639afd0feef6a48dc798" Workload="ci--4515.1.0--a--4ca6cdd03e-k8s-whisker--858764cc7c--zqhnl-eth0" Dec 16 12:46:22.740097 containerd[2138]: 2025-12-16 12:46:22.656 [INFO][4794] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c9430c0ccd94dbd7b27f7158266a1e106fa53bca7a3e639afd0feef6a48dc798" HandleID="k8s-pod-network.c9430c0ccd94dbd7b27f7158266a1e106fa53bca7a3e639afd0feef6a48dc798" Workload="ci--4515.1.0--a--4ca6cdd03e-k8s-whisker--858764cc7c--zqhnl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024af80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-a-4ca6cdd03e", "pod":"whisker-858764cc7c-zqhnl", "timestamp":"2025-12-16 12:46:22.656230576 +0000 UTC"}, Hostname:"ci-4515.1.0-a-4ca6cdd03e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:46:22.740097 containerd[2138]: 2025-12-16 12:46:22.656 [INFO][4794] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:46:22.740097 containerd[2138]: 2025-12-16 12:46:22.656 [INFO][4794] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:46:22.740097 containerd[2138]: 2025-12-16 12:46:22.656 [INFO][4794] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-4ca6cdd03e' Dec 16 12:46:22.740097 containerd[2138]: 2025-12-16 12:46:22.661 [INFO][4794] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c9430c0ccd94dbd7b27f7158266a1e106fa53bca7a3e639afd0feef6a48dc798" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:22.740097 containerd[2138]: 2025-12-16 12:46:22.664 [INFO][4794] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:22.740097 containerd[2138]: 2025-12-16 12:46:22.667 [INFO][4794] ipam/ipam.go 511: Trying affinity for 192.168.45.128/26 host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:22.740097 containerd[2138]: 2025-12-16 12:46:22.668 [INFO][4794] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.128/26 host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:22.740097 containerd[2138]: 2025-12-16 12:46:22.670 [INFO][4794] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.128/26 host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:22.740392 containerd[2138]: 2025-12-16 12:46:22.670 [INFO][4794] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.45.128/26 handle="k8s-pod-network.c9430c0ccd94dbd7b27f7158266a1e106fa53bca7a3e639afd0feef6a48dc798" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:22.740392 containerd[2138]: 2025-12-16 12:46:22.671 [INFO][4794] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c9430c0ccd94dbd7b27f7158266a1e106fa53bca7a3e639afd0feef6a48dc798 Dec 16 12:46:22.740392 containerd[2138]: 2025-12-16 12:46:22.680 [INFO][4794] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.45.128/26 handle="k8s-pod-network.c9430c0ccd94dbd7b27f7158266a1e106fa53bca7a3e639afd0feef6a48dc798" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:22.740392 containerd[2138]: 2025-12-16 12:46:22.693 [INFO][4794] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.45.129/26] block=192.168.45.128/26 handle="k8s-pod-network.c9430c0ccd94dbd7b27f7158266a1e106fa53bca7a3e639afd0feef6a48dc798" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:22.740392 containerd[2138]: 2025-12-16 12:46:22.693 
[INFO][4794] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.129/26] handle="k8s-pod-network.c9430c0ccd94dbd7b27f7158266a1e106fa53bca7a3e639afd0feef6a48dc798" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:22.740392 containerd[2138]: 2025-12-16 12:46:22.693 [INFO][4794] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:46:22.740392 containerd[2138]: 2025-12-16 12:46:22.693 [INFO][4794] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.45.129/26] IPv6=[] ContainerID="c9430c0ccd94dbd7b27f7158266a1e106fa53bca7a3e639afd0feef6a48dc798" HandleID="k8s-pod-network.c9430c0ccd94dbd7b27f7158266a1e106fa53bca7a3e639afd0feef6a48dc798" Workload="ci--4515.1.0--a--4ca6cdd03e-k8s-whisker--858764cc7c--zqhnl-eth0" Dec 16 12:46:22.742511 containerd[2138]: 2025-12-16 12:46:22.697 [INFO][4783] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c9430c0ccd94dbd7b27f7158266a1e106fa53bca7a3e639afd0feef6a48dc798" Namespace="calico-system" Pod="whisker-858764cc7c-zqhnl" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-whisker--858764cc7c--zqhnl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--4ca6cdd03e-k8s-whisker--858764cc7c--zqhnl-eth0", GenerateName:"whisker-858764cc7c-", Namespace:"calico-system", SelfLink:"", UID:"d2bf104b-aa2c-4645-b1eb-bf5f9ef78c24", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 46, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"858764cc7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-4ca6cdd03e", ContainerID:"", Pod:"whisker-858764cc7c-zqhnl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.45.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali528ec29d684", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:22.742511 containerd[2138]: 2025-12-16 12:46:22.697 [INFO][4783] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.129/32] ContainerID="c9430c0ccd94dbd7b27f7158266a1e106fa53bca7a3e639afd0feef6a48dc798" Namespace="calico-system" Pod="whisker-858764cc7c-zqhnl" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-whisker--858764cc7c--zqhnl-eth0" Dec 16 12:46:22.742573 containerd[2138]: 2025-12-16 12:46:22.697 [INFO][4783] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali528ec29d684 ContainerID="c9430c0ccd94dbd7b27f7158266a1e106fa53bca7a3e639afd0feef6a48dc798" Namespace="calico-system" Pod="whisker-858764cc7c-zqhnl" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-whisker--858764cc7c--zqhnl-eth0" Dec 16 12:46:22.742573 containerd[2138]: 2025-12-16 12:46:22.708 [INFO][4783] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c9430c0ccd94dbd7b27f7158266a1e106fa53bca7a3e639afd0feef6a48dc798" Namespace="calico-system" Pod="whisker-858764cc7c-zqhnl" 
WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-whisker--858764cc7c--zqhnl-eth0" Dec 16 12:46:22.742601 containerd[2138]: 2025-12-16 12:46:22.711 [INFO][4783] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c9430c0ccd94dbd7b27f7158266a1e106fa53bca7a3e639afd0feef6a48dc798" Namespace="calico-system" Pod="whisker-858764cc7c-zqhnl" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-whisker--858764cc7c--zqhnl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--4ca6cdd03e-k8s-whisker--858764cc7c--zqhnl-eth0", GenerateName:"whisker-858764cc7c-", Namespace:"calico-system", SelfLink:"", UID:"d2bf104b-aa2c-4645-b1eb-bf5f9ef78c24", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 46, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"858764cc7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-4ca6cdd03e", ContainerID:"c9430c0ccd94dbd7b27f7158266a1e106fa53bca7a3e639afd0feef6a48dc798", Pod:"whisker-858764cc7c-zqhnl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.45.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali528ec29d684", MAC:"6e:bf:ab:c8:cc:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:22.742635 containerd[2138]: 2025-12-16 12:46:22.735 [INFO][4783] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c9430c0ccd94dbd7b27f7158266a1e106fa53bca7a3e639afd0feef6a48dc798" Namespace="calico-system" Pod="whisker-858764cc7c-zqhnl" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-whisker--858764cc7c--zqhnl-eth0" Dec 16 12:46:22.835291 containerd[2138]: time="2025-12-16T12:46:22.835233455Z" level=info msg="connecting to shim c9430c0ccd94dbd7b27f7158266a1e106fa53bca7a3e639afd0feef6a48dc798" address="unix:///run/containerd/s/c6ae9adb1c4d0ed0d84271f6ddcf825a64fb53cd33dc97cdd6d637e9c440e8f7" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:22.872408 systemd[1]: Started cri-containerd-c9430c0ccd94dbd7b27f7158266a1e106fa53bca7a3e639afd0feef6a48dc798.scope - libcontainer container c9430c0ccd94dbd7b27f7158266a1e106fa53bca7a3e639afd0feef6a48dc798. 
Dec 16 12:46:22.904000 audit: BPF prog-id=199 op=LOAD Dec 16 12:46:22.905000 audit: BPF prog-id=200 op=LOAD Dec 16 12:46:22.905000 audit[4914]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e180 a2=98 a3=0 items=0 ppid=4903 pid=4914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:22.905000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339343330633063636439346462643762323766373135383236366131 Dec 16 12:46:22.905000 audit: BPF prog-id=200 op=UNLOAD Dec 16 12:46:22.905000 audit[4914]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4903 pid=4914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:22.905000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339343330633063636439346462643762323766373135383236366131 Dec 16 12:46:22.906000 audit: BPF prog-id=201 op=LOAD Dec 16 12:46:22.906000 audit[4914]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e3e8 a2=98 a3=0 items=0 ppid=4903 pid=4914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:22.906000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339343330633063636439346462643762323766373135383236366131 Dec 16 12:46:22.906000 audit: BPF prog-id=202 op=LOAD Dec 16 12:46:22.906000 audit[4914]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400017e168 a2=98 a3=0 items=0 ppid=4903 pid=4914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:22.906000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339343330633063636439346462643762323766373135383236366131 Dec 16 12:46:22.906000 audit: BPF prog-id=202 op=UNLOAD Dec 16 12:46:22.906000 audit[4914]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4903 pid=4914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:22.906000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339343330633063636439346462643762323766373135383236366131 Dec 16 12:46:22.906000 audit: BPF prog-id=201 op=UNLOAD Dec 16 12:46:22.906000 audit[4914]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4903 pid=4914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:22.906000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339343330633063636439346462643762323766373135383236366131 Dec 16 12:46:22.906000 audit: BPF prog-id=203 op=LOAD Dec 16 12:46:22.906000 audit[4914]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e648 a2=98 a3=0 items=0 ppid=4903 pid=4914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:22.906000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339343330633063636439346462643762323766373135383236366131 Dec 16 12:46:22.951398 containerd[2138]: time="2025-12-16T12:46:22.951362390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-858764cc7c-zqhnl,Uid:d2bf104b-aa2c-4645-b1eb-bf5f9ef78c24,Namespace:calico-system,Attempt:0,} returns sandbox id \"c9430c0ccd94dbd7b27f7158266a1e106fa53bca7a3e639afd0feef6a48dc798\"" Dec 16 12:46:22.955695 containerd[2138]: time="2025-12-16T12:46:22.955598742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:46:23.033000 audit: BPF prog-id=204 op=LOAD Dec 16 12:46:23.033000 audit[4956]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcbe33f68 a2=98 a3=ffffcbe33f58 items=0 ppid=4827 pid=4956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.033000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:46:23.034000 audit: BPF prog-id=204 op=UNLOAD Dec 16 12:46:23.034000 audit[4956]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffcbe33f38 a3=0 items=0 ppid=4827 pid=4956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.034000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:46:23.034000 audit: BPF prog-id=205 op=LOAD Dec 16 12:46:23.034000 audit[4956]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcbe33e18 a2=74 a3=95 items=0 ppid=4827 pid=4956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.034000 audit: 
PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:46:23.034000 audit: BPF prog-id=205 op=UNLOAD Dec 16 12:46:23.034000 audit[4956]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4827 pid=4956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.034000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:46:23.034000 audit: BPF prog-id=206 op=LOAD Dec 16 12:46:23.034000 audit[4956]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcbe33e48 a2=40 a3=ffffcbe33e78 items=0 ppid=4827 pid=4956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.034000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:46:23.034000 audit: BPF prog-id=206 op=UNLOAD Dec 16 12:46:23.034000 audit[4956]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffcbe33e78 items=0 ppid=4827 pid=4956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.034000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:46:23.036000 audit: BPF prog-id=207 op=LOAD Dec 16 12:46:23.036000 audit[4957]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd9024fc8 a2=98 a3=ffffd9024fb8 items=0 ppid=4827 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.036000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:23.038000 audit: BPF prog-id=207 op=UNLOAD Dec 16 12:46:23.038000 audit[4957]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd9024f98 a3=0 items=0 ppid=4827 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.038000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:23.040000 audit: BPF prog-id=208 op=LOAD Dec 16 12:46:23.040000 audit[4957]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd9024c58 a2=74 a3=95 items=0 ppid=4827 pid=4957 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.040000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:23.040000 audit: BPF prog-id=208 op=UNLOAD Dec 16 12:46:23.040000 audit[4957]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4827 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.040000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:23.040000 audit: BPF prog-id=209 op=LOAD Dec 16 12:46:23.040000 audit[4957]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd9024cb8 a2=94 a3=2 items=0 ppid=4827 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.040000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:23.041000 audit: BPF prog-id=209 op=UNLOAD Dec 16 12:46:23.041000 audit[4957]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4827 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.041000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:23.133000 audit: BPF prog-id=210 op=LOAD Dec 16 12:46:23.133000 audit[4957]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd9024c78 a2=40 a3=ffffd9024ca8 items=0 ppid=4827 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.133000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:23.134000 audit: BPF prog-id=210 op=UNLOAD Dec 16 12:46:23.134000 audit[4957]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffd9024ca8 items=0 ppid=4827 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.134000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:23.140000 audit: BPF prog-id=211 op=LOAD Dec 16 12:46:23.140000 audit[4957]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd9024c88 a2=94 a3=4 items=0 ppid=4827 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.140000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:23.140000 audit: BPF prog-id=211 op=UNLOAD Dec 16 12:46:23.140000 audit[4957]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4827 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.140000 
audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:23.141000 audit: BPF prog-id=212 op=LOAD Dec 16 12:46:23.141000 audit[4957]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd9024ac8 a2=94 a3=5 items=0 ppid=4827 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.141000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:23.141000 audit: BPF prog-id=212 op=UNLOAD Dec 16 12:46:23.141000 audit[4957]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4827 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.141000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:23.141000 audit: BPF prog-id=213 op=LOAD Dec 16 12:46:23.141000 audit[4957]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd9024cf8 a2=94 a3=6 items=0 ppid=4827 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.141000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:23.141000 audit: BPF prog-id=213 op=UNLOAD Dec 16 12:46:23.141000 audit[4957]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4827 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.141000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:23.141000 audit: BPF prog-id=214 op=LOAD Dec 16 12:46:23.141000 audit[4957]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd90244c8 a2=94 a3=83 items=0 ppid=4827 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.141000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:23.142000 audit: BPF prog-id=215 op=LOAD Dec 16 12:46:23.142000 audit[4957]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffd9024288 a2=94 a3=2 items=0 ppid=4827 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.142000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:23.142000 audit: BPF prog-id=215 op=UNLOAD Dec 16 12:46:23.142000 audit[4957]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4827 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.142000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:23.142000 audit: BPF prog-id=214 op=UNLOAD Dec 16 12:46:23.142000 audit[4957]: SYSCALL arch=c00000b7 syscall=57 
success=yes exit=0 a0=5 a1=57156c a2=16085620 a3=16078b00 items=0 ppid=4827 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.142000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:23.149000 audit: BPF prog-id=216 op=LOAD Dec 16 12:46:23.149000 audit[4975]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc36705f8 a2=98 a3=ffffc36705e8 items=0 ppid=4827 pid=4975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.149000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:46:23.149000 audit: BPF prog-id=216 op=UNLOAD Dec 16 12:46:23.149000 audit[4975]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc36705c8 a3=0 items=0 ppid=4827 pid=4975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.149000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:46:23.149000 audit: BPF prog-id=217 op=LOAD Dec 16 12:46:23.149000 audit[4975]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc36704a8 a2=74 a3=95 items=0 ppid=4827 pid=4975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.149000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:46:23.149000 audit: BPF prog-id=217 op=UNLOAD Dec 16 12:46:23.149000 audit[4975]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4827 pid=4975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.149000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:46:23.149000 audit: BPF prog-id=218 op=LOAD Dec 16 12:46:23.149000 audit[4975]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc36704d8 a2=40 a3=ffffc3670508 items=0 ppid=4827 pid=4975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:46:23.149000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:46:23.149000 audit: BPF prog-id=218 op=UNLOAD Dec 16 12:46:23.149000 audit[4975]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffc3670508 items=0 ppid=4827 pid=4975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.149000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:46:23.242619 containerd[2138]: time="2025-12-16T12:46:23.242578310Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:23.245829 containerd[2138]: time="2025-12-16T12:46:23.245798025Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:46:23.245934 containerd[2138]: time="2025-12-16T12:46:23.245863234Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:23.246056 kubelet[3668]: E1216 12:46:23.246018 3668 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:46:23.246338 kubelet[3668]: E1216 12:46:23.246061 3668 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:46:23.246338 kubelet[3668]: E1216 12:46:23.246130 3668 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-858764cc7c-zqhnl_calico-system(d2bf104b-aa2c-4645-b1eb-bf5f9ef78c24): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:23.247165 containerd[2138]: time="2025-12-16T12:46:23.247142159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:46:23.258856 systemd-networkd[1717]: vxlan.calico: Link UP Dec 16 12:46:23.258864 systemd-networkd[1717]: vxlan.calico: Gained carrier Dec 16 12:46:23.275000 audit: BPF prog-id=219 op=LOAD Dec 16 12:46:23.275000 audit[5000]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe3e61fc8 a2=98 a3=ffffe3e61fb8 items=0 ppid=4827 pid=5000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:46:23.275000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:23.275000 audit: BPF prog-id=219 op=UNLOAD Dec 16 12:46:23.275000 audit[5000]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe3e61f98 a3=0 items=0 ppid=4827 pid=5000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.275000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:23.275000 audit: BPF prog-id=220 op=LOAD Dec 16 12:46:23.275000 audit[5000]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe3e61ca8 a2=74 a3=95 items=0 ppid=4827 pid=5000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.275000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:23.275000 audit: BPF prog-id=220 op=UNLOAD Dec 16 12:46:23.275000 audit[5000]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4827 pid=5000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.275000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:23.275000 audit: BPF prog-id=221 op=LOAD Dec 16 12:46:23.275000 audit[5000]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe3e61d08 a2=94 a3=2 items=0 ppid=4827 pid=5000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.275000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:23.275000 audit: BPF prog-id=221 op=UNLOAD Dec 16 12:46:23.275000 audit[5000]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4827 pid=5000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.275000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:23.275000 audit: BPF prog-id=222 op=LOAD Dec 16 12:46:23.275000 audit[5000]: 
SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe3e61b88 a2=40 a3=ffffe3e61bb8 items=0 ppid=4827 pid=5000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.275000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:23.275000 audit: BPF prog-id=222 op=UNLOAD Dec 16 12:46:23.275000 audit[5000]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffe3e61bb8 items=0 ppid=4827 pid=5000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.275000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:23.275000 audit: BPF prog-id=223 op=LOAD Dec 16 12:46:23.275000 audit[5000]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe3e61cd8 a2=94 a3=b7 items=0 ppid=4827 pid=5000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.275000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:23.275000 audit: BPF prog-id=223 op=UNLOAD Dec 16 12:46:23.275000 audit[5000]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4827 pid=5000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.275000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:23.277000 audit: BPF prog-id=224 op=LOAD Dec 16 12:46:23.277000 audit[5000]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe3e61388 a2=94 a3=2 items=0 ppid=4827 pid=5000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.277000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:23.277000 audit: BPF prog-id=224 op=UNLOAD Dec 16 12:46:23.277000 audit[5000]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4827 pid=5000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.277000 audit: 
PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:23.277000 audit: BPF prog-id=225 op=LOAD Dec 16 12:46:23.277000 audit[5000]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe3e61518 a2=94 a3=30 items=0 ppid=4827 pid=5000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.277000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:23.279000 audit: BPF prog-id=226 op=LOAD Dec 16 12:46:23.279000 audit[5003]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdc9863a8 a2=98 a3=ffffdc986398 items=0 ppid=4827 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.279000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:23.279000 audit: BPF prog-id=226 op=UNLOAD Dec 16 12:46:23.279000 audit[5003]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffdc986378 a3=0 items=0 ppid=4827 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.279000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:23.279000 audit: BPF prog-id=227 op=LOAD Dec 16 12:46:23.279000 audit[5003]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffdc986038 a2=74 a3=95 items=0 ppid=4827 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.279000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:23.279000 audit: BPF prog-id=227 op=UNLOAD Dec 16 12:46:23.279000 audit[5003]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4827 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.279000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:23.279000 audit: BPF prog-id=228 op=LOAD Dec 16 12:46:23.279000 audit[5003]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffdc986098 a2=94 a3=2 items=0 ppid=4827 pid=5003 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.279000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:23.279000 audit: BPF prog-id=228 op=UNLOAD Dec 16 12:46:23.279000 audit[5003]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4827 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.279000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:23.364000 audit: BPF prog-id=229 op=LOAD Dec 16 12:46:23.364000 audit[5003]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffdc986058 a2=40 a3=ffffdc986088 items=0 ppid=4827 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.364000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:23.364000 audit: BPF prog-id=229 op=UNLOAD Dec 16 12:46:23.364000 audit[5003]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffdc986088 items=0 ppid=4827 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.364000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:23.371000 audit: BPF prog-id=230 op=LOAD Dec 16 12:46:23.371000 audit[5003]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffdc986068 a2=94 a3=4 items=0 ppid=4827 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.371000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:23.371000 audit: BPF prog-id=230 op=UNLOAD Dec 16 12:46:23.371000 audit[5003]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4827 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.371000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:23.371000 audit: BPF prog-id=231 op=LOAD Dec 16 
12:46:23.371000 audit[5003]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffdc985ea8 a2=94 a3=5 items=0 ppid=4827 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.371000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:23.371000 audit: BPF prog-id=231 op=UNLOAD Dec 16 12:46:23.371000 audit[5003]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4827 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.371000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:23.371000 audit: BPF prog-id=232 op=LOAD Dec 16 12:46:23.371000 audit[5003]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffdc9860d8 a2=94 a3=6 items=0 ppid=4827 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.371000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:23.371000 audit: BPF prog-id=232 op=UNLOAD Dec 16 12:46:23.371000 audit[5003]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4827 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.371000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:23.371000 audit: BPF prog-id=233 op=LOAD Dec 16 12:46:23.371000 audit[5003]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffdc9858a8 a2=94 a3=83 items=0 ppid=4827 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.371000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:23.371000 audit: BPF prog-id=234 op=LOAD Dec 16 12:46:23.371000 audit[5003]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffdc985668 a2=94 a3=2 items=0 ppid=4827 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.371000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:23.371000 audit: BPF prog-id=234 op=UNLOAD Dec 16 12:46:23.371000 audit[5003]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4827 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.371000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:23.372000 audit: BPF prog-id=233 op=UNLOAD Dec 16 12:46:23.372000 audit[5003]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=2b03b620 a3=2b02eb00 items=0 ppid=4827 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.372000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:23.376000 audit: BPF prog-id=225 op=UNLOAD Dec 16 12:46:23.376000 audit[4827]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=400096cf00 a2=0 a3=0 items=0 ppid=4801 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.376000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 12:46:23.460000 audit[5029]: NETFILTER_CFG table=mangle:122 family=2 entries=16 op=nft_register_chain pid=5029 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:46:23.460000 audit[5029]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffc8b90030 a2=0 a3=ffff8aba1fa8 items=0 ppid=4827 pid=5029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.460000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:46:23.472000 audit[5031]: NETFILTER_CFG table=nat:123 family=2 entries=15 op=nft_register_chain pid=5031 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:46:23.472000 audit[5031]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffc7342dd0 a2=0 a3=ffff9d01dfa8 items=0 ppid=4827 pid=5031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.472000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:46:23.478000 audit[5030]: NETFILTER_CFG table=raw:124 family=2 entries=21 op=nft_register_chain pid=5030 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 
12:46:23.478000 audit[5030]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffc5d45e00 a2=0 a3=ffffbe962fa8 items=0 ppid=4827 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.478000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:46:23.493000 audit[5032]: NETFILTER_CFG table=filter:125 family=2 entries=94 op=nft_register_chain pid=5032 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:46:23.493000 audit[5032]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=fffff094b560 a2=0 a3=ffffbd3a9fa8 items=0 ppid=4827 pid=5032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.493000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:46:23.536746 containerd[2138]: time="2025-12-16T12:46:23.536658518Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:23.540523 containerd[2138]: time="2025-12-16T12:46:23.540489890Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:46:23.540748 containerd[2138]: time="2025-12-16T12:46:23.540612206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:23.540854 kubelet[3668]: E1216 12:46:23.540811 3668 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:46:23.540923 kubelet[3668]: E1216 12:46:23.540865 3668 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:46:23.541044 kubelet[3668]: E1216 12:46:23.540939 3668 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-858764cc7c-zqhnl_calico-system(d2bf104b-aa2c-4645-b1eb-bf5f9ef78c24): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:23.541044 kubelet[3668]: E1216 12:46:23.540973 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-858764cc7c-zqhnl" podUID="d2bf104b-aa2c-4645-b1eb-bf5f9ef78c24" Dec 16 12:46:24.004423 kubelet[3668]: I1216 12:46:24.004385 3668 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="156f2d35-55d2-4c5a-af35-bdad37a7ceaf" path="/var/lib/kubelet/pods/156f2d35-55d2-4c5a-af35-bdad37a7ceaf/volumes" Dec 16 12:46:24.190764 kubelet[3668]: E1216 12:46:24.189942 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-858764cc7c-zqhnl" podUID="d2bf104b-aa2c-4645-b1eb-bf5f9ef78c24" Dec 16 12:46:24.249000 audit[5044]: NETFILTER_CFG table=filter:126 family=2 entries=20 op=nft_register_rule pid=5044 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:24.249000 audit[5044]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe840a2e0 a2=0 a3=1 items=0 ppid=3822 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:24.249000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:24.254000 audit[5044]: NETFILTER_CFG table=nat:127 family=2 entries=14 op=nft_register_rule pid=5044 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:24.254000 audit[5044]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffe840a2e0 a2=0 a3=1 items=0 ppid=3822 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:24.254000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:24.329354 systemd-networkd[1717]: cali528ec29d684: Gained IPv6LL Dec 16 12:46:25.034460 systemd-networkd[1717]: vxlan.calico: Gained IPv6LL Dec 16 12:46:28.010302 containerd[2138]: time="2025-12-16T12:46:28.010251475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5449d854d8-xfsgn,Uid:afc8cbd3-cce9-4afd-951f-828ed80d9307,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:28.015080 containerd[2138]: time="2025-12-16T12:46:28.014894152Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-xwwbh,Uid:5bbc1d74-de1f-40b8-bd99-2346a3e2bafe,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:28.134776 systemd-networkd[1717]: calia28a565ee7e: Link UP Dec 16 12:46:28.134876 systemd-networkd[1717]: calia28a565ee7e: Gained carrier Dec 16 12:46:28.155773 containerd[2138]: 2025-12-16 12:46:28.069 [INFO][5047] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--4ca6cdd03e-k8s-calico--kube--controllers--5449d854d8--xfsgn-eth0 calico-kube-controllers-5449d854d8- calico-system afc8cbd3-cce9-4afd-951f-828ed80d9307 810 0 2025-12-16 12:46:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5449d854d8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4515.1.0-a-4ca6cdd03e calico-kube-controllers-5449d854d8-xfsgn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia28a565ee7e [] [] }} ContainerID="a411fac797bc060abdfcbafbaf1917feeacacbdcdd4fc3bd08dcc53786a9da2e" Namespace="calico-system" Pod="calico-kube-controllers-5449d854d8-xfsgn" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-calico--kube--controllers--5449d854d8--xfsgn-" Dec 16 12:46:28.155773 containerd[2138]: 2025-12-16 12:46:28.069 [INFO][5047] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a411fac797bc060abdfcbafbaf1917feeacacbdcdd4fc3bd08dcc53786a9da2e" Namespace="calico-system" Pod="calico-kube-controllers-5449d854d8-xfsgn" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-calico--kube--controllers--5449d854d8--xfsgn-eth0" Dec 16 12:46:28.155773 containerd[2138]: 2025-12-16 12:46:28.098 [INFO][5072] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a411fac797bc060abdfcbafbaf1917feeacacbdcdd4fc3bd08dcc53786a9da2e" HandleID="k8s-pod-network.a411fac797bc060abdfcbafbaf1917feeacacbdcdd4fc3bd08dcc53786a9da2e" Workload="ci--4515.1.0--a--4ca6cdd03e-k8s-calico--kube--controllers--5449d854d8--xfsgn-eth0" Dec 16 12:46:28.156022 containerd[2138]: 2025-12-16 12:46:28.098 [INFO][5072] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a411fac797bc060abdfcbafbaf1917feeacacbdcdd4fc3bd08dcc53786a9da2e" HandleID="k8s-pod-network.a411fac797bc060abdfcbafbaf1917feeacacbdcdd4fc3bd08dcc53786a9da2e" Workload="ci--4515.1.0--a--4ca6cdd03e-k8s-calico--kube--controllers--5449d854d8--xfsgn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3800), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-a-4ca6cdd03e", "pod":"calico-kube-controllers-5449d854d8-xfsgn", "timestamp":"2025-12-16 12:46:28.098813232 +0000 UTC"}, Hostname:"ci-4515.1.0-a-4ca6cdd03e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:46:28.156022 containerd[2138]: 2025-12-16 12:46:28.099 [INFO][5072] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:46:28.156022 containerd[2138]: 2025-12-16 12:46:28.099 [INFO][5072] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:46:28.156022 containerd[2138]: 2025-12-16 12:46:28.099 [INFO][5072] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-4ca6cdd03e' Dec 16 12:46:28.156022 containerd[2138]: 2025-12-16 12:46:28.105 [INFO][5072] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a411fac797bc060abdfcbafbaf1917feeacacbdcdd4fc3bd08dcc53786a9da2e" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:28.156022 containerd[2138]: 2025-12-16 12:46:28.109 [INFO][5072] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:28.156022 containerd[2138]: 2025-12-16 12:46:28.112 [INFO][5072] ipam/ipam.go 511: Trying affinity for 192.168.45.128/26 host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:28.156022 containerd[2138]: 2025-12-16 12:46:28.114 [INFO][5072] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.128/26 host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:28.156022 containerd[2138]: 2025-12-16 12:46:28.116 [INFO][5072] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.128/26 host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:28.156161 containerd[2138]: 2025-12-16 12:46:28.116 [INFO][5072] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.45.128/26 handle="k8s-pod-network.a411fac797bc060abdfcbafbaf1917feeacacbdcdd4fc3bd08dcc53786a9da2e" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:28.156161 containerd[2138]: 2025-12-16 12:46:28.117 [INFO][5072] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a411fac797bc060abdfcbafbaf1917feeacacbdcdd4fc3bd08dcc53786a9da2e Dec 16 12:46:28.156161 containerd[2138]: 2025-12-16 12:46:28.121 [INFO][5072] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.45.128/26 handle="k8s-pod-network.a411fac797bc060abdfcbafbaf1917feeacacbdcdd4fc3bd08dcc53786a9da2e" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:28.156161 containerd[2138]: 2025-12-16 12:46:28.129 [INFO][5072] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.45.130/26] block=192.168.45.128/26 handle="k8s-pod-network.a411fac797bc060abdfcbafbaf1917feeacacbdcdd4fc3bd08dcc53786a9da2e" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:28.156161 containerd[2138]: 2025-12-16 12:46:28.129 [INFO][5072] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.130/26] handle="k8s-pod-network.a411fac797bc060abdfcbafbaf1917feeacacbdcdd4fc3bd08dcc53786a9da2e" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:28.156161 containerd[2138]: 2025-12-16 12:46:28.129 [INFO][5072] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:46:28.156161 containerd[2138]: 2025-12-16 12:46:28.129 [INFO][5072] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.45.130/26] IPv6=[] ContainerID="a411fac797bc060abdfcbafbaf1917feeacacbdcdd4fc3bd08dcc53786a9da2e" HandleID="k8s-pod-network.a411fac797bc060abdfcbafbaf1917feeacacbdcdd4fc3bd08dcc53786a9da2e" Workload="ci--4515.1.0--a--4ca6cdd03e-k8s-calico--kube--controllers--5449d854d8--xfsgn-eth0" Dec 16 12:46:28.156765 containerd[2138]: 2025-12-16 12:46:28.132 [INFO][5047] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a411fac797bc060abdfcbafbaf1917feeacacbdcdd4fc3bd08dcc53786a9da2e" Namespace="calico-system" Pod="calico-kube-controllers-5449d854d8-xfsgn" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-calico--kube--controllers--5449d854d8--xfsgn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--4ca6cdd03e-k8s-calico--kube--controllers--5449d854d8--xfsgn-eth0", GenerateName:"calico-kube-controllers-5449d854d8-", Namespace:"calico-system", SelfLink:"", UID:"afc8cbd3-cce9-4afd-951f-828ed80d9307", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 46, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5449d854d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-4ca6cdd03e", ContainerID:"", Pod:"calico-kube-controllers-5449d854d8-xfsgn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.45.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia28a565ee7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:28.156814 containerd[2138]: 2025-12-16 12:46:28.132 [INFO][5047] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.130/32] ContainerID="a411fac797bc060abdfcbafbaf1917feeacacbdcdd4fc3bd08dcc53786a9da2e" Namespace="calico-system" Pod="calico-kube-controllers-5449d854d8-xfsgn" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-calico--kube--controllers--5449d854d8--xfsgn-eth0" Dec 16 12:46:28.156814 containerd[2138]: 2025-12-16 12:46:28.132 [INFO][5047] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia28a565ee7e ContainerID="a411fac797bc060abdfcbafbaf1917feeacacbdcdd4fc3bd08dcc53786a9da2e" Namespace="calico-system" Pod="calico-kube-controllers-5449d854d8-xfsgn" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-calico--kube--controllers--5449d854d8--xfsgn-eth0" Dec 16 12:46:28.156814 containerd[2138]: 2025-12-16 12:46:28.136 [INFO][5047] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a411fac797bc060abdfcbafbaf1917feeacacbdcdd4fc3bd08dcc53786a9da2e" Namespace="calico-system" Pod="calico-kube-controllers-5449d854d8-xfsgn" 
WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-calico--kube--controllers--5449d854d8--xfsgn-eth0" Dec 16 12:46:28.156859 containerd[2138]: 2025-12-16 12:46:28.136 [INFO][5047] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a411fac797bc060abdfcbafbaf1917feeacacbdcdd4fc3bd08dcc53786a9da2e" Namespace="calico-system" Pod="calico-kube-controllers-5449d854d8-xfsgn" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-calico--kube--controllers--5449d854d8--xfsgn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--4ca6cdd03e-k8s-calico--kube--controllers--5449d854d8--xfsgn-eth0", GenerateName:"calico-kube-controllers-5449d854d8-", Namespace:"calico-system", SelfLink:"", UID:"afc8cbd3-cce9-4afd-951f-828ed80d9307", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 46, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5449d854d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-4ca6cdd03e", ContainerID:"a411fac797bc060abdfcbafbaf1917feeacacbdcdd4fc3bd08dcc53786a9da2e", Pod:"calico-kube-controllers-5449d854d8-xfsgn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.45.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia28a565ee7e", MAC:"be:47:63:39:e1:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:28.156893 containerd[2138]: 2025-12-16 12:46:28.153 [INFO][5047] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a411fac797bc060abdfcbafbaf1917feeacacbdcdd4fc3bd08dcc53786a9da2e" Namespace="calico-system" Pod="calico-kube-controllers-5449d854d8-xfsgn" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-calico--kube--controllers--5449d854d8--xfsgn-eth0" Dec 16 12:46:28.166000 audit[5095]: NETFILTER_CFG table=filter:128 family=2 entries=36 op=nft_register_chain pid=5095 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:46:28.170462 kernel: kauditd_printk_skb: 237 callbacks suppressed Dec 16 12:46:28.170527 kernel: audit: type=1325 audit(1765889188.166:677): table=filter:128 family=2 entries=36 op=nft_register_chain pid=5095 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:46:28.166000 audit[5095]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19576 a0=3 a1=fffff74db5c0 a2=0 a3=ffff98e70fa8 items=0 ppid=4827 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.197499 kernel: audit: type=1300 audit(1765889188.166:677): arch=c00000b7 syscall=211 success=yes exit=19576 a0=3 a1=fffff74db5c0 a2=0 a3=ffff98e70fa8 items=0 ppid=4827 pid=5095 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.166000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:46:28.208474 kernel: audit: type=1327 audit(1765889188.166:677): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:46:28.241333 containerd[2138]: time="2025-12-16T12:46:28.241303359Z" level=info msg="connecting to shim a411fac797bc060abdfcbafbaf1917feeacacbdcdd4fc3bd08dcc53786a9da2e" address="unix:///run/containerd/s/e3547e5e382742a83961d794915acc8818fa31950c472bb4890ea26189c77cdb" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:28.253419 systemd-networkd[1717]: cali5465f32f07c: Link UP Dec 16 12:46:28.255524 systemd-networkd[1717]: cali5465f32f07c: Gained carrier Dec 16 12:46:28.272604 containerd[2138]: 2025-12-16 12:46:28.076 [INFO][5056] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--4ca6cdd03e-k8s-csi--node--driver--xwwbh-eth0 csi-node-driver- calico-system 5bbc1d74-de1f-40b8-bd99-2346a3e2bafe 715 0 2025-12-16 12:46:06 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4515.1.0-a-4ca6cdd03e csi-node-driver-xwwbh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5465f32f07c [] [] }} ContainerID="0b14adea1a807473a627dc44d04acb25915fea7745c6917ef97c31cc88bb8fcc" Namespace="calico-system" Pod="csi-node-driver-xwwbh" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-csi--node--driver--xwwbh-" Dec 16 12:46:28.272604 containerd[2138]: 2025-12-16 12:46:28.076 [INFO][5056] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0b14adea1a807473a627dc44d04acb25915fea7745c6917ef97c31cc88bb8fcc" Namespace="calico-system" Pod="csi-node-driver-xwwbh" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-csi--node--driver--xwwbh-eth0" Dec 16 12:46:28.272604 containerd[2138]: 2025-12-16 12:46:28.110 [INFO][5077] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b14adea1a807473a627dc44d04acb25915fea7745c6917ef97c31cc88bb8fcc" HandleID="k8s-pod-network.0b14adea1a807473a627dc44d04acb25915fea7745c6917ef97c31cc88bb8fcc" Workload="ci--4515.1.0--a--4ca6cdd03e-k8s-csi--node--driver--xwwbh-eth0" Dec 16 12:46:28.273615 containerd[2138]: 2025-12-16 12:46:28.110 [INFO][5077] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0b14adea1a807473a627dc44d04acb25915fea7745c6917ef97c31cc88bb8fcc" HandleID="k8s-pod-network.0b14adea1a807473a627dc44d04acb25915fea7745c6917ef97c31cc88bb8fcc" Workload="ci--4515.1.0--a--4ca6cdd03e-k8s-csi--node--driver--xwwbh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-a-4ca6cdd03e", "pod":"csi-node-driver-xwwbh", "timestamp":"2025-12-16 12:46:28.110438023 +0000 UTC"}, Hostname:"ci-4515.1.0-a-4ca6cdd03e", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:46:28.273615 containerd[2138]: 2025-12-16 12:46:28.110 [INFO][5077] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:46:28.273615 containerd[2138]: 2025-12-16 12:46:28.129 [INFO][5077] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:46:28.273615 containerd[2138]: 2025-12-16 12:46:28.130 [INFO][5077] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-4ca6cdd03e' Dec 16 12:46:28.273615 containerd[2138]: 2025-12-16 12:46:28.205 [INFO][5077] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0b14adea1a807473a627dc44d04acb25915fea7745c6917ef97c31cc88bb8fcc" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:28.273615 containerd[2138]: 2025-12-16 12:46:28.211 [INFO][5077] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:28.273615 containerd[2138]: 2025-12-16 12:46:28.217 [INFO][5077] ipam/ipam.go 511: Trying affinity for 192.168.45.128/26 host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:28.273615 containerd[2138]: 2025-12-16 12:46:28.219 [INFO][5077] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.128/26 host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:28.273615 containerd[2138]: 2025-12-16 12:46:28.221 [INFO][5077] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.128/26 host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:28.273763 containerd[2138]: 2025-12-16 12:46:28.223 [INFO][5077] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.45.128/26 handle="k8s-pod-network.0b14adea1a807473a627dc44d04acb25915fea7745c6917ef97c31cc88bb8fcc" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:28.273763 containerd[2138]: 2025-12-16 12:46:28.225 [INFO][5077] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0b14adea1a807473a627dc44d04acb25915fea7745c6917ef97c31cc88bb8fcc Dec 16 12:46:28.273763 containerd[2138]: 2025-12-16 12:46:28.231 [INFO][5077] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.45.128/26 handle="k8s-pod-network.0b14adea1a807473a627dc44d04acb25915fea7745c6917ef97c31cc88bb8fcc" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:28.273763 containerd[2138]: 2025-12-16 12:46:28.244 [INFO][5077] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.45.131/26] block=192.168.45.128/26 handle="k8s-pod-network.0b14adea1a807473a627dc44d04acb25915fea7745c6917ef97c31cc88bb8fcc" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:28.273763 containerd[2138]: 2025-12-16 12:46:28.244 [INFO][5077] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.131/26] handle="k8s-pod-network.0b14adea1a807473a627dc44d04acb25915fea7745c6917ef97c31cc88bb8fcc" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:28.273763 containerd[2138]: 2025-12-16 12:46:28.244 [INFO][5077] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:46:28.273763 containerd[2138]: 2025-12-16 12:46:28.244 [INFO][5077] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.45.131/26] IPv6=[] ContainerID="0b14adea1a807473a627dc44d04acb25915fea7745c6917ef97c31cc88bb8fcc" HandleID="k8s-pod-network.0b14adea1a807473a627dc44d04acb25915fea7745c6917ef97c31cc88bb8fcc" Workload="ci--4515.1.0--a--4ca6cdd03e-k8s-csi--node--driver--xwwbh-eth0" Dec 16 12:46:28.273862 containerd[2138]: 2025-12-16 12:46:28.249 [INFO][5056] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0b14adea1a807473a627dc44d04acb25915fea7745c6917ef97c31cc88bb8fcc" Namespace="calico-system" Pod="csi-node-driver-xwwbh" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-csi--node--driver--xwwbh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--4ca6cdd03e-k8s-csi--node--driver--xwwbh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5bbc1d74-de1f-40b8-bd99-2346a3e2bafe", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 46, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-4ca6cdd03e", ContainerID:"", Pod:"csi-node-driver-xwwbh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.45.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5465f32f07c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:28.273896 containerd[2138]: 2025-12-16 12:46:28.250 [INFO][5056] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.131/32] ContainerID="0b14adea1a807473a627dc44d04acb25915fea7745c6917ef97c31cc88bb8fcc" Namespace="calico-system" Pod="csi-node-driver-xwwbh" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-csi--node--driver--xwwbh-eth0" Dec 16 12:46:28.273896 containerd[2138]: 2025-12-16 12:46:28.250 [INFO][5056] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5465f32f07c ContainerID="0b14adea1a807473a627dc44d04acb25915fea7745c6917ef97c31cc88bb8fcc" Namespace="calico-system" Pod="csi-node-driver-xwwbh" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-csi--node--driver--xwwbh-eth0" Dec 16 12:46:28.273896 containerd[2138]: 2025-12-16 12:46:28.252 [INFO][5056] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0b14adea1a807473a627dc44d04acb25915fea7745c6917ef97c31cc88bb8fcc" Namespace="calico-system" Pod="csi-node-driver-xwwbh" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-csi--node--driver--xwwbh-eth0" Dec 16 12:46:28.273947 containerd[2138]: 2025-12-16 12:46:28.252 [INFO][5056] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0b14adea1a807473a627dc44d04acb25915fea7745c6917ef97c31cc88bb8fcc" Namespace="calico-system" Pod="csi-node-driver-xwwbh" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-csi--node--driver--xwwbh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--4ca6cdd03e-k8s-csi--node--driver--xwwbh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5bbc1d74-de1f-40b8-bd99-2346a3e2bafe", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 46, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-4ca6cdd03e", ContainerID:"0b14adea1a807473a627dc44d04acb25915fea7745c6917ef97c31cc88bb8fcc", Pod:"csi-node-driver-xwwbh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.45.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5465f32f07c", MAC:"1e:92:9b:55:8b:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:28.273980 containerd[2138]: 2025-12-16 12:46:28.264 [INFO][5056] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0b14adea1a807473a627dc44d04acb25915fea7745c6917ef97c31cc88bb8fcc" Namespace="calico-system" Pod="csi-node-driver-xwwbh" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-csi--node--driver--xwwbh-eth0" Dec 16 12:46:28.278393 systemd[1]: Started cri-containerd-a411fac797bc060abdfcbafbaf1917feeacacbdcdd4fc3bd08dcc53786a9da2e.scope - libcontainer container a411fac797bc060abdfcbafbaf1917feeacacbdcdd4fc3bd08dcc53786a9da2e. 
Dec 16 12:46:28.287000 audit[5141]: NETFILTER_CFG table=filter:129 family=2 entries=46 op=nft_register_chain pid=5141 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:46:28.287000 audit[5141]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23616 a0=3 a1=ffffd1c9a940 a2=0 a3=ffffb2d04fa8 items=0 ppid=4827 pid=5141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.316819 kernel: audit: type=1325 audit(1765889188.287:678): table=filter:129 family=2 entries=46 op=nft_register_chain pid=5141 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:46:28.316885 kernel: audit: type=1300 audit(1765889188.287:678): arch=c00000b7 syscall=211 success=yes exit=23616 a0=3 a1=ffffd1c9a940 a2=0 a3=ffffb2d04fa8 items=0 ppid=4827 pid=5141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.287000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:46:28.329383 kernel: audit: type=1327 audit(1765889188.287:678): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:46:28.329589 kernel: audit: type=1334 audit(1765889188.298:679): prog-id=235 op=LOAD Dec 16 12:46:28.298000 audit: BPF prog-id=235 op=LOAD Dec 16 12:46:28.316000 audit: BPF prog-id=236 op=LOAD Dec 16 12:46:28.337634 kernel: audit: type=1334 audit(1765889188.316:680): prog-id=236 op=LOAD Dec 16 12:46:28.316000 audit[5115]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5103 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.353813 kernel: audit: type=1300 audit(1765889188.316:680): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5103 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134313166616337393762633036306162646663626166626166313931 Dec 16 12:46:28.371252 kernel: audit: type=1327 audit(1765889188.316:680): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134313166616337393762633036306162646663626166626166313931 Dec 16 12:46:28.316000 audit: BPF prog-id=236 op=UNLOAD Dec 16 12:46:28.316000 audit[5115]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5103 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 12:46:28.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134313166616337393762633036306162646663626166626166313931 Dec 16 12:46:28.317000 audit: BPF prog-id=237 op=LOAD Dec 16 12:46:28.317000 audit[5115]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5103 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134313166616337393762633036306162646663626166626166313931 Dec 16 12:46:28.317000 audit: BPF prog-id=238 op=LOAD Dec 16 12:46:28.317000 audit[5115]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5103 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134313166616337393762633036306162646663626166626166313931 Dec 16 12:46:28.317000 audit: BPF prog-id=238 op=UNLOAD Dec 16 12:46:28.317000 audit[5115]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5103 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134313166616337393762633036306162646663626166626166313931 Dec 16 12:46:28.317000 audit: BPF prog-id=237 op=UNLOAD Dec 16 12:46:28.317000 audit[5115]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5103 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134313166616337393762633036306162646663626166626166313931 Dec 16 12:46:28.317000 audit: BPF prog-id=239 op=LOAD Dec 16 12:46:28.317000 audit[5115]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5103 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.317000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134313166616337393762633036306162646663626166626166313931 Dec 16 12:46:28.401344 containerd[2138]: time="2025-12-16T12:46:28.401130800Z" level=info msg="connecting to shim 0b14adea1a807473a627dc44d04acb25915fea7745c6917ef97c31cc88bb8fcc" address="unix:///run/containerd/s/c6b39d90ce5de6f86bcbb6140ec1e51a0fa23dd086387bd5d3bdf64b4b80a0bd" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:28.403223 containerd[2138]: time="2025-12-16T12:46:28.403172563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5449d854d8-xfsgn,Uid:afc8cbd3-cce9-4afd-951f-828ed80d9307,Namespace:calico-system,Attempt:0,} returns sandbox id \"a411fac797bc060abdfcbafbaf1917feeacacbdcdd4fc3bd08dcc53786a9da2e\"" Dec 16 12:46:28.408473 containerd[2138]: time="2025-12-16T12:46:28.408454715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:46:28.422379 systemd[1]: Started cri-containerd-0b14adea1a807473a627dc44d04acb25915fea7745c6917ef97c31cc88bb8fcc.scope - libcontainer container 0b14adea1a807473a627dc44d04acb25915fea7745c6917ef97c31cc88bb8fcc. Dec 16 12:46:28.430000 audit: BPF prog-id=240 op=LOAD Dec 16 12:46:28.430000 audit: BPF prog-id=241 op=LOAD Dec 16 12:46:28.430000 audit[5169]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=5157 pid=5169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062313461646561316138303734373361363237646334346430346163 Dec 16 12:46:28.430000 audit: BPF prog-id=241 op=UNLOAD Dec 16 12:46:28.430000 audit[5169]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5157 pid=5169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062313461646561316138303734373361363237646334346430346163 Dec 16 12:46:28.430000 audit: BPF prog-id=242 op=LOAD Dec 16 12:46:28.430000 audit[5169]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=5157 pid=5169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062313461646561316138303734373361363237646334346430346163 Dec 16 12:46:28.430000 audit: BPF prog-id=243 op=LOAD Dec 16 12:46:28.430000 audit[5169]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 
a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=5157 pid=5169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062313461646561316138303734373361363237646334346430346163 Dec 16 12:46:28.430000 audit: BPF prog-id=243 op=UNLOAD Dec 16 12:46:28.430000 audit[5169]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5157 pid=5169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062313461646561316138303734373361363237646334346430346163 Dec 16 12:46:28.430000 audit: BPF prog-id=242 op=UNLOAD Dec 16 12:46:28.430000 audit[5169]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5157 pid=5169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062313461646561316138303734373361363237646334346430346163 Dec 16 12:46:28.430000 audit: BPF prog-id=244 op=LOAD Dec 16 12:46:28.430000 audit[5169]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=5157 pid=5169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062313461646561316138303734373361363237646334346430346163 Dec 16 12:46:28.448003 containerd[2138]: time="2025-12-16T12:46:28.447977789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xwwbh,Uid:5bbc1d74-de1f-40b8-bd99-2346a3e2bafe,Namespace:calico-system,Attempt:0,} returns sandbox id \"0b14adea1a807473a627dc44d04acb25915fea7745c6917ef97c31cc88bb8fcc\"" Dec 16 12:46:28.701451 containerd[2138]: time="2025-12-16T12:46:28.701293170Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:28.704881 containerd[2138]: time="2025-12-16T12:46:28.704840392Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:46:28.705024 containerd[2138]: time="2025-12-16T12:46:28.704856304Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:28.705195 kubelet[3668]: E1216 12:46:28.705164 3668 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:46:28.705906 kubelet[3668]: E1216 12:46:28.705546 3668 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:46:28.705906 kubelet[3668]: E1216 12:46:28.705739 3668 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5449d854d8-xfsgn_calico-system(afc8cbd3-cce9-4afd-951f-828ed80d9307): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:28.705906 kubelet[3668]: E1216 12:46:28.705778 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5449d854d8-xfsgn" podUID="afc8cbd3-cce9-4afd-951f-828ed80d9307" Dec 16 12:46:28.706049 containerd[2138]: time="2025-12-16T12:46:28.705842925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:46:28.961350 containerd[2138]: time="2025-12-16T12:46:28.961169892Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:28.964290 containerd[2138]: time="2025-12-16T12:46:28.964250741Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:46:28.964505 containerd[2138]: time="2025-12-16T12:46:28.964332919Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:28.964536 kubelet[3668]: E1216 12:46:28.964502 3668 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:46:28.964582 kubelet[3668]: E1216 12:46:28.964547 3668 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:46:28.964639 kubelet[3668]: E1216 12:46:28.964612 3668 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod 
csi-node-driver-xwwbh_calico-system(5bbc1d74-de1f-40b8-bd99-2346a3e2bafe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:28.966711 containerd[2138]: time="2025-12-16T12:46:28.966682451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:46:29.009552 containerd[2138]: time="2025-12-16T12:46:29.009518300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ff5fc5c78-zxwl6,Uid:33f549f5-c190-4fdd-897c-292335e0de6b,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:46:29.106472 systemd-networkd[1717]: cali6cec8d42914: Link UP Dec 16 12:46:29.106586 systemd-networkd[1717]: cali6cec8d42914: Gained carrier Dec 16 12:46:29.124038 containerd[2138]: 2025-12-16 12:46:29.054 [INFO][5200] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--4ca6cdd03e-k8s-calico--apiserver--6ff5fc5c78--zxwl6-eth0 calico-apiserver-6ff5fc5c78- calico-apiserver 33f549f5-c190-4fdd-897c-292335e0de6b 813 0 2025-12-16 12:45:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6ff5fc5c78 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515.1.0-a-4ca6cdd03e calico-apiserver-6ff5fc5c78-zxwl6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6cec8d42914 [] [] }} ContainerID="877a8f7ac53517ffef97fc8435f618b082f958f7f46ae53a9e760a8629bb0c1c" Namespace="calico-apiserver" Pod="calico-apiserver-6ff5fc5c78-zxwl6" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-calico--apiserver--6ff5fc5c78--zxwl6-" Dec 16 12:46:29.124038 containerd[2138]: 2025-12-16 12:46:29.054 [INFO][5200] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="877a8f7ac53517ffef97fc8435f618b082f958f7f46ae53a9e760a8629bb0c1c" Namespace="calico-apiserver" Pod="calico-apiserver-6ff5fc5c78-zxwl6" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-calico--apiserver--6ff5fc5c78--zxwl6-eth0" Dec 16 12:46:29.124038 containerd[2138]: 2025-12-16 12:46:29.072 [INFO][5212] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="877a8f7ac53517ffef97fc8435f618b082f958f7f46ae53a9e760a8629bb0c1c" HandleID="k8s-pod-network.877a8f7ac53517ffef97fc8435f618b082f958f7f46ae53a9e760a8629bb0c1c" Workload="ci--4515.1.0--a--4ca6cdd03e-k8s-calico--apiserver--6ff5fc5c78--zxwl6-eth0" Dec 16 12:46:29.125144 containerd[2138]: 2025-12-16 12:46:29.072 [INFO][5212] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="877a8f7ac53517ffef97fc8435f618b082f958f7f46ae53a9e760a8629bb0c1c" HandleID="k8s-pod-network.877a8f7ac53517ffef97fc8435f618b082f958f7f46ae53a9e760a8629bb0c1c" Workload="ci--4515.1.0--a--4ca6cdd03e-k8s-calico--apiserver--6ff5fc5c78--zxwl6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c8fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515.1.0-a-4ca6cdd03e", "pod":"calico-apiserver-6ff5fc5c78-zxwl6", "timestamp":"2025-12-16 12:46:29.072123838 +0000 UTC"}, Hostname:"ci-4515.1.0-a-4ca6cdd03e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Dec 16 12:46:29.125144 containerd[2138]: 2025-12-16 12:46:29.072 [INFO][5212] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:46:29.125144 containerd[2138]: 2025-12-16 12:46:29.072 [INFO][5212] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:46:29.125144 containerd[2138]: 2025-12-16 12:46:29.072 [INFO][5212] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-4ca6cdd03e' Dec 16 12:46:29.125144 containerd[2138]: 2025-12-16 12:46:29.077 [INFO][5212] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.877a8f7ac53517ffef97fc8435f618b082f958f7f46ae53a9e760a8629bb0c1c" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:29.125144 containerd[2138]: 2025-12-16 12:46:29.080 [INFO][5212] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:29.125144 containerd[2138]: 2025-12-16 12:46:29.083 [INFO][5212] ipam/ipam.go 511: Trying affinity for 192.168.45.128/26 host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:29.125144 containerd[2138]: 2025-12-16 12:46:29.084 [INFO][5212] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.128/26 host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:29.125144 containerd[2138]: 2025-12-16 12:46:29.086 [INFO][5212] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.128/26 host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:29.125614 containerd[2138]: 2025-12-16 12:46:29.086 [INFO][5212] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.45.128/26 handle="k8s-pod-network.877a8f7ac53517ffef97fc8435f618b082f958f7f46ae53a9e760a8629bb0c1c" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:29.125614 containerd[2138]: 2025-12-16 12:46:29.088 [INFO][5212] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.877a8f7ac53517ffef97fc8435f618b082f958f7f46ae53a9e760a8629bb0c1c Dec 16 12:46:29.125614 containerd[2138]: 2025-12-16 12:46:29.092 [INFO][5212] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.45.128/26 handle="k8s-pod-network.877a8f7ac53517ffef97fc8435f618b082f958f7f46ae53a9e760a8629bb0c1c" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:29.125614 containerd[2138]: 2025-12-16 12:46:29.102 [INFO][5212] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.45.132/26] block=192.168.45.128/26 handle="k8s-pod-network.877a8f7ac53517ffef97fc8435f618b082f958f7f46ae53a9e760a8629bb0c1c" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:29.125614 containerd[2138]: 2025-12-16 12:46:29.102 [INFO][5212] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.132/26] handle="k8s-pod-network.877a8f7ac53517ffef97fc8435f618b082f958f7f46ae53a9e760a8629bb0c1c" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:29.125614 containerd[2138]: 2025-12-16 12:46:29.102 [INFO][5212] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:46:29.125614 containerd[2138]: 2025-12-16 12:46:29.102 [INFO][5212] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.45.132/26] IPv6=[] ContainerID="877a8f7ac53517ffef97fc8435f618b082f958f7f46ae53a9e760a8629bb0c1c" HandleID="k8s-pod-network.877a8f7ac53517ffef97fc8435f618b082f958f7f46ae53a9e760a8629bb0c1c" Workload="ci--4515.1.0--a--4ca6cdd03e-k8s-calico--apiserver--6ff5fc5c78--zxwl6-eth0" Dec 16 12:46:29.125712 containerd[2138]: 2025-12-16 12:46:29.103 [INFO][5200] cni-plugin/k8s.go 418: Populated endpoint ContainerID="877a8f7ac53517ffef97fc8435f618b082f958f7f46ae53a9e760a8629bb0c1c" Namespace="calico-apiserver" Pod="calico-apiserver-6ff5fc5c78-zxwl6" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-calico--apiserver--6ff5fc5c78--zxwl6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--4ca6cdd03e-k8s-calico--apiserver--6ff5fc5c78--zxwl6-eth0", GenerateName:"calico-apiserver-6ff5fc5c78-", Namespace:"calico-apiserver", SelfLink:"", UID:"33f549f5-c190-4fdd-897c-292335e0de6b", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ff5fc5c78", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-4ca6cdd03e", ContainerID:"", Pod:"calico-apiserver-6ff5fc5c78-zxwl6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6cec8d42914", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:29.125753 containerd[2138]: 2025-12-16 12:46:29.103 [INFO][5200] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.132/32] ContainerID="877a8f7ac53517ffef97fc8435f618b082f958f7f46ae53a9e760a8629bb0c1c" Namespace="calico-apiserver" Pod="calico-apiserver-6ff5fc5c78-zxwl6" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-calico--apiserver--6ff5fc5c78--zxwl6-eth0" Dec 16 12:46:29.125753 containerd[2138]: 2025-12-16 12:46:29.103 [INFO][5200] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6cec8d42914 ContainerID="877a8f7ac53517ffef97fc8435f618b082f958f7f46ae53a9e760a8629bb0c1c" Namespace="calico-apiserver" Pod="calico-apiserver-6ff5fc5c78-zxwl6" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-calico--apiserver--6ff5fc5c78--zxwl6-eth0" Dec 16 12:46:29.125753 containerd[2138]: 2025-12-16 12:46:29.107 [INFO][5200] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="877a8f7ac53517ffef97fc8435f618b082f958f7f46ae53a9e760a8629bb0c1c" Namespace="calico-apiserver" Pod="calico-apiserver-6ff5fc5c78-zxwl6" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-calico--apiserver--6ff5fc5c78--zxwl6-eth0" Dec 16 12:46:29.125795 containerd[2138]: 2025-12-16 12:46:29.107 
[INFO][5200] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="877a8f7ac53517ffef97fc8435f618b082f958f7f46ae53a9e760a8629bb0c1c" Namespace="calico-apiserver" Pod="calico-apiserver-6ff5fc5c78-zxwl6" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-calico--apiserver--6ff5fc5c78--zxwl6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--4ca6cdd03e-k8s-calico--apiserver--6ff5fc5c78--zxwl6-eth0", GenerateName:"calico-apiserver-6ff5fc5c78-", Namespace:"calico-apiserver", SelfLink:"", UID:"33f549f5-c190-4fdd-897c-292335e0de6b", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ff5fc5c78", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-4ca6cdd03e", ContainerID:"877a8f7ac53517ffef97fc8435f618b082f958f7f46ae53a9e760a8629bb0c1c", Pod:"calico-apiserver-6ff5fc5c78-zxwl6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6cec8d42914", MAC:"ce:50:53:30:e7:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:29.125833 containerd[2138]: 2025-12-16 12:46:29.120 [INFO][5200] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="877a8f7ac53517ffef97fc8435f618b082f958f7f46ae53a9e760a8629bb0c1c" Namespace="calico-apiserver" Pod="calico-apiserver-6ff5fc5c78-zxwl6" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-calico--apiserver--6ff5fc5c78--zxwl6-eth0" Dec 16 12:46:29.132000 audit[5227]: NETFILTER_CFG table=filter:130 family=2 entries=54 op=nft_register_chain pid=5227 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:46:29.132000 audit[5227]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29380 a0=3 a1=ffffcdb8b8d0 a2=0 a3=ffffb3dedfa8 items=0 ppid=4827 pid=5227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.132000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:46:29.176818 containerd[2138]: time="2025-12-16T12:46:29.176728818Z" level=info msg="connecting to shim 877a8f7ac53517ffef97fc8435f618b082f958f7f46ae53a9e760a8629bb0c1c" address="unix:///run/containerd/s/40404f37ce7571f76b0f4bad13ca192b6420a64ae587fcad7ec193d2f018bfd1" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:29.194508 systemd[1]: Started cri-containerd-877a8f7ac53517ffef97fc8435f618b082f958f7f46ae53a9e760a8629bb0c1c.scope - libcontainer container 
877a8f7ac53517ffef97fc8435f618b082f958f7f46ae53a9e760a8629bb0c1c. Dec 16 12:46:29.205593 kubelet[3668]: E1216 12:46:29.205443 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5449d854d8-xfsgn" podUID="afc8cbd3-cce9-4afd-951f-828ed80d9307" Dec 16 12:46:29.214000 audit: BPF prog-id=245 op=LOAD Dec 16 12:46:29.215000 audit: BPF prog-id=246 op=LOAD Dec 16 12:46:29.215000 audit[5247]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5236 pid=5247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837376138663761633533353137666665663937666338343335663631 Dec 16 12:46:29.215000 audit: BPF prog-id=246 op=UNLOAD Dec 16 12:46:29.215000 audit[5247]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5236 pid=5247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837376138663761633533353137666665663937666338343335663631 Dec 16 12:46:29.216000 audit: BPF prog-id=247 op=LOAD Dec 16 12:46:29.216000 audit[5247]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5236 pid=5247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.216000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837376138663761633533353137666665663937666338343335663631 Dec 16 12:46:29.216000 audit: BPF prog-id=248 op=LOAD Dec 16 12:46:29.216000 audit[5247]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5236 pid=5247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.216000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837376138663761633533353137666665663937666338343335663631 Dec 16 12:46:29.216000 audit: BPF prog-id=248 op=UNLOAD Dec 16 12:46:29.216000 
audit[5247]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5236 pid=5247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.216000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837376138663761633533353137666665663937666338343335663631 Dec 16 12:46:29.216000 audit: BPF prog-id=247 op=UNLOAD Dec 16 12:46:29.216000 audit[5247]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5236 pid=5247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.216000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837376138663761633533353137666665663937666338343335663631 Dec 16 12:46:29.216000 audit: BPF prog-id=249 op=LOAD Dec 16 12:46:29.216000 audit[5247]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5236 pid=5247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.216000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837376138663761633533353137666665663937666338343335663631 Dec 16 12:46:29.233841 containerd[2138]: time="2025-12-16T12:46:29.233813133Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:29.238228 containerd[2138]: time="2025-12-16T12:46:29.238102505Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:46:29.238476 kubelet[3668]: E1216 12:46:29.238430 3668 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:46:29.238476 kubelet[3668]: E1216 12:46:29.238466 3668 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:46:29.238591 kubelet[3668]: E1216 12:46:29.238509 3668 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-xwwbh_calico-system(5bbc1d74-de1f-40b8-bd99-2346a3e2bafe): ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:29.238591 kubelet[3668]: E1216 12:46:29.238534 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xwwbh" podUID="5bbc1d74-de1f-40b8-bd99-2346a3e2bafe" Dec 16 12:46:29.238964 containerd[2138]: time="2025-12-16T12:46:29.238191803Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:29.246402 containerd[2138]: time="2025-12-16T12:46:29.246372823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ff5fc5c78-zxwl6,Uid:33f549f5-c190-4fdd-897c-292335e0de6b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"877a8f7ac53517ffef97fc8435f618b082f958f7f46ae53a9e760a8629bb0c1c\"" Dec 16 12:46:29.248030 containerd[2138]: time="2025-12-16T12:46:29.248002966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:46:29.508161 containerd[2138]: time="2025-12-16T12:46:29.508086637Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:29.511535 containerd[2138]: time="2025-12-16T12:46:29.511468607Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:46:29.511535 containerd[2138]: time="2025-12-16T12:46:29.511497928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:29.511728 kubelet[3668]: E1216 12:46:29.511686 3668 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:46:29.511779 kubelet[3668]: E1216 12:46:29.511736 3668 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:46:29.511810 kubelet[3668]: E1216 12:46:29.511801 3668 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6ff5fc5c78-zxwl6_calico-apiserver(33f549f5-c190-4fdd-897c-292335e0de6b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: 
not found" logger="UnhandledError" Dec 16 12:46:29.511853 kubelet[3668]: E1216 12:46:29.511826 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff5fc5c78-zxwl6" podUID="33f549f5-c190-4fdd-897c-292335e0de6b" Dec 16 12:46:30.008334 containerd[2138]: time="2025-12-16T12:46:30.008167467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gvb6w,Uid:1d9ba31c-b8d5-4a2c-a6bb-ca7d237b3d76,Namespace:kube-system,Attempt:0,}" Dec 16 12:46:30.013251 containerd[2138]: time="2025-12-16T12:46:30.013219820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-257m4,Uid:f3748f03-9dbe-4ca7-b265-d450d86ecab7,Namespace:kube-system,Attempt:0,}" Dec 16 12:46:30.025351 systemd-networkd[1717]: calia28a565ee7e: Gained IPv6LL Dec 16 12:46:30.147341 systemd-networkd[1717]: cali10ab4dc0904: Link UP Dec 16 12:46:30.148176 systemd-networkd[1717]: cali10ab4dc0904: Gained carrier Dec 16 12:46:30.154416 systemd-networkd[1717]: cali5465f32f07c: Gained IPv6LL Dec 16 12:46:30.165173 containerd[2138]: 2025-12-16 12:46:30.060 [INFO][5272] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--4ca6cdd03e-k8s-coredns--66bc5c9577--gvb6w-eth0 coredns-66bc5c9577- kube-system 1d9ba31c-b8d5-4a2c-a6bb-ca7d237b3d76 811 0 2025-12-16 12:45:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515.1.0-a-4ca6cdd03e coredns-66bc5c9577-gvb6w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali10ab4dc0904 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="aad80a4e56a93c74df42228e994b6fc05a79eb4dd25e95eea7ed5c94a8a4eb7a" Namespace="kube-system" Pod="coredns-66bc5c9577-gvb6w" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-coredns--66bc5c9577--gvb6w-" Dec 16 12:46:30.165173 containerd[2138]: 2025-12-16 12:46:30.060 [INFO][5272] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aad80a4e56a93c74df42228e994b6fc05a79eb4dd25e95eea7ed5c94a8a4eb7a" Namespace="kube-system" Pod="coredns-66bc5c9577-gvb6w" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-coredns--66bc5c9577--gvb6w-eth0" Dec 16 12:46:30.165173 containerd[2138]: 2025-12-16 12:46:30.092 [INFO][5295] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aad80a4e56a93c74df42228e994b6fc05a79eb4dd25e95eea7ed5c94a8a4eb7a" HandleID="k8s-pod-network.aad80a4e56a93c74df42228e994b6fc05a79eb4dd25e95eea7ed5c94a8a4eb7a" Workload="ci--4515.1.0--a--4ca6cdd03e-k8s-coredns--66bc5c9577--gvb6w-eth0" Dec 16 12:46:30.165614 containerd[2138]: 2025-12-16 12:46:30.092 [INFO][5295] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="aad80a4e56a93c74df42228e994b6fc05a79eb4dd25e95eea7ed5c94a8a4eb7a" HandleID="k8s-pod-network.aad80a4e56a93c74df42228e994b6fc05a79eb4dd25e95eea7ed5c94a8a4eb7a" Workload="ci--4515.1.0--a--4ca6cdd03e-k8s-coredns--66bc5c9577--gvb6w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afe0), 
Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515.1.0-a-4ca6cdd03e", "pod":"coredns-66bc5c9577-gvb6w", "timestamp":"2025-12-16 12:46:30.092667139 +0000 UTC"}, Hostname:"ci-4515.1.0-a-4ca6cdd03e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:46:30.165614 containerd[2138]: 2025-12-16 12:46:30.092 [INFO][5295] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:46:30.165614 containerd[2138]: 2025-12-16 12:46:30.092 [INFO][5295] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:46:30.165614 containerd[2138]: 2025-12-16 12:46:30.092 [INFO][5295] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-4ca6cdd03e' Dec 16 12:46:30.165614 containerd[2138]: 2025-12-16 12:46:30.102 [INFO][5295] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aad80a4e56a93c74df42228e994b6fc05a79eb4dd25e95eea7ed5c94a8a4eb7a" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:30.165614 containerd[2138]: 2025-12-16 12:46:30.111 [INFO][5295] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:30.165614 containerd[2138]: 2025-12-16 12:46:30.117 [INFO][5295] ipam/ipam.go 511: Trying affinity for 192.168.45.128/26 host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:30.165614 containerd[2138]: 2025-12-16 12:46:30.119 [INFO][5295] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.128/26 host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:30.165614 containerd[2138]: 2025-12-16 12:46:30.121 [INFO][5295] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.128/26 host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:30.165757 containerd[2138]: 2025-12-16 12:46:30.122 [INFO][5295] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.45.128/26 handle="k8s-pod-network.aad80a4e56a93c74df42228e994b6fc05a79eb4dd25e95eea7ed5c94a8a4eb7a" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:30.165757 containerd[2138]: 2025-12-16 12:46:30.123 [INFO][5295] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.aad80a4e56a93c74df42228e994b6fc05a79eb4dd25e95eea7ed5c94a8a4eb7a Dec 16 12:46:30.165757 containerd[2138]: 2025-12-16 12:46:30.130 [INFO][5295] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.45.128/26 handle="k8s-pod-network.aad80a4e56a93c74df42228e994b6fc05a79eb4dd25e95eea7ed5c94a8a4eb7a" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:30.165757 containerd[2138]: 2025-12-16 12:46:30.138 [INFO][5295] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.45.133/26] block=192.168.45.128/26 handle="k8s-pod-network.aad80a4e56a93c74df42228e994b6fc05a79eb4dd25e95eea7ed5c94a8a4eb7a" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:30.165757 containerd[2138]: 2025-12-16 12:46:30.138 [INFO][5295] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.133/26] handle="k8s-pod-network.aad80a4e56a93c74df42228e994b6fc05a79eb4dd25e95eea7ed5c94a8a4eb7a" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:30.165757 containerd[2138]: 2025-12-16 12:46:30.138 [INFO][5295] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:46:30.165757 containerd[2138]: 2025-12-16 12:46:30.138 [INFO][5295] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.45.133/26] IPv6=[] ContainerID="aad80a4e56a93c74df42228e994b6fc05a79eb4dd25e95eea7ed5c94a8a4eb7a" HandleID="k8s-pod-network.aad80a4e56a93c74df42228e994b6fc05a79eb4dd25e95eea7ed5c94a8a4eb7a" Workload="ci--4515.1.0--a--4ca6cdd03e-k8s-coredns--66bc5c9577--gvb6w-eth0" Dec 16 12:46:30.165856 containerd[2138]: 2025-12-16 12:46:30.142 [INFO][5272] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aad80a4e56a93c74df42228e994b6fc05a79eb4dd25e95eea7ed5c94a8a4eb7a" Namespace="kube-system" Pod="coredns-66bc5c9577-gvb6w" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-coredns--66bc5c9577--gvb6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--4ca6cdd03e-k8s-coredns--66bc5c9577--gvb6w-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1d9ba31c-b8d5-4a2c-a6bb-ca7d237b3d76", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-4ca6cdd03e", ContainerID:"", Pod:"coredns-66bc5c9577-gvb6w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali10ab4dc0904", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:30.165856 containerd[2138]: 2025-12-16 12:46:30.142 [INFO][5272] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.133/32] ContainerID="aad80a4e56a93c74df42228e994b6fc05a79eb4dd25e95eea7ed5c94a8a4eb7a" Namespace="kube-system" Pod="coredns-66bc5c9577-gvb6w" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-coredns--66bc5c9577--gvb6w-eth0" Dec 16 12:46:30.165856 containerd[2138]: 2025-12-16 12:46:30.142 [INFO][5272] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali10ab4dc0904 ContainerID="aad80a4e56a93c74df42228e994b6fc05a79eb4dd25e95eea7ed5c94a8a4eb7a" Namespace="kube-system" Pod="coredns-66bc5c9577-gvb6w" 
WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-coredns--66bc5c9577--gvb6w-eth0" Dec 16 12:46:30.165856 containerd[2138]: 2025-12-16 12:46:30.148 [INFO][5272] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aad80a4e56a93c74df42228e994b6fc05a79eb4dd25e95eea7ed5c94a8a4eb7a" Namespace="kube-system" Pod="coredns-66bc5c9577-gvb6w" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-coredns--66bc5c9577--gvb6w-eth0" Dec 16 12:46:30.165856 containerd[2138]: 2025-12-16 12:46:30.148 [INFO][5272] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aad80a4e56a93c74df42228e994b6fc05a79eb4dd25e95eea7ed5c94a8a4eb7a" Namespace="kube-system" Pod="coredns-66bc5c9577-gvb6w" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-coredns--66bc5c9577--gvb6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--4ca6cdd03e-k8s-coredns--66bc5c9577--gvb6w-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1d9ba31c-b8d5-4a2c-a6bb-ca7d237b3d76", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-4ca6cdd03e", ContainerID:"aad80a4e56a93c74df42228e994b6fc05a79eb4dd25e95eea7ed5c94a8a4eb7a", Pod:"coredns-66bc5c9577-gvb6w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali10ab4dc0904", MAC:"96:c5:52:16:0e:17", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:30.165984 containerd[2138]: 2025-12-16 12:46:30.162 [INFO][5272] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aad80a4e56a93c74df42228e994b6fc05a79eb4dd25e95eea7ed5c94a8a4eb7a" Namespace="kube-system" Pod="coredns-66bc5c9577-gvb6w" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-coredns--66bc5c9577--gvb6w-eth0" Dec 16 12:46:30.180000 audit[5323]: NETFILTER_CFG table=filter:131 family=2 entries=50 op=nft_register_chain pid=5323 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 
12:46:30.180000 audit[5323]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24912 a0=3 a1=ffffeab30140 a2=0 a3=ffffb76e0fa8 items=0 ppid=4827 pid=5323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.180000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:46:30.210681 kubelet[3668]: E1216 12:46:30.210650 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5449d854d8-xfsgn" podUID="afc8cbd3-cce9-4afd-951f-828ed80d9307" Dec 16 12:46:30.212853 kubelet[3668]: E1216 12:46:30.212825 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff5fc5c78-zxwl6" podUID="33f549f5-c190-4fdd-897c-292335e0de6b" Dec 16 12:46:30.214255 kubelet[3668]: E1216 12:46:30.214206 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xwwbh" podUID="5bbc1d74-de1f-40b8-bd99-2346a3e2bafe" Dec 16 12:46:30.230375 containerd[2138]: time="2025-12-16T12:46:30.230345215Z" level=info msg="connecting to shim aad80a4e56a93c74df42228e994b6fc05a79eb4dd25e95eea7ed5c94a8a4eb7a" address="unix:///run/containerd/s/5b5529a5215b10b78d9aefedca6151831875c9d016e22a55236cbdeceb789b22" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:30.280508 systemd[1]: Started cri-containerd-aad80a4e56a93c74df42228e994b6fc05a79eb4dd25e95eea7ed5c94a8a4eb7a.scope - libcontainer container aad80a4e56a93c74df42228e994b6fc05a79eb4dd25e95eea7ed5c94a8a4eb7a. 
Dec 16 12:46:30.288296 systemd-networkd[1717]: cali76e9e6a73b4: Link UP Dec 16 12:46:30.290818 systemd-networkd[1717]: cali76e9e6a73b4: Gained carrier Dec 16 12:46:30.314980 containerd[2138]: 2025-12-16 12:46:30.094 [INFO][5284] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--4ca6cdd03e-k8s-coredns--66bc5c9577--257m4-eth0 coredns-66bc5c9577- kube-system f3748f03-9dbe-4ca7-b265-d450d86ecab7 817 0 2025-12-16 12:45:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515.1.0-a-4ca6cdd03e coredns-66bc5c9577-257m4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali76e9e6a73b4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="5b6756ccb5aecb9f6712869835dda25b9ecb9bf9f302cdb96d9d5874360b250a" Namespace="kube-system" Pod="coredns-66bc5c9577-257m4" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-coredns--66bc5c9577--257m4-" Dec 16 12:46:30.314980 containerd[2138]: 2025-12-16 12:46:30.094 [INFO][5284] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5b6756ccb5aecb9f6712869835dda25b9ecb9bf9f302cdb96d9d5874360b250a" Namespace="kube-system" Pod="coredns-66bc5c9577-257m4" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-coredns--66bc5c9577--257m4-eth0" Dec 16 12:46:30.314980 containerd[2138]: 2025-12-16 12:46:30.124 [INFO][5305] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5b6756ccb5aecb9f6712869835dda25b9ecb9bf9f302cdb96d9d5874360b250a" HandleID="k8s-pod-network.5b6756ccb5aecb9f6712869835dda25b9ecb9bf9f302cdb96d9d5874360b250a" Workload="ci--4515.1.0--a--4ca6cdd03e-k8s-coredns--66bc5c9577--257m4-eth0" Dec 16 12:46:30.314980 containerd[2138]: 2025-12-16 12:46:30.125 [INFO][5305] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5b6756ccb5aecb9f6712869835dda25b9ecb9bf9f302cdb96d9d5874360b250a" HandleID="k8s-pod-network.5b6756ccb5aecb9f6712869835dda25b9ecb9bf9f302cdb96d9d5874360b250a" Workload="ci--4515.1.0--a--4ca6cdd03e-k8s-coredns--66bc5c9577--257m4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515.1.0-a-4ca6cdd03e", "pod":"coredns-66bc5c9577-257m4", "timestamp":"2025-12-16 12:46:30.124939908 +0000 UTC"}, Hostname:"ci-4515.1.0-a-4ca6cdd03e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:46:30.314980 containerd[2138]: 2025-12-16 12:46:30.125 [INFO][5305] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:46:30.314980 containerd[2138]: 2025-12-16 12:46:30.138 [INFO][5305] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:46:30.314980 containerd[2138]: 2025-12-16 12:46:30.139 [INFO][5305] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-4ca6cdd03e' Dec 16 12:46:30.314980 containerd[2138]: 2025-12-16 12:46:30.200 [INFO][5305] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5b6756ccb5aecb9f6712869835dda25b9ecb9bf9f302cdb96d9d5874360b250a" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:30.314980 containerd[2138]: 2025-12-16 12:46:30.211 [INFO][5305] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:30.314980 containerd[2138]: 2025-12-16 12:46:30.224 [INFO][5305] ipam/ipam.go 511: Trying affinity for 192.168.45.128/26 host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:30.314980 containerd[2138]: 2025-12-16 12:46:30.232 [INFO][5305] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.128/26 host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:30.314980 containerd[2138]: 2025-12-16 12:46:30.238 [INFO][5305] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.128/26 host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:30.314980 containerd[2138]: 2025-12-16 12:46:30.238 [INFO][5305] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.45.128/26 handle="k8s-pod-network.5b6756ccb5aecb9f6712869835dda25b9ecb9bf9f302cdb96d9d5874360b250a" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:30.314980 containerd[2138]: 2025-12-16 12:46:30.241 [INFO][5305] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5b6756ccb5aecb9f6712869835dda25b9ecb9bf9f302cdb96d9d5874360b250a Dec 16 12:46:30.314980 containerd[2138]: 2025-12-16 12:46:30.260 [INFO][5305] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.45.128/26 handle="k8s-pod-network.5b6756ccb5aecb9f6712869835dda25b9ecb9bf9f302cdb96d9d5874360b250a" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:30.314980 containerd[2138]: 2025-12-16 12:46:30.277 [INFO][5305] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.45.134/26] block=192.168.45.128/26 handle="k8s-pod-network.5b6756ccb5aecb9f6712869835dda25b9ecb9bf9f302cdb96d9d5874360b250a" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:30.314980 containerd[2138]: 2025-12-16 12:46:30.277 [INFO][5305] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.134/26] handle="k8s-pod-network.5b6756ccb5aecb9f6712869835dda25b9ecb9bf9f302cdb96d9d5874360b250a" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:30.314980 containerd[2138]: 2025-12-16 12:46:30.277 [INFO][5305] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:46:30.314980 containerd[2138]: 2025-12-16 12:46:30.277 [INFO][5305] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.45.134/26] IPv6=[] ContainerID="5b6756ccb5aecb9f6712869835dda25b9ecb9bf9f302cdb96d9d5874360b250a" HandleID="k8s-pod-network.5b6756ccb5aecb9f6712869835dda25b9ecb9bf9f302cdb96d9d5874360b250a" Workload="ci--4515.1.0--a--4ca6cdd03e-k8s-coredns--66bc5c9577--257m4-eth0" Dec 16 12:46:30.314000 audit: BPF prog-id=250 op=LOAD Dec 16 12:46:30.316586 containerd[2138]: 2025-12-16 12:46:30.285 [INFO][5284] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5b6756ccb5aecb9f6712869835dda25b9ecb9bf9f302cdb96d9d5874360b250a" Namespace="kube-system" Pod="coredns-66bc5c9577-257m4" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-coredns--66bc5c9577--257m4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--4ca6cdd03e-k8s-coredns--66bc5c9577--257m4-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f3748f03-9dbe-4ca7-b265-d450d86ecab7", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-4ca6cdd03e", ContainerID:"", Pod:"coredns-66bc5c9577-257m4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali76e9e6a73b4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:30.316586 containerd[2138]: 2025-12-16 12:46:30.286 [INFO][5284] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.134/32] ContainerID="5b6756ccb5aecb9f6712869835dda25b9ecb9bf9f302cdb96d9d5874360b250a" Namespace="kube-system" Pod="coredns-66bc5c9577-257m4" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-coredns--66bc5c9577--257m4-eth0" Dec 16 12:46:30.316586 containerd[2138]: 2025-12-16 12:46:30.286 [INFO][5284] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali76e9e6a73b4 ContainerID="5b6756ccb5aecb9f6712869835dda25b9ecb9bf9f302cdb96d9d5874360b250a" Namespace="kube-system" 
Pod="coredns-66bc5c9577-257m4" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-coredns--66bc5c9577--257m4-eth0" Dec 16 12:46:30.316586 containerd[2138]: 2025-12-16 12:46:30.291 [INFO][5284] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5b6756ccb5aecb9f6712869835dda25b9ecb9bf9f302cdb96d9d5874360b250a" Namespace="kube-system" Pod="coredns-66bc5c9577-257m4" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-coredns--66bc5c9577--257m4-eth0" Dec 16 12:46:30.316586 containerd[2138]: 2025-12-16 12:46:30.294 [INFO][5284] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5b6756ccb5aecb9f6712869835dda25b9ecb9bf9f302cdb96d9d5874360b250a" Namespace="kube-system" Pod="coredns-66bc5c9577-257m4" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-coredns--66bc5c9577--257m4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--4ca6cdd03e-k8s-coredns--66bc5c9577--257m4-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f3748f03-9dbe-4ca7-b265-d450d86ecab7", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-4ca6cdd03e", ContainerID:"5b6756ccb5aecb9f6712869835dda25b9ecb9bf9f302cdb96d9d5874360b250a", Pod:"coredns-66bc5c9577-257m4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali76e9e6a73b4", MAC:"f6:b9:0a:0a:c3:42", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:30.316712 containerd[2138]: 2025-12-16 12:46:30.310 [INFO][5284] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5b6756ccb5aecb9f6712869835dda25b9ecb9bf9f302cdb96d9d5874360b250a" Namespace="kube-system" Pod="coredns-66bc5c9577-257m4" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-coredns--66bc5c9577--257m4-eth0" Dec 16 12:46:30.316000 audit: BPF prog-id=251 op=LOAD Dec 16 12:46:30.316000 audit[5344]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400019c180 
a2=98 a3=0 items=0 ppid=5331 pid=5344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161643830613465353661393363373464663432323238653939346236 Dec 16 12:46:30.316000 audit: BPF prog-id=251 op=UNLOAD Dec 16 12:46:30.316000 audit[5344]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5331 pid=5344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161643830613465353661393363373464663432323238653939346236 Dec 16 12:46:30.316000 audit: BPF prog-id=252 op=LOAD Dec 16 12:46:30.316000 audit[5344]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400019c3e8 a2=98 a3=0 items=0 ppid=5331 pid=5344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161643830613465353661393363373464663432323238653939346236 Dec 16 12:46:30.316000 audit: BPF prog-id=253 op=LOAD Dec 16 12:46:30.316000 audit[5344]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=400019c168 a2=98 a3=0 items=0 ppid=5331 pid=5344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161643830613465353661393363373464663432323238653939346236 Dec 16 12:46:30.316000 audit: BPF prog-id=253 op=UNLOAD Dec 16 12:46:30.316000 audit[5344]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5331 pid=5344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161643830613465353661393363373464663432323238653939346236 Dec 16 12:46:30.316000 audit: BPF prog-id=252 op=UNLOAD Dec 16 12:46:30.316000 audit[5344]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5331 pid=5344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161643830613465353661393363373464663432323238653939346236 Dec 16 12:46:30.317000 audit: BPF prog-id=254 op=LOAD Dec 16 12:46:30.317000 audit[5344]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400019c648 a2=98 a3=0 items=0 ppid=5331 pid=5344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161643830613465353661393363373464663432323238653939346236 Dec 16 12:46:30.330000 audit[5370]: NETFILTER_CFG table=filter:132 family=2 entries=20 op=nft_register_rule pid=5370 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:30.330000 audit[5370]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffed58edb0 a2=0 a3=1 items=0 ppid=3822 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.330000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:30.334000 audit[5370]: NETFILTER_CFG table=nat:133 family=2 entries=14 op=nft_register_rule pid=5370 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:30.334000 audit[5370]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffed58edb0 a2=0 a3=1 items=0 ppid=3822 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.334000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:30.378141 containerd[2138]: time="2025-12-16T12:46:30.377609487Z" level=info msg="connecting to shim 5b6756ccb5aecb9f6712869835dda25b9ecb9bf9f302cdb96d9d5874360b250a" address="unix:///run/containerd/s/84db8aea1037607ee7b568f7cdd54939c7bed5662f93a6af4b181d1b51d5b4c0" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:30.384000 audit[5390]: NETFILTER_CFG table=filter:134 family=2 entries=44 op=nft_register_chain pid=5390 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:46:30.384000 audit[5390]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21516 a0=3 a1=fffff4101960 a2=0 a3=ffffb5fdbfa8 items=0 ppid=4827 pid=5390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.384000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:46:30.407081 containerd[2138]: time="2025-12-16T12:46:30.406975460Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-gvb6w,Uid:1d9ba31c-b8d5-4a2c-a6bb-ca7d237b3d76,Namespace:kube-system,Attempt:0,} returns sandbox id \"aad80a4e56a93c74df42228e994b6fc05a79eb4dd25e95eea7ed5c94a8a4eb7a\"" Dec 16 12:46:30.418144 containerd[2138]: time="2025-12-16T12:46:30.418112893Z" level=info msg="CreateContainer within sandbox \"aad80a4e56a93c74df42228e994b6fc05a79eb4dd25e95eea7ed5c94a8a4eb7a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:46:30.431372 systemd[1]: Started cri-containerd-5b6756ccb5aecb9f6712869835dda25b9ecb9bf9f302cdb96d9d5874360b250a.scope - libcontainer container 5b6756ccb5aecb9f6712869835dda25b9ecb9bf9f302cdb96d9d5874360b250a. Dec 16 12:46:30.439000 audit: BPF prog-id=255 op=LOAD Dec 16 12:46:30.440000 audit: BPF prog-id=256 op=LOAD Dec 16 12:46:30.440000 audit[5401]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000174180 a2=98 a3=0 items=0 ppid=5389 pid=5401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562363735366363623561656362396636373132383639383335646461 Dec 16 12:46:30.440000 audit: BPF prog-id=256 op=UNLOAD Dec 16 12:46:30.440000 audit[5401]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5389 pid=5401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562363735366363623561656362396636373132383639383335646461 Dec 16 12:46:30.441835 containerd[2138]: time="2025-12-16T12:46:30.441335865Z" level=info msg="Container 0403099dffd771dc66f219bf9e35e353f5fdd24bf4d7d235a4f75313a6d47cb2: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:46:30.441000 audit: BPF prog-id=257 op=LOAD Dec 16 12:46:30.441000 audit[5401]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001743e8 a2=98 a3=0 items=0 ppid=5389 pid=5401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.441000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562363735366363623561656362396636373132383639383335646461 Dec 16 12:46:30.441000 audit: BPF prog-id=258 op=LOAD Dec 16 12:46:30.441000 audit[5401]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000174168 a2=98 a3=0 items=0 ppid=5389 pid=5401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.441000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562363735366363623561656362396636373132383639383335646461 Dec 16 12:46:30.441000 audit: BPF prog-id=258 op=UNLOAD Dec 16 12:46:30.441000 audit[5401]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5389 pid=5401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.441000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562363735366363623561656362396636373132383639383335646461 Dec 16 12:46:30.442000 audit: BPF prog-id=257 op=UNLOAD Dec 16 12:46:30.442000 audit[5401]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5389 pid=5401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.442000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562363735366363623561656362396636373132383639383335646461 Dec 16 12:46:30.442000 audit: BPF prog-id=259 op=LOAD Dec 16 12:46:30.442000 audit[5401]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000174648 a2=98 a3=0 items=0 ppid=5389 pid=5401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.442000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562363735366363623561656362396636373132383639383335646461 Dec 16 12:46:30.475918 containerd[2138]: time="2025-12-16T12:46:30.475845411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-257m4,Uid:f3748f03-9dbe-4ca7-b265-d450d86ecab7,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b6756ccb5aecb9f6712869835dda25b9ecb9bf9f302cdb96d9d5874360b250a\"" Dec 16 12:46:30.477452 containerd[2138]: time="2025-12-16T12:46:30.477278940Z" level=info msg="CreateContainer within sandbox \"aad80a4e56a93c74df42228e994b6fc05a79eb4dd25e95eea7ed5c94a8a4eb7a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0403099dffd771dc66f219bf9e35e353f5fdd24bf4d7d235a4f75313a6d47cb2\"" Dec 16 12:46:30.478230 containerd[2138]: time="2025-12-16T12:46:30.477660415Z" level=info msg="StartContainer for \"0403099dffd771dc66f219bf9e35e353f5fdd24bf4d7d235a4f75313a6d47cb2\"" Dec 16 12:46:30.478879 containerd[2138]: time="2025-12-16T12:46:30.478850185Z" level=info msg="connecting to shim 0403099dffd771dc66f219bf9e35e353f5fdd24bf4d7d235a4f75313a6d47cb2" address="unix:///run/containerd/s/5b5529a5215b10b78d9aefedca6151831875c9d016e22a55236cbdeceb789b22" protocol=ttrpc version=3 Dec 16 12:46:30.489224 containerd[2138]: time="2025-12-16T12:46:30.488742406Z" level=info 
msg="CreateContainer within sandbox \"5b6756ccb5aecb9f6712869835dda25b9ecb9bf9f302cdb96d9d5874360b250a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:46:30.503364 systemd[1]: Started cri-containerd-0403099dffd771dc66f219bf9e35e353f5fdd24bf4d7d235a4f75313a6d47cb2.scope - libcontainer container 0403099dffd771dc66f219bf9e35e353f5fdd24bf4d7d235a4f75313a6d47cb2. Dec 16 12:46:30.512000 audit: BPF prog-id=260 op=LOAD Dec 16 12:46:30.513000 audit: BPF prog-id=261 op=LOAD Dec 16 12:46:30.513000 audit[5428]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=5331 pid=5428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.513000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034303330393964666664373731646336366632313962663965333565 Dec 16 12:46:30.513000 audit: BPF prog-id=261 op=UNLOAD Dec 16 12:46:30.513000 audit[5428]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5331 pid=5428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.513000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034303330393964666664373731646336366632313962663965333565 Dec 16 12:46:30.513000 audit: BPF prog-id=262 op=LOAD Dec 16 12:46:30.513000 audit[5428]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=5331 pid=5428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.513000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034303330393964666664373731646336366632313962663965333565 Dec 16 12:46:30.513000 audit: BPF prog-id=263 op=LOAD Dec 16 12:46:30.513000 audit[5428]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=5331 pid=5428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.513000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034303330393964666664373731646336366632313962663965333565 Dec 16 12:46:30.514000 audit: BPF prog-id=263 op=UNLOAD Dec 16 12:46:30.514000 audit[5428]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5331 pid=5428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:46:30.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034303330393964666664373731646336366632313962663965333565 Dec 16 12:46:30.514000 audit: BPF prog-id=262 op=UNLOAD Dec 16 12:46:30.514000 audit[5428]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5331 pid=5428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034303330393964666664373731646336366632313962663965333565 Dec 16 12:46:30.514000 audit: BPF prog-id=264 op=LOAD Dec 16 12:46:30.514000 audit[5428]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=5331 pid=5428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034303330393964666664373731646336366632313962663965333565 Dec 16 12:46:30.521427 containerd[2138]: time="2025-12-16T12:46:30.521373290Z" level=info msg="Container 0100a76833deaf44bc69847b0047a9bb0ee29255a97ecba843a60f43c0a57734: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:46:30.533687 containerd[2138]: time="2025-12-16T12:46:30.533608042Z" level=info msg="StartContainer for \"0403099dffd771dc66f219bf9e35e353f5fdd24bf4d7d235a4f75313a6d47cb2\" returns successfully" Dec 16 12:46:30.537541 containerd[2138]: time="2025-12-16T12:46:30.537449536Z" level=info msg="CreateContainer within sandbox \"5b6756ccb5aecb9f6712869835dda25b9ecb9bf9f302cdb96d9d5874360b250a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0100a76833deaf44bc69847b0047a9bb0ee29255a97ecba843a60f43c0a57734\"" Dec 16 12:46:30.537923 containerd[2138]: time="2025-12-16T12:46:30.537892173Z" level=info msg="StartContainer for \"0100a76833deaf44bc69847b0047a9bb0ee29255a97ecba843a60f43c0a57734\"" Dec 16 12:46:30.538628 containerd[2138]: time="2025-12-16T12:46:30.538605362Z" level=info msg="connecting to shim 0100a76833deaf44bc69847b0047a9bb0ee29255a97ecba843a60f43c0a57734" address="unix:///run/containerd/s/84db8aea1037607ee7b568f7cdd54939c7bed5662f93a6af4b181d1b51d5b4c0" protocol=ttrpc version=3 Dec 16 12:46:30.566629 systemd[1]: Started cri-containerd-0100a76833deaf44bc69847b0047a9bb0ee29255a97ecba843a60f43c0a57734.scope - libcontainer container 0100a76833deaf44bc69847b0047a9bb0ee29255a97ecba843a60f43c0a57734. 
Dec 16 12:46:30.587000 audit: BPF prog-id=265 op=LOAD Dec 16 12:46:30.588000 audit: BPF prog-id=266 op=LOAD Dec 16 12:46:30.588000 audit[5457]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe180 a2=98 a3=0 items=0 ppid=5389 pid=5457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.588000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031303061373638333364656166343462633639383437623030343761 Dec 16 12:46:30.588000 audit: BPF prog-id=266 op=UNLOAD Dec 16 12:46:30.588000 audit[5457]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5389 pid=5457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.588000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031303061373638333364656166343462633639383437623030343761 Dec 16 12:46:30.588000 audit: BPF prog-id=267 op=LOAD Dec 16 12:46:30.588000 audit[5457]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=5389 pid=5457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.588000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031303061373638333364656166343462633639383437623030343761 Dec 16 12:46:30.588000 audit: BPF prog-id=268 op=LOAD Dec 16 12:46:30.588000 audit[5457]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=5389 pid=5457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.588000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031303061373638333364656166343462633639383437623030343761 Dec 16 12:46:30.588000 audit: BPF prog-id=268 op=UNLOAD Dec 16 12:46:30.588000 audit[5457]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5389 pid=5457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.588000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031303061373638333364656166343462633639383437623030343761 Dec 16 12:46:30.588000 audit: BPF prog-id=267 op=UNLOAD Dec 16 12:46:30.588000 audit[5457]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5389 pid=5457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.588000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031303061373638333364656166343462633639383437623030343761 Dec 16 12:46:30.588000 audit: BPF prog-id=269 op=LOAD Dec 16 12:46:30.588000 audit[5457]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=5389 pid=5457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.588000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031303061373638333364656166343462633639383437623030343761 Dec 16 12:46:30.684508 containerd[2138]: time="2025-12-16T12:46:30.684458761Z" level=info msg="StartContainer for \"0100a76833deaf44bc69847b0047a9bb0ee29255a97ecba843a60f43c0a57734\" returns successfully" Dec 16 12:46:31.011179 containerd[2138]: time="2025-12-16T12:46:31.011129965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ff5fc5c78-6d7bc,Uid:452df744-6814-429d-baf4-38ff85179742,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:46:31.017037 containerd[2138]: time="2025-12-16T12:46:31.016895427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-slfr9,Uid:c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:31.050034 systemd-networkd[1717]: cali6cec8d42914: Gained IPv6LL Dec 16 12:46:31.141416 systemd-networkd[1717]: cali3430497364c: Link UP Dec 16 12:46:31.142486 systemd-networkd[1717]: cali3430497364c: Gained carrier Dec 16 12:46:31.160666 containerd[2138]: 2025-12-16 12:46:31.078 [INFO][5494] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--4ca6cdd03e-k8s-calico--apiserver--6ff5fc5c78--6d7bc-eth0 calico-apiserver-6ff5fc5c78- calico-apiserver 452df744-6814-429d-baf4-38ff85179742 815 0 2025-12-16 12:45:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6ff5fc5c78 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515.1.0-a-4ca6cdd03e calico-apiserver-6ff5fc5c78-6d7bc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3430497364c [] [] }} ContainerID="679a3ebffae607e0eb09c281027577b42d8d27aa5d3694db86f072cb035efabc" Namespace="calico-apiserver" Pod="calico-apiserver-6ff5fc5c78-6d7bc" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-calico--apiserver--6ff5fc5c78--6d7bc-" Dec 16 12:46:31.160666 containerd[2138]: 2025-12-16 12:46:31.080 [INFO][5494] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="679a3ebffae607e0eb09c281027577b42d8d27aa5d3694db86f072cb035efabc" Namespace="calico-apiserver" Pod="calico-apiserver-6ff5fc5c78-6d7bc" 
WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-calico--apiserver--6ff5fc5c78--6d7bc-eth0" Dec 16 12:46:31.160666 containerd[2138]: 2025-12-16 12:46:31.106 [INFO][5519] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="679a3ebffae607e0eb09c281027577b42d8d27aa5d3694db86f072cb035efabc" HandleID="k8s-pod-network.679a3ebffae607e0eb09c281027577b42d8d27aa5d3694db86f072cb035efabc" Workload="ci--4515.1.0--a--4ca6cdd03e-k8s-calico--apiserver--6ff5fc5c78--6d7bc-eth0" Dec 16 12:46:31.160666 containerd[2138]: 2025-12-16 12:46:31.106 [INFO][5519] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="679a3ebffae607e0eb09c281027577b42d8d27aa5d3694db86f072cb035efabc" HandleID="k8s-pod-network.679a3ebffae607e0eb09c281027577b42d8d27aa5d3694db86f072cb035efabc" Workload="ci--4515.1.0--a--4ca6cdd03e-k8s-calico--apiserver--6ff5fc5c78--6d7bc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3000), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515.1.0-a-4ca6cdd03e", "pod":"calico-apiserver-6ff5fc5c78-6d7bc", "timestamp":"2025-12-16 12:46:31.106446853 +0000 UTC"}, Hostname:"ci-4515.1.0-a-4ca6cdd03e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:46:31.160666 containerd[2138]: 2025-12-16 12:46:31.106 [INFO][5519] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:46:31.160666 containerd[2138]: 2025-12-16 12:46:31.106 [INFO][5519] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:46:31.160666 containerd[2138]: 2025-12-16 12:46:31.106 [INFO][5519] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-4ca6cdd03e' Dec 16 12:46:31.160666 containerd[2138]: 2025-12-16 12:46:31.111 [INFO][5519] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.679a3ebffae607e0eb09c281027577b42d8d27aa5d3694db86f072cb035efabc" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:31.160666 containerd[2138]: 2025-12-16 12:46:31.114 [INFO][5519] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:31.160666 containerd[2138]: 2025-12-16 12:46:31.119 [INFO][5519] ipam/ipam.go 511: Trying affinity for 192.168.45.128/26 host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:31.160666 containerd[2138]: 2025-12-16 12:46:31.121 [INFO][5519] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.128/26 host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:31.160666 containerd[2138]: 2025-12-16 12:46:31.122 [INFO][5519] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.128/26 host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:31.160666 containerd[2138]: 2025-12-16 12:46:31.122 [INFO][5519] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.45.128/26 handle="k8s-pod-network.679a3ebffae607e0eb09c281027577b42d8d27aa5d3694db86f072cb035efabc" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:31.160666 containerd[2138]: 2025-12-16 12:46:31.124 [INFO][5519] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.679a3ebffae607e0eb09c281027577b42d8d27aa5d3694db86f072cb035efabc Dec 16 12:46:31.160666 containerd[2138]: 2025-12-16 12:46:31.128 [INFO][5519] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.45.128/26 handle="k8s-pod-network.679a3ebffae607e0eb09c281027577b42d8d27aa5d3694db86f072cb035efabc" host="ci-4515.1.0-a-4ca6cdd03e" 
Dec 16 12:46:31.160666 containerd[2138]: 2025-12-16 12:46:31.135 [INFO][5519] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.45.135/26] block=192.168.45.128/26 handle="k8s-pod-network.679a3ebffae607e0eb09c281027577b42d8d27aa5d3694db86f072cb035efabc" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:31.160666 containerd[2138]: 2025-12-16 12:46:31.135 [INFO][5519] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.135/26] handle="k8s-pod-network.679a3ebffae607e0eb09c281027577b42d8d27aa5d3694db86f072cb035efabc" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:31.160666 containerd[2138]: 2025-12-16 12:46:31.136 [INFO][5519] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:46:31.160666 containerd[2138]: 2025-12-16 12:46:31.136 [INFO][5519] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.45.135/26] IPv6=[] ContainerID="679a3ebffae607e0eb09c281027577b42d8d27aa5d3694db86f072cb035efabc" HandleID="k8s-pod-network.679a3ebffae607e0eb09c281027577b42d8d27aa5d3694db86f072cb035efabc" Workload="ci--4515.1.0--a--4ca6cdd03e-k8s-calico--apiserver--6ff5fc5c78--6d7bc-eth0" Dec 16 12:46:31.161888 containerd[2138]: 2025-12-16 12:46:31.138 [INFO][5494] cni-plugin/k8s.go 418: Populated endpoint ContainerID="679a3ebffae607e0eb09c281027577b42d8d27aa5d3694db86f072cb035efabc" Namespace="calico-apiserver" Pod="calico-apiserver-6ff5fc5c78-6d7bc" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-calico--apiserver--6ff5fc5c78--6d7bc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--4ca6cdd03e-k8s-calico--apiserver--6ff5fc5c78--6d7bc-eth0", GenerateName:"calico-apiserver-6ff5fc5c78-", Namespace:"calico-apiserver", SelfLink:"", UID:"452df744-6814-429d-baf4-38ff85179742", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ff5fc5c78", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-4ca6cdd03e", ContainerID:"", Pod:"calico-apiserver-6ff5fc5c78-6d7bc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3430497364c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:31.161888 containerd[2138]: 2025-12-16 12:46:31.138 [INFO][5494] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.135/32] ContainerID="679a3ebffae607e0eb09c281027577b42d8d27aa5d3694db86f072cb035efabc" Namespace="calico-apiserver" Pod="calico-apiserver-6ff5fc5c78-6d7bc" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-calico--apiserver--6ff5fc5c78--6d7bc-eth0" Dec 16 12:46:31.161888 containerd[2138]: 2025-12-16 12:46:31.138 [INFO][5494] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3430497364c 
ContainerID="679a3ebffae607e0eb09c281027577b42d8d27aa5d3694db86f072cb035efabc" Namespace="calico-apiserver" Pod="calico-apiserver-6ff5fc5c78-6d7bc" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-calico--apiserver--6ff5fc5c78--6d7bc-eth0" Dec 16 12:46:31.161888 containerd[2138]: 2025-12-16 12:46:31.140 [INFO][5494] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="679a3ebffae607e0eb09c281027577b42d8d27aa5d3694db86f072cb035efabc" Namespace="calico-apiserver" Pod="calico-apiserver-6ff5fc5c78-6d7bc" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-calico--apiserver--6ff5fc5c78--6d7bc-eth0" Dec 16 12:46:31.161888 containerd[2138]: 2025-12-16 12:46:31.142 [INFO][5494] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="679a3ebffae607e0eb09c281027577b42d8d27aa5d3694db86f072cb035efabc" Namespace="calico-apiserver" Pod="calico-apiserver-6ff5fc5c78-6d7bc" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-calico--apiserver--6ff5fc5c78--6d7bc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--4ca6cdd03e-k8s-calico--apiserver--6ff5fc5c78--6d7bc-eth0", GenerateName:"calico-apiserver-6ff5fc5c78-", Namespace:"calico-apiserver", SelfLink:"", UID:"452df744-6814-429d-baf4-38ff85179742", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ff5fc5c78", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-4ca6cdd03e", ContainerID:"679a3ebffae607e0eb09c281027577b42d8d27aa5d3694db86f072cb035efabc", Pod:"calico-apiserver-6ff5fc5c78-6d7bc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3430497364c", MAC:"ba:d0:01:22:8d:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:31.161888 containerd[2138]: 2025-12-16 12:46:31.158 [INFO][5494] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="679a3ebffae607e0eb09c281027577b42d8d27aa5d3694db86f072cb035efabc" Namespace="calico-apiserver" Pod="calico-apiserver-6ff5fc5c78-6d7bc" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-calico--apiserver--6ff5fc5c78--6d7bc-eth0" Dec 16 12:46:31.170000 audit[5540]: NETFILTER_CFG table=filter:135 family=2 entries=53 op=nft_register_chain pid=5540 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:46:31.170000 audit[5540]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26624 a0=3 a1=fffff111ea40 a2=0 a3=ffff894e8fa8 items=0 ppid=4827 pid=5540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 16 12:46:31.170000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:46:31.206760 containerd[2138]: time="2025-12-16T12:46:31.206687107Z" level=info msg="connecting to shim 679a3ebffae607e0eb09c281027577b42d8d27aa5d3694db86f072cb035efabc" address="unix:///run/containerd/s/28a0363715a3f6e546f3c95475f4be66de052f3b2f83418820033b59004a1580" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:31.224289 kubelet[3668]: E1216 12:46:31.223754 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff5fc5c78-zxwl6" podUID="33f549f5-c190-4fdd-897c-292335e0de6b" Dec 16 12:46:31.239188 kubelet[3668]: I1216 12:46:31.239081 3668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-gvb6w" podStartSLOduration=43.238887186 podStartE2EDuration="43.238887186s" podCreationTimestamp="2025-12-16 12:45:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:46:31.238551809 +0000 UTC m=+49.311687834" watchObservedRunningTime="2025-12-16 12:46:31.238887186 +0000 UTC m=+49.312023211" Dec 16 12:46:31.241433 systemd[1]: Started cri-containerd-679a3ebffae607e0eb09c281027577b42d8d27aa5d3694db86f072cb035efabc.scope - libcontainer container 679a3ebffae607e0eb09c281027577b42d8d27aa5d3694db86f072cb035efabc. 
Dec 16 12:46:31.242325 systemd-networkd[1717]: cali10ab4dc0904: Gained IPv6LL Dec 16 12:46:31.252000 audit: BPF prog-id=270 op=LOAD Dec 16 12:46:31.253000 audit: BPF prog-id=271 op=LOAD Dec 16 12:46:31.253000 audit[5560]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=5549 pid=5560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:31.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637396133656266666165363037653065623039633238313032373537 Dec 16 12:46:31.253000 audit: BPF prog-id=271 op=UNLOAD Dec 16 12:46:31.253000 audit[5560]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5549 pid=5560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:31.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637396133656266666165363037653065623039633238313032373537 Dec 16 12:46:31.253000 audit: BPF prog-id=272 op=LOAD Dec 16 12:46:31.253000 audit[5560]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=5549 pid=5560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:31.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637396133656266666165363037653065623039633238313032373537 Dec 16 12:46:31.253000 audit: BPF prog-id=273 op=LOAD Dec 16 12:46:31.253000 audit[5560]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=5549 pid=5560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:31.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637396133656266666165363037653065623039633238313032373537 Dec 16 12:46:31.253000 audit: BPF prog-id=273 op=UNLOAD Dec 16 12:46:31.253000 audit[5560]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5549 pid=5560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:31.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637396133656266666165363037653065623039633238313032373537 Dec 16 
12:46:31.253000 audit: BPF prog-id=272 op=UNLOAD Dec 16 12:46:31.253000 audit[5560]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5549 pid=5560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:31.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637396133656266666165363037653065623039633238313032373537 Dec 16 12:46:31.253000 audit: BPF prog-id=274 op=LOAD Dec 16 12:46:31.253000 audit[5560]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=5549 pid=5560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:31.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637396133656266666165363037653065623039633238313032373537 Dec 16 12:46:31.295724 kubelet[3668]: I1216 12:46:31.295267 3668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-257m4" podStartSLOduration=43.295255033 podStartE2EDuration="43.295255033s" podCreationTimestamp="2025-12-16 12:45:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:46:31.294415329 +0000 UTC m=+49.367551386" watchObservedRunningTime="2025-12-16 12:46:31.295255033 +0000 UTC m=+49.368391058" Dec 16 12:46:31.299068 containerd[2138]: time="2025-12-16T12:46:31.298578129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ff5fc5c78-6d7bc,Uid:452df744-6814-429d-baf4-38ff85179742,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"679a3ebffae607e0eb09c281027577b42d8d27aa5d3694db86f072cb035efabc\"" Dec 16 12:46:31.301464 containerd[2138]: time="2025-12-16T12:46:31.301437019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:46:31.313290 systemd-networkd[1717]: calie3d52e29158: Link UP Dec 16 12:46:31.314020 systemd-networkd[1717]: calie3d52e29158: Gained carrier Dec 16 12:46:31.344123 containerd[2138]: 2025-12-16 12:46:31.080 [INFO][5503] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--4ca6cdd03e-k8s-goldmane--7c778bb748--slfr9-eth0 goldmane-7c778bb748- calico-system c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28 814 0 2025-12-16 12:46:02 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4515.1.0-a-4ca6cdd03e goldmane-7c778bb748-slfr9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie3d52e29158 [] [] }} ContainerID="34e7d14fdb35d56966cea8cc62995381dd49989c0e49a5703534c098454248a1" Namespace="calico-system" Pod="goldmane-7c778bb748-slfr9" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-goldmane--7c778bb748--slfr9-" Dec 16 12:46:31.344123 containerd[2138]: 2025-12-16 
12:46:31.080 [INFO][5503] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="34e7d14fdb35d56966cea8cc62995381dd49989c0e49a5703534c098454248a1" Namespace="calico-system" Pod="goldmane-7c778bb748-slfr9" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-goldmane--7c778bb748--slfr9-eth0" Dec 16 12:46:31.344123 containerd[2138]: 2025-12-16 12:46:31.108 [INFO][5521] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="34e7d14fdb35d56966cea8cc62995381dd49989c0e49a5703534c098454248a1" HandleID="k8s-pod-network.34e7d14fdb35d56966cea8cc62995381dd49989c0e49a5703534c098454248a1" Workload="ci--4515.1.0--a--4ca6cdd03e-k8s-goldmane--7c778bb748--slfr9-eth0" Dec 16 12:46:31.344123 containerd[2138]: 2025-12-16 12:46:31.108 [INFO][5521] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="34e7d14fdb35d56966cea8cc62995381dd49989c0e49a5703534c098454248a1" HandleID="k8s-pod-network.34e7d14fdb35d56966cea8cc62995381dd49989c0e49a5703534c098454248a1" Workload="ci--4515.1.0--a--4ca6cdd03e-k8s-goldmane--7c778bb748--slfr9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb5a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-a-4ca6cdd03e", "pod":"goldmane-7c778bb748-slfr9", "timestamp":"2025-12-16 12:46:31.108265986 +0000 UTC"}, Hostname:"ci-4515.1.0-a-4ca6cdd03e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:46:31.344123 containerd[2138]: 2025-12-16 12:46:31.108 [INFO][5521] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:46:31.344123 containerd[2138]: 2025-12-16 12:46:31.136 [INFO][5521] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:46:31.344123 containerd[2138]: 2025-12-16 12:46:31.136 [INFO][5521] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-4ca6cdd03e' Dec 16 12:46:31.344123 containerd[2138]: 2025-12-16 12:46:31.213 [INFO][5521] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.34e7d14fdb35d56966cea8cc62995381dd49989c0e49a5703534c098454248a1" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:31.344123 containerd[2138]: 2025-12-16 12:46:31.225 [INFO][5521] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:31.344123 containerd[2138]: 2025-12-16 12:46:31.251 [INFO][5521] ipam/ipam.go 511: Trying affinity for 192.168.45.128/26 host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:31.344123 containerd[2138]: 2025-12-16 12:46:31.256 [INFO][5521] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.128/26 host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:31.344123 containerd[2138]: 2025-12-16 12:46:31.263 [INFO][5521] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.128/26 host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:31.344123 containerd[2138]: 2025-12-16 12:46:31.264 [INFO][5521] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.45.128/26 handle="k8s-pod-network.34e7d14fdb35d56966cea8cc62995381dd49989c0e49a5703534c098454248a1" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:31.344123 containerd[2138]: 2025-12-16 12:46:31.271 [INFO][5521] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.34e7d14fdb35d56966cea8cc62995381dd49989c0e49a5703534c098454248a1 Dec 16 12:46:31.344123 containerd[2138]: 2025-12-16 12:46:31.279 [INFO][5521] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.45.128/26 handle="k8s-pod-network.34e7d14fdb35d56966cea8cc62995381dd49989c0e49a5703534c098454248a1" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:31.344123 containerd[2138]: 2025-12-16 12:46:31.304 [INFO][5521] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.45.136/26] block=192.168.45.128/26 handle="k8s-pod-network.34e7d14fdb35d56966cea8cc62995381dd49989c0e49a5703534c098454248a1" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:31.344123 containerd[2138]: 2025-12-16 12:46:31.304 [INFO][5521] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.136/26] handle="k8s-pod-network.34e7d14fdb35d56966cea8cc62995381dd49989c0e49a5703534c098454248a1" host="ci-4515.1.0-a-4ca6cdd03e" Dec 16 12:46:31.344123 containerd[2138]: 2025-12-16 12:46:31.305 [INFO][5521] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:46:31.344123 containerd[2138]: 2025-12-16 12:46:31.305 [INFO][5521] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.45.136/26] IPv6=[] ContainerID="34e7d14fdb35d56966cea8cc62995381dd49989c0e49a5703534c098454248a1" HandleID="k8s-pod-network.34e7d14fdb35d56966cea8cc62995381dd49989c0e49a5703534c098454248a1" Workload="ci--4515.1.0--a--4ca6cdd03e-k8s-goldmane--7c778bb748--slfr9-eth0" Dec 16 12:46:31.344533 containerd[2138]: 2025-12-16 12:46:31.309 [INFO][5503] cni-plugin/k8s.go 418: Populated endpoint ContainerID="34e7d14fdb35d56966cea8cc62995381dd49989c0e49a5703534c098454248a1" Namespace="calico-system" Pod="goldmane-7c778bb748-slfr9" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-goldmane--7c778bb748--slfr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--4ca6cdd03e-k8s-goldmane--7c778bb748--slfr9-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 46, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-4ca6cdd03e", ContainerID:"", Pod:"goldmane-7c778bb748-slfr9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.45.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie3d52e29158", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:31.344533 containerd[2138]: 2025-12-16 12:46:31.309 [INFO][5503] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.136/32] ContainerID="34e7d14fdb35d56966cea8cc62995381dd49989c0e49a5703534c098454248a1" Namespace="calico-system" Pod="goldmane-7c778bb748-slfr9" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-goldmane--7c778bb748--slfr9-eth0" Dec 16 12:46:31.344533 containerd[2138]: 2025-12-16 12:46:31.309 [INFO][5503] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie3d52e29158 ContainerID="34e7d14fdb35d56966cea8cc62995381dd49989c0e49a5703534c098454248a1" Namespace="calico-system" Pod="goldmane-7c778bb748-slfr9" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-goldmane--7c778bb748--slfr9-eth0" Dec 16 12:46:31.344533 containerd[2138]: 2025-12-16 12:46:31.314 [INFO][5503] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="34e7d14fdb35d56966cea8cc62995381dd49989c0e49a5703534c098454248a1" Namespace="calico-system" Pod="goldmane-7c778bb748-slfr9" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-goldmane--7c778bb748--slfr9-eth0" Dec 16 12:46:31.344533 containerd[2138]: 2025-12-16 12:46:31.314 [INFO][5503] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="34e7d14fdb35d56966cea8cc62995381dd49989c0e49a5703534c098454248a1" 
Namespace="calico-system" Pod="goldmane-7c778bb748-slfr9" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-goldmane--7c778bb748--slfr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--4ca6cdd03e-k8s-goldmane--7c778bb748--slfr9-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 46, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-4ca6cdd03e", ContainerID:"34e7d14fdb35d56966cea8cc62995381dd49989c0e49a5703534c098454248a1", Pod:"goldmane-7c778bb748-slfr9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.45.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie3d52e29158", MAC:"a2:94:e7:2c:49:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:31.344533 containerd[2138]: 2025-12-16 12:46:31.341 [INFO][5503] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="34e7d14fdb35d56966cea8cc62995381dd49989c0e49a5703534c098454248a1" Namespace="calico-system" Pod="goldmane-7c778bb748-slfr9" WorkloadEndpoint="ci--4515.1.0--a--4ca6cdd03e-k8s-goldmane--7c778bb748--slfr9-eth0" Dec 16 12:46:31.356000 audit[5597]: NETFILTER_CFG table=filter:136 family=2 entries=17 op=nft_register_rule pid=5597 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:31.356000 audit[5597]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe538f390 a2=0 a3=1 items=0 ppid=3822 pid=5597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:31.356000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:31.366000 audit[5599]: NETFILTER_CFG table=filter:137 family=2 entries=64 op=nft_register_chain pid=5599 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:46:31.366000 audit[5599]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=31104 a0=3 a1=ffffe7ebce10 a2=0 a3=ffff965acfa8 items=0 ppid=4827 pid=5599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:31.366000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:46:31.381000 audit[5597]: NETFILTER_CFG table=nat:138 family=2 entries=47 op=nft_register_chain pid=5597 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:31.381000 audit[5597]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffe538f390 a2=0 a3=1 items=0 ppid=3822 pid=5597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:31.381000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:31.392442 containerd[2138]: time="2025-12-16T12:46:31.392382957Z" level=info msg="connecting to shim 34e7d14fdb35d56966cea8cc62995381dd49989c0e49a5703534c098454248a1" address="unix:///run/containerd/s/b9f76dc6ecca3c04e122189a18539e07d22c3730ec8e774bef93438797ff6202" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:31.410381 systemd[1]: Started cri-containerd-34e7d14fdb35d56966cea8cc62995381dd49989c0e49a5703534c098454248a1.scope - libcontainer container 34e7d14fdb35d56966cea8cc62995381dd49989c0e49a5703534c098454248a1. Dec 16 12:46:31.420000 audit: BPF prog-id=275 op=LOAD Dec 16 12:46:31.421000 audit: BPF prog-id=276 op=LOAD Dec 16 12:46:31.421000 audit[5620]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5608 pid=5620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:31.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334653764313466646233356435363936366365613863633632393935 Dec 16 12:46:31.421000 audit: BPF prog-id=276 op=UNLOAD Dec 16 12:46:31.421000 audit[5620]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5608 pid=5620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:31.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334653764313466646233356435363936366365613863633632393935 Dec 16 12:46:31.421000 audit: BPF prog-id=277 op=LOAD Dec 16 12:46:31.421000 audit[5620]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5608 pid=5620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:31.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334653764313466646233356435363936366365613863633632393935 Dec 16 12:46:31.421000 audit: BPF prog-id=278 op=LOAD Dec 16 12:46:31.421000 audit[5620]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5608 pid=5620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 12:46:31.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334653764313466646233356435363936366365613863633632393935 Dec 16 12:46:31.421000 audit: BPF prog-id=278 op=UNLOAD Dec 16 12:46:31.421000 audit[5620]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5608 pid=5620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:31.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334653764313466646233356435363936366365613863633632393935 Dec 16 12:46:31.421000 audit: BPF prog-id=277 op=UNLOAD Dec 16 12:46:31.421000 audit[5620]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5608 pid=5620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:31.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334653764313466646233356435363936366365613863633632393935 Dec 16 12:46:31.421000 audit: BPF prog-id=279 op=LOAD Dec 16 12:46:31.421000 audit[5620]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5608 pid=5620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:31.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334653764313466646233356435363936366365613863633632393935 Dec 16 12:46:31.433362 systemd-networkd[1717]: cali76e9e6a73b4: Gained IPv6LL Dec 16 12:46:31.445592 containerd[2138]: time="2025-12-16T12:46:31.445535863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-slfr9,Uid:c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28,Namespace:calico-system,Attempt:0,} returns sandbox id \"34e7d14fdb35d56966cea8cc62995381dd49989c0e49a5703534c098454248a1\"" Dec 16 12:46:31.576694 containerd[2138]: time="2025-12-16T12:46:31.576578348Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:31.583377 containerd[2138]: time="2025-12-16T12:46:31.583338871Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:46:31.583458 containerd[2138]: time="2025-12-16T12:46:31.583416145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:31.583567 kubelet[3668]: E1216 12:46:31.583538 3668 log.go:32] "PullImage from image 
service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:46:31.583630 kubelet[3668]: E1216 12:46:31.583575 3668 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:46:31.584102 kubelet[3668]: E1216 12:46:31.583989 3668 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6ff5fc5c78-6d7bc_calico-apiserver(452df744-6814-429d-baf4-38ff85179742): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:31.584898 kubelet[3668]: E1216 12:46:31.584752 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff5fc5c78-6d7bc" podUID="452df744-6814-429d-baf4-38ff85179742" Dec 16 12:46:31.584966 containerd[2138]: time="2025-12-16T12:46:31.584354116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:46:31.868087 containerd[2138]: time="2025-12-16T12:46:31.867988106Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:31.873843 containerd[2138]: time="2025-12-16T12:46:31.873754512Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:46:31.873843 containerd[2138]: time="2025-12-16T12:46:31.873805865Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:31.873996 kubelet[3668]: E1216 12:46:31.873958 3668 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:46:31.874045 kubelet[3668]: E1216 12:46:31.873999 3668 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:46:31.874073 kubelet[3668]: E1216 12:46:31.874058 3668 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-slfr9_calico-system(c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:31.874093 kubelet[3668]: E1216 12:46:31.874081 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-slfr9" podUID="c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28" Dec 16 12:46:32.225655 kubelet[3668]: E1216 12:46:32.225505 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-slfr9" podUID="c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28" Dec 16 12:46:32.231658 kubelet[3668]: E1216 12:46:32.231629 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff5fc5c78-6d7bc" podUID="452df744-6814-429d-baf4-38ff85179742" Dec 16 12:46:32.399000 audit[5647]: NETFILTER_CFG table=filter:139 family=2 entries=14 op=nft_register_rule pid=5647 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:32.399000 audit[5647]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffffe612f30 a2=0 a3=1 items=0 ppid=3822 pid=5647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:32.399000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:32.405000 audit[5647]: NETFILTER_CFG table=nat:140 family=2 entries=20 op=nft_register_rule pid=5647 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:32.405000 audit[5647]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffffe612f30 a2=0 a3=1 items=0 ppid=3822 pid=5647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:32.405000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:32.585334 systemd-networkd[1717]: cali3430497364c: Gained IPv6LL Dec 16 12:46:33.231308 kubelet[3668]: E1216 12:46:33.231272 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-slfr9" podUID="c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28" Dec 16 12:46:33.233106 kubelet[3668]: E1216 12:46:33.232134 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff5fc5c78-6d7bc" podUID="452df744-6814-429d-baf4-38ff85179742" Dec 16 12:46:33.289397 systemd-networkd[1717]: calie3d52e29158: Gained IPv6LL Dec 16 12:46:35.005313 containerd[2138]: time="2025-12-16T12:46:35.005239174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:46:35.278301 containerd[2138]: time="2025-12-16T12:46:35.278155086Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:35.281585 containerd[2138]: time="2025-12-16T12:46:35.281553751Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:46:35.281649 containerd[2138]: time="2025-12-16T12:46:35.281613313Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:35.281874 kubelet[3668]: E1216 12:46:35.281829 3668 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:46:35.282135 kubelet[3668]: E1216 12:46:35.281877 3668 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:46:35.282135 kubelet[3668]: E1216 12:46:35.281968 3668 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-858764cc7c-zqhnl_calico-system(d2bf104b-aa2c-4645-b1eb-bf5f9ef78c24): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:35.283511 containerd[2138]: time="2025-12-16T12:46:35.283464454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:46:35.524218 containerd[2138]: time="2025-12-16T12:46:35.524135140Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:35.533748 containerd[2138]: time="2025-12-16T12:46:35.533612411Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:46:35.533748 containerd[2138]: 
time="2025-12-16T12:46:35.533666756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:35.533841 kubelet[3668]: E1216 12:46:35.533800 3668 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:46:35.533841 kubelet[3668]: E1216 12:46:35.533835 3668 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:46:35.533929 kubelet[3668]: E1216 12:46:35.533891 3668 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-858764cc7c-zqhnl_calico-system(d2bf104b-aa2c-4645-b1eb-bf5f9ef78c24): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:35.533964 kubelet[3668]: E1216 12:46:35.533928 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-858764cc7c-zqhnl" podUID="d2bf104b-aa2c-4645-b1eb-bf5f9ef78c24" Dec 16 12:46:41.004007 containerd[2138]: time="2025-12-16T12:46:41.003877446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:46:41.275034 containerd[2138]: time="2025-12-16T12:46:41.274934992Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:41.278422 containerd[2138]: time="2025-12-16T12:46:41.278356272Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:46:41.278539 containerd[2138]: time="2025-12-16T12:46:41.278389849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:41.278572 kubelet[3668]: E1216 12:46:41.278506 3668 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:46:41.278572 kubelet[3668]: E1216 12:46:41.278538 3668 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:46:41.278881 kubelet[3668]: E1216 12:46:41.278595 3668 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5449d854d8-xfsgn_calico-system(afc8cbd3-cce9-4afd-951f-828ed80d9307): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:41.278881 kubelet[3668]: E1216 12:46:41.278622 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5449d854d8-xfsgn" podUID="afc8cbd3-cce9-4afd-951f-828ed80d9307" Dec 16 12:46:44.006113 containerd[2138]: time="2025-12-16T12:46:44.005370905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:46:44.280536 containerd[2138]: time="2025-12-16T12:46:44.280410860Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:44.283951 containerd[2138]: time="2025-12-16T12:46:44.283911983Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:46:44.284028 containerd[2138]: time="2025-12-16T12:46:44.283979857Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:44.284168 kubelet[3668]: E1216 12:46:44.284104 3668 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:46:44.284432 kubelet[3668]: E1216 12:46:44.284168 3668 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:46:44.284432 kubelet[3668]: E1216 12:46:44.284245 3668 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-xwwbh_calico-system(5bbc1d74-de1f-40b8-bd99-2346a3e2bafe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:44.286138 containerd[2138]: time="2025-12-16T12:46:44.285930240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:46:44.598774 containerd[2138]: time="2025-12-16T12:46:44.598647953Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:44.602519 containerd[2138]: 
time="2025-12-16T12:46:44.602467045Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:46:44.602605 containerd[2138]: time="2025-12-16T12:46:44.602535839Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:44.602730 kubelet[3668]: E1216 12:46:44.602693 3668 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:46:44.602782 kubelet[3668]: E1216 12:46:44.602735 3668 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:46:44.602824 kubelet[3668]: E1216 12:46:44.602808 3668 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-xwwbh_calico-system(5bbc1d74-de1f-40b8-bd99-2346a3e2bafe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:44.603766 kubelet[3668]: E1216 12:46:44.602848 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xwwbh" podUID="5bbc1d74-de1f-40b8-bd99-2346a3e2bafe" Dec 16 12:46:45.003722 containerd[2138]: time="2025-12-16T12:46:45.003613133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:46:45.244631 containerd[2138]: time="2025-12-16T12:46:45.244540437Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:45.247867 containerd[2138]: time="2025-12-16T12:46:45.247784536Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:46:45.247867 containerd[2138]: time="2025-12-16T12:46:45.247816513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:45.248025 kubelet[3668]: E1216 12:46:45.247985 3668 log.go:32] "PullImage from image 
service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:46:45.248073 kubelet[3668]: E1216 12:46:45.248031 3668 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:46:45.248116 kubelet[3668]: E1216 12:46:45.248098 3668 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6ff5fc5c78-zxwl6_calico-apiserver(33f549f5-c190-4fdd-897c-292335e0de6b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:45.248143 kubelet[3668]: E1216 12:46:45.248125 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff5fc5c78-zxwl6" podUID="33f549f5-c190-4fdd-897c-292335e0de6b" Dec 16 12:46:46.004232 containerd[2138]: time="2025-12-16T12:46:46.003969742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:46:46.247535 containerd[2138]: time="2025-12-16T12:46:46.247483638Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:46.250695 containerd[2138]: time="2025-12-16T12:46:46.250657871Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:46:46.250830 containerd[2138]: time="2025-12-16T12:46:46.250668680Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:46.250928 kubelet[3668]: E1216 12:46:46.250889 3668 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:46:46.251130 kubelet[3668]: E1216 12:46:46.250935 3668 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:46:46.251130 kubelet[3668]: E1216 12:46:46.250998 3668 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-slfr9_calico-system(c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:46.251130 kubelet[3668]: E1216 12:46:46.251024 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-slfr9" podUID="c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28" Dec 16 12:46:48.004326 containerd[2138]: time="2025-12-16T12:46:48.004283722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:46:48.308038 containerd[2138]: time="2025-12-16T12:46:48.307917339Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:48.311013 containerd[2138]: time="2025-12-16T12:46:48.310928736Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:46:48.311013 containerd[2138]: time="2025-12-16T12:46:48.310967914Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:48.311284 kubelet[3668]: E1216 12:46:48.311198 3668 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:46:48.311284 kubelet[3668]: E1216 12:46:48.311272 3668 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:46:48.311695 kubelet[3668]: E1216 12:46:48.311445 3668 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6ff5fc5c78-6d7bc_calico-apiserver(452df744-6814-429d-baf4-38ff85179742): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:48.312018 kubelet[3668]: E1216 12:46:48.311764 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff5fc5c78-6d7bc" podUID="452df744-6814-429d-baf4-38ff85179742" Dec 16 12:46:49.006379 kubelet[3668]: E1216 12:46:49.006331 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not 
found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-858764cc7c-zqhnl" podUID="d2bf104b-aa2c-4645-b1eb-bf5f9ef78c24" Dec 16 12:46:55.003612 kubelet[3668]: E1216 12:46:55.003572 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5449d854d8-xfsgn" podUID="afc8cbd3-cce9-4afd-951f-828ed80d9307" Dec 16 12:47:00.007522 kubelet[3668]: E1216 12:47:00.006296 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff5fc5c78-zxwl6" podUID="33f549f5-c190-4fdd-897c-292335e0de6b" Dec 16 12:47:00.008678 kubelet[3668]: E1216 12:47:00.008540 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xwwbh" podUID="5bbc1d74-de1f-40b8-bd99-2346a3e2bafe" Dec 16 12:47:01.004598 kubelet[3668]: E1216 12:47:01.004341 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-slfr9" podUID="c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28" Dec 16 12:47:01.006822 containerd[2138]: time="2025-12-16T12:47:01.005554033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:47:01.286057 containerd[2138]: time="2025-12-16T12:47:01.285936023Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:01.290364 containerd[2138]: 
time="2025-12-16T12:47:01.290291867Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:47:01.290617 containerd[2138]: time="2025-12-16T12:47:01.290351412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:01.291031 kubelet[3668]: E1216 12:47:01.290751 3668 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:47:01.291741 kubelet[3668]: E1216 12:47:01.291256 3668 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:47:01.291741 kubelet[3668]: E1216 12:47:01.291469 3668 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-858764cc7c-zqhnl_calico-system(d2bf104b-aa2c-4645-b1eb-bf5f9ef78c24): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:01.293931 containerd[2138]: time="2025-12-16T12:47:01.293487885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:47:01.554948 containerd[2138]: time="2025-12-16T12:47:01.554696341Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:01.558714 containerd[2138]: time="2025-12-16T12:47:01.558585275Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:47:01.558714 containerd[2138]: time="2025-12-16T12:47:01.558667598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:01.558817 kubelet[3668]: E1216 12:47:01.558788 3668 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:47:01.558888 kubelet[3668]: E1216 12:47:01.558853 3668 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:47:01.559226 kubelet[3668]: E1216 12:47:01.558917 3668 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-858764cc7c-zqhnl_calico-system(d2bf104b-aa2c-4645-b1eb-bf5f9ef78c24): ErrImagePull: rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:01.559226 kubelet[3668]: E1216 12:47:01.558954 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-858764cc7c-zqhnl" podUID="d2bf104b-aa2c-4645-b1eb-bf5f9ef78c24" Dec 16 12:47:03.004088 kubelet[3668]: E1216 12:47:03.004036 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff5fc5c78-6d7bc" podUID="452df744-6814-429d-baf4-38ff85179742" Dec 16 12:47:06.005469 containerd[2138]: time="2025-12-16T12:47:06.005423088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:47:06.263596 containerd[2138]: time="2025-12-16T12:47:06.263463006Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:06.266930 containerd[2138]: time="2025-12-16T12:47:06.266841014Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:47:06.266930 containerd[2138]: time="2025-12-16T12:47:06.266898704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:06.267100 kubelet[3668]: E1216 12:47:06.267056 3668 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:47:06.267403 kubelet[3668]: E1216 12:47:06.267103 3668 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:47:06.267403 kubelet[3668]: E1216 12:47:06.267162 3668 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5449d854d8-xfsgn_calico-system(afc8cbd3-cce9-4afd-951f-828ed80d9307): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:06.267403 kubelet[3668]: E1216 12:47:06.267185 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5449d854d8-xfsgn" podUID="afc8cbd3-cce9-4afd-951f-828ed80d9307" Dec 16 12:47:13.004705 containerd[2138]: time="2025-12-16T12:47:13.004479217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:47:13.005815 kubelet[3668]: E1216 12:47:13.004608 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-858764cc7c-zqhnl" podUID="d2bf104b-aa2c-4645-b1eb-bf5f9ef78c24" Dec 16 12:47:13.252620 containerd[2138]: time="2025-12-16T12:47:13.252572576Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:13.256661 containerd[2138]: time="2025-12-16T12:47:13.256581025Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:47:13.256759 containerd[2138]: time="2025-12-16T12:47:13.256672996Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:13.257100 kubelet[3668]: E1216 12:47:13.257019 3668 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:47:13.257100 kubelet[3668]: E1216 12:47:13.257068 3668 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:47:13.257315 kubelet[3668]: E1216 12:47:13.257260 3668 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6ff5fc5c78-zxwl6_calico-apiserver(33f549f5-c190-4fdd-897c-292335e0de6b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:13.257315 kubelet[3668]: E1216 12:47:13.257288 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff5fc5c78-zxwl6" podUID="33f549f5-c190-4fdd-897c-292335e0de6b" Dec 16 12:47:15.003438 containerd[2138]: time="2025-12-16T12:47:15.003398453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:47:15.241838 containerd[2138]: time="2025-12-16T12:47:15.241775263Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:15.245597 containerd[2138]: time="2025-12-16T12:47:15.245520322Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:47:15.245747 containerd[2138]: time="2025-12-16T12:47:15.245702455Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:15.248831 kubelet[3668]: E1216 12:47:15.248287 3668 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:47:15.248831 kubelet[3668]: E1216 12:47:15.248335 3668 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:47:15.248831 kubelet[3668]: E1216 12:47:15.248485 3668 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-xwwbh_calico-system(5bbc1d74-de1f-40b8-bd99-2346a3e2bafe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:15.249163 containerd[2138]: time="2025-12-16T12:47:15.249124840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:47:15.515288 containerd[2138]: time="2025-12-16T12:47:15.515240645Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:15.518758 containerd[2138]: time="2025-12-16T12:47:15.518727920Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:47:15.518839 containerd[2138]: time="2025-12-16T12:47:15.518798682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:15.518997 kubelet[3668]: E1216 12:47:15.518963 3668 log.go:32] "PullImage from image service failed" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:47:15.519065 kubelet[3668]: E1216 12:47:15.519006 3668 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:47:15.519680 kubelet[3668]: E1216 12:47:15.519610 3668 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-slfr9_calico-system(c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:15.519680 kubelet[3668]: E1216 12:47:15.519650 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-slfr9" podUID="c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28" Dec 16 12:47:15.519761 containerd[2138]: time="2025-12-16T12:47:15.519708900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:47:15.763768 containerd[2138]: time="2025-12-16T12:47:15.763704182Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:15.767880 containerd[2138]: time="2025-12-16T12:47:15.767785066Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:47:15.767946 containerd[2138]: time="2025-12-16T12:47:15.767867076Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:15.768314 kubelet[3668]: E1216 12:47:15.768277 3668 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:47:15.768909 kubelet[3668]: E1216 12:47:15.768324 3668 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:47:15.768909 kubelet[3668]: E1216 12:47:15.768387 3668 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-xwwbh_calico-system(5bbc1d74-de1f-40b8-bd99-2346a3e2bafe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:15.768909 kubelet[3668]: E1216 12:47:15.768420 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xwwbh" podUID="5bbc1d74-de1f-40b8-bd99-2346a3e2bafe" Dec 16 12:47:18.005537 containerd[2138]: time="2025-12-16T12:47:18.005287341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:47:18.243043 containerd[2138]: time="2025-12-16T12:47:18.242831456Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:18.246962 containerd[2138]: time="2025-12-16T12:47:18.246877715Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:47:18.247033 containerd[2138]: time="2025-12-16T12:47:18.246934940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:18.247267 kubelet[3668]: E1216 12:47:18.247216 3668 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:47:18.247267 kubelet[3668]: E1216 12:47:18.247268 3668 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:47:18.247611 kubelet[3668]: E1216 12:47:18.247338 3668 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6ff5fc5c78-6d7bc_calico-apiserver(452df744-6814-429d-baf4-38ff85179742): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:18.247611 kubelet[3668]: E1216 12:47:18.247362 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff5fc5c78-6d7bc" podUID="452df744-6814-429d-baf4-38ff85179742" Dec 16 12:47:19.003768 kubelet[3668]: E1216 12:47:19.003435 3668 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5449d854d8-xfsgn" podUID="afc8cbd3-cce9-4afd-951f-828ed80d9307" Dec 16 12:47:27.004510 kubelet[3668]: E1216 12:47:27.004224 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff5fc5c78-zxwl6" podUID="33f549f5-c190-4fdd-897c-292335e0de6b" Dec 16 12:47:27.005456 kubelet[3668]: E1216 12:47:27.004718 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-858764cc7c-zqhnl" podUID="d2bf104b-aa2c-4645-b1eb-bf5f9ef78c24" Dec 16 12:47:30.005009 kubelet[3668]: E1216 12:47:30.004942 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff5fc5c78-6d7bc" podUID="452df744-6814-429d-baf4-38ff85179742" Dec 16 12:47:30.008432 kubelet[3668]: E1216 12:47:30.008380 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-xwwbh" podUID="5bbc1d74-de1f-40b8-bd99-2346a3e2bafe" Dec 16 12:47:31.003349 kubelet[3668]: E1216 12:47:31.003304 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-slfr9" podUID="c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28" Dec 16 12:47:31.004511 kubelet[3668]: E1216 12:47:31.004369 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5449d854d8-xfsgn" podUID="afc8cbd3-cce9-4afd-951f-828ed80d9307" Dec 16 12:47:38.007051 kubelet[3668]: E1216 12:47:38.006889 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-858764cc7c-zqhnl" podUID="d2bf104b-aa2c-4645-b1eb-bf5f9ef78c24" Dec 16 12:47:41.003659 kubelet[3668]: E1216 12:47:41.003617 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xwwbh" podUID="5bbc1d74-de1f-40b8-bd99-2346a3e2bafe" Dec 16 12:47:42.005547 kubelet[3668]: E1216 12:47:42.005391 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff5fc5c78-zxwl6" podUID="33f549f5-c190-4fdd-897c-292335e0de6b" Dec 16 12:47:42.005547 kubelet[3668]: E1216 12:47:42.005398 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5449d854d8-xfsgn" podUID="afc8cbd3-cce9-4afd-951f-828ed80d9307" Dec 16 12:47:44.006181 kubelet[3668]: E1216 12:47:44.005896 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-slfr9" podUID="c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28" Dec 16 12:47:45.005026 kubelet[3668]: E1216 12:47:45.004538 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff5fc5c78-6d7bc" podUID="452df744-6814-429d-baf4-38ff85179742" Dec 16 12:47:53.004288 containerd[2138]: time="2025-12-16T12:47:53.004245232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:47:53.296968 containerd[2138]: time="2025-12-16T12:47:53.296644554Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:53.300278 containerd[2138]: time="2025-12-16T12:47:53.300243703Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:47:53.300431 containerd[2138]: time="2025-12-16T12:47:53.300323930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:53.300799 kubelet[3668]: E1216 12:47:53.300553 3668 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:47:53.300799 kubelet[3668]: E1216 12:47:53.300597 3668 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 
12:47:53.300799 kubelet[3668]: E1216 12:47:53.300672 3668 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-858764cc7c-zqhnl_calico-system(d2bf104b-aa2c-4645-b1eb-bf5f9ef78c24): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:53.302519 containerd[2138]: time="2025-12-16T12:47:53.302468014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:47:53.582923 containerd[2138]: time="2025-12-16T12:47:53.582620582Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:53.586255 containerd[2138]: time="2025-12-16T12:47:53.586197483Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:47:53.586345 containerd[2138]: time="2025-12-16T12:47:53.586301710Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:53.586494 kubelet[3668]: E1216 12:47:53.586456 3668 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:47:53.586548 kubelet[3668]: E1216 12:47:53.586500 3668 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:47:53.586585 kubelet[3668]: E1216 12:47:53.586567 3668 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-858764cc7c-zqhnl_calico-system(d2bf104b-aa2c-4645-b1eb-bf5f9ef78c24): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:53.586629 kubelet[3668]: E1216 12:47:53.586606 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-858764cc7c-zqhnl" podUID="d2bf104b-aa2c-4645-b1eb-bf5f9ef78c24" Dec 16 12:47:54.005148 kubelet[3668]: E1216 12:47:54.004915 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xwwbh" podUID="5bbc1d74-de1f-40b8-bd99-2346a3e2bafe" Dec 16 12:47:55.004321 kubelet[3668]: E1216 12:47:55.004281 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-slfr9" podUID="c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28" Dec 16 12:47:55.005697 containerd[2138]: time="2025-12-16T12:47:55.004481314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:47:55.093819 systemd[1]: Started sshd@7-10.200.20.37:22-10.200.16.10:45196.service - OpenSSH per-connection server daemon (10.200.16.10:45196). Dec 16 12:47:55.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.37:22-10.200.16.10:45196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:55.097256 kernel: kauditd_printk_skb: 227 callbacks suppressed Dec 16 12:47:55.097482 kernel: audit: type=1130 audit(1765889275.092:762): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.37:22-10.200.16.10:45196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:47:55.287387 containerd[2138]: time="2025-12-16T12:47:55.287257340Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:55.290556 containerd[2138]: time="2025-12-16T12:47:55.290510624Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:47:55.290652 containerd[2138]: time="2025-12-16T12:47:55.290593938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:55.290874 kubelet[3668]: E1216 12:47:55.290821 3668 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:47:55.291063 kubelet[3668]: E1216 12:47:55.290866 3668 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:47:55.291158 kubelet[3668]: E1216 12:47:55.291139 3668 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5449d854d8-xfsgn_calico-system(afc8cbd3-cce9-4afd-951f-828ed80d9307): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:55.291330 kubelet[3668]: E1216 12:47:55.291294 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5449d854d8-xfsgn" podUID="afc8cbd3-cce9-4afd-951f-828ed80d9307" Dec 16 12:47:55.510298 sshd[5781]: Accepted publickey for core from 10.200.16.10 port 45196 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:47:55.509000 audit[5781]: USER_ACCT pid=5781 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.533478 sshd-session[5781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:47:55.527000 audit[5781]: CRED_ACQ pid=5781 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.541456 systemd-logind[2110]: New session 10 of user core. 
Dec 16 12:47:55.552216 kernel: audit: type=1101 audit(1765889275.509:763): pid=5781 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.552284 kernel: audit: type=1103 audit(1765889275.527:764): pid=5781 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.556514 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 12:47:55.567650 kernel: audit: type=1006 audit(1765889275.527:765): pid=5781 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 16 12:47:55.527000 audit[5781]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffcf5ecc0 a2=3 a3=0 items=0 ppid=1 pid=5781 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:47:55.584253 kernel: audit: type=1300 audit(1765889275.527:765): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffcf5ecc0 a2=3 a3=0 items=0 ppid=1 pid=5781 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:47:55.527000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:47:55.593263 kernel: audit: type=1327 audit(1765889275.527:765): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:47:55.584000 audit[5781]: USER_START pid=5781 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.610169 kernel: audit: type=1105 audit(1765889275.584:766): pid=5781 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.586000 audit[5784]: CRED_ACQ pid=5784 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.625159 kernel: audit: type=1103 audit(1765889275.586:767): pid=5784 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.813276 sshd[5784]: Connection closed by 10.200.16.10 port 45196 Dec 16 12:47:55.814519 sshd-session[5781]: pam_unix(sshd:session): session closed for user core Dec 16 12:47:55.814000 audit[5781]: USER_END pid=5781 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.818669 systemd-logind[2110]: Session 10 logged out. Waiting for processes to exit. Dec 16 12:47:55.820835 systemd[1]: sshd@7-10.200.20.37:22-10.200.16.10:45196.service: Deactivated successfully. Dec 16 12:47:55.823615 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 12:47:55.825317 systemd-logind[2110]: Removed session 10. Dec 16 12:47:55.815000 audit[5781]: CRED_DISP pid=5781 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.849889 kernel: audit: type=1106 audit(1765889275.814:768): pid=5781 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.849959 kernel: audit: type=1104 audit(1765889275.815:769): pid=5781 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.37:22-10.200.16.10:45196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:47:57.004049 containerd[2138]: time="2025-12-16T12:47:57.003999704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:47:57.271290 containerd[2138]: time="2025-12-16T12:47:57.270959579Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:57.275575 containerd[2138]: time="2025-12-16T12:47:57.275533972Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:47:57.275784 containerd[2138]: time="2025-12-16T12:47:57.275613278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:57.276058 kubelet[3668]: E1216 12:47:57.275955 3668 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:47:57.276965 kubelet[3668]: E1216 12:47:57.276515 3668 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:47:57.276965 kubelet[3668]: E1216 12:47:57.276623 3668 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6ff5fc5c78-zxwl6_calico-apiserver(33f549f5-c190-4fdd-897c-292335e0de6b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:57.276965 kubelet[3668]: E1216 12:47:57.276649 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff5fc5c78-zxwl6" podUID="33f549f5-c190-4fdd-897c-292335e0de6b" Dec 16 12:47:59.004239 containerd[2138]: time="2025-12-16T12:47:59.004118463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:47:59.285854 containerd[2138]: time="2025-12-16T12:47:59.285687207Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:59.289826 containerd[2138]: time="2025-12-16T12:47:59.289727057Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:47:59.289826 containerd[2138]: time="2025-12-16T12:47:59.289782242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:59.289968 kubelet[3668]: E1216 12:47:59.289922 3668 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:47:59.289968 kubelet[3668]: E1216 12:47:59.289962 3668 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:47:59.291019 kubelet[3668]: E1216 12:47:59.290027 3668 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6ff5fc5c78-6d7bc_calico-apiserver(452df744-6814-429d-baf4-38ff85179742): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:59.291019 kubelet[3668]: E1216 12:47:59.290055 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff5fc5c78-6d7bc" podUID="452df744-6814-429d-baf4-38ff85179742" Dec 16 12:48:00.912448 systemd[1]: Started sshd@8-10.200.20.37:22-10.200.16.10:48300.service - OpenSSH per-connection server daemon (10.200.16.10:48300). Dec 16 12:48:00.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.37:22-10.200.16.10:48300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:00.915994 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:48:00.916076 kernel: audit: type=1130 audit(1765889280.911:771): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.37:22-10.200.16.10:48300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:48:01.351000 audit[5820]: USER_ACCT pid=5820 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:01.368556 sshd[5820]: Accepted publickey for core from 10.200.16.10 port 48300 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:48:01.368000 audit[5820]: CRED_ACQ pid=5820 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:01.375260 sshd-session[5820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:48:01.384812 kernel: audit: type=1101 audit(1765889281.351:772): pid=5820 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:01.384889 kernel: audit: type=1103 audit(1765889281.368:773): pid=5820 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:01.394486 kernel: audit: type=1006 audit(1765889281.368:774): pid=5820 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 16 12:48:01.368000 audit[5820]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd9a803c0 a2=3 a3=0 items=0 ppid=1 pid=5820 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:01.411651 kernel: audit: type=1300 audit(1765889281.368:774): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd9a803c0 a2=3 a3=0 items=0 ppid=1 pid=5820 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:01.368000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:01.419217 kernel: audit: type=1327 audit(1765889281.368:774): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:01.419416 systemd-logind[2110]: New session 11 of user core. Dec 16 12:48:01.425352 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 16 12:48:01.426000 audit[5820]: USER_START pid=5820 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:01.430000 audit[5823]: CRED_ACQ pid=5823 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:01.465867 kernel: audit: type=1105 audit(1765889281.426:775): pid=5820 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:01.465932 kernel: audit: type=1103 audit(1765889281.430:776): pid=5823 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:01.638006 sshd[5823]: Connection closed by 10.200.16.10 port 48300 Dec 16 12:48:01.638777 sshd-session[5820]: pam_unix(sshd:session): session closed for user core Dec 16 12:48:01.638000 audit[5820]: USER_END pid=5820 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:01.643561 systemd[1]: sshd@8-10.200.20.37:22-10.200.16.10:48300.service: Deactivated successfully. Dec 16 12:48:01.645941 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 12:48:01.638000 audit[5820]: CRED_DISP pid=5820 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:01.672661 kernel: audit: type=1106 audit(1765889281.638:777): pid=5820 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:01.672734 kernel: audit: type=1104 audit(1765889281.638:778): pid=5820 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:01.673775 systemd-logind[2110]: Session 11 logged out. Waiting for processes to exit. Dec 16 12:48:01.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.37:22-10.200.16.10:48300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:01.676919 systemd-logind[2110]: Removed session 11. 
Dec 16 12:48:05.005045 kubelet[3668]: E1216 12:48:05.004987 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-858764cc7c-zqhnl" podUID="d2bf104b-aa2c-4645-b1eb-bf5f9ef78c24" Dec 16 12:48:06.008244 containerd[2138]: time="2025-12-16T12:48:06.008079795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:48:06.267317 containerd[2138]: time="2025-12-16T12:48:06.267079320Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:48:06.270987 containerd[2138]: time="2025-12-16T12:48:06.270948557Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:48:06.271217 containerd[2138]: time="2025-12-16T12:48:06.271036815Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:48:06.271631 kubelet[3668]: E1216 12:48:06.271396 3668 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:48:06.271631 kubelet[3668]: E1216 12:48:06.271442 3668 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:48:06.271631 kubelet[3668]: E1216 12:48:06.271511 3668 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-slfr9_calico-system(c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:48:06.271631 kubelet[3668]: E1216 12:48:06.271539 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-slfr9" podUID="c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28" Dec 16 12:48:06.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@9-10.200.20.37:22-10.200.16.10:48306 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:06.712471 systemd[1]: Started sshd@9-10.200.20.37:22-10.200.16.10:48306.service - OpenSSH per-connection server daemon (10.200.16.10:48306). Dec 16 12:48:06.715877 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:48:06.715943 kernel: audit: type=1130 audit(1765889286.712:780): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.37:22-10.200.16.10:48306 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:07.003994 kubelet[3668]: E1216 12:48:07.003755 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5449d854d8-xfsgn" podUID="afc8cbd3-cce9-4afd-951f-828ed80d9307" Dec 16 12:48:07.119361 sshd[5837]: Accepted publickey for core from 10.200.16.10 port 48306 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:48:07.119000 audit[5837]: USER_ACCT pid=5837 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:07.132170 sshd-session[5837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:48:07.127000 audit[5837]: CRED_ACQ pid=5837 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:07.156468 kernel: audit: type=1101 audit(1765889287.119:781): pid=5837 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:07.156544 kernel: audit: type=1103 audit(1765889287.127:782): pid=5837 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:07.158628 systemd-logind[2110]: New session 12 of user core. 
Dec 16 12:48:07.167596 kernel: audit: type=1006 audit(1765889287.127:783): pid=5837 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Dec 16 12:48:07.127000 audit[5837]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe1f235f0 a2=3 a3=0 items=0 ppid=1 pid=5837 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:07.184368 kernel: audit: type=1300 audit(1765889287.127:783): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe1f235f0 a2=3 a3=0 items=0 ppid=1 pid=5837 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:07.127000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:07.191192 kernel: audit: type=1327 audit(1765889287.127:783): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:07.192378 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 12:48:07.195000 audit[5837]: USER_START pid=5837 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:07.201000 audit[5840]: CRED_ACQ pid=5840 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:07.228418 kernel: audit: type=1105 audit(1765889287.195:784): pid=5837 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:07.228497 kernel: audit: type=1103 audit(1765889287.201:785): pid=5840 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:07.392308 sshd[5840]: Connection closed by 10.200.16.10 port 48306 Dec 16 12:48:07.392758 sshd-session[5837]: pam_unix(sshd:session): session closed for user core Dec 16 12:48:07.393000 audit[5837]: USER_END pid=5837 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:07.412670 systemd[1]: sshd@9-10.200.20.37:22-10.200.16.10:48306.service: Deactivated successfully. 
Dec 16 12:48:07.393000 audit[5837]: CRED_DISP pid=5837 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:07.431607 kernel: audit: type=1106 audit(1765889287.393:786): pid=5837 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:07.431940 kernel: audit: type=1104 audit(1765889287.393:787): pid=5837 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:07.416140 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 12:48:07.431673 systemd-logind[2110]: Session 12 logged out. Waiting for processes to exit. Dec 16 12:48:07.433343 systemd-logind[2110]: Removed session 12. Dec 16 12:48:07.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.37:22-10.200.16.10:48306 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:07.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.37:22-10.200.16.10:48310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:07.483394 systemd[1]: Started sshd@10-10.200.20.37:22-10.200.16.10:48310.service - OpenSSH per-connection server daemon (10.200.16.10:48310). Dec 16 12:48:07.874000 audit[5852]: USER_ACCT pid=5852 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:07.875396 sshd[5852]: Accepted publickey for core from 10.200.16.10 port 48310 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:48:07.875000 audit[5852]: CRED_ACQ pid=5852 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:07.875000 audit[5852]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd8380270 a2=3 a3=0 items=0 ppid=1 pid=5852 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:07.875000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:07.876124 sshd-session[5852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:48:07.880360 systemd-logind[2110]: New session 13 of user core. Dec 16 12:48:07.887567 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 16 12:48:07.889000 audit[5852]: USER_START pid=5852 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:07.891000 audit[5855]: CRED_ACQ pid=5855 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:08.004256 containerd[2138]: time="2025-12-16T12:48:08.003829744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:48:08.189291 sshd[5855]: Connection closed by 10.200.16.10 port 48310 Dec 16 12:48:08.189530 sshd-session[5852]: pam_unix(sshd:session): session closed for user core Dec 16 12:48:08.191000 audit[5852]: USER_END pid=5852 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:08.191000 audit[5852]: CRED_DISP pid=5852 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:08.194287 systemd[1]: sshd@10-10.200.20.37:22-10.200.16.10:48310.service: Deactivated successfully. Dec 16 12:48:08.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.37:22-10.200.16.10:48310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:08.200226 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 12:48:08.202383 systemd-logind[2110]: Session 13 logged out. Waiting for processes to exit. Dec 16 12:48:08.204233 systemd-logind[2110]: Removed session 13. 
Dec 16 12:48:08.267408 containerd[2138]: time="2025-12-16T12:48:08.267371013Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:48:08.271490 containerd[2138]: time="2025-12-16T12:48:08.271395942Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:48:08.271563 containerd[2138]: time="2025-12-16T12:48:08.271461136Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:48:08.271765 kubelet[3668]: E1216 12:48:08.271724 3668 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:48:08.271989 kubelet[3668]: E1216 12:48:08.271771 3668 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:48:08.271989 kubelet[3668]: E1216 12:48:08.271840 3668 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-xwwbh_calico-system(5bbc1d74-de1f-40b8-bd99-2346a3e2bafe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:48:08.274425 containerd[2138]: time="2025-12-16T12:48:08.274376282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:48:08.279449 systemd[1]: Started sshd@11-10.200.20.37:22-10.200.16.10:48314.service - OpenSSH per-connection server daemon (10.200.16.10:48314). Dec 16 12:48:08.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.37:22-10.200.16.10:48314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:48:08.561874 containerd[2138]: time="2025-12-16T12:48:08.561821858Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:48:08.567303 containerd[2138]: time="2025-12-16T12:48:08.567195937Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:48:08.567303 containerd[2138]: time="2025-12-16T12:48:08.567254931Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:48:08.567432 kubelet[3668]: E1216 12:48:08.567387 3668 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:48:08.567498 kubelet[3668]: E1216 12:48:08.567438 3668 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:48:08.567521 kubelet[3668]: E1216 12:48:08.567510 3668 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-xwwbh_calico-system(5bbc1d74-de1f-40b8-bd99-2346a3e2bafe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:48:08.567724 kubelet[3668]: E1216 12:48:08.567550 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xwwbh" podUID="5bbc1d74-de1f-40b8-bd99-2346a3e2bafe" Dec 16 12:48:08.703000 audit[5864]: USER_ACCT pid=5864 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:08.704328 sshd[5864]: Accepted publickey for core from 10.200.16.10 port 48314 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:48:08.704000 audit[5864]: CRED_ACQ pid=5864 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 
terminal=ssh res=success' Dec 16 12:48:08.704000 audit[5864]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe7cb90e0 a2=3 a3=0 items=0 ppid=1 pid=5864 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:08.704000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:08.705416 sshd-session[5864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:48:08.709301 systemd-logind[2110]: New session 14 of user core. Dec 16 12:48:08.716340 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 12:48:08.719000 audit[5864]: USER_START pid=5864 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:08.720000 audit[5867]: CRED_ACQ pid=5867 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:09.001741 sshd[5867]: Connection closed by 10.200.16.10 port 48314 Dec 16 12:48:09.000872 sshd-session[5864]: pam_unix(sshd:session): session closed for user core Dec 16 12:48:09.003000 audit[5864]: USER_END pid=5864 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:09.003000 audit[5864]: CRED_DISP pid=5864 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:09.005633 systemd[1]: sshd@11-10.200.20.37:22-10.200.16.10:48314.service: Deactivated successfully. Dec 16 12:48:09.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.37:22-10.200.16.10:48314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:09.009864 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 12:48:09.012958 systemd-logind[2110]: Session 14 logged out. Waiting for processes to exit. Dec 16 12:48:09.014645 systemd-logind[2110]: Removed session 14. 
Dec 16 12:48:10.005406 kubelet[3668]: E1216 12:48:10.005154 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff5fc5c78-zxwl6" podUID="33f549f5-c190-4fdd-897c-292335e0de6b" Dec 16 12:48:11.004157 kubelet[3668]: E1216 12:48:11.003882 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff5fc5c78-6d7bc" podUID="452df744-6814-429d-baf4-38ff85179742" Dec 16 12:48:14.074506 systemd[1]: Started sshd@12-10.200.20.37:22-10.200.16.10:46018.service - OpenSSH per-connection server daemon (10.200.16.10:46018). Dec 16 12:48:14.080182 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 16 12:48:14.080227 kernel: audit: type=1130 audit(1765889294.073:807): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.37:22-10.200.16.10:46018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:14.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.37:22-10.200.16.10:46018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:48:14.445000 audit[5879]: USER_ACCT pid=5879 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:14.464764 sshd[5879]: Accepted publickey for core from 10.200.16.10 port 46018 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:48:14.465000 audit[5879]: CRED_ACQ pid=5879 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:14.483196 kernel: audit: type=1101 audit(1765889294.445:808): pid=5879 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:14.483406 kernel: audit: type=1103 audit(1765889294.465:809): pid=5879 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:14.467367 sshd-session[5879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:48:14.486515 systemd-logind[2110]: New session 15 of user core. Dec 16 12:48:14.496635 kernel: audit: type=1006 audit(1765889294.465:810): pid=5879 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 16 12:48:14.465000 audit[5879]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff29afc30 a2=3 a3=0 items=0 ppid=1 pid=5879 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:14.516881 kernel: audit: type=1300 audit(1765889294.465:810): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff29afc30 a2=3 a3=0 items=0 ppid=1 pid=5879 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:14.465000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:14.518380 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 16 12:48:14.523680 kernel: audit: type=1327 audit(1765889294.465:810): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:14.523000 audit[5879]: USER_START pid=5879 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:14.542455 kernel: audit: type=1105 audit(1765889294.523:811): pid=5879 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:14.543000 audit[5882]: CRED_ACQ pid=5882 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:14.558682 kernel: audit: type=1103 audit(1765889294.543:812): pid=5882 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:14.730001 sshd[5882]: Connection closed by 10.200.16.10 port 46018 Dec 16 12:48:14.729918 sshd-session[5879]: pam_unix(sshd:session): session closed for user core Dec 16 12:48:14.731000 audit[5879]: USER_END pid=5879 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:14.734950 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 12:48:14.738006 systemd[1]: sshd@12-10.200.20.37:22-10.200.16.10:46018.service: Deactivated successfully. Dec 16 12:48:14.741927 systemd-logind[2110]: Session 15 logged out. Waiting for processes to exit. Dec 16 12:48:14.746087 systemd-logind[2110]: Removed session 15. 
Dec 16 12:48:14.731000 audit[5879]: CRED_DISP pid=5879 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:14.772974 kernel: audit: type=1106 audit(1765889294.731:813): pid=5879 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:14.773049 kernel: audit: type=1104 audit(1765889294.731:814): pid=5879 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:14.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.37:22-10.200.16.10:46018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:17.005934 kubelet[3668]: E1216 12:48:17.005892 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-858764cc7c-zqhnl" podUID="d2bf104b-aa2c-4645-b1eb-bf5f9ef78c24" Dec 16 12:48:18.004352 kubelet[3668]: E1216 12:48:18.003987 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5449d854d8-xfsgn" podUID="afc8cbd3-cce9-4afd-951f-828ed80d9307" Dec 16 12:48:19.004842 kubelet[3668]: E1216 12:48:19.004469 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-slfr9" podUID="c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28" Dec 16 12:48:19.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.37:22-10.200.16.10:46022 
comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:19.845612 systemd[1]: Started sshd@13-10.200.20.37:22-10.200.16.10:46022.service - OpenSSH per-connection server daemon (10.200.16.10:46022). Dec 16 12:48:19.848611 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:48:19.848677 kernel: audit: type=1130 audit(1765889299.844:816): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.37:22-10.200.16.10:46022 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:20.281000 audit[5900]: USER_ACCT pid=5900 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:20.290337 sshd[5900]: Accepted publickey for core from 10.200.16.10 port 46022 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:48:20.299099 sshd-session[5900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:48:20.297000 audit[5900]: CRED_ACQ pid=5900 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:20.314184 kernel: audit: type=1101 audit(1765889300.281:817): pid=5900 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:20.314260 kernel: audit: type=1103 audit(1765889300.297:818): pid=5900 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:20.321111 systemd-logind[2110]: New session 16 of user core. Dec 16 12:48:20.324112 kernel: audit: type=1006 audit(1765889300.297:819): pid=5900 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 16 12:48:20.297000 audit[5900]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffee5f21b0 a2=3 a3=0 items=0 ppid=1 pid=5900 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:20.341575 kernel: audit: type=1300 audit(1765889300.297:819): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffee5f21b0 a2=3 a3=0 items=0 ppid=1 pid=5900 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:20.297000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:20.348193 kernel: audit: type=1327 audit(1765889300.297:819): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:20.349370 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 16 12:48:20.351000 audit[5900]: USER_START pid=5900 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:20.353000 audit[5903]: CRED_ACQ pid=5903 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:20.384734 kernel: audit: type=1105 audit(1765889300.351:820): pid=5900 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:20.384807 kernel: audit: type=1103 audit(1765889300.353:821): pid=5903 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:20.577260 sshd[5903]: Connection closed by 10.200.16.10 port 46022 Dec 16 12:48:20.578188 sshd-session[5900]: pam_unix(sshd:session): session closed for user core Dec 16 12:48:20.578000 audit[5900]: USER_END pid=5900 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:20.581841 systemd[1]: sshd@13-10.200.20.37:22-10.200.16.10:46022.service: Deactivated successfully. Dec 16 12:48:20.585900 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 12:48:20.587389 systemd-logind[2110]: Session 16 logged out. Waiting for processes to exit. Dec 16 12:48:20.588785 systemd-logind[2110]: Removed session 16. Dec 16 12:48:20.578000 audit[5900]: CRED_DISP pid=5900 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:20.612415 kernel: audit: type=1106 audit(1765889300.578:822): pid=5900 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:20.612476 kernel: audit: type=1104 audit(1765889300.578:823): pid=5900 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:20.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.37:22-10.200.16.10:46022 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:48:22.007078 kubelet[3668]: E1216 12:48:22.006907 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xwwbh" podUID="5bbc1d74-de1f-40b8-bd99-2346a3e2bafe" Dec 16 12:48:24.006227 kubelet[3668]: E1216 12:48:24.005761 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff5fc5c78-6d7bc" podUID="452df744-6814-429d-baf4-38ff85179742" Dec 16 12:48:25.003604 kubelet[3668]: E1216 12:48:25.003559 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff5fc5c78-zxwl6" podUID="33f549f5-c190-4fdd-897c-292335e0de6b" Dec 16 12:48:25.675225 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:48:25.675338 kernel: audit: type=1130 audit(1765889305.655:825): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.37:22-10.200.16.10:59190 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:25.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.37:22-10.200.16.10:59190 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:25.656320 systemd[1]: Started sshd@14-10.200.20.37:22-10.200.16.10:59190.service - OpenSSH per-connection server daemon (10.200.16.10:59190). 
Dec 16 12:48:26.072000 audit[5937]: USER_ACCT pid=5937 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:26.073758 sshd[5937]: Accepted publickey for core from 10.200.16.10 port 59190 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:48:26.090527 sshd-session[5937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:48:26.088000 audit[5937]: CRED_ACQ pid=5937 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:26.096090 systemd-logind[2110]: New session 17 of user core. Dec 16 12:48:26.106882 kernel: audit: type=1101 audit(1765889306.072:826): pid=5937 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:26.106953 kernel: audit: type=1103 audit(1765889306.088:827): pid=5937 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:26.116476 kernel: audit: type=1006 audit(1765889306.088:828): pid=5937 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 16 12:48:26.117246 kernel: audit: type=1300 audit(1765889306.088:828): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcb020740 a2=3 a3=0 items=0 ppid=1 pid=5937 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:26.088000 audit[5937]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcb020740 a2=3 a3=0 items=0 ppid=1 pid=5937 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:26.088000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:26.140407 kernel: audit: type=1327 audit(1765889306.088:828): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:26.141390 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 16 12:48:26.144000 audit[5937]: USER_START pid=5937 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:26.150000 audit[5940]: CRED_ACQ pid=5940 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:26.179116 kernel: audit: type=1105 audit(1765889306.144:829): pid=5937 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:26.179181 kernel: audit: type=1103 audit(1765889306.150:830): pid=5940 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:26.355315 sshd[5940]: Connection closed by 10.200.16.10 port 59190 Dec 16 12:48:26.356124 sshd-session[5937]: pam_unix(sshd:session): session closed for user core Dec 16 12:48:26.355000 audit[5937]: USER_END pid=5937 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:26.361452 systemd[1]: sshd@14-10.200.20.37:22-10.200.16.10:59190.service: Deactivated successfully. Dec 16 12:48:26.364108 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 12:48:26.366253 systemd-logind[2110]: Session 17 logged out. Waiting for processes to exit. Dec 16 12:48:26.367874 systemd-logind[2110]: Removed session 17. Dec 16 12:48:26.357000 audit[5937]: CRED_DISP pid=5937 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:26.389681 kernel: audit: type=1106 audit(1765889306.355:831): pid=5937 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:26.389749 kernel: audit: type=1104 audit(1765889306.357:832): pid=5937 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:26.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.37:22-10.200.16.10:59190 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:48:30.004907 kubelet[3668]: E1216 12:48:30.004375 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-slfr9" podUID="c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28" Dec 16 12:48:31.450484 systemd[1]: Started sshd@15-10.200.20.37:22-10.200.16.10:46902.service - OpenSSH per-connection server daemon (10.200.16.10:46902). Dec 16 12:48:31.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.37:22-10.200.16.10:46902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:31.454392 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:48:31.454469 kernel: audit: type=1130 audit(1765889311.449:834): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.37:22-10.200.16.10:46902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:31.885000 audit[5952]: USER_ACCT pid=5952 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:31.903429 sshd[5952]: Accepted publickey for core from 10.200.16.10 port 46902 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:48:31.904189 sshd-session[5952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:48:31.902000 audit[5952]: CRED_ACQ pid=5952 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:31.919142 kernel: audit: type=1101 audit(1765889311.885:835): pid=5952 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:31.919239 kernel: audit: type=1103 audit(1765889311.902:836): pid=5952 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:31.929096 kernel: audit: type=1006 audit(1765889311.902:837): pid=5952 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Dec 16 12:48:31.902000 audit[5952]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffef95e5d0 a2=3 a3=0 items=0 ppid=1 pid=5952 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:31.946369 kernel: audit: type=1300 audit(1765889311.902:837): arch=c00000b7 syscall=64 
success=yes exit=3 a0=8 a1=ffffef95e5d0 a2=3 a3=0 items=0 ppid=1 pid=5952 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:31.948690 systemd-logind[2110]: New session 18 of user core. Dec 16 12:48:31.902000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:31.956011 kernel: audit: type=1327 audit(1765889311.902:837): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:31.958358 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 12:48:31.960000 audit[5952]: USER_START pid=5952 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:31.980000 audit[5955]: CRED_ACQ pid=5955 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:31.999286 kernel: audit: type=1105 audit(1765889311.960:838): pid=5952 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:31.999355 kernel: audit: type=1103 audit(1765889311.980:839): pid=5955 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:32.005189 kubelet[3668]: E1216 12:48:32.005143 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-858764cc7c-zqhnl" podUID="d2bf104b-aa2c-4645-b1eb-bf5f9ef78c24" Dec 16 12:48:32.282589 sshd[5955]: Connection closed by 10.200.16.10 port 46902 Dec 16 12:48:32.283216 sshd-session[5952]: pam_unix(sshd:session): session closed for user core Dec 16 12:48:32.284000 audit[5952]: USER_END pid=5952 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:32.284000 audit[5952]: CRED_DISP pid=5952 uid=0 auid=500 ses=18 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:32.320198 kernel: audit: type=1106 audit(1765889312.284:840): pid=5952 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:32.320274 kernel: audit: type=1104 audit(1765889312.284:841): pid=5952 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:32.306678 systemd[1]: sshd@15-10.200.20.37:22-10.200.16.10:46902.service: Deactivated successfully. Dec 16 12:48:32.309302 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 12:48:32.311047 systemd-logind[2110]: Session 18 logged out. Waiting for processes to exit. Dec 16 12:48:32.314833 systemd-logind[2110]: Removed session 18. Dec 16 12:48:32.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.37:22-10.200.16.10:46902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:32.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.37:22-10.200.16.10:46912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:32.360002 systemd[1]: Started sshd@16-10.200.20.37:22-10.200.16.10:46912.service - OpenSSH per-connection server daemon (10.200.16.10:46912). Dec 16 12:48:32.760000 audit[5973]: USER_ACCT pid=5973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:32.761878 sshd[5973]: Accepted publickey for core from 10.200.16.10 port 46912 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:48:32.761000 audit[5973]: CRED_ACQ pid=5973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:32.761000 audit[5973]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffde2e7ec0 a2=3 a3=0 items=0 ppid=1 pid=5973 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:32.761000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:32.763035 sshd-session[5973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:48:32.770541 systemd-logind[2110]: New session 19 of user core. Dec 16 12:48:32.778346 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 16 12:48:32.779000 audit[5973]: USER_START pid=5973 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:32.780000 audit[5976]: CRED_ACQ pid=5976 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:33.003959 kubelet[3668]: E1216 12:48:33.003897 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5449d854d8-xfsgn" podUID="afc8cbd3-cce9-4afd-951f-828ed80d9307" Dec 16 12:48:33.164672 sshd[5976]: Connection closed by 10.200.16.10 port 46912 Dec 16 12:48:33.186555 sshd-session[5973]: pam_unix(sshd:session): session closed for user core Dec 16 12:48:33.186000 audit[5973]: USER_END pid=5973 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:33.187000 audit[5973]: CRED_DISP pid=5973 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:33.190463 systemd-logind[2110]: Session 19 logged out. Waiting for processes to exit. Dec 16 12:48:33.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.37:22-10.200.16.10:46912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:33.190626 systemd[1]: sshd@16-10.200.20.37:22-10.200.16.10:46912.service: Deactivated successfully. Dec 16 12:48:33.194439 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 12:48:33.196191 systemd-logind[2110]: Removed session 19. Dec 16 12:48:33.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.37:22-10.200.16.10:46914 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:33.257667 systemd[1]: Started sshd@17-10.200.20.37:22-10.200.16.10:46914.service - OpenSSH per-connection server daemon (10.200.16.10:46914). 
Dec 16 12:48:33.649000 audit[5987]: USER_ACCT pid=5987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:33.651090 sshd[5987]: Accepted publickey for core from 10.200.16.10 port 46914 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:48:33.652000 audit[5987]: CRED_ACQ pid=5987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:33.652000 audit[5987]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc2cdd250 a2=3 a3=0 items=0 ppid=1 pid=5987 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:33.652000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:33.653721 sshd-session[5987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:48:33.659273 systemd-logind[2110]: New session 20 of user core. Dec 16 12:48:33.663327 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 12:48:33.664000 audit[5987]: USER_START pid=5987 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:33.667000 audit[5990]: CRED_ACQ pid=5990 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:34.246000 audit[6003]: NETFILTER_CFG table=filter:141 family=2 entries=26 op=nft_register_rule pid=6003 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:48:34.246000 audit[6003]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffec40cd10 a2=0 a3=1 items=0 ppid=3822 pid=6003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:34.246000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:48:34.256000 audit[6003]: NETFILTER_CFG table=nat:142 family=2 entries=20 op=nft_register_rule pid=6003 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:48:34.256000 audit[6003]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffec40cd10 a2=0 a3=1 items=0 ppid=3822 pid=6003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:34.256000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:48:34.345995 sshd[5990]: Connection closed by 10.200.16.10 port 46914 Dec 16 12:48:34.345198 sshd-session[5987]: 
pam_unix(sshd:session): session closed for user core Dec 16 12:48:34.346000 audit[5987]: USER_END pid=5987 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:34.346000 audit[5987]: CRED_DISP pid=5987 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:34.351162 systemd[1]: sshd@17-10.200.20.37:22-10.200.16.10:46914.service: Deactivated successfully. Dec 16 12:48:34.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.37:22-10.200.16.10:46914 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:34.357428 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 12:48:34.359376 systemd-logind[2110]: Session 20 logged out. Waiting for processes to exit. Dec 16 12:48:34.361619 systemd-logind[2110]: Removed session 20. Dec 16 12:48:34.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.37:22-10.200.16.10:46926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:34.436825 systemd[1]: Started sshd@18-10.200.20.37:22-10.200.16.10:46926.service - OpenSSH per-connection server daemon (10.200.16.10:46926). Dec 16 12:48:34.864000 audit[6008]: USER_ACCT pid=6008 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:34.866101 sshd[6008]: Accepted publickey for core from 10.200.16.10 port 46926 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:48:34.865000 audit[6008]: CRED_ACQ pid=6008 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:34.865000 audit[6008]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe7b200e0 a2=3 a3=0 items=0 ppid=1 pid=6008 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:34.868884 sshd-session[6008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:48:34.865000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:34.874256 systemd-logind[2110]: New session 21 of user core. Dec 16 12:48:34.878346 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 16 12:48:34.879000 audit[6008]: USER_START pid=6008 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:34.880000 audit[6011]: CRED_ACQ pid=6011 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:35.297318 sshd[6011]: Connection closed by 10.200.16.10 port 46926 Dec 16 12:48:35.297625 sshd-session[6008]: pam_unix(sshd:session): session closed for user core Dec 16 12:48:35.299000 audit[6008]: USER_END pid=6008 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:35.299000 audit[6008]: CRED_DISP pid=6008 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:35.303473 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 12:48:35.304392 systemd[1]: sshd@18-10.200.20.37:22-10.200.16.10:46926.service: Deactivated successfully. Dec 16 12:48:35.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.37:22-10.200.16.10:46926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:35.304000 audit[6018]: NETFILTER_CFG table=filter:143 family=2 entries=38 op=nft_register_rule pid=6018 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:48:35.304000 audit[6018]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffcd81f220 a2=0 a3=1 items=0 ppid=3822 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:35.304000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:48:35.308000 audit[6018]: NETFILTER_CFG table=nat:144 family=2 entries=20 op=nft_register_rule pid=6018 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:48:35.310426 systemd-logind[2110]: Session 21 logged out. Waiting for processes to exit. Dec 16 12:48:35.308000 audit[6018]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffcd81f220 a2=0 a3=1 items=0 ppid=3822 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:35.308000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:48:35.311667 systemd-logind[2110]: Removed session 21. Dec 16 12:48:35.386449 systemd[1]: Started sshd@19-10.200.20.37:22-10.200.16.10:46934.service - OpenSSH per-connection server daemon (10.200.16.10:46934). 
Dec 16 12:48:35.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.37:22-10.200.16.10:46934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:35.806000 audit[6023]: USER_ACCT pid=6023 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:35.806914 sshd[6023]: Accepted publickey for core from 10.200.16.10 port 46934 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:48:35.807000 audit[6023]: CRED_ACQ pid=6023 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:35.807000 audit[6023]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe06cb040 a2=3 a3=0 items=0 ppid=1 pid=6023 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:35.807000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:35.808084 sshd-session[6023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:48:35.813787 systemd-logind[2110]: New session 22 of user core. Dec 16 12:48:35.818345 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 16 12:48:35.821000 audit[6023]: USER_START pid=6023 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:35.822000 audit[6026]: CRED_ACQ pid=6026 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:36.005315 kubelet[3668]: E1216 12:48:36.005270 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xwwbh" podUID="5bbc1d74-de1f-40b8-bd99-2346a3e2bafe" Dec 16 12:48:36.094120 sshd[6026]: Connection closed by 10.200.16.10 port 46934 Dec 16 12:48:36.093605 sshd-session[6023]: pam_unix(sshd:session): session closed for user core Dec 16 12:48:36.094000 audit[6023]: USER_END 
pid=6023 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:36.094000 audit[6023]: CRED_DISP pid=6023 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:36.096772 systemd[1]: sshd@19-10.200.20.37:22-10.200.16.10:46934.service: Deactivated successfully. Dec 16 12:48:36.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.37:22-10.200.16.10:46934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:36.100557 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 12:48:36.102856 systemd-logind[2110]: Session 22 logged out. Waiting for processes to exit. Dec 16 12:48:36.103790 systemd-logind[2110]: Removed session 22. Dec 16 12:48:38.003507 kubelet[3668]: E1216 12:48:38.003462 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff5fc5c78-zxwl6" podUID="33f549f5-c190-4fdd-897c-292335e0de6b" Dec 16 12:48:39.003360 kubelet[3668]: E1216 12:48:39.003316 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff5fc5c78-6d7bc" podUID="452df744-6814-429d-baf4-38ff85179742" Dec 16 12:48:40.733000 audit[6038]: NETFILTER_CFG table=filter:145 family=2 entries=26 op=nft_register_rule pid=6038 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:48:40.737217 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 16 12:48:40.737297 kernel: audit: type=1325 audit(1765889320.733:883): table=filter:145 family=2 entries=26 op=nft_register_rule pid=6038 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:48:40.733000 audit[6038]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe7c1eb80 a2=0 a3=1 items=0 ppid=3822 pid=6038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:40.765305 kernel: audit: type=1300 audit(1765889320.733:883): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe7c1eb80 a2=0 a3=1 items=0 ppid=3822 pid=6038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:40.733000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:48:40.775404 kernel: audit: type=1327 audit(1765889320.733:883): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:48:40.768000 audit[6038]: NETFILTER_CFG table=nat:146 family=2 entries=104 op=nft_register_chain pid=6038 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:48:40.784893 kernel: audit: type=1325 audit(1765889320.768:884): table=nat:146 family=2 entries=104 op=nft_register_chain pid=6038 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:48:40.768000 audit[6038]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffe7c1eb80 a2=0 a3=1 items=0 ppid=3822 pid=6038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:40.803530 kernel: audit: type=1300 audit(1765889320.768:884): arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffe7c1eb80 a2=0 a3=1 items=0 ppid=3822 pid=6038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:40.768000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:48:40.813239 kernel: audit: type=1327 audit(1765889320.768:884): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:48:41.167101 systemd[1]: Started sshd@20-10.200.20.37:22-10.200.16.10:60490.service - OpenSSH per-connection server daemon (10.200.16.10:60490). Dec 16 12:48:41.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.37:22-10.200.16.10:60490 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:41.182689 kernel: audit: type=1130 audit(1765889321.166:885): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.37:22-10.200.16.10:60490 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:48:41.544000 audit[6040]: USER_ACCT pid=6040 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:41.545012 sshd[6040]: Accepted publickey for core from 10.200.16.10 port 60490 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:48:41.563234 kernel: audit: type=1101 audit(1765889321.544:886): pid=6040 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:41.564000 audit[6040]: CRED_ACQ pid=6040 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:41.564552 sshd-session[6040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:48:41.590223 kernel: audit: type=1103 audit(1765889321.564:887): pid=6040 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:41.590281 kernel: audit: type=1006 audit(1765889321.564:888): pid=6040 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 16 12:48:41.564000 audit[6040]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff3665480 a2=3 a3=0 items=0 ppid=1 pid=6040 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:41.564000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:41.594295 systemd-logind[2110]: New session 23 of user core. Dec 16 12:48:41.599324 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 16 12:48:41.601000 audit[6040]: USER_START pid=6040 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:41.603000 audit[6043]: CRED_ACQ pid=6043 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:41.794821 sshd[6043]: Connection closed by 10.200.16.10 port 60490 Dec 16 12:48:41.796397 sshd-session[6040]: pam_unix(sshd:session): session closed for user core Dec 16 12:48:41.798000 audit[6040]: USER_END pid=6040 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:41.798000 audit[6040]: CRED_DISP pid=6040 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:41.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.37:22-10.200.16.10:60490 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:41.800937 systemd[1]: sshd@20-10.200.20.37:22-10.200.16.10:60490.service: Deactivated successfully. Dec 16 12:48:41.803656 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 12:48:41.804615 systemd-logind[2110]: Session 23 logged out. Waiting for processes to exit. Dec 16 12:48:41.805991 systemd-logind[2110]: Removed session 23. 
Dec 16 12:48:42.004439 kubelet[3668]: E1216 12:48:42.003382 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-slfr9" podUID="c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28" Dec 16 12:48:44.004426 kubelet[3668]: E1216 12:48:44.004224 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5449d854d8-xfsgn" podUID="afc8cbd3-cce9-4afd-951f-828ed80d9307" Dec 16 12:48:45.004483 kubelet[3668]: E1216 12:48:45.004421 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-858764cc7c-zqhnl" podUID="d2bf104b-aa2c-4645-b1eb-bf5f9ef78c24" Dec 16 12:48:46.891447 systemd[1]: Started sshd@21-10.200.20.37:22-10.200.16.10:60504.service - OpenSSH per-connection server daemon (10.200.16.10:60504). Dec 16 12:48:46.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.37:22-10.200.16.10:60504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:46.894550 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 12:48:46.894627 kernel: audit: type=1130 audit(1765889326.890:894): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.37:22-10.200.16.10:60504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:48:47.006168 kubelet[3668]: E1216 12:48:47.006111 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xwwbh" podUID="5bbc1d74-de1f-40b8-bd99-2346a3e2bafe" Dec 16 12:48:47.337000 audit[6057]: USER_ACCT pid=6057 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:47.339386 sshd[6057]: Accepted publickey for core from 10.200.16.10 port 60504 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:48:47.355305 sshd-session[6057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:48:47.353000 audit[6057]: CRED_ACQ pid=6057 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:47.371281 kernel: audit: type=1101 audit(1765889327.337:895): pid=6057 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:47.371354 kernel: audit: type=1103 audit(1765889327.353:896): pid=6057 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:47.376524 systemd-logind[2110]: New session 24 of user core. 
Dec 16 12:48:47.383238 kernel: audit: type=1006 audit(1765889327.353:897): pid=6057 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 16 12:48:47.353000 audit[6057]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff65c40b0 a2=3 a3=0 items=0 ppid=1 pid=6057 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:47.400507 kernel: audit: type=1300 audit(1765889327.353:897): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff65c40b0 a2=3 a3=0 items=0 ppid=1 pid=6057 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:47.353000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:47.407379 kernel: audit: type=1327 audit(1765889327.353:897): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:47.408368 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 16 12:48:47.409000 audit[6057]: USER_START pid=6057 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:47.420000 audit[6060]: CRED_ACQ pid=6060 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:47.444900 kernel: audit: type=1105 audit(1765889327.409:898): pid=6057 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:47.444999 kernel: audit: type=1103 audit(1765889327.420:899): pid=6060 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:47.638526 sshd[6060]: Connection closed by 10.200.16.10 port 60504 Dec 16 12:48:47.638967 sshd-session[6057]: pam_unix(sshd:session): session closed for user core Dec 16 12:48:47.639000 audit[6057]: USER_END pid=6057 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:47.644470 systemd[1]: sshd@21-10.200.20.37:22-10.200.16.10:60504.service: Deactivated successfully. Dec 16 12:48:47.640000 audit[6057]: CRED_DISP pid=6057 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:47.647487 systemd[1]: session-24.scope: Deactivated successfully. 
Dec 16 12:48:47.648938 systemd-logind[2110]: Session 24 logged out. Waiting for processes to exit. Dec 16 12:48:47.650431 systemd-logind[2110]: Removed session 24. Dec 16 12:48:47.673801 kernel: audit: type=1106 audit(1765889327.639:900): pid=6057 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:47.673914 kernel: audit: type=1104 audit(1765889327.640:901): pid=6057 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:47.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.37:22-10.200.16.10:60504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:50.006223 kubelet[3668]: E1216 12:48:50.006093 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff5fc5c78-zxwl6" podUID="33f549f5-c190-4fdd-897c-292335e0de6b" Dec 16 12:48:50.006932 kubelet[3668]: E1216 12:48:50.006857 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff5fc5c78-6d7bc" podUID="452df744-6814-429d-baf4-38ff85179742" Dec 16 12:48:52.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.37:22-10.200.16.10:49472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:52.722456 systemd[1]: Started sshd@22-10.200.20.37:22-10.200.16.10:49472.service - OpenSSH per-connection server daemon (10.200.16.10:49472). Dec 16 12:48:52.725643 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:48:52.725716 kernel: audit: type=1130 audit(1765889332.721:903): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.37:22-10.200.16.10:49472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:48:53.130000 audit[6097]: USER_ACCT pid=6097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:53.149781 sshd[6097]: Accepted publickey for core from 10.200.16.10 port 49472 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:48:53.151221 kernel: audit: type=1101 audit(1765889333.130:904): pid=6097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:53.150000 audit[6097]: CRED_ACQ pid=6097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:53.151706 sshd-session[6097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:48:53.181476 kernel: audit: type=1103 audit(1765889333.150:905): pid=6097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:53.181537 kernel: audit: type=1006 audit(1765889333.150:906): pid=6097 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 16 12:48:53.150000 audit[6097]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff28dd130 a2=3 a3=0 items=0 ppid=1 pid=6097 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:53.198907 kernel: audit: type=1300 audit(1765889333.150:906): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff28dd130 a2=3 a3=0 items=0 ppid=1 pid=6097 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:53.199292 systemd-logind[2110]: New session 25 of user core. Dec 16 12:48:53.150000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:53.206447 kernel: audit: type=1327 audit(1765889333.150:906): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:53.207356 systemd[1]: Started session-25.scope - Session 25 of User core. 
Dec 16 12:48:53.209000 audit[6097]: USER_START pid=6097 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:53.227000 audit[6100]: CRED_ACQ pid=6100 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:53.246006 kernel: audit: type=1105 audit(1765889333.209:907): pid=6097 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:53.246059 kernel: audit: type=1103 audit(1765889333.227:908): pid=6100 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:53.435239 sshd[6100]: Connection closed by 10.200.16.10 port 49472 Dec 16 12:48:53.436109 sshd-session[6097]: pam_unix(sshd:session): session closed for user core Dec 16 12:48:53.436000 audit[6097]: USER_END pid=6097 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:53.457184 systemd[1]: sshd@22-10.200.20.37:22-10.200.16.10:49472.service: Deactivated successfully. Dec 16 12:48:53.459378 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 12:48:53.460673 systemd-logind[2110]: Session 25 logged out. Waiting for processes to exit. Dec 16 12:48:53.436000 audit[6097]: CRED_DISP pid=6097 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:53.475518 kernel: audit: type=1106 audit(1765889333.436:909): pid=6097 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:53.475642 kernel: audit: type=1104 audit(1765889333.436:910): pid=6097 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:53.477599 systemd-logind[2110]: Removed session 25. Dec 16 12:48:53.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.37:22-10.200.16.10:49472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:48:56.004908 kubelet[3668]: E1216 12:48:56.004597 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-slfr9" podUID="c0a1bd8d-1ce1-4f04-99f4-1d697ce95b28" Dec 16 12:48:58.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.37:22-10.200.16.10:49488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:58.519453 systemd[1]: Started sshd@23-10.200.20.37:22-10.200.16.10:49488.service - OpenSSH per-connection server daemon (10.200.16.10:49488). Dec 16 12:48:58.522843 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:48:58.522933 kernel: audit: type=1130 audit(1765889338.518:912): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.37:22-10.200.16.10:49488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:58.932000 audit[6113]: USER_ACCT pid=6113 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:58.934575 sshd[6113]: Accepted publickey for core from 10.200.16.10 port 49488 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:48:58.953523 sshd-session[6113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:48:58.951000 audit[6113]: CRED_ACQ pid=6113 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:58.970361 kernel: audit: type=1101 audit(1765889338.932:913): pid=6113 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:58.970494 kernel: audit: type=1103 audit(1765889338.951:914): pid=6113 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:58.980392 kernel: audit: type=1006 audit(1765889338.951:915): pid=6113 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 16 12:48:58.951000 audit[6113]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe83b8c10 a2=3 a3=0 items=0 ppid=1 pid=6113 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:58.996903 kernel: audit: type=1300 audit(1765889338.951:915): arch=c00000b7 syscall=64 
success=yes exit=3 a0=8 a1=ffffe83b8c10 a2=3 a3=0 items=0 ppid=1 pid=6113 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:58.951000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:59.001141 systemd-logind[2110]: New session 26 of user core. Dec 16 12:48:59.003754 kernel: audit: type=1327 audit(1765889338.951:915): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:59.004907 kubelet[3668]: E1216 12:48:59.004875 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5449d854d8-xfsgn" podUID="afc8cbd3-cce9-4afd-951f-828ed80d9307" Dec 16 12:48:59.007409 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 16 12:48:59.009000 audit[6113]: USER_START pid=6113 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:59.031400 kernel: audit: type=1105 audit(1765889339.009:916): pid=6113 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:59.030000 audit[6116]: CRED_ACQ pid=6116 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:59.048256 kernel: audit: type=1103 audit(1765889339.030:917): pid=6116 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:59.243297 sshd[6116]: Connection closed by 10.200.16.10 port 49488 Dec 16 12:48:59.244380 sshd-session[6113]: pam_unix(sshd:session): session closed for user core Dec 16 12:48:59.244000 audit[6113]: USER_END pid=6113 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:59.245000 audit[6113]: CRED_DISP pid=6113 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:59.266820 systemd-logind[2110]: Session 26 logged out. Waiting for processes to exit. 
Dec 16 12:48:59.267305 systemd[1]: sshd@23-10.200.20.37:22-10.200.16.10:49488.service: Deactivated successfully. Dec 16 12:48:59.268822 systemd[1]: session-26.scope: Deactivated successfully. Dec 16 12:48:59.280823 kernel: audit: type=1106 audit(1765889339.244:918): pid=6113 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:59.280881 kernel: audit: type=1104 audit(1765889339.245:919): pid=6113 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:59.283094 systemd-logind[2110]: Removed session 26. Dec 16 12:48:59.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.37:22-10.200.16.10:49488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:49:00.007308 kubelet[3668]: E1216 12:49:00.007153 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-858764cc7c-zqhnl" podUID="d2bf104b-aa2c-4645-b1eb-bf5f9ef78c24" Dec 16 12:49:00.007717 kubelet[3668]: E1216 12:49:00.007351 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xwwbh" podUID="5bbc1d74-de1f-40b8-bd99-2346a3e2bafe" Dec 16 12:49:01.002986 kubelet[3668]: E1216 12:49:01.002935 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff5fc5c78-zxwl6" podUID="33f549f5-c190-4fdd-897c-292335e0de6b" Dec 16 12:49:02.005244 kubelet[3668]: E1216 12:49:02.004505 3668 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ff5fc5c78-6d7bc" podUID="452df744-6814-429d-baf4-38ff85179742" Dec 16 12:49:04.314445 systemd[1]: Started sshd@24-10.200.20.37:22-10.200.16.10:37444.service - OpenSSH per-connection server daemon (10.200.16.10:37444). Dec 16 12:49:04.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.37:22-10.200.16.10:37444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:49:04.317548 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:49:04.317665 kernel: audit: type=1130 audit(1765889344.314:921): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.37:22-10.200.16.10:37444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:49:04.697000 audit[6135]: USER_ACCT pid=6135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:49:04.713256 sshd[6135]: Accepted publickey for core from 10.200.16.10 port 37444 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:49:04.714370 kernel: audit: type=1101 audit(1765889344.697:922): pid=6135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:49:04.715000 audit[6135]: CRED_ACQ pid=6135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:49:04.716109 sshd-session[6135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:49:04.735121 systemd-logind[2110]: New session 27 of user core. 
Dec 16 12:49:04.739564 kernel: audit: type=1103 audit(1765889344.715:923): pid=6135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:49:04.739624 kernel: audit: type=1006 audit(1765889344.715:924): pid=6135 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Dec 16 12:49:04.715000 audit[6135]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe86c2070 a2=3 a3=0 items=0 ppid=1 pid=6135 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:49:04.756362 kernel: audit: type=1300 audit(1765889344.715:924): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe86c2070 a2=3 a3=0 items=0 ppid=1 pid=6135 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:49:04.715000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:49:04.757378 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 16 12:49:04.763139 kernel: audit: type=1327 audit(1765889344.715:924): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:49:04.764000 audit[6135]: USER_START pid=6135 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:49:04.782000 audit[6138]: CRED_ACQ pid=6138 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:49:04.796461 kernel: audit: type=1105 audit(1765889344.764:925): pid=6135 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:49:04.796518 kernel: audit: type=1103 audit(1765889344.782:926): pid=6138 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:49:04.957248 sshd[6138]: Connection closed by 10.200.16.10 port 37444 Dec 16 12:49:04.957410 sshd-session[6135]: pam_unix(sshd:session): session closed for user core Dec 16 12:49:04.959000 audit[6135]: USER_END pid=6135 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:49:04.963493 systemd-logind[2110]: Session 27 logged out. Waiting for processes to exit. 
Dec 16 12:49:04.964169 systemd[1]: sshd@24-10.200.20.37:22-10.200.16.10:37444.service: Deactivated successfully. Dec 16 12:49:04.969524 systemd[1]: session-27.scope: Deactivated successfully. Dec 16 12:49:04.971016 systemd-logind[2110]: Removed session 27. Dec 16 12:49:04.959000 audit[6135]: CRED_DISP pid=6135 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:49:04.991774 kernel: audit: type=1106 audit(1765889344.959:927): pid=6135 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:49:04.991841 kernel: audit: type=1104 audit(1765889344.959:928): pid=6135 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:49:04.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.37:22-10.200.16.10:37444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'