Sep 12 17:21:09.068197 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490] Sep 12 17:21:09.068215 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Sep 12 15:37:01 -00 2025 Sep 12 17:21:09.068221 kernel: KASLR enabled Sep 12 17:21:09.068225 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Sep 12 17:21:09.068230 kernel: printk: legacy bootconsole [pl11] enabled Sep 12 17:21:09.068234 kernel: efi: EFI v2.7 by EDK II Sep 12 17:21:09.068239 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e018 RNG=0x3fd5f998 MEMRESERVE=0x3e471598 Sep 12 17:21:09.068243 kernel: random: crng init done Sep 12 17:21:09.068247 kernel: secureboot: Secure boot disabled Sep 12 17:21:09.068251 kernel: ACPI: Early table checksum verification disabled Sep 12 17:21:09.068255 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Sep 12 17:21:09.068259 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:21:09.068263 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:21:09.068267 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Sep 12 17:21:09.068272 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:21:09.068276 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:21:09.068281 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:21:09.068285 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:21:09.068290 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:21:09.068294 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:21:09.068298 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Sep 12 17:21:09.068302 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:21:09.068306 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Sep 12 17:21:09.068310 kernel: ACPI: Use ACPI SPCR as default console: No Sep 12 17:21:09.068314 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Sep 12 17:21:09.068318 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug Sep 12 17:21:09.068323 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug Sep 12 17:21:09.068327 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Sep 12 17:21:09.068331 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Sep 12 17:21:09.068336 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Sep 12 17:21:09.068340 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Sep 12 17:21:09.068344 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Sep 12 17:21:09.068348 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Sep 12 17:21:09.068352 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Sep 12 17:21:09.068356 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Sep 12 17:21:09.068360 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x800000000000-0xffffffffffff] hotplug Sep 12 17:21:09.068364 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff] Sep 12 17:21:09.068369 kernel: NODE_DATA(0) allocated [mem 0x1bf7fda00-0x1bf804fff] Sep 12 17:21:09.068373 kernel: Zone ranges: Sep 12 17:21:09.068377 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Sep 12 17:21:09.068383 kernel: DMA32 empty Sep 12 17:21:09.068388 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Sep 12 17:21:09.068392 kernel: Device empty Sep 12 17:21:09.068397 kernel: Movable zone start for each node Sep 12 17:21:09.068401 kernel: Early memory node ranges Sep 12 17:21:09.068406 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Sep 12 17:21:09.068410 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff] Sep 12 17:21:09.068415 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff] Sep 12 17:21:09.068419 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff] Sep 12 17:21:09.068423 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Sep 12 17:21:09.068428 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Sep 12 17:21:09.068432 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Sep 12 17:21:09.068436 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Sep 12 17:21:09.068441 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Sep 12 17:21:09.068445 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Sep 12 17:21:09.068450 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Sep 12 17:21:09.068454 kernel: cma: Reserved 16 MiB at 0x000000003d400000 on node -1 Sep 12 17:21:09.068459 kernel: psci: probing for conduit method from ACPI. Sep 12 17:21:09.068463 kernel: psci: PSCIv1.1 detected in firmware. Sep 12 17:21:09.068468 kernel: psci: Using standard PSCI v0.2 function IDs Sep 12 17:21:09.068472 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Sep 12 17:21:09.068477 kernel: psci: SMC Calling Convention v1.4 Sep 12 17:21:09.068481 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Sep 12 17:21:09.068485 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Sep 12 17:21:09.068490 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Sep 12 17:21:09.068494 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Sep 12 17:21:09.068499 kernel: pcpu-alloc: [0] 0 [0] 1 Sep 12 17:21:09.068503 kernel: Detected PIPT I-cache on CPU0 Sep 12 17:21:09.068508 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) Sep 12 17:21:09.068513 kernel: CPU features: detected: GIC system register CPU interface Sep 12 17:21:09.068517 kernel: CPU features: detected: Spectre-v4 Sep 12 17:21:09.068521 kernel: CPU features: detected: Spectre-BHB Sep 12 17:21:09.068526 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 12 17:21:09.068530 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 12 17:21:09.068535 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 Sep 12 17:21:09.068539 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 12 17:21:09.068543 kernel: alternatives: applying boot alternatives Sep 12 17:21:09.068549 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9b01894f6bb04aff3ec9b8554b3ae56a087d51961f1a01981bc4d4f54ccefc09 Sep 12 17:21:09.068553 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 12 17:21:09.068559 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 12 17:21:09.068563 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 12 17:21:09.068567 kernel: Fallback order for Node 0: 0 Sep 12 17:21:09.068572 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 Sep 12 17:21:09.068576 kernel: Policy zone: Normal Sep 12 17:21:09.068580 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 12 17:21:09.068585 kernel: software IO TLB: area num 2. Sep 12 17:21:09.068589 kernel: software IO TLB: mapped [mem 0x0000000036290000-0x000000003a290000] (64MB) Sep 12 17:21:09.068593 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 12 17:21:09.068598 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 12 17:21:09.068603 kernel: rcu: RCU event tracing is enabled. Sep 12 17:21:09.068608 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 12 17:21:09.068613 kernel: Trampoline variant of Tasks RCU enabled. Sep 12 17:21:09.068617 kernel: Tracing variant of Tasks RCU enabled. Sep 12 17:21:09.068622 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 12 17:21:09.068644 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 12 17:21:09.068648 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 17:21:09.068653 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Sep 12 17:21:09.068657 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 12 17:21:09.068662 kernel: GICv3: 960 SPIs implemented Sep 12 17:21:09.068666 kernel: GICv3: 0 Extended SPIs implemented Sep 12 17:21:09.068670 kernel: Root IRQ handler: gic_handle_irq Sep 12 17:21:09.068675 kernel: GICv3: GICv3 features: 16 PPIs, RSS Sep 12 17:21:09.068680 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 Sep 12 17:21:09.068685 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Sep 12 17:21:09.068689 kernel: ITS: No ITS available, not enabling LPIs Sep 12 17:21:09.068694 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 12 17:21:09.068698 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). Sep 12 17:21:09.068703 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 12 17:21:09.068707 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns Sep 12 17:21:09.068712 kernel: Console: colour dummy device 80x25 Sep 12 17:21:09.068716 kernel: printk: legacy console [tty1] enabled Sep 12 17:21:09.068721 kernel: ACPI: Core revision 20240827 Sep 12 17:21:09.068725 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) Sep 12 17:21:09.068731 kernel: pid_max: default: 32768 minimum: 301 Sep 12 17:21:09.068735 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 12 17:21:09.068740 kernel: landlock: Up and running. Sep 12 17:21:09.068744 kernel: SELinux: Initializing. Sep 12 17:21:09.068749 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 17:21:09.068757 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 17:21:09.068763 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x1a0000e, misc 0x31e1 Sep 12 17:21:09.068768 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0 Sep 12 17:21:09.068773 kernel: Hyper-V: enabling crash_kexec_post_notifiers Sep 12 17:21:09.068777 kernel: rcu: Hierarchical SRCU implementation. Sep 12 17:21:09.068782 kernel: rcu: Max phase no-delay instances is 400. Sep 12 17:21:09.068787 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 12 17:21:09.068792 kernel: Remapping and enabling EFI services. Sep 12 17:21:09.068797 kernel: smp: Bringing up secondary CPUs ... Sep 12 17:21:09.068802 kernel: Detected PIPT I-cache on CPU1 Sep 12 17:21:09.068806 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Sep 12 17:21:09.068812 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] Sep 12 17:21:09.068817 kernel: smp: Brought up 1 node, 2 CPUs Sep 12 17:21:09.068821 kernel: SMP: Total of 2 processors activated. 
Sep 12 17:21:09.068826 kernel: CPU: All CPU(s) started at EL1 Sep 12 17:21:09.068831 kernel: CPU features: detected: 32-bit EL0 Support Sep 12 17:21:09.068836 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Sep 12 17:21:09.068840 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 12 17:21:09.068845 kernel: CPU features: detected: Common not Private translations Sep 12 17:21:09.068850 kernel: CPU features: detected: CRC32 instructions Sep 12 17:21:09.068856 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) Sep 12 17:21:09.068860 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 12 17:21:09.068865 kernel: CPU features: detected: LSE atomic instructions Sep 12 17:21:09.068870 kernel: CPU features: detected: Privileged Access Never Sep 12 17:21:09.068875 kernel: CPU features: detected: Speculation barrier (SB) Sep 12 17:21:09.068879 kernel: CPU features: detected: TLB range maintenance instructions Sep 12 17:21:09.068884 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 12 17:21:09.068889 kernel: CPU features: detected: Scalable Vector Extension Sep 12 17:21:09.068894 kernel: alternatives: applying system-wide alternatives Sep 12 17:21:09.068899 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Sep 12 17:21:09.068904 kernel: SVE: maximum available vector length 16 bytes per vector Sep 12 17:21:09.068909 kernel: SVE: default vector length 16 bytes per vector Sep 12 17:21:09.068914 kernel: Memory: 3959668K/4194160K available (11136K kernel code, 2440K rwdata, 9068K rodata, 38912K init, 1038K bss, 213304K reserved, 16384K cma-reserved) Sep 12 17:21:09.068919 kernel: devtmpfs: initialized Sep 12 17:21:09.068923 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 12 17:21:09.068928 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 12 17:21:09.068933 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 12 17:21:09.068938 kernel: 0 pages in range for non-PLT usage Sep 12 17:21:09.068943 kernel: 508576 pages in range for PLT usage Sep 12 17:21:09.068948 kernel: pinctrl core: initialized pinctrl subsystem Sep 12 17:21:09.068953 kernel: SMBIOS 3.1.0 present. Sep 12 17:21:09.068958 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Sep 12 17:21:09.068962 kernel: DMI: Memory slots populated: 2/2 Sep 12 17:21:09.068967 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 12 17:21:09.068972 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 12 17:21:09.068977 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 12 17:21:09.068982 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 12 17:21:09.068987 kernel: audit: initializing netlink subsys (disabled) Sep 12 17:21:09.068992 kernel: audit: type=2000 audit(0.058:1): state=initialized audit_enabled=0 res=1 Sep 12 17:21:09.068997 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 12 17:21:09.069002 kernel: cpuidle: using governor menu Sep 12 17:21:09.069007 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Sep 12 17:21:09.069011 kernel: ASID allocator initialised with 32768 entries Sep 12 17:21:09.069016 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 12 17:21:09.069021 kernel: Serial: AMBA PL011 UART driver Sep 12 17:21:09.069026 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 12 17:21:09.069031 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 12 17:21:09.069036 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 12 17:21:09.069040 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 12 17:21:09.069045 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 12 17:21:09.069050 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 12 17:21:09.069055 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 12 17:21:09.069060 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 12 17:21:09.069064 kernel: ACPI: Added _OSI(Module Device) Sep 12 17:21:09.069069 kernel: ACPI: Added _OSI(Processor Device) Sep 12 17:21:09.069074 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 12 17:21:09.069079 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 12 17:21:09.069084 kernel: ACPI: Interpreter enabled Sep 12 17:21:09.069089 kernel: ACPI: Using GIC for interrupt routing Sep 12 17:21:09.069094 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Sep 12 17:21:09.069098 kernel: printk: legacy console [ttyAMA0] enabled Sep 12 17:21:09.069103 kernel: printk: legacy bootconsole [pl11] disabled Sep 12 17:21:09.069108 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Sep 12 17:21:09.069113 kernel: ACPI: CPU0 has been hot-added Sep 12 17:21:09.069118 kernel: ACPI: CPU1 has been hot-added Sep 12 17:21:09.069123 kernel: iommu: Default domain type: Translated Sep 12 17:21:09.069127 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 12 17:21:09.069132 kernel: efivars: Registered efivars operations Sep 12 17:21:09.069137 kernel: vgaarb: loaded Sep 12 17:21:09.069141 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 12 17:21:09.069146 kernel: VFS: Disk quotas dquot_6.6.0 Sep 12 17:21:09.069151 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 12 17:21:09.069155 kernel: pnp: PnP ACPI init Sep 12 17:21:09.069161 kernel: pnp: PnP ACPI: found 0 devices Sep 12 17:21:09.069165 kernel: NET: Registered PF_INET protocol family Sep 12 17:21:09.069170 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 12 17:21:09.069175 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 12 17:21:09.069180 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 12 17:21:09.069185 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 12 17:21:09.069190 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 12 17:21:09.069194 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 12 17:21:09.069199 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 17:21:09.069204 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 17:21:09.069209 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 12 17:21:09.069214 kernel: PCI: CLS 0 bytes, default 64 Sep 12 17:21:09.069219 kernel: kvm [1]: HYP mode not available Sep 
12 17:21:09.069223 kernel: Initialise system trusted keyrings Sep 12 17:21:09.069228 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 12 17:21:09.069233 kernel: Key type asymmetric registered Sep 12 17:21:09.069237 kernel: Asymmetric key parser 'x509' registered Sep 12 17:21:09.069242 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 12 17:21:09.069248 kernel: io scheduler mq-deadline registered Sep 12 17:21:09.069252 kernel: io scheduler kyber registered Sep 12 17:21:09.069257 kernel: io scheduler bfq registered Sep 12 17:21:09.069262 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 17:21:09.069267 kernel: thunder_xcv, ver 1.0 Sep 12 17:21:09.069271 kernel: thunder_bgx, ver 1.0 Sep 12 17:21:09.069276 kernel: nicpf, ver 1.0 Sep 12 17:21:09.069280 kernel: nicvf, ver 1.0 Sep 12 17:21:09.069381 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 12 17:21:09.069434 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T17:21:08 UTC (1757697668) Sep 12 17:21:09.069441 kernel: efifb: probing for efifb Sep 12 17:21:09.069445 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Sep 12 17:21:09.069450 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Sep 12 17:21:09.069455 kernel: efifb: scrolling: redraw Sep 12 17:21:09.069460 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 12 17:21:09.069465 kernel: Console: switching to colour frame buffer device 128x48 Sep 12 17:21:09.069469 kernel: fb0: EFI VGA frame buffer device Sep 12 17:21:09.069475 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Sep 12 17:21:09.069480 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 12 17:21:09.069485 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Sep 12 17:21:09.069489 kernel: watchdog: NMI not fully supported Sep 12 17:21:09.069494 kernel: watchdog: Hard watchdog permanently disabled Sep 12 17:21:09.069499 kernel: NET: Registered PF_INET6 protocol family Sep 12 17:21:09.069503 kernel: Segment Routing with IPv6 Sep 12 17:21:09.069508 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 17:21:09.069513 kernel: NET: Registered PF_PACKET protocol family Sep 12 17:21:09.069519 kernel: Key type dns_resolver registered Sep 12 17:21:09.069523 kernel: registered taskstats version 1 Sep 12 17:21:09.069528 kernel: Loading compiled-in X.509 certificates Sep 12 17:21:09.069533 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: 7675c1947f324bc6524fdc1ee0f8f5f343acfea7' Sep 12 17:21:09.069538 kernel: Demotion targets for Node 0: null Sep 12 17:21:09.069542 kernel: Key type .fscrypt registered Sep 12 17:21:09.069547 kernel: Key type fscrypt-provisioning registered Sep 12 17:21:09.069552 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 12 17:21:09.069556 kernel: ima: Allocated hash algorithm: sha1 Sep 12 17:21:09.069562 kernel: ima: No architecture policies found Sep 12 17:21:09.069567 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 12 17:21:09.069571 kernel: clk: Disabling unused clocks Sep 12 17:21:09.069576 kernel: PM: genpd: Disabling unused power domains Sep 12 17:21:09.069581 kernel: Warning: unable to open an initial console. 
Sep 12 17:21:09.069586 kernel: Freeing unused kernel memory: 38912K Sep 12 17:21:09.069590 kernel: Run /init as init process Sep 12 17:21:09.069595 kernel: with arguments: Sep 12 17:21:09.069600 kernel: /init Sep 12 17:21:09.069605 kernel: with environment: Sep 12 17:21:09.069610 kernel: HOME=/ Sep 12 17:21:09.069614 kernel: TERM=linux Sep 12 17:21:09.069619 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 17:21:09.069634 systemd[1]: Successfully made /usr/ read-only. Sep 12 17:21:09.069641 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 17:21:09.069647 systemd[1]: Detected virtualization microsoft. Sep 12 17:21:09.069653 systemd[1]: Detected architecture arm64. Sep 12 17:21:09.069658 systemd[1]: Running in initrd. Sep 12 17:21:09.069663 systemd[1]: No hostname configured, using default hostname. Sep 12 17:21:09.069669 systemd[1]: Hostname set to <localhost>. Sep 12 17:21:09.069674 systemd[1]: Initializing machine ID from random generator. Sep 12 17:21:09.069679 systemd[1]: Queued start job for default target initrd.target. Sep 12 17:21:09.069684 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:21:09.069689 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:21:09.069695 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 17:21:09.069701 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:21:09.069706 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 17:21:09.069712 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 17:21:09.069718 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 17:21:09.069723 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 17:21:09.069728 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:21:09.069734 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:21:09.069739 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:21:09.069745 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:21:09.069750 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:21:09.069755 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:21:09.069760 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:21:09.069765 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:21:09.069771 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 17:21:09.069776 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 12 17:21:09.069782 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:21:09.069787 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:21:09.069792 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:21:09.069797 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:21:09.069802 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 17:21:09.069807 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:21:09.069812 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 17:21:09.069818 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 12 17:21:09.069824 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 17:21:09.069829 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:21:09.069834 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:21:09.069850 systemd-journald[225]: Collecting audit messages is disabled. Sep 12 17:21:09.069864 systemd-journald[225]: Journal started Sep 12 17:21:09.069878 systemd-journald[225]: Runtime Journal (/run/log/journal/fab5ffed51b54ddb90eb9ec6bc0f0e01) is 8M, max 78.5M, 70.5M free. Sep 12 17:21:09.073657 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:21:09.078279 systemd-modules-load[227]: Inserted module 'overlay' Sep 12 17:21:09.096049 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:21:09.103669 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 17:21:09.112120 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 17:21:09.112134 kernel: Bridge firewalling registered Sep 12 17:21:09.113934 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:21:09.114281 systemd-modules-load[227]: Inserted module 'br_netfilter' Sep 12 17:21:09.126772 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 17:21:09.129939 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:21:09.138645 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:21:09.147129 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:21:09.162836 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:21:09.175761 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:21:09.189762 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:21:09.205955 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:21:09.211172 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:21:09.221120 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:21:09.225155 systemd-tmpfiles[256]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 12 17:21:09.237857 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:21:09.249856 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 17:21:09.266325 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 12 17:21:09.275992 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:21:09.289547 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9b01894f6bb04aff3ec9b8554b3ae56a087d51961f1a01981bc4d4f54ccefc09 Sep 12 17:21:09.313690 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:21:09.338332 systemd-resolved[263]: Positive Trust Anchors: Sep 12 17:21:09.338344 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:21:09.338363 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:21:09.343228 systemd-resolved[263]: Defaulting to hostname 'linux'. Sep 12 17:21:09.343935 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:21:09.394398 kernel: SCSI subsystem initialized Sep 12 17:21:09.394414 kernel: Loading iSCSI transport class v2.0-870. Sep 12 17:21:09.353901 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:21:09.406640 kernel: iscsi: registered transport (tcp) Sep 12 17:21:09.419382 kernel: iscsi: registered transport (qla4xxx) Sep 12 17:21:09.419416 kernel: QLogic iSCSI HBA Driver Sep 12 17:21:09.431235 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 17:21:09.445831 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:21:09.452217 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 17:21:09.497826 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 17:21:09.503737 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 17:21:09.563637 kernel: raid6: neonx8 gen() 18562 MB/s Sep 12 17:21:09.582631 kernel: raid6: neonx4 gen() 18563 MB/s Sep 12 17:21:09.601657 kernel: raid6: neonx2 gen() 17061 MB/s Sep 12 17:21:09.621633 kernel: raid6: neonx1 gen() 15112 MB/s Sep 12 17:21:09.640631 kernel: raid6: int64x8 gen() 10543 MB/s Sep 12 17:21:09.659631 kernel: raid6: int64x4 gen() 10612 MB/s Sep 12 17:21:09.679632 kernel: raid6: int64x2 gen() 8992 MB/s Sep 12 17:21:09.700680 kernel: raid6: int64x1 gen() 7016 MB/s Sep 12 17:21:09.700722 kernel: raid6: using algorithm neonx4 gen() 18563 MB/s Sep 12 17:21:09.722771 kernel: raid6: .... 
xor() 15150 MB/s, rmw enabled Sep 12 17:21:09.722813 kernel: raid6: using neon recovery algorithm Sep 12 17:21:09.731587 kernel: xor: measuring software checksum speed Sep 12 17:21:09.731596 kernel: 8regs : 28649 MB/sec Sep 12 17:21:09.734120 kernel: 32regs : 28808 MB/sec Sep 12 17:21:09.737401 kernel: arm64_neon : 37597 MB/sec Sep 12 17:21:09.740166 kernel: xor: using function: arm64_neon (37597 MB/sec) Sep 12 17:21:09.777646 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 17:21:09.781924 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:21:09.791807 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:21:09.821597 systemd-udevd[474]: Using default interface naming scheme 'v255'. Sep 12 17:21:09.824474 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:21:09.837429 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 17:21:09.858482 dracut-pre-trigger[485]: rd.md=0: removing MD RAID activation Sep 12 17:21:09.876665 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:21:09.882930 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:21:09.923002 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:21:09.929949 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 17:21:09.997767 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:21:09.997854 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:21:10.014004 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:21:10.027866 kernel: hv_vmbus: Vmbus version:5.3 Sep 12 17:21:10.027884 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 12 17:21:10.031411 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:21:10.046875 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 12 17:21:10.046892 kernel: hv_vmbus: registering driver hv_netvsc Sep 12 17:21:10.046899 kernel: hv_vmbus: registering driver hid_hyperv Sep 12 17:21:10.055646 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 12 17:21:10.071361 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Sep 12 17:21:10.071402 kernel: hv_vmbus: registering driver hv_storvsc Sep 12 17:21:10.071264 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:21:10.103688 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Sep 12 17:21:10.103704 kernel: scsi host1: storvsc_host_t Sep 12 17:21:10.103832 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 12 17:21:10.103895 kernel: scsi host0: storvsc_host_t Sep 12 17:21:10.103964 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Sep 12 17:21:10.071365 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:21:10.119098 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Sep 12 17:21:10.119231 kernel: hv_netvsc 0022487e-4807-0022-487e-48070022487e eth0: VF slot 1 added Sep 12 17:21:10.088872 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Sep 12 17:21:10.090745 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:21:10.139635 kernel: PTP clock support registered Sep 12 17:21:10.139670 kernel: hv_vmbus: registering driver hv_pci Sep 12 17:21:10.145775 kernel: hv_pci f816b0b5-9ce5-48d2-a2a4-974b6966a87e: PCI VMBus probing: Using version 0x10004 Sep 12 17:21:10.147765 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Sep 12 17:21:10.151278 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:21:10.172395 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 12 17:21:10.172529 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 12 17:21:10.172599 kernel: hv_pci f816b0b5-9ce5-48d2-a2a4-974b6966a87e: PCI host bridge to bus 9ce5:00 Sep 12 17:21:10.172671 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Sep 12 17:21:10.172732 kernel: hv_utils: Registering HyperV Utility Driver Sep 12 17:21:10.172738 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Sep 12 17:21:10.185894 kernel: pci_bus 9ce5:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Sep 12 17:21:10.186011 kernel: hv_vmbus: registering driver hv_utils Sep 12 17:21:10.186019 kernel: hv_utils: Heartbeat IC version 3.0 Sep 12 17:21:10.186030 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Sep 12 17:21:10.186095 kernel: hv_utils: Shutdown IC version 3.2 Sep 12 17:21:10.199322 kernel: pci_bus 9ce5:00: No busn resource found for root bus, will use [bus 00-ff] Sep 12 17:21:10.199436 kernel: hv_utils: TimeSync IC version 4.0 Sep 12 17:21:10.558348 systemd-resolved[263]: Clock change detected. Flushing caches. Sep 12 17:21:10.575328 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#79 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Sep 12 17:21:10.575441 kernel: pci 9ce5:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Sep 12 17:21:10.575460 kernel: pci 9ce5:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Sep 12 17:21:10.582841 kernel: pci 9ce5:00:02.0: enabling Extended Tags Sep 12 17:21:10.595857 kernel: pci 9ce5:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 9ce5:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Sep 12 17:21:10.595887 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:21:10.602198 kernel: pci_bus 9ce5:00: busn_res: [bus 00-ff] end is updated to 00 Sep 12 17:21:10.606550 kernel: pci 9ce5:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Sep 12 17:21:10.606671 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 12 17:21:10.619140 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Sep 12 17:21:10.619290 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 12 17:21:10.621787 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Sep 12 17:21:10.638802 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#300 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 12 17:21:10.658789 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#120 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 12 17:21:10.679474 kernel: mlx5_core 9ce5:00:02.0: enabling device (0000 -> 0002) Sep 12 17:21:10.687117 kernel: mlx5_core 9ce5:00:02.0: PTM is not supported by PCIe Sep 12 17:21:10.687267 kernel: mlx5_core 9ce5:00:02.0: firmware version: 16.30.5006 Sep 12 17:21:10.853297 kernel: hv_netvsc 0022487e-4807-0022-487e-48070022487e eth0: VF registering: eth1 Sep 12 17:21:10.853461 kernel: 
mlx5_core 9ce5:00:02.0 eth1: joined to eth0 Sep 12 17:21:10.858774 kernel: mlx5_core 9ce5:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Sep 12 17:21:10.867789 kernel: mlx5_core 9ce5:00:02.0 enP40165s1: renamed from eth1 Sep 12 17:21:11.781037 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Sep 12 17:21:11.875544 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Sep 12 17:21:11.880483 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Sep 12 17:21:11.896199 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Sep 12 17:21:11.901721 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 17:21:11.999091 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 12 17:21:12.940398 disk-uuid[646]: Warning: The kernel is still using the old partition table. Sep 12 17:21:12.940398 disk-uuid[646]: The new table will be used at the next reboot or after you Sep 12 17:21:12.940398 disk-uuid[646]: run partprobe(8) or kpartx(8) Sep 12 17:21:12.940398 disk-uuid[646]: The operation has completed successfully. Sep 12 17:21:13.410434 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 17:21:13.410522 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 17:21:13.420606 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 17:21:13.426262 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 17:21:13.431631 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:21:13.442057 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:21:13.452890 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:21:13.475654 sh[765]: Success Sep 12 17:21:13.470898 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 17:21:13.495866 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:21:13.518994 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 17:21:13.519018 kernel: device-mapper: uevent: version 1.0.3 Sep 12 17:21:13.519025 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 12 17:21:13.524795 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Sep 12 17:21:14.171945 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 17:21:14.188989 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 17:21:14.196778 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 17:21:14.216783 kernel: BTRFS: device fsid 752cb955-bdfa-486a-ad02-b54d5e61d194 devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (791) Sep 12 17:21:14.226047 kernel: BTRFS info (device dm-0): first mount of filesystem 752cb955-bdfa-486a-ad02-b54d5e61d194 Sep 12 17:21:14.226056 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:21:14.564182 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 17:21:14.564260 kernel: BTRFS info (device dm-0): enabling free space tree Sep 12 17:21:14.593733 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Sep 12 17:21:14.597317 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 12 17:21:14.604622 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 17:21:14.605187 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 17:21:14.625863 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 17:21:14.658803 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (814) Sep 12 17:21:14.668250 kernel: BTRFS info (device sda6): first mount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7 Sep 12 17:21:14.668280 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:21:14.720174 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:21:14.730928 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:21:14.763626 systemd-networkd[954]: lo: Link UP Sep 12 17:21:14.763636 systemd-networkd[954]: lo: Gained carrier Sep 12 17:21:14.764868 systemd-networkd[954]: Enumeration completed Sep 12 17:21:14.766566 systemd-networkd[954]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:21:14.803974 kernel: BTRFS info (device sda6): turning on async discard Sep 12 17:21:14.803997 kernel: BTRFS info (device sda6): enabling free space tree Sep 12 17:21:14.804004 kernel: BTRFS info (device sda6): last unmount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7 Sep 12 17:21:14.766570 systemd-networkd[954]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:21:14.767792 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:21:14.773487 systemd[1]: Reached target network.target - Network. Sep 12 17:21:14.804998 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 17:21:14.814043 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 17:21:14.875309 kernel: mlx5_core 9ce5:00:02.0 enP40165s1: Link up Sep 12 17:21:14.875520 kernel: buffer_size[0]=0 is not enough for lossless buffer Sep 12 17:21:14.908792 kernel: hv_netvsc 0022487e-4807-0022-487e-48070022487e eth0: Data path switched to VF: enP40165s1 Sep 12 17:21:14.909584 systemd-networkd[954]: enP40165s1: Link UP Sep 12 17:21:14.910381 systemd-networkd[954]: eth0: Link UP Sep 12 17:21:14.910479 systemd-networkd[954]: eth0: Gained carrier Sep 12 17:21:14.910491 systemd-networkd[954]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:21:14.927928 systemd-networkd[954]: enP40165s1: Gained carrier Sep 12 17:21:14.940801 systemd-networkd[954]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 12 17:21:16.051990 systemd-networkd[954]: eth0: Gained IPv6LL Sep 12 17:21:16.618315 ignition[962]: Ignition 2.21.0 Sep 12 17:21:16.618330 ignition[962]: Stage: fetch-offline Sep 12 17:21:16.622263 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:21:16.618395 ignition[962]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:21:16.630102 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Sep 12 17:21:16.618400 ignition[962]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:21:16.618476 ignition[962]: parsed url from cmdline: "" Sep 12 17:21:16.618478 ignition[962]: no config URL provided Sep 12 17:21:16.618481 ignition[962]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:21:16.618486 ignition[962]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:21:16.618490 ignition[962]: failed to fetch config: resource requires networking Sep 12 17:21:16.618689 ignition[962]: Ignition finished successfully Sep 12 17:21:16.671157 ignition[972]: Ignition 2.21.0 Sep 12 17:21:16.671162 ignition[972]: Stage: fetch Sep 12 17:21:16.671389 ignition[972]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:21:16.671399 ignition[972]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:21:16.671464 ignition[972]: parsed url from cmdline: "" Sep 12 17:21:16.671466 ignition[972]: no config URL provided Sep 12 17:21:16.671469 ignition[972]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:21:16.671475 ignition[972]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:21:16.671494 ignition[972]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 12 17:21:16.743601 ignition[972]: GET result: OK Sep 12 17:21:16.743659 ignition[972]: config has been read from IMDS userdata Sep 12 17:21:16.743678 ignition[972]: parsing config with SHA512: 25a70b4c85f78fb266c1a42d22abc4399d50d7596e26de72d0f8712b218df582da1316c76b885c5044c33d8089704af3e58e9afd4b366810db6f9cd114a5868c Sep 12 17:21:16.750350 unknown[972]: fetched base config from "system" Sep 12 17:21:16.750375 unknown[972]: fetched base config from "system" Sep 12 17:21:16.750634 ignition[972]: fetch: fetch complete Sep 12 17:21:16.750379 unknown[972]: fetched user config from "azure" Sep 12 17:21:16.750638 ignition[972]: fetch: fetch passed Sep 12 17:21:16.755296 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 12 17:21:16.750685 ignition[972]: Ignition finished successfully Sep 12 17:21:16.763195 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 17:21:16.794820 ignition[979]: Ignition 2.21.0 Sep 12 17:21:16.797061 ignition[979]: Stage: kargs Sep 12 17:21:16.797244 ignition[979]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:21:16.799825 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 17:21:16.797252 ignition[979]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:21:16.807342 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 17:21:16.798142 ignition[979]: kargs: kargs passed Sep 12 17:21:16.798189 ignition[979]: Ignition finished successfully Sep 12 17:21:16.831502 ignition[985]: Ignition 2.21.0 Sep 12 17:21:16.831513 ignition[985]: Stage: disks Sep 12 17:21:16.831860 ignition[985]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:21:16.831870 ignition[985]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:21:16.840567 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 17:21:16.836280 ignition[985]: disks: disks passed Sep 12 17:21:16.848294 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 17:21:16.836323 ignition[985]: Ignition finished successfully Sep 12 17:21:16.856045 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Sep 12 17:21:16.863279 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:21:16.871481 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:21:16.878185 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:21:16.887022 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 17:21:17.021969 systemd-fsck[993]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Sep 12 17:21:17.031648 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 17:21:17.037666 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 17:21:19.364782 kernel: EXT4-fs (sda9): mounted filesystem c902100c-52b7-422c-84ac-d834d4db2717 r/w with ordered data mode. Quota mode: none. Sep 12 17:21:19.365154 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 17:21:19.369349 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 17:21:19.402896 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:21:19.418524 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 17:21:19.423882 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 12 17:21:19.431890 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 17:21:19.432912 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:21:19.469179 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1007) Sep 12 17:21:19.442364 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 17:21:19.481997 kernel: BTRFS info (device sda6): first mount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7 Sep 12 17:21:19.482010 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:21:19.459896 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 17:21:19.495334 kernel: BTRFS info (device sda6): turning on async discard Sep 12 17:21:19.495366 kernel: BTRFS info (device sda6): enabling free space tree Sep 12 17:21:19.496461 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:21:20.144908 coreos-metadata[1009]: Sep 12 17:21:20.144 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 12 17:21:20.151467 coreos-metadata[1009]: Sep 12 17:21:20.151 INFO Fetch successful Sep 12 17:21:20.155616 coreos-metadata[1009]: Sep 12 17:21:20.155 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 12 17:21:20.164920 coreos-metadata[1009]: Sep 12 17:21:20.163 INFO Fetch successful Sep 12 17:21:20.164920 coreos-metadata[1009]: Sep 12 17:21:20.163 INFO wrote hostname ci-4426.1.0-a-dfa5c25729 to /sysroot/etc/hostname Sep 12 17:21:20.165022 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 17:21:20.328276 initrd-setup-root[1037]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 17:21:20.376785 initrd-setup-root[1044]: cut: /sysroot/etc/group: No such file or directory Sep 12 17:21:20.394687 initrd-setup-root[1051]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 17:21:20.399777 initrd-setup-root[1058]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 17:21:21.317673 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Sep 12 17:21:21.323667 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 17:21:21.340229 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 17:21:21.350043 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 17:21:21.358952 kernel: BTRFS info (device sda6): last unmount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7 Sep 12 17:21:21.381254 ignition[1130]: INFO : Ignition 2.21.0 Sep 12 17:21:21.385490 ignition[1130]: INFO : Stage: mount Sep 12 17:21:21.385490 ignition[1130]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:21:21.385490 ignition[1130]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:21:21.381562 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 17:21:21.412974 ignition[1130]: INFO : mount: mount passed Sep 12 17:21:21.412974 ignition[1130]: INFO : Ignition finished successfully Sep 12 17:21:21.392755 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:21:21.397269 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:21:21.422869 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:21:21.457690 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1141) Sep 12 17:21:21.457736 kernel: BTRFS info (device sda6): first mount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7 Sep 12 17:21:21.461851 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:21:21.470538 kernel: BTRFS info (device sda6): turning on async discard Sep 12 17:21:21.470572 kernel: BTRFS info (device sda6): enabling free space tree Sep 12 17:21:21.471706 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:21:21.495330 ignition[1158]: INFO : Ignition 2.21.0 Sep 12 17:21:21.495330 ignition[1158]: INFO : Stage: files Sep 12 17:21:21.500852 ignition[1158]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:21:21.500852 ignition[1158]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:21:21.500852 ignition[1158]: DEBUG : files: compiled without relabeling support, skipping Sep 12 17:21:21.527940 ignition[1158]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 17:21:21.527940 ignition[1158]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 17:21:21.592930 ignition[1158]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 17:21:21.592930 ignition[1158]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 17:21:21.592930 ignition[1158]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 17:21:21.592812 unknown[1158]: wrote ssh authorized keys file for user: core Sep 12 17:21:21.637673 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 12 17:21:21.637673 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Sep 12 17:21:21.668401 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 17:21:21.875121 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 12 17:21:21.875121 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] 
writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 17:21:21.889965 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 12 17:21:22.144926 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 12 17:21:22.206137 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 17:21:22.214209 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 12 17:21:22.214209 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 17:21:22.214209 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:21:22.214209 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:21:22.214209 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:21:22.214209 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:21:22.214209 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:21:22.214209 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:21:22.458927 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:21:22.465955 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:21:22.465955 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 12 17:21:22.465955 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 12 17:21:22.465955 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 12 17:21:22.465955 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Sep 12 17:21:22.929241 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 12 17:21:23.107379 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 12 17:21:23.107379 ignition[1158]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 12 17:21:23.143086 ignition[1158]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:21:23.150772 ignition[1158]: INFO : files: op(c): op(d): [finished] writing unit 
"prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:21:23.150772 ignition[1158]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 12 17:21:23.150772 ignition[1158]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Sep 12 17:21:23.150772 ignition[1158]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 17:21:23.184342 ignition[1158]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:21:23.184342 ignition[1158]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:21:23.184342 ignition[1158]: INFO : files: files passed Sep 12 17:21:23.184342 ignition[1158]: INFO : Ignition finished successfully Sep 12 17:21:23.159414 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 17:21:23.168885 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 17:21:23.201187 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 17:21:23.216919 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 17:21:23.219006 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 17:21:23.237338 initrd-setup-root-after-ignition[1188]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:21:23.237338 initrd-setup-root-after-ignition[1188]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:21:23.251811 initrd-setup-root-after-ignition[1192]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:21:23.249479 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:21:23.254658 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 17:21:23.265449 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 17:21:23.303051 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 17:21:23.303129 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 17:21:23.311648 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 17:21:23.319995 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 17:21:23.327251 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 17:21:23.328822 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 17:21:23.357299 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:21:23.363210 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 17:21:23.391394 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:21:23.396287 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:21:23.405016 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 17:21:23.412888 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 17:21:23.412973 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:21:23.424133 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
Sep 12 17:21:23.428876 systemd[1]: Stopped target basic.target - Basic System. Sep 12 17:21:23.436785 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 17:21:23.444783 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:21:23.452528 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 17:21:23.460869 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 12 17:21:23.469762 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 17:21:23.477792 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:21:23.486628 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 17:21:23.494252 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 17:21:23.502622 systemd[1]: Stopped target swap.target - Swaps. Sep 12 17:21:23.509816 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 17:21:23.509912 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:21:23.520402 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:21:23.524915 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:21:23.533056 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 17:21:23.533112 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:21:23.542073 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 17:21:23.542149 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 17:21:23.554882 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 17:21:23.554958 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:21:23.559945 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 17:21:23.560012 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 17:21:23.630953 ignition[1212]: INFO : Ignition 2.21.0 Sep 12 17:21:23.630953 ignition[1212]: INFO : Stage: umount Sep 12 17:21:23.630953 ignition[1212]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:21:23.630953 ignition[1212]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:21:23.567276 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 12 17:21:23.663298 ignition[1212]: INFO : umount: umount passed Sep 12 17:21:23.663298 ignition[1212]: INFO : Ignition finished successfully Sep 12 17:21:23.567335 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 17:21:23.578467 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 17:21:23.591887 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 17:21:23.592006 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:21:23.601746 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 17:21:23.614991 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 17:21:23.615110 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:21:23.625047 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 17:21:23.625127 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 12 17:21:23.642849 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 17:21:23.642932 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 17:21:23.651177 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:21:23.651240 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 17:21:23.660811 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 17:21:23.660882 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 17:21:23.667432 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 17:21:23.667467 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 17:21:23.675237 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 12 17:21:23.675269 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 12 17:21:23.682372 systemd[1]: Stopped target network.target - Network. Sep 12 17:21:23.690297 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 17:21:23.690346 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:21:23.698755 systemd[1]: Stopped target paths.target - Path Units. Sep 12 17:21:23.702314 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 17:21:23.707239 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:21:23.712346 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 17:21:23.720508 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 17:21:23.727880 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 17:21:23.727918 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:21:23.735800 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 17:21:23.735829 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:21:23.744108 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 17:21:23.744147 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 17:21:23.751807 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 17:21:23.751836 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 17:21:23.760429 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 17:21:23.768034 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 17:21:23.776617 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 17:21:23.777127 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 17:21:23.777197 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 17:21:23.784862 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 17:21:23.784928 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 17:21:23.996371 kernel: hv_netvsc 0022487e-4807-0022-487e-48070022487e eth0: Data path switched from VF: enP40165s1 Sep 12 17:21:23.799116 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 12 17:21:23.799296 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 17:21:23.799386 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:21:23.811096 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 12 17:21:23.812553 systemd[1]: Stopped target network-pre.target - Preparation for Network. 
Sep 12 17:21:23.819264 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:21:23.819294 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:21:23.828006 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 17:21:23.828054 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 17:21:23.837115 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 17:21:23.849399 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:21:23.849456 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:21:23.859203 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:21:23.859239 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:21:23.870919 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 17:21:23.870959 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 17:21:23.875486 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 17:21:23.875520 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:21:23.890746 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:21:23.897642 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 12 17:21:23.897693 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 12 17:21:23.922115 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 17:21:23.922224 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:21:23.927520 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 17:21:23.927554 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 17:21:23.935700 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 17:21:23.935721 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:21:23.943869 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 17:21:23.943907 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:21:23.956645 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:21:23.956685 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:21:23.967899 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:21:23.967929 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:21:23.986860 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 17:21:23.998812 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 12 17:21:23.998867 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:21:24.010511 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 17:21:24.010547 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:21:24.025867 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:21:24.025907 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 12 17:21:24.035749 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 12 17:21:24.035802 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 12 17:21:24.269773 systemd-journald[225]: Received SIGTERM from PID 1 (systemd). Sep 12 17:21:24.035828 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 17:21:24.036042 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 17:21:24.036119 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 17:21:24.134036 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 17:21:24.134141 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 17:21:24.142593 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 17:21:24.152401 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 17:21:24.186744 systemd[1]: Switching root. Sep 12 17:21:24.300445 systemd-journald[225]: Journal stopped Sep 12 17:21:38.986700 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 17:21:38.986717 kernel: SELinux: policy capability open_perms=1 Sep 12 17:21:38.986725 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 17:21:38.986731 kernel: SELinux: policy capability always_check_network=0 Sep 12 17:21:38.986737 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 17:21:38.986742 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 17:21:38.986748 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 17:21:38.986753 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 17:21:38.986758 kernel: SELinux: policy capability userspace_initial_context=0 Sep 12 17:21:38.986763 kernel: audit: type=1403 audit(1757697685.885:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 17:21:38.986778 systemd[1]: Successfully loaded SELinux policy in 257.686ms. Sep 12 17:21:38.986786 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.280ms. Sep 12 17:21:38.986793 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 17:21:38.986799 systemd[1]: Detected virtualization microsoft. Sep 12 17:21:38.986805 systemd[1]: Detected architecture arm64. Sep 12 17:21:38.986812 systemd[1]: Detected first boot. Sep 12 17:21:38.986818 systemd[1]: Hostname set to <ci-4426.1.0-a-dfa5c25729>. Sep 12 17:21:38.986824 systemd[1]: Initializing machine ID from random generator. Sep 12 17:21:38.986830 zram_generator::config[1256]: No configuration found. Sep 12 17:21:38.986836 kernel: NET: Registered PF_VSOCK protocol family Sep 12 17:21:38.986842 systemd[1]: Populated /etc with preset unit settings. Sep 12 17:21:38.986848 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 12 17:21:38.986855 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 17:21:38.986861 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 17:21:38.986867 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
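Note: the fourteen-odd second jump between "Journal stopped" (17:21:24.300445) and the first entry stamped after the root switch (17:21:38.986700) spans the switch-root and the SELinux policy load. A small sketch for measuring such gaps from timestamps in this console format; the timestamps carry no year, so one is supplied purely for parsing (2025, matching the dates elsewhere in this log).

    from datetime import datetime

    # Timestamps copied verbatim from the surrounding journal lines; the console
    # format ("Sep 12 17:21:24.300445") carries no year, so 2025 is prepended
    # only so that strptime can parse the strings.
    FMT = "%Y %b %d %H:%M:%S.%f"
    journal_stopped = datetime.strptime("2025 Sep 12 17:21:24.300445", FMT)
    first_after_switch = datetime.strptime("2025 Sep 12 17:21:38.986700", FMT)
    gap = (first_after_switch - journal_stopped).total_seconds()
    print(f"gap across switch-root: {gap:.3f} s")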
Sep 12 17:21:38.986873 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 17:21:38.986879 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 17:21:38.986885 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 17:21:38.986891 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 17:21:38.986898 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 17:21:38.986904 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 17:21:38.986910 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 17:21:38.986916 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 17:21:38.986922 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:21:38.986928 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:21:38.986934 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 17:21:38.986940 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 17:21:38.986946 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 17:21:38.986953 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:21:38.986960 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 12 17:21:38.986968 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:21:38.986974 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:21:38.986980 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 17:21:38.986986 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 17:21:38.986992 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 17:21:38.986999 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 17:21:38.987006 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:21:38.987012 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:21:38.987018 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:21:38.987024 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:21:38.987030 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 17:21:38.987036 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 17:21:38.987043 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 12 17:21:38.987050 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:21:38.987056 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:21:38.987062 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:21:38.987068 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 17:21:38.987074 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 17:21:38.987081 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Sep 12 17:21:38.987087 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 17:21:38.987095 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 17:21:38.987101 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 17:21:38.987107 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 17:21:38.987113 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 17:21:38.987120 systemd[1]: Reached target machines.target - Containers. Sep 12 17:21:38.987126 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 17:21:38.987133 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:21:38.987139 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:21:38.987146 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 17:21:38.987152 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:21:38.987158 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:21:38.987164 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:21:38.987170 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 17:21:38.987176 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:21:38.987183 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 17:21:38.987190 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 17:21:38.987196 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 17:21:38.987202 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 17:21:38.987208 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 17:21:38.987215 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:21:38.987221 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:21:38.987228 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:21:38.987234 kernel: fuse: init (API version 7.41) Sep 12 17:21:38.987240 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 17:21:38.987247 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 17:21:38.987253 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 12 17:21:38.987259 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:21:38.987265 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 17:21:38.987271 systemd[1]: Stopped verity-setup.service. Sep 12 17:21:38.987277 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 17:21:38.987283 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 17:21:38.987290 systemd[1]: Mounted media.mount - External Media Directory. 
Sep 12 17:21:38.987297 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 17:21:38.987303 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 17:21:38.987309 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 17:21:38.987315 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:21:38.987322 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 17:21:38.987328 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 17:21:38.987334 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:21:38.987340 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:21:38.987347 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:21:38.987353 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:21:38.987360 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 17:21:38.987366 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 17:21:38.987372 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 17:21:38.987378 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 17:21:38.987385 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 17:21:38.987391 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 17:21:38.987397 kernel: loop: module loaded Sep 12 17:21:38.987403 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:21:38.987409 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:21:38.987415 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:21:38.987421 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 17:21:38.987428 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:21:38.987434 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 17:21:38.987440 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 17:21:38.987446 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:21:38.987453 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 12 17:21:38.987473 systemd-journald[1336]: Collecting audit messages is disabled. Sep 12 17:21:38.987488 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 17:21:38.987495 systemd-journald[1336]: Journal started Sep 12 17:21:38.987510 systemd-journald[1336]: Runtime Journal (/run/log/journal/47e7d8171ebd42fe9db79e46acff1c16) is 8M, max 78.5M, 70.5M free. Sep 12 17:21:37.202447 systemd[1]: Queued start job for default target multi-user.target. Sep 12 17:21:37.209129 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 12 17:21:37.209473 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 17:21:37.209708 systemd[1]: systemd-journald.service: Consumed 2.415s CPU time. Sep 12 17:21:39.000887 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 12 17:21:39.030774 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 17:21:39.030812 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:21:39.050785 kernel: ACPI: bus type drm_connector registered Sep 12 17:21:39.050816 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 17:21:39.177954 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 17:21:39.186325 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:21:39.187167 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:21:39.187306 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:21:39.191725 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 12 17:21:39.196684 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:21:39.210275 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 17:21:39.262829 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:21:39.268310 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:21:39.531832 kernel: loop0: detected capacity change from 0 to 211168 Sep 12 17:21:39.562873 systemd-journald[1336]: Time spent on flushing to /var/log/journal/47e7d8171ebd42fe9db79e46acff1c16 is 311.210ms for 941 entries. Sep 12 17:21:39.562873 systemd-journald[1336]: System Journal (/var/log/journal/47e7d8171ebd42fe9db79e46acff1c16) is 8M, max 2.6G, 2.6G free. Sep 12 17:21:44.156289 systemd-journald[1336]: Received client request to flush runtime journal. Sep 12 17:21:44.156357 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 17:21:39.963730 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 17:21:39.969115 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 17:21:39.977659 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 12 17:21:40.165384 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:21:41.216982 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 17:21:41.223066 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 17:21:44.157746 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 17:21:44.216785 kernel: loop1: detected capacity change from 0 to 119320 Sep 12 17:21:46.019433 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 17:21:46.024906 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:21:46.080932 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 17:21:46.081412 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 12 17:21:46.174441 systemd-tmpfiles[1410]: ACLs are not supported, ignoring. Sep 12 17:21:46.174451 systemd-tmpfiles[1410]: ACLs are not supported, ignoring. Sep 12 17:21:46.177109 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
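Note: the systemd-journald status lines above (the runtime journal size report and "Time spent on flushing to /var/log/journal/... is 311.210ms for 941 entries") are the quickest health check when reviewing a capture like this one. A short sketch that scans a saved console log for those flush reports; the regex is written against the exact wording shown above and may need adjusting for other journald versions.

    import re
    import sys

    # Matches the journald report seen above, e.g.
    #   "Time spent on flushing to /var/log/journal/<machine-id> is 311.210ms for 941 entries."
    FLUSH_RE = re.compile(
        r"Time spent on flushing to (?P<path>\S+) is (?P<ms>[\d.]+)ms for (?P<entries>\d+) entries\."
    )

    log_path = sys.argv[1] if len(sys.argv) > 1 else "/dev/stdin"
    with open(log_path, errors="replace") as fh:
        for line in fh:
            m = FLUSH_RE.search(line)
            if m:
                print(f"{m['path']}: {m['ms']} ms for {m['entries']} entries")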
Sep 12 17:21:46.276791 kernel: loop2: detected capacity change from 0 to 29264 Sep 12 17:21:46.677837 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 17:21:46.684180 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:21:46.708638 systemd-udevd[1416]: Using default interface naming scheme 'v255'. Sep 12 17:21:46.850784 kernel: loop3: detected capacity change from 0 to 100608 Sep 12 17:21:47.468790 kernel: loop4: detected capacity change from 0 to 211168 Sep 12 17:21:47.490783 kernel: loop5: detected capacity change from 0 to 119320 Sep 12 17:21:47.500781 kernel: loop6: detected capacity change from 0 to 29264 Sep 12 17:21:47.511781 kernel: loop7: detected capacity change from 0 to 100608 Sep 12 17:21:47.518260 (sd-merge)[1419]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Sep 12 17:21:47.519390 (sd-merge)[1419]: Merged extensions into '/usr'. Sep 12 17:21:47.521802 systemd[1]: Reload requested from client PID 1360 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 17:21:47.521814 systemd[1]: Reloading... Sep 12 17:21:47.605988 zram_generator::config[1473]: No configuration found. Sep 12 17:21:47.735825 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 12 17:21:47.774773 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 17:21:47.774831 kernel: hv_vmbus: registering driver hv_balloon Sep 12 17:21:47.788520 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Sep 12 17:21:47.793473 kernel: hv_balloon: Memory hot add disabled on ARM64 Sep 12 17:21:47.859018 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 12 17:21:47.859327 systemd[1]: Reloading finished in 337 ms. Sep 12 17:21:47.864418 kernel: hv_vmbus: registering driver hyperv_fb Sep 12 17:21:47.864487 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Sep 12 17:21:47.870087 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Sep 12 17:21:47.873246 kernel: Console: switching to colour dummy device 80x25 Sep 12 17:21:47.878942 kernel: Console: switching to colour frame buffer device 128x48 Sep 12 17:21:47.882573 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:21:47.887689 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 17:21:47.913592 systemd[1]: Starting ensure-sysext.service... Sep 12 17:21:47.919921 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:21:47.927911 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:21:47.945523 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:21:47.957144 systemd[1]: Reload requested from client PID 1565 ('systemctl') (unit ensure-sysext.service)... Sep 12 17:21:47.957240 systemd[1]: Reloading... Sep 12 17:21:47.971923 systemd-tmpfiles[1567]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 12 17:21:47.971952 systemd-tmpfiles[1567]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 12 17:21:47.972721 systemd-tmpfiles[1567]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
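Note: the sd-merge lines above show systemd-sysext overlaying the extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-azure' onto /usr, with the kubernetes image reached through the /etc/extensions/kubernetes.raw symlink that Ignition wrote earlier. A sketch for listing the candidate images on a booted machine; only /etc/extensions appears in this log, and the other directories are the usual systemd-sysext search paths, assumed here rather than taken from the log.

    from pathlib import Path

    # /etc/extensions holds the kubernetes.raw symlink written by Ignition earlier
    # in this log; /run/extensions and /var/lib/extensions are standard sysext
    # search directories and are assumed, not shown in the log.
    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    for directory in SEARCH_DIRS:
        root = Path(directory)
        if not root.is_dir():
            continue
        for entry in sorted(root.iterdir()):
            suffix = f" -> {entry.resolve()}" if entry.is_symlink() else ""
            print(f"{entry}{suffix}")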
Sep 12 17:21:47.974036 systemd-tmpfiles[1567]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 17:21:47.974596 systemd-tmpfiles[1567]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 17:21:47.975583 systemd-tmpfiles[1567]: ACLs are not supported, ignoring. Sep 12 17:21:47.975858 systemd-tmpfiles[1567]: ACLs are not supported, ignoring. Sep 12 17:21:48.027421 zram_generator::config[1618]: No configuration found. Sep 12 17:21:48.173523 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 12 17:21:48.178720 systemd[1]: Reloading finished in 221 ms. Sep 12 17:21:48.218594 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 17:21:48.232822 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 17:21:48.238427 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:21:48.239955 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:21:48.249376 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:21:48.254752 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:21:48.260075 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:21:48.260170 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:21:48.260750 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:21:48.260934 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:21:48.267341 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:21:48.267459 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:21:48.275453 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:21:48.276460 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:21:48.281941 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:21:48.288743 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:21:48.288851 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:21:48.289379 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:21:48.289518 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:21:48.294109 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:21:48.296904 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:21:48.301860 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:21:48.301973 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Sep 12 17:21:48.310507 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:21:48.311372 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:21:48.323946 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:21:48.329301 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:21:48.340415 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:21:48.344538 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:21:48.344614 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:21:48.344708 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 17:21:48.349599 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:21:48.349739 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:21:48.354568 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:21:48.354679 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:21:48.359278 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:21:48.359387 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:21:48.364639 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:21:48.364738 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:21:48.370688 systemd[1]: Finished ensure-sysext.service. Sep 12 17:21:48.376126 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:21:48.376175 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:21:48.471256 systemd-tmpfiles[1567]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:21:48.471266 systemd-tmpfiles[1567]: Skipping /boot Sep 12 17:21:48.477075 systemd-tmpfiles[1567]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:21:48.477086 systemd-tmpfiles[1567]: Skipping /boot Sep 12 17:21:48.519362 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 17:21:48.619778 kernel: MACsec IEEE 802.1AE Sep 12 17:21:48.764562 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:21:48.770666 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:21:49.400351 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 17:21:49.411346 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 17:21:49.417877 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:21:49.423119 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 17:21:49.434833 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 17:21:49.472531 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Sep 12 17:21:49.672239 systemd-networkd[1566]: lo: Link UP Sep 12 17:21:49.672247 systemd-networkd[1566]: lo: Gained carrier Sep 12 17:21:49.673235 systemd-networkd[1566]: Enumeration completed Sep 12 17:21:49.673312 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:21:49.678018 systemd-networkd[1566]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:21:49.678026 systemd-networkd[1566]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:21:49.679444 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 12 17:21:49.687005 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 17:21:49.723948 systemd-resolved[1749]: Positive Trust Anchors: Sep 12 17:21:49.723964 systemd-resolved[1749]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:21:49.723985 systemd-resolved[1749]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:21:49.734601 kernel: mlx5_core 9ce5:00:02.0 enP40165s1: Link up Sep 12 17:21:49.734818 kernel: buffer_size[0]=0 is not enough for lossless buffer Sep 12 17:21:49.755776 kernel: hv_netvsc 0022487e-4807-0022-487e-48070022487e eth0: Data path switched to VF: enP40165s1 Sep 12 17:21:49.755992 systemd-networkd[1566]: enP40165s1: Link UP Sep 12 17:21:49.756087 systemd-networkd[1566]: eth0: Link UP Sep 12 17:21:49.756092 systemd-networkd[1566]: eth0: Gained carrier Sep 12 17:21:49.756106 systemd-networkd[1566]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:21:49.765956 systemd-networkd[1566]: enP40165s1: Gained carrier Sep 12 17:21:49.777791 systemd-networkd[1566]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 12 17:21:50.264349 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 12 17:21:50.322821 systemd-resolved[1749]: Using system hostname 'ci-4426.1.0-a-dfa5c25729'. Sep 12 17:21:50.323900 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:21:50.328622 systemd[1]: Reached target network.target - Network. Sep 12 17:21:50.332160 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:21:50.777397 augenrules[1770]: No rules Sep 12 17:21:50.779153 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:21:50.779341 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:21:50.931946 systemd-networkd[1566]: eth0: Gained IPv6LL Sep 12 17:21:50.933641 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:21:50.938899 systemd[1]: Reached target network-online.target - Network is Online. 
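Note: the DHCPv4 line above is the most useful single entry when debugging Azure guest networking, with the lease handed out by 168.63.129.16, the same platform address the metadata agent contacts later in this log. A sketch that extracts the interface, address, prefix, gateway and DHCP server from a captured log; the pattern follows the systemd-networkd wording shown above.

    import re
    import sys

    # Matches the systemd-networkd line logged above:
    #   "eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16"
    DHCP_RE = re.compile(
        r"(?P<ifname>\S+): DHCPv4 address (?P<addr>[\d.]+)/(?P<prefix>\d+), "
        r"gateway (?P<gw>[\d.]+) acquired from (?P<server>[\d.]+)"
    )

    for line in sys.stdin:
        m = DHCP_RE.search(line)
        if m:
            print(f"{m['ifname']}: {m['addr']}/{m['prefix']} via {m['gw']} (lease from {m['server']})")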
Sep 12 17:21:51.366888 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 17:21:55.313885 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:22:00.562985 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 17:22:00.569365 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:22:06.318285 ldconfig[1350]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 17:22:06.357380 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 17:22:06.363455 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 17:22:06.402598 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 17:22:06.407215 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:22:06.411520 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 17:22:06.416342 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 17:22:06.421615 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 17:22:06.425942 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 17:22:06.430843 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 17:22:06.435743 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 17:22:06.435777 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:22:06.439160 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:22:06.461457 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 17:22:06.466703 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 17:22:06.471579 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 12 17:22:06.476901 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 12 17:22:06.482124 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 12 17:22:06.487587 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 17:22:06.505674 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 12 17:22:06.510929 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 17:22:06.515144 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:22:06.518877 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:22:06.522279 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:22:06.522298 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:22:06.561958 systemd[1]: Starting chronyd.service - NTP client/server... Sep 12 17:22:06.573848 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 17:22:06.578682 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
Sep 12 17:22:06.584876 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 17:22:06.590888 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 17:22:06.602592 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 17:22:06.608857 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 17:22:06.612727 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 17:22:06.614222 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Sep 12 17:22:06.619104 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Sep 12 17:22:06.619886 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:22:06.631018 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 17:22:06.635891 KVP[1797]: KVP starting; pid is:1797 Sep 12 17:22:06.637133 jq[1795]: false Sep 12 17:22:06.638893 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:22:06.642304 chronyd[1787]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Sep 12 17:22:06.644551 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 17:22:06.654961 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 17:22:06.662907 KVP[1797]: KVP LIC Version: 3.1 Sep 12 17:22:06.663924 kernel: hv_utils: KVP IC version 4.0 Sep 12 17:22:06.664662 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 17:22:06.675895 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 17:22:06.680474 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 17:22:06.680791 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:22:06.681175 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 17:22:06.687927 chronyd[1787]: Timezone right/UTC failed leap second check, ignoring Sep 12 17:22:06.688058 chronyd[1787]: Loaded seccomp filter (level 2) Sep 12 17:22:06.688969 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 17:22:06.694417 systemd[1]: Started chronyd.service - NTP client/server. Sep 12 17:22:06.699426 jq[1815]: true Sep 12 17:22:06.700248 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 17:22:06.705709 extend-filesystems[1796]: Found /dev/sda6 Sep 12 17:22:06.709109 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 17:22:06.709253 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 17:22:06.711950 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 17:22:06.712114 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 17:22:06.721075 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 17:22:06.723531 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Sep 12 17:22:06.729567 extend-filesystems[1796]: Found /dev/sda9 Sep 12 17:22:06.743514 extend-filesystems[1796]: Checking size of /dev/sda9 Sep 12 17:22:06.749323 jq[1825]: true Sep 12 17:22:06.750145 (ntainerd)[1826]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:22:06.751801 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:22:06.808523 update_engine[1812]: I20250912 17:22:06.808454 1812 main.cc:92] Flatcar Update Engine starting Sep 12 17:22:06.839910 extend-filesystems[1796]: Old size kept for /dev/sda9 Sep 12 17:22:06.835500 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:22:06.835645 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:22:06.840623 systemd-logind[1809]: New seat seat0. Sep 12 17:22:06.842727 systemd-logind[1809]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 17:22:06.848885 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 17:22:06.871884 tar[1824]: linux-arm64/LICENSE Sep 12 17:22:06.873258 tar[1824]: linux-arm64/helm Sep 12 17:22:06.918419 sshd_keygen[1827]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:22:06.932347 bash[1862]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:22:06.935232 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:22:06.944323 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 12 17:22:06.958519 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:22:06.966639 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:22:06.974200 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Sep 12 17:22:07.008656 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:22:07.008824 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:22:07.039221 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:22:07.062335 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Sep 12 17:22:07.088263 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:22:07.097934 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:22:07.103925 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 12 17:22:07.110093 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:22:07.227735 dbus-daemon[1790]: [system] SELinux support is enabled Sep 12 17:22:07.227908 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 17:22:07.232493 update_engine[1812]: I20250912 17:22:07.230716 1812 update_check_scheduler.cc:74] Next update check in 2m35s Sep 12 17:22:07.235605 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 17:22:07.235624 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 17:22:07.241863 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Sep 12 17:22:07.242029 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 17:22:07.249574 systemd[1]: Started update-engine.service - Update Engine. Sep 12 17:22:07.249662 dbus-daemon[1790]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 12 17:22:07.258035 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 17:22:07.290035 tar[1824]: linux-arm64/README.md Sep 12 17:22:07.300714 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:22:07.300878 coreos-metadata[1789]: Sep 12 17:22:07.300 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 12 17:22:07.307751 coreos-metadata[1789]: Sep 12 17:22:07.307 INFO Fetch successful Sep 12 17:22:07.308262 coreos-metadata[1789]: Sep 12 17:22:07.308 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Sep 12 17:22:07.312114 coreos-metadata[1789]: Sep 12 17:22:07.311 INFO Fetch successful Sep 12 17:22:07.312515 coreos-metadata[1789]: Sep 12 17:22:07.312 INFO Fetching http://168.63.129.16/machine/39202d5c-7063-41c7-871f-3c2d42cfa14a/d35db3b7%2Dac19%2D4dc0%2Dab72%2D61c16cd40e14.%5Fci%2D4426.1.0%2Da%2Ddfa5c25729?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Sep 12 17:22:07.313800 coreos-metadata[1789]: Sep 12 17:22:07.313 INFO Fetch successful Sep 12 17:22:07.314006 coreos-metadata[1789]: Sep 12 17:22:07.313 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Sep 12 17:22:07.322795 coreos-metadata[1789]: Sep 12 17:22:07.321 INFO Fetch successful Sep 12 17:22:07.336797 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 17:22:07.342177 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:22:07.454992 locksmithd[1952]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:22:07.487588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
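Note: coreos-metadata above reads the VM size from the Azure Instance Metadata Service at 169.254.169.254. The same query can be reproduced from inside the guest; the sketch below reuses the URL exactly as logged and adds the Metadata: true header that Azure IMDS requires, which the log itself does not show.

    import urllib.request

    # URL copied from the coreos-metadata fetch logged above; the Metadata header
    # is required by Azure IMDS but is not visible in the log.
    URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
           "?api-version=2017-08-01&format=text")

    request = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(request, timeout=5) as response:
        print(response.read().decode())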
Sep 12 17:22:07.670637 containerd[1826]: time="2025-09-12T17:22:07Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 12 17:22:07.671790 containerd[1826]: time="2025-09-12T17:22:07.671189600Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 12 17:22:07.676471 containerd[1826]: time="2025-09-12T17:22:07.676276184Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.904µs" Sep 12 17:22:07.676551 containerd[1826]: time="2025-09-12T17:22:07.676531792Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 12 17:22:07.676614 containerd[1826]: time="2025-09-12T17:22:07.676602960Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 12 17:22:07.676811 containerd[1826]: time="2025-09-12T17:22:07.676794048Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 12 17:22:07.676885 containerd[1826]: time="2025-09-12T17:22:07.676873080Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 12 17:22:07.676938 containerd[1826]: time="2025-09-12T17:22:07.676927328Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 17:22:07.677043 containerd[1826]: time="2025-09-12T17:22:07.677027352Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 17:22:07.677095 containerd[1826]: time="2025-09-12T17:22:07.677084616Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 17:22:07.677316 containerd[1826]: time="2025-09-12T17:22:07.677297896Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 17:22:07.677379 containerd[1826]: time="2025-09-12T17:22:07.677366848Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 17:22:07.677427 containerd[1826]: time="2025-09-12T17:22:07.677414600Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 17:22:07.677469 containerd[1826]: time="2025-09-12T17:22:07.677458288Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 12 17:22:07.677597 containerd[1826]: time="2025-09-12T17:22:07.677582200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 12 17:22:07.677838 containerd[1826]: time="2025-09-12T17:22:07.677820520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 17:22:07.677930 containerd[1826]: time="2025-09-12T17:22:07.677916848Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Sep 12 17:22:07.677980 containerd[1826]: time="2025-09-12T17:22:07.677967752Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 12 17:22:07.678051 containerd[1826]: time="2025-09-12T17:22:07.678038032Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 12 17:22:07.678271 containerd[1826]: time="2025-09-12T17:22:07.678253840Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 12 17:22:07.678388 containerd[1826]: time="2025-09-12T17:22:07.678373528Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:22:07.694981 containerd[1826]: time="2025-09-12T17:22:07.694960056Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 12 17:22:07.695097 containerd[1826]: time="2025-09-12T17:22:07.695083984Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 12 17:22:07.695189 containerd[1826]: time="2025-09-12T17:22:07.695175248Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 12 17:22:07.695266 containerd[1826]: time="2025-09-12T17:22:07.695254216Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 12 17:22:07.695330 containerd[1826]: time="2025-09-12T17:22:07.695318856Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 12 17:22:07.695380 containerd[1826]: time="2025-09-12T17:22:07.695370120Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 12 17:22:07.695438 containerd[1826]: time="2025-09-12T17:22:07.695427200Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 12 17:22:07.695492 containerd[1826]: time="2025-09-12T17:22:07.695476536Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 12 17:22:07.695586 containerd[1826]: time="2025-09-12T17:22:07.695571632Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 12 17:22:07.695642 containerd[1826]: time="2025-09-12T17:22:07.695622136Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 12 17:22:07.695695 containerd[1826]: time="2025-09-12T17:22:07.695683040Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 12 17:22:07.695750 containerd[1826]: time="2025-09-12T17:22:07.695738264Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 12 17:22:07.695927 containerd[1826]: time="2025-09-12T17:22:07.695912296Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 12 17:22:07.696006 containerd[1826]: time="2025-09-12T17:22:07.695996232Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 12 17:22:07.696060 containerd[1826]: time="2025-09-12T17:22:07.696043504Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 12 17:22:07.696116 containerd[1826]: time="2025-09-12T17:22:07.696103616Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 Sep 12 17:22:07.696172 containerd[1826]: time="2025-09-12T17:22:07.696161576Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 12 17:22:07.696219 containerd[1826]: time="2025-09-12T17:22:07.696207992Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 12 17:22:07.696277 containerd[1826]: time="2025-09-12T17:22:07.696266240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 12 17:22:07.696334 containerd[1826]: time="2025-09-12T17:22:07.696315496Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 12 17:22:07.696389 containerd[1826]: time="2025-09-12T17:22:07.696375944Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 12 17:22:07.696443 containerd[1826]: time="2025-09-12T17:22:07.696432096Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 12 17:22:07.696499 containerd[1826]: time="2025-09-12T17:22:07.696483664Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 12 17:22:07.696590 containerd[1826]: time="2025-09-12T17:22:07.696579312Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 12 17:22:07.696649 containerd[1826]: time="2025-09-12T17:22:07.696638832Z" level=info msg="Start snapshots syncer" Sep 12 17:22:07.696735 containerd[1826]: time="2025-09-12T17:22:07.696722728Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 12 17:22:07.696981 containerd[1826]: time="2025-09-12T17:22:07.696955296Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 12 17:22:07.697132 containerd[1826]: time="2025-09-12T17:22:07.697118136Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 12 17:22:07.697270 containerd[1826]: time="2025-09-12T17:22:07.697246824Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 12 17:22:07.697436 containerd[1826]: time="2025-09-12T17:22:07.697420896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 12 17:22:07.697512 containerd[1826]: time="2025-09-12T17:22:07.697502448Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 12 17:22:07.697582 containerd[1826]: time="2025-09-12T17:22:07.697569296Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 12 17:22:07.697631 containerd[1826]: time="2025-09-12T17:22:07.697619368Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 12 17:22:07.697684 containerd[1826]: time="2025-09-12T17:22:07.697675168Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 12 17:22:07.697729 containerd[1826]: time="2025-09-12T17:22:07.697719968Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 12 17:22:07.697786 containerd[1826]: time="2025-09-12T17:22:07.697776824Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 12 17:22:07.697868 containerd[1826]: time="2025-09-12T17:22:07.697856688Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 12 17:22:07.697933 containerd[1826]: 
time="2025-09-12T17:22:07.697922728Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 12 17:22:07.697979 containerd[1826]: time="2025-09-12T17:22:07.697969432Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 12 17:22:07.698063 containerd[1826]: time="2025-09-12T17:22:07.698050768Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 17:22:07.698124 containerd[1826]: time="2025-09-12T17:22:07.698112392Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 17:22:07.698168 containerd[1826]: time="2025-09-12T17:22:07.698158064Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 17:22:07.698217 containerd[1826]: time="2025-09-12T17:22:07.698206640Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 17:22:07.698260 containerd[1826]: time="2025-09-12T17:22:07.698250216Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 12 17:22:07.698309 containerd[1826]: time="2025-09-12T17:22:07.698300040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 12 17:22:07.698356 containerd[1826]: time="2025-09-12T17:22:07.698345344Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 12 17:22:07.698416 containerd[1826]: time="2025-09-12T17:22:07.698406632Z" level=info msg="runtime interface created" Sep 12 17:22:07.698452 containerd[1826]: time="2025-09-12T17:22:07.698445064Z" level=info msg="created NRI interface" Sep 12 17:22:07.698499 containerd[1826]: time="2025-09-12T17:22:07.698490120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 12 17:22:07.698539 containerd[1826]: time="2025-09-12T17:22:07.698531656Z" level=info msg="Connect containerd service" Sep 12 17:22:07.698609 containerd[1826]: time="2025-09-12T17:22:07.698599624Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:22:07.699306 containerd[1826]: time="2025-09-12T17:22:07.699251000Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:22:07.842925 (kubelet)[1975]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:22:08.169407 kubelet[1975]: E0912 17:22:08.169298 1975 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:22:08.171459 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:22:08.171568 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 12 17:22:08.171837 systemd[1]: kubelet.service: Consumed 538ms CPU time, 256.2M memory peak. Sep 12 17:22:08.871351 containerd[1826]: time="2025-09-12T17:22:08.871311592Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:22:08.871628 containerd[1826]: time="2025-09-12T17:22:08.871367696Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:22:08.871628 containerd[1826]: time="2025-09-12T17:22:08.871378472Z" level=info msg="Start subscribing containerd event" Sep 12 17:22:08.871628 containerd[1826]: time="2025-09-12T17:22:08.871398368Z" level=info msg="Start recovering state" Sep 12 17:22:08.871628 containerd[1826]: time="2025-09-12T17:22:08.871463512Z" level=info msg="Start event monitor" Sep 12 17:22:08.871628 containerd[1826]: time="2025-09-12T17:22:08.871471632Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:22:08.871628 containerd[1826]: time="2025-09-12T17:22:08.871476720Z" level=info msg="Start streaming server" Sep 12 17:22:08.871628 containerd[1826]: time="2025-09-12T17:22:08.871483800Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 12 17:22:08.871628 containerd[1826]: time="2025-09-12T17:22:08.871488536Z" level=info msg="runtime interface starting up..." Sep 12 17:22:08.871628 containerd[1826]: time="2025-09-12T17:22:08.871492488Z" level=info msg="starting plugins..." Sep 12 17:22:08.871628 containerd[1826]: time="2025-09-12T17:22:08.871504656Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 12 17:22:08.871628 containerd[1826]: time="2025-09-12T17:22:08.871591696Z" level=info msg="containerd successfully booted in 1.201231s" Sep 12 17:22:08.871879 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:22:08.876478 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:22:08.881879 systemd[1]: Startup finished in 1.590s (kernel) + 16.657s (initrd) + 43.252s (userspace) = 1min 1.501s. Sep 12 17:22:10.697585 login[1949]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Sep 12 17:22:10.723863 login[1948]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:22:10.732360 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:22:10.733130 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:22:10.735156 systemd-logind[1809]: New session 1 of user core. Sep 12 17:22:11.023073 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:22:11.026844 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:22:11.048479 (systemd)[2003]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:22:11.050236 systemd-logind[1809]: New session c1 of user core. Sep 12 17:22:11.698682 login[1949]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:22:11.702267 systemd-logind[1809]: New session 2 of user core. Sep 12 17:22:11.791946 systemd[2003]: Queued start job for default target default.target. Sep 12 17:22:11.799513 systemd[2003]: Created slice app.slice - User Application Slice. Sep 12 17:22:11.799627 systemd[2003]: Reached target paths.target - Paths. Sep 12 17:22:11.799712 systemd[2003]: Reached target timers.target - Timers. Sep 12 17:22:11.800686 systemd[2003]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Sep 12 17:22:11.807100 systemd[2003]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:22:11.807140 systemd[2003]: Reached target sockets.target - Sockets. Sep 12 17:22:11.807167 systemd[2003]: Reached target basic.target - Basic System. Sep 12 17:22:11.807186 systemd[2003]: Reached target default.target - Main User Target. Sep 12 17:22:11.807203 systemd[2003]: Startup finished in 752ms. Sep 12 17:22:11.807371 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:22:11.808863 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:22:11.809312 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:22:12.361229 waagent[1945]: 2025-09-12T17:22:12.357117Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Sep 12 17:22:12.361763 waagent[1945]: 2025-09-12T17:22:12.361727Z INFO Daemon Daemon OS: flatcar 4426.1.0 Sep 12 17:22:12.365114 waagent[1945]: 2025-09-12T17:22:12.365083Z INFO Daemon Daemon Python: 3.11.13 Sep 12 17:22:12.368375 waagent[1945]: 2025-09-12T17:22:12.368283Z INFO Daemon Daemon Run daemon Sep 12 17:22:12.371420 waagent[1945]: 2025-09-12T17:22:12.371392Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4426.1.0' Sep 12 17:22:12.378076 waagent[1945]: 2025-09-12T17:22:12.378043Z INFO Daemon Daemon Using waagent for provisioning Sep 12 17:22:12.382084 waagent[1945]: 2025-09-12T17:22:12.382056Z INFO Daemon Daemon Activate resource disk Sep 12 17:22:12.385381 waagent[1945]: 2025-09-12T17:22:12.385352Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 12 17:22:12.393211 waagent[1945]: 2025-09-12T17:22:12.393180Z INFO Daemon Daemon Found device: None Sep 12 17:22:12.396468 waagent[1945]: 2025-09-12T17:22:12.396439Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 12 17:22:12.402384 waagent[1945]: 2025-09-12T17:22:12.402355Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 12 17:22:12.410576 waagent[1945]: 2025-09-12T17:22:12.410545Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 12 17:22:12.414690 waagent[1945]: 2025-09-12T17:22:12.414662Z INFO Daemon Daemon Running default provisioning handler Sep 12 17:22:12.422655 waagent[1945]: 2025-09-12T17:22:12.422613Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Sep 12 17:22:12.432250 waagent[1945]: 2025-09-12T17:22:12.432214Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 12 17:22:12.439036 waagent[1945]: 2025-09-12T17:22:12.439006Z INFO Daemon Daemon cloud-init is enabled: False Sep 12 17:22:12.442494 waagent[1945]: 2025-09-12T17:22:12.442472Z INFO Daemon Daemon Copying ovf-env.xml Sep 12 17:22:13.572519 waagent[1945]: 2025-09-12T17:22:13.572369Z INFO Daemon Daemon Successfully mounted dvd Sep 12 17:22:13.711809 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Sep 12 17:22:13.713990 waagent[1945]: 2025-09-12T17:22:13.713949Z INFO Daemon Daemon Detect protocol endpoint Sep 12 17:22:13.717717 waagent[1945]: 2025-09-12T17:22:13.717685Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 12 17:22:13.721821 waagent[1945]: 2025-09-12T17:22:13.721795Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Sep 12 17:22:13.726463 waagent[1945]: 2025-09-12T17:22:13.726442Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 12 17:22:13.730284 waagent[1945]: 2025-09-12T17:22:13.730259Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 12 17:22:13.733934 waagent[1945]: 2025-09-12T17:22:13.733913Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 12 17:22:14.616429 waagent[1945]: 2025-09-12T17:22:14.616384Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 12 17:22:14.621319 waagent[1945]: 2025-09-12T17:22:14.621299Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 12 17:22:14.625752 waagent[1945]: 2025-09-12T17:22:14.625715Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 12 17:22:14.913712 waagent[1945]: 2025-09-12T17:22:14.909068Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 12 17:22:14.914016 waagent[1945]: 2025-09-12T17:22:14.913980Z INFO Daemon Daemon Forcing an update of the goal state. Sep 12 17:22:14.920999 waagent[1945]: 2025-09-12T17:22:14.920963Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 12 17:22:14.954139 waagent[1945]: 2025-09-12T17:22:14.954109Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Sep 12 17:22:14.958287 waagent[1945]: 2025-09-12T17:22:14.958251Z INFO Daemon Sep 12 17:22:14.960482 waagent[1945]: 2025-09-12T17:22:14.960455Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 404e0f9d-baba-4bcd-a8ec-bd30a5300587 eTag: 7569025441630374102 source: Fabric] Sep 12 17:22:14.968611 waagent[1945]: 2025-09-12T17:22:14.968580Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Sep 12 17:22:14.973239 waagent[1945]: 2025-09-12T17:22:14.973209Z INFO Daemon Sep 12 17:22:14.975235 waagent[1945]: 2025-09-12T17:22:14.975205Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Sep 12 17:22:14.982596 waagent[1945]: 2025-09-12T17:22:14.982570Z INFO Daemon Daemon Downloading artifacts profile blob Sep 12 17:22:15.037172 waagent[1945]: 2025-09-12T17:22:15.037140Z INFO Daemon Downloaded certificate {'thumbprint': '9B76671289DC342AADCC0A89FDE7C04F303E8199', 'hasPrivateKey': True} Sep 12 17:22:15.044199 waagent[1945]: 2025-09-12T17:22:15.044165Z INFO Daemon Fetch goal state completed Sep 12 17:22:15.051886 waagent[1945]: 2025-09-12T17:22:15.051861Z INFO Daemon Daemon Starting provisioning Sep 12 17:22:15.056311 waagent[1945]: 2025-09-12T17:22:15.056274Z INFO Daemon Daemon Handle ovf-env.xml. Sep 12 17:22:15.059919 waagent[1945]: 2025-09-12T17:22:15.059891Z INFO Daemon Daemon Set hostname [ci-4426.1.0-a-dfa5c25729] Sep 12 17:22:15.671790 waagent[1945]: 2025-09-12T17:22:15.671666Z INFO Daemon Daemon Publish hostname [ci-4426.1.0-a-dfa5c25729] Sep 12 17:22:15.676784 waagent[1945]: 2025-09-12T17:22:15.676477Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 12 17:22:15.681060 waagent[1945]: 2025-09-12T17:22:15.681031Z INFO Daemon Daemon Primary interface is [eth0] Sep 12 17:22:15.770816 systemd-networkd[1566]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 12 17:22:15.770822 systemd-networkd[1566]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:22:15.770852 systemd-networkd[1566]: eth0: DHCP lease lost Sep 12 17:22:15.771722 waagent[1945]: 2025-09-12T17:22:15.771684Z INFO Daemon Daemon Create user account if not exists Sep 12 17:22:15.775834 waagent[1945]: 2025-09-12T17:22:15.775799Z INFO Daemon Daemon User core already exists, skip useradd Sep 12 17:22:15.780088 waagent[1945]: 2025-09-12T17:22:15.780051Z INFO Daemon Daemon Configure sudoer Sep 12 17:22:15.794791 systemd-networkd[1566]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 12 17:22:16.355840 waagent[1945]: 2025-09-12T17:22:16.355740Z INFO Daemon Daemon Configure sshd Sep 12 17:22:16.704477 waagent[1945]: 2025-09-12T17:22:16.704359Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Sep 12 17:22:16.713465 waagent[1945]: 2025-09-12T17:22:16.713421Z INFO Daemon Daemon Deploy ssh public key. Sep 12 17:22:16.829499 waagent[1945]: 2025-09-12T17:22:16.829461Z INFO Daemon Daemon Provisioning complete Sep 12 17:22:16.840630 waagent[1945]: 2025-09-12T17:22:16.840601Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 12 17:22:16.845053 waagent[1945]: 2025-09-12T17:22:16.845026Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Sep 12 17:22:16.851768 waagent[1945]: 2025-09-12T17:22:16.851745Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Sep 12 17:22:16.948865 waagent[2057]: 2025-09-12T17:22:16.948811Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Sep 12 17:22:16.949813 waagent[2057]: 2025-09-12T17:22:16.949207Z INFO ExtHandler ExtHandler OS: flatcar 4426.1.0 Sep 12 17:22:16.949813 waagent[2057]: 2025-09-12T17:22:16.949266Z INFO ExtHandler ExtHandler Python: 3.11.13 Sep 12 17:22:16.949813 waagent[2057]: 2025-09-12T17:22:16.949302Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Sep 12 17:22:17.712798 waagent[2057]: 2025-09-12T17:22:17.712367Z INFO ExtHandler ExtHandler Distro: flatcar-4426.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Sep 12 17:22:17.712798 waagent[2057]: 2025-09-12T17:22:17.712572Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 12 17:22:17.712798 waagent[2057]: 2025-09-12T17:22:17.712617Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 12 17:22:17.812037 waagent[2057]: 2025-09-12T17:22:17.717532Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 12 17:22:17.812037 waagent[2057]: 2025-09-12T17:22:17.721245Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 12 17:22:17.812037 waagent[2057]: 2025-09-12T17:22:17.721559Z INFO ExtHandler Sep 12 17:22:17.812037 waagent[2057]: 2025-09-12T17:22:17.721611Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 30246b7c-2428-40eb-b0f9-05c517327436 eTag: 7569025441630374102 source: Fabric] Sep 12 17:22:17.812037 waagent[2057]: 2025-09-12T17:22:17.721820Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Sep 12 17:22:17.812037 waagent[2057]: 2025-09-12T17:22:17.722191Z INFO ExtHandler Sep 12 17:22:17.812037 waagent[2057]: 2025-09-12T17:22:17.722229Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 12 17:22:17.812037 waagent[2057]: 2025-09-12T17:22:17.724284Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 12 17:22:18.116311 waagent[2057]: 2025-09-12T17:22:18.116093Z INFO ExtHandler Downloaded certificate {'thumbprint': '9B76671289DC342AADCC0A89FDE7C04F303E8199', 'hasPrivateKey': True} Sep 12 17:22:18.116626 waagent[2057]: 2025-09-12T17:22:18.116515Z INFO ExtHandler Fetch goal state completed Sep 12 17:22:18.126936 waagent[2057]: 2025-09-12T17:22:18.126896Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.1 11 Feb 2025 (Library: OpenSSL 3.4.1 11 Feb 2025) Sep 12 17:22:18.129988 waagent[2057]: 2025-09-12T17:22:18.129948Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2057 Sep 12 17:22:18.130082 waagent[2057]: 2025-09-12T17:22:18.130059Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Sep 12 17:22:18.130305 waagent[2057]: 2025-09-12T17:22:18.130280Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Sep 12 17:22:18.131357 waagent[2057]: 2025-09-12T17:22:18.131326Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4426.1.0', '', 'Flatcar Container Linux by Kinvolk'] Sep 12 17:22:18.131656 waagent[2057]: 2025-09-12T17:22:18.131629Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4426.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Sep 12 17:22:18.131757 waagent[2057]: 2025-09-12T17:22:18.131737Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Sep 12 17:22:18.132184 waagent[2057]: 2025-09-12T17:22:18.132158Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 12 17:22:18.250821 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:22:18.252916 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:22:18.825945 waagent[2057]: 2025-09-12T17:22:18.825593Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 12 17:22:18.825945 waagent[2057]: 2025-09-12T17:22:18.825749Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 12 17:22:18.829859 waagent[2057]: 2025-09-12T17:22:18.829834Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 12 17:22:18.833867 systemd[1]: Reload requested from client PID 2077 ('systemctl') (unit waagent.service)... Sep 12 17:22:18.833958 systemd[1]: Reloading... Sep 12 17:22:18.901808 zram_generator::config[2120]: No configuration found. Sep 12 17:22:19.030480 systemd[1]: Reloading finished in 196 ms. Sep 12 17:22:19.040971 waagent[2057]: 2025-09-12T17:22:19.040371Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 12 17:22:19.040971 waagent[2057]: 2025-09-12T17:22:19.040484Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 12 17:22:22.020704 waagent[2057]: 2025-09-12T17:22:22.020581Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. 
Environment thread will set it up. Sep 12 17:22:22.199428 waagent[2057]: 2025-09-12T17:22:22.198831Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Sep 12 17:22:22.308556 waagent[2057]: 2025-09-12T17:22:22.199637Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 12 17:22:22.308556 waagent[2057]: 2025-09-12T17:22:22.199854Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 12 17:22:22.308556 waagent[2057]: 2025-09-12T17:22:22.199911Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 12 17:22:22.308556 waagent[2057]: 2025-09-12T17:22:22.200057Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Sep 12 17:22:22.308556 waagent[2057]: 2025-09-12T17:22:22.200186Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 12 17:22:22.308556 waagent[2057]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 12 17:22:22.308556 waagent[2057]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Sep 12 17:22:22.308556 waagent[2057]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 12 17:22:22.308556 waagent[2057]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 12 17:22:22.308556 waagent[2057]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 12 17:22:22.308556 waagent[2057]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 12 17:22:22.308556 waagent[2057]: 2025-09-12T17:22:22.200533Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 12 17:22:22.308556 waagent[2057]: 2025-09-12T17:22:22.200583Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 12 17:22:22.308556 waagent[2057]: 2025-09-12T17:22:22.200663Z INFO EnvHandler ExtHandler Configure routes Sep 12 17:22:22.308556 waagent[2057]: 2025-09-12T17:22:22.200697Z INFO EnvHandler ExtHandler Gateway:None Sep 12 17:22:22.308556 waagent[2057]: 2025-09-12T17:22:22.200719Z INFO EnvHandler ExtHandler Routes:None Sep 12 17:22:22.308556 waagent[2057]: 2025-09-12T17:22:22.200915Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 12 17:22:22.308556 waagent[2057]: 2025-09-12T17:22:22.201242Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 12 17:22:22.308556 waagent[2057]: 2025-09-12T17:22:22.201212Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 12 17:22:22.308830 waagent[2057]: 2025-09-12T17:22:22.201848Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 12 17:22:22.308830 waagent[2057]: 2025-09-12T17:22:22.201893Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Sep 12 17:22:22.308830 waagent[2057]: 2025-09-12T17:22:22.202086Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 12 17:22:22.311041 waagent[2057]: 2025-09-12T17:22:22.311008Z INFO ExtHandler ExtHandler Sep 12 17:22:22.311086 waagent[2057]: 2025-09-12T17:22:22.311069Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: b0aa1eee-b011-405f-a401-887aa9cb2337 correlation 33f91ed2-a7a1-4e35-b91b-1de089983e93 created: 2025-09-12T17:20:24.928329Z] Sep 12 17:22:22.311339 waagent[2057]: 2025-09-12T17:22:22.311312Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Sep 12 17:22:22.312517 waagent[2057]: 2025-09-12T17:22:22.312343Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Sep 12 17:22:22.624652 waagent[2057]: 2025-09-12T17:22:22.624552Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Sep 12 17:22:22.624652 waagent[2057]: Try `iptables -h' or 'iptables --help' for more information.) Sep 12 17:22:22.624944 waagent[2057]: 2025-09-12T17:22:22.624912Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 39AEFF41-74A3-4E3D-9C5F-DC9470E344A1;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Sep 12 17:22:22.924588 waagent[2057]: 2025-09-12T17:22:22.924240Z INFO MonitorHandler ExtHandler Network interfaces: Sep 12 17:22:22.924588 waagent[2057]: Executing ['ip', '-a', '-o', 'link']: Sep 12 17:22:22.924588 waagent[2057]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 12 17:22:22.924588 waagent[2057]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7e:48:07 brd ff:ff:ff:ff:ff:ff Sep 12 17:22:22.924588 waagent[2057]: 3: enP40165s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7e:48:07 brd ff:ff:ff:ff:ff:ff\ altname enP40165p0s2 Sep 12 17:22:22.924588 waagent[2057]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 12 17:22:22.924588 waagent[2057]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 12 17:22:22.924588 waagent[2057]: 2: eth0 inet 10.200.20.38/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 12 17:22:22.924588 waagent[2057]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 12 17:22:22.924588 waagent[2057]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Sep 12 17:22:22.924588 waagent[2057]: 2: eth0 inet6 fe80::222:48ff:fe7e:4807/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 12 17:22:23.129014 waagent[2057]: 2025-09-12T17:22:23.128982Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Sep 12 17:22:23.129014 waagent[2057]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 12 17:22:23.129014 waagent[2057]: pkts bytes target prot opt in out source destination Sep 12 17:22:23.129014 waagent[2057]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 12 17:22:23.129014 waagent[2057]: pkts bytes target prot opt in out source destination Sep 12 17:22:23.129014 waagent[2057]: Chain OUTPUT 
(policy ACCEPT 5 packets, 400 bytes) Sep 12 17:22:23.129014 waagent[2057]: pkts bytes target prot opt in out source destination Sep 12 17:22:23.129014 waagent[2057]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 12 17:22:23.129014 waagent[2057]: 5 647 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 12 17:22:23.129014 waagent[2057]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 12 17:22:23.131840 waagent[2057]: 2025-09-12T17:22:23.131563Z INFO EnvHandler ExtHandler Current Firewall rules: Sep 12 17:22:23.131840 waagent[2057]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 12 17:22:23.131840 waagent[2057]: pkts bytes target prot opt in out source destination Sep 12 17:22:23.131840 waagent[2057]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 12 17:22:23.131840 waagent[2057]: pkts bytes target prot opt in out source destination Sep 12 17:22:23.131840 waagent[2057]: Chain OUTPUT (policy ACCEPT 8 packets, 749 bytes) Sep 12 17:22:23.131840 waagent[2057]: pkts bytes target prot opt in out source destination Sep 12 17:22:23.131840 waagent[2057]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 12 17:22:23.131840 waagent[2057]: 6 699 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 12 17:22:23.131840 waagent[2057]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 12 17:22:23.131840 waagent[2057]: 2025-09-12T17:22:23.131748Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 12 17:22:25.045351 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:22:25.047840 (kubelet)[2210]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:22:25.080654 kubelet[2210]: E0912 17:22:25.080603 2210 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:22:25.083167 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:22:25.083271 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:22:25.083685 systemd[1]: kubelet.service: Consumed 110ms CPU time, 107.5M memory peak. Sep 12 17:22:30.475826 chronyd[1787]: Selected source PHC0 Sep 12 17:22:35.250850 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 17:22:35.252091 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:22:35.347316 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:22:35.350015 (kubelet)[2225]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:22:35.468813 kubelet[2225]: E0912 17:22:35.468763 2225 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:22:35.470720 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:22:35.470831 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 12 17:22:35.471245 systemd[1]: kubelet.service: Consumed 96ms CPU time, 105.2M memory peak. Sep 12 17:22:35.909667 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Sep 12 17:22:41.785832 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:22:41.787043 systemd[1]: Started sshd@0-10.200.20.38:22-10.200.16.10:51234.service - OpenSSH per-connection server daemon (10.200.16.10:51234). Sep 12 17:22:42.351336 sshd[2232]: Accepted publickey for core from 10.200.16.10 port 51234 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E Sep 12 17:22:42.352268 sshd-session[2232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:22:42.355518 systemd-logind[1809]: New session 3 of user core. Sep 12 17:22:42.366865 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:22:42.742868 systemd[1]: Started sshd@1-10.200.20.38:22-10.200.16.10:51242.service - OpenSSH per-connection server daemon (10.200.16.10:51242). Sep 12 17:22:43.158378 sshd[2238]: Accepted publickey for core from 10.200.16.10 port 51242 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E Sep 12 17:22:43.159294 sshd-session[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:22:43.162441 systemd-logind[1809]: New session 4 of user core. Sep 12 17:22:43.174023 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:22:43.476024 sshd[2241]: Connection closed by 10.200.16.10 port 51242 Sep 12 17:22:43.475883 sshd-session[2238]: pam_unix(sshd:session): session closed for user core Sep 12 17:22:43.478571 systemd-logind[1809]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:22:43.478692 systemd[1]: sshd@1-10.200.20.38:22-10.200.16.10:51242.service: Deactivated successfully. Sep 12 17:22:43.480246 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:22:43.481818 systemd-logind[1809]: Removed session 4. Sep 12 17:22:43.553863 systemd[1]: Started sshd@2-10.200.20.38:22-10.200.16.10:51246.service - OpenSSH per-connection server daemon (10.200.16.10:51246). Sep 12 17:22:43.968363 sshd[2247]: Accepted publickey for core from 10.200.16.10 port 51246 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E Sep 12 17:22:43.969336 sshd-session[2247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:22:43.972639 systemd-logind[1809]: New session 5 of user core. Sep 12 17:22:43.982871 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:22:44.266425 sshd[2250]: Connection closed by 10.200.16.10 port 51246 Sep 12 17:22:44.266849 sshd-session[2247]: pam_unix(sshd:session): session closed for user core Sep 12 17:22:44.270162 systemd-logind[1809]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:22:44.270294 systemd[1]: sshd@2-10.200.20.38:22-10.200.16.10:51246.service: Deactivated successfully. Sep 12 17:22:44.271587 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:22:44.273550 systemd-logind[1809]: Removed session 5. Sep 12 17:22:44.357792 systemd[1]: Started sshd@3-10.200.20.38:22-10.200.16.10:51262.service - OpenSSH per-connection server daemon (10.200.16.10:51262). 
Sep 12 17:22:44.816694 sshd[2256]: Accepted publickey for core from 10.200.16.10 port 51262 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E Sep 12 17:22:44.817648 sshd-session[2256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:22:44.820912 systemd-logind[1809]: New session 6 of user core. Sep 12 17:22:44.828890 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:22:45.140696 sshd[2259]: Connection closed by 10.200.16.10 port 51262 Sep 12 17:22:45.140569 sshd-session[2256]: pam_unix(sshd:session): session closed for user core Sep 12 17:22:45.143432 systemd[1]: sshd@3-10.200.20.38:22-10.200.16.10:51262.service: Deactivated successfully. Sep 12 17:22:45.144888 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:22:45.146368 systemd-logind[1809]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:22:45.147248 systemd-logind[1809]: Removed session 6. Sep 12 17:22:45.227963 systemd[1]: Started sshd@4-10.200.20.38:22-10.200.16.10:51264.service - OpenSSH per-connection server daemon (10.200.16.10:51264). Sep 12 17:22:45.500907 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 12 17:22:45.503028 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:22:45.642228 sshd[2265]: Accepted publickey for core from 10.200.16.10 port 51264 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E Sep 12 17:22:45.642620 sshd-session[2265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:22:45.646039 systemd-logind[1809]: New session 7 of user core. Sep 12 17:22:45.663962 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:22:46.036323 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:22:46.039007 (kubelet)[2277]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:22:46.063056 kubelet[2277]: E0912 17:22:46.063021 2277 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:22:46.064780 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:22:46.064881 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:22:46.065311 systemd[1]: kubelet.service: Consumed 97ms CPU time, 104.7M memory peak. Sep 12 17:22:49.406001 sudo[2272]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:22:49.406211 sudo[2272]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:22:49.436058 sudo[2272]: pam_unix(sudo:session): session closed for user root Sep 12 17:22:49.519371 sshd[2271]: Connection closed by 10.200.16.10 port 51264 Sep 12 17:22:49.518670 sshd-session[2265]: pam_unix(sshd:session): session closed for user core Sep 12 17:22:49.521446 systemd-logind[1809]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:22:49.521781 systemd[1]: sshd@4-10.200.20.38:22-10.200.16.10:51264.service: Deactivated successfully. Sep 12 17:22:49.523138 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:22:49.525334 systemd-logind[1809]: Removed session 7. 
Sep 12 17:22:49.628779 systemd[1]: Started sshd@5-10.200.20.38:22-10.200.16.10:51278.service - OpenSSH per-connection server daemon (10.200.16.10:51278). Sep 12 17:22:50.079977 sshd[2290]: Accepted publickey for core from 10.200.16.10 port 51278 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E Sep 12 17:22:50.082501 sshd-session[2290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:22:50.085699 systemd-logind[1809]: New session 8 of user core. Sep 12 17:22:50.095859 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 17:22:50.334849 sudo[2295]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:22:50.335053 sudo[2295]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:22:50.341040 sudo[2295]: pam_unix(sudo:session): session closed for user root Sep 12 17:22:50.344255 sudo[2294]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 17:22:50.344436 sudo[2294]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:22:50.351486 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:22:50.377064 augenrules[2317]: No rules Sep 12 17:22:50.378002 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:22:50.378162 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:22:50.379159 sudo[2294]: pam_unix(sudo:session): session closed for user root Sep 12 17:22:50.462555 sshd[2293]: Connection closed by 10.200.16.10 port 51278 Sep 12 17:22:50.462908 sshd-session[2290]: pam_unix(sshd:session): session closed for user core Sep 12 17:22:50.465331 systemd-logind[1809]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:22:50.465932 systemd[1]: sshd@5-10.200.20.38:22-10.200.16.10:51278.service: Deactivated successfully. Sep 12 17:22:50.467040 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:22:50.468222 systemd-logind[1809]: Removed session 8. Sep 12 17:22:50.540638 systemd[1]: Started sshd@6-10.200.20.38:22-10.200.16.10:44660.service - OpenSSH per-connection server daemon (10.200.16.10:44660). Sep 12 17:22:50.951943 sshd[2326]: Accepted publickey for core from 10.200.16.10 port 44660 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E Sep 12 17:22:50.952795 sshd-session[2326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:22:50.955961 systemd-logind[1809]: New session 9 of user core. Sep 12 17:22:50.963034 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:22:51.187426 sudo[2330]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:22:51.187624 sudo[2330]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:22:52.382059 update_engine[1812]: I20250912 17:22:52.381989 1812 update_attempter.cc:509] Updating boot flags... Sep 12 17:22:53.054687 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Sep 12 17:22:53.062989 (dockerd)[2411]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:22:54.166043 dockerd[2411]: time="2025-09-12T17:22:54.165758883Z" level=info msg="Starting up" Sep 12 17:22:54.167335 dockerd[2411]: time="2025-09-12T17:22:54.167295022Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 12 17:22:54.175109 dockerd[2411]: time="2025-09-12T17:22:54.175085882Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 12 17:22:54.299931 dockerd[2411]: time="2025-09-12T17:22:54.299905781Z" level=info msg="Loading containers: start." Sep 12 17:22:54.354781 kernel: Initializing XFRM netlink socket Sep 12 17:22:54.790958 systemd-networkd[1566]: docker0: Link UP Sep 12 17:22:54.808600 dockerd[2411]: time="2025-09-12T17:22:54.808568877Z" level=info msg="Loading containers: done." Sep 12 17:22:54.830271 dockerd[2411]: time="2025-09-12T17:22:54.830242053Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:22:54.830375 dockerd[2411]: time="2025-09-12T17:22:54.830295159Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 12 17:22:54.830375 dockerd[2411]: time="2025-09-12T17:22:54.830356448Z" level=info msg="Initializing buildkit" Sep 12 17:22:54.894954 dockerd[2411]: time="2025-09-12T17:22:54.894930367Z" level=info msg="Completed buildkit initialization" Sep 12 17:22:54.899715 dockerd[2411]: time="2025-09-12T17:22:54.899682717Z" level=info msg="Daemon has completed initialization" Sep 12 17:22:54.899923 dockerd[2411]: time="2025-09-12T17:22:54.899884730Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:22:54.900099 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:22:55.673864 containerd[1826]: time="2025-09-12T17:22:55.673829813Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 12 17:22:56.250679 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 12 17:22:56.252040 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:22:56.340831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:22:56.346062 (kubelet)[2623]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:22:56.476787 kubelet[2623]: E0912 17:22:56.476661 2623 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:22:56.478842 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:22:56.478950 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:22:56.479421 systemd[1]: kubelet.service: Consumed 97ms CPU time, 105.5M memory peak. Sep 12 17:22:57.093421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2526496480.mount: Deactivated successfully. 
Sep 12 17:22:59.967931 containerd[1826]: time="2025-09-12T17:22:59.967871752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:00.014774 containerd[1826]: time="2025-09-12T17:23:00.014731563Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390228" Sep 12 17:23:00.058954 containerd[1826]: time="2025-09-12T17:23:00.058900191Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:00.063412 containerd[1826]: time="2025-09-12T17:23:00.063373511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:00.064009 containerd[1826]: time="2025-09-12T17:23:00.063882427Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 4.389958916s" Sep 12 17:23:00.064009 containerd[1826]: time="2025-09-12T17:23:00.063909220Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Sep 12 17:23:00.065183 containerd[1826]: time="2025-09-12T17:23:00.065159921Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 12 17:23:03.119895 containerd[1826]: time="2025-09-12T17:23:03.119827320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:03.162201 containerd[1826]: time="2025-09-12T17:23:03.162152426Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547917" Sep 12 17:23:03.166400 containerd[1826]: time="2025-09-12T17:23:03.165723751Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:03.211022 containerd[1826]: time="2025-09-12T17:23:03.210992617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:03.211775 containerd[1826]: time="2025-09-12T17:23:03.211737811Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 3.146549258s" Sep 12 17:23:03.211881 containerd[1826]: time="2025-09-12T17:23:03.211867454Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Sep 12 17:23:03.212383 
containerd[1826]: time="2025-09-12T17:23:03.212324433Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 12 17:23:06.464789 containerd[1826]: time="2025-09-12T17:23:06.464535263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:06.500710 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 12 17:23:06.502472 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:23:06.508522 containerd[1826]: time="2025-09-12T17:23:06.508488000Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295977" Sep 12 17:23:06.513520 containerd[1826]: time="2025-09-12T17:23:06.513136567Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:06.594489 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:23:06.596946 (kubelet)[2704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:23:06.724240 kubelet[2704]: E0912 17:23:06.724138 2704 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:23:06.726043 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:23:06.726239 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:23:06.726915 systemd[1]: kubelet.service: Consumed 98ms CPU time, 107M memory peak. Sep 12 17:23:08.854199 containerd[1826]: time="2025-09-12T17:23:08.853313104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:08.854199 containerd[1826]: time="2025-09-12T17:23:08.853939839Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 5.641426666s" Sep 12 17:23:08.854199 containerd[1826]: time="2025-09-12T17:23:08.853963056Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Sep 12 17:23:08.854943 containerd[1826]: time="2025-09-12T17:23:08.854918094Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 12 17:23:16.750879 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Sep 12 17:23:16.752825 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:23:17.640479 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
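Note: the "in 4.389958916s" figure for the kube-apiserver pull is containerd's own internal measurement; it can be sanity-checked against the two log timestamps (PullImage requested at 17:22:55.673829813Z, completion logged at 17:23:00.063882427Z). A small sketch using only the standard library and timestamps copied from the log; the small difference from the reported duration is just log emission time versus the internal timer:

```python
# Sketch: approximate a pull duration from the two containerd timestamps above.
# Timestamps are copied verbatim from the log; nanoseconds are truncated to
# microseconds because datetime only stores microsecond precision.
from datetime import datetime, timezone

def parse(ts: str) -> datetime:
    date, frac = ts.rstrip("Z").split(".")
    return datetime.fromisoformat(date).replace(
        microsecond=int(frac[:6].ljust(6, "0")), tzinfo=timezone.utc
    )

start = parse("2025-09-12T17:22:55.673829813Z")   # PullImage kube-apiserver:v1.33.5
done  = parse("2025-09-12T17:23:00.063882427Z")   # Pulled ... returns image reference

print(f"observed wall-clock: {(done - start).total_seconds():.3f}s")  # ~4.390s vs reported 4.389958916s
```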
Sep 12 17:23:17.648994 (kubelet)[2719]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:23:17.673275 kubelet[2719]: E0912 17:23:17.673226 2719 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:23:17.675154 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:23:17.675352 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:23:17.675801 systemd[1]: kubelet.service: Consumed 96ms CPU time, 106.6M memory peak. Sep 12 17:23:19.304641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount482468139.mount: Deactivated successfully. Sep 12 17:23:19.564266 containerd[1826]: time="2025-09-12T17:23:19.563942729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:19.567571 containerd[1826]: time="2025-09-12T17:23:19.567547211Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240106" Sep 12 17:23:19.570820 containerd[1826]: time="2025-09-12T17:23:19.570799374Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:19.574955 containerd[1826]: time="2025-09-12T17:23:19.574903523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:19.575264 containerd[1826]: time="2025-09-12T17:23:19.575076239Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 10.720132336s" Sep 12 17:23:19.575309 containerd[1826]: time="2025-09-12T17:23:19.575269827Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Sep 12 17:23:19.575947 containerd[1826]: time="2025-09-12T17:23:19.575926290Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 12 17:23:20.359056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3259406702.mount: Deactivated successfully. 
Sep 12 17:23:21.294828 containerd[1826]: time="2025-09-12T17:23:21.294778658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:21.299348 containerd[1826]: time="2025-09-12T17:23:21.299322418Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Sep 12 17:23:21.303685 containerd[1826]: time="2025-09-12T17:23:21.303641188Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:21.308946 containerd[1826]: time="2025-09-12T17:23:21.308917685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:21.309556 containerd[1826]: time="2025-09-12T17:23:21.309422328Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.733398156s" Sep 12 17:23:21.309556 containerd[1826]: time="2025-09-12T17:23:21.309450161Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 12 17:23:21.310055 containerd[1826]: time="2025-09-12T17:23:21.310030358Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:23:22.915574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4186430290.mount: Deactivated successfully. 
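Note: the pull entries report raw byte counts ("bytes read=..." while pulling, and a recorded image size once pulled). Purely as a readability aid, the sketch below converts a few of the values copied from the log into MiB; no claim is made about containerd's exact accounting beyond what the log states:

```python
# Sketch: convert byte counts copied verbatim from the pull entries above into MiB.
sizes = {
    "kube-apiserver:v1.33.5 (bytes read)": 27390228,
    "kube-proxy:v1.33.5 (bytes read)":     28240106,
    "coredns:v1.12.0 (bytes read)":        19152117,
    "coredns:v1.12.0 (recorded size)":     19148915,
}
for name, nbytes in sizes.items():
    print(f"{name:40s} {nbytes / 2**20:7.1f} MiB")
```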
Sep 12 17:23:24.858520 containerd[1826]: time="2025-09-12T17:23:24.858024323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:23:24.860895 containerd[1826]: time="2025-09-12T17:23:24.860874948Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Sep 12 17:23:24.906800 containerd[1826]: time="2025-09-12T17:23:24.906762921Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:23:24.954590 containerd[1826]: time="2025-09-12T17:23:24.954547454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:23:24.955124 containerd[1826]: time="2025-09-12T17:23:24.955005632Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 3.644948065s" Sep 12 17:23:24.955124 containerd[1826]: time="2025-09-12T17:23:24.955030361Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 12 17:23:24.955512 containerd[1826]: time="2025-09-12T17:23:24.955477171Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 12 17:23:26.622785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount213454284.mount: Deactivated successfully. Sep 12 17:23:27.750712 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Sep 12 17:23:27.751827 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:23:27.842848 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:23:27.850953 (kubelet)[2806]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:23:27.876433 kubelet[2806]: E0912 17:23:27.876388 2806 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:23:27.878295 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:23:27.878477 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:23:27.878973 systemd[1]: kubelet.service: Consumed 100ms CPU time, 106.4M memory peak. Sep 12 17:23:38.001026 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Sep 12 17:23:38.003926 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:23:38.345388 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
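Note: the kubelet restart attempts (counters 4 through 8) land roughly ten to eleven seconds apart. That spacing is consistent with a fixed systemd RestartSec rather than an exponential backoff; this is an inference from the timestamps, not something the log states. A sketch that reads the spacing straight off the "Scheduled restart job" entries:

```python
# Sketch: spacing between the kubelet restart attempts logged above
# (restart counters 4..8). Timestamps are the "Scheduled restart job" entries.
from datetime import datetime

attempts = {
    4: "17:22:56.250679",
    5: "17:23:06.500710",
    6: "17:23:16.750879",
    7: "17:23:27.750712",
    8: "17:23:38.001026",
}
times = {k: datetime.strptime(v, "%H:%M:%S.%f") for k, v in attempts.items()}
for counter in range(5, 9):
    gap = (times[counter] - times[counter - 1]).total_seconds()
    print(f"restart {counter - 1} -> {counter}: {gap:.1f}s")   # roughly 10-11s apart
```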
Sep 12 17:23:38.347728 (kubelet)[2826]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:23:38.371026 kubelet[2826]: E0912 17:23:38.370988 2826 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:23:38.372995 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:23:38.373181 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:23:38.373660 systemd[1]: kubelet.service: Consumed 101ms CPU time, 104.4M memory peak. Sep 12 17:23:43.427603 containerd[1826]: time="2025-09-12T17:23:43.427553249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:43.431338 containerd[1826]: time="2025-09-12T17:23:43.431303807Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465857" Sep 12 17:23:43.436240 containerd[1826]: time="2025-09-12T17:23:43.436195523Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:43.441165 containerd[1826]: time="2025-09-12T17:23:43.440499332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:43.441165 containerd[1826]: time="2025-09-12T17:23:43.441055678Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 18.485558651s" Sep 12 17:23:43.441165 containerd[1826]: time="2025-09-12T17:23:43.441081135Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 12 17:23:46.543092 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:23:46.543202 systemd[1]: kubelet.service: Consumed 101ms CPU time, 104.4M memory peak. Sep 12 17:23:46.545582 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:23:46.566985 systemd[1]: Reload requested from client PID 2903 ('systemctl') (unit session-9.scope)... Sep 12 17:23:46.566997 systemd[1]: Reloading... Sep 12 17:23:46.632788 zram_generator::config[2950]: No configuration found. Sep 12 17:23:46.789666 systemd[1]: Reloading finished in 222 ms. Sep 12 17:23:46.846803 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 17:23:46.846879 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 17:23:46.847830 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:23:46.847891 systemd[1]: kubelet.service: Consumed 51ms CPU time, 75.3M memory peak. Sep 12 17:23:46.849764 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 12 17:23:47.007610 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:23:47.010525 (kubelet)[3014]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:23:47.035784 kubelet[3014]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:23:47.035784 kubelet[3014]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 17:23:47.035784 kubelet[3014]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:23:47.035784 kubelet[3014]: I0912 17:23:47.034847 3014 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:23:47.448870 kubelet[3014]: I0912 17:23:47.448839 3014 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 17:23:47.450182 kubelet[3014]: I0912 17:23:47.448998 3014 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:23:47.450182 kubelet[3014]: I0912 17:23:47.449275 3014 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 17:23:47.468552 kubelet[3014]: E0912 17:23:47.468523 3014 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 12 17:23:47.469731 kubelet[3014]: I0912 17:23:47.469718 3014 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:23:47.476373 kubelet[3014]: I0912 17:23:47.476359 3014 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 17:23:47.478606 kubelet[3014]: I0912 17:23:47.478592 3014 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:23:47.479634 kubelet[3014]: I0912 17:23:47.479610 3014 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:23:47.479835 kubelet[3014]: I0912 17:23:47.479707 3014 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4426.1.0-a-dfa5c25729","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:23:47.479967 kubelet[3014]: I0912 17:23:47.479956 3014 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:23:47.480017 kubelet[3014]: I0912 17:23:47.480008 3014 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 17:23:47.480156 kubelet[3014]: I0912 17:23:47.480146 3014 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:23:47.482788 kubelet[3014]: I0912 17:23:47.482773 3014 kubelet.go:480] "Attempting to sync node with API server" Sep 12 17:23:47.482874 kubelet[3014]: I0912 17:23:47.482864 3014 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:23:47.482933 kubelet[3014]: I0912 17:23:47.482925 3014 kubelet.go:386] "Adding apiserver pod source" Sep 12 17:23:47.483966 kubelet[3014]: I0912 17:23:47.483953 3014 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:23:47.485202 kubelet[3014]: E0912 17:23:47.485176 3014 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4426.1.0-a-dfa5c25729&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 17:23:47.485496 kubelet[3014]: E0912 17:23:47.485469 3014 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Sep 12 17:23:47.485550 kubelet[3014]: I0912 17:23:47.485531 3014 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 17:23:47.485897 kubelet[3014]: I0912 17:23:47.485882 3014 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 17:23:47.485954 kubelet[3014]: W0912 17:23:47.485925 3014 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 17:23:47.488181 kubelet[3014]: I0912 17:23:47.488166 3014 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:23:47.488235 kubelet[3014]: I0912 17:23:47.488197 3014 server.go:1289] "Started kubelet" Sep 12 17:23:47.494734 kubelet[3014]: I0912 17:23:47.493487 3014 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:23:47.494995 kubelet[3014]: I0912 17:23:47.494881 3014 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:23:47.495191 kubelet[3014]: E0912 17:23:47.492734 3014 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.38:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.38:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4426.1.0-a-dfa5c25729.186498dc85e8a128 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4426.1.0-a-dfa5c25729,UID:ci-4426.1.0-a-dfa5c25729,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4426.1.0-a-dfa5c25729,},FirstTimestamp:2025-09-12 17:23:47.488178472 +0000 UTC m=+0.474728521,LastTimestamp:2025-09-12 17:23:47.488178472 +0000 UTC m=+0.474728521,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4426.1.0-a-dfa5c25729,}" Sep 12 17:23:47.495261 kubelet[3014]: I0912 17:23:47.495217 3014 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:23:47.497271 kubelet[3014]: I0912 17:23:47.497225 3014 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:23:47.497560 kubelet[3014]: I0912 17:23:47.497544 3014 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:23:47.499409 kubelet[3014]: I0912 17:23:47.499391 3014 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:23:47.500101 kubelet[3014]: I0912 17:23:47.500086 3014 server.go:317] "Adding debug handlers to kubelet server" Sep 12 17:23:47.501409 kubelet[3014]: E0912 17:23:47.501390 3014 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.1.0-a-dfa5c25729\" not found" Sep 12 17:23:47.502441 kubelet[3014]: E0912 17:23:47.502417 3014 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.1.0-a-dfa5c25729?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="200ms" Sep 12 17:23:47.502511 kubelet[3014]: I0912 17:23:47.502438 3014 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 
17:23:47.502553 kubelet[3014]: I0912 17:23:47.502473 3014 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:23:47.502753 kubelet[3014]: I0912 17:23:47.502740 3014 factory.go:223] Registration of the systemd container factory successfully Sep 12 17:23:47.502878 kubelet[3014]: I0912 17:23:47.502865 3014 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:23:47.503666 kubelet[3014]: E0912 17:23:47.503653 3014 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:23:47.504065 kubelet[3014]: I0912 17:23:47.504051 3014 factory.go:223] Registration of the containerd container factory successfully Sep 12 17:23:47.524631 kubelet[3014]: E0912 17:23:47.524613 3014 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 17:23:47.528231 kubelet[3014]: I0912 17:23:47.528215 3014 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:23:47.528231 kubelet[3014]: I0912 17:23:47.528226 3014 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:23:47.528307 kubelet[3014]: I0912 17:23:47.528239 3014 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:23:47.534701 kubelet[3014]: I0912 17:23:47.534687 3014 policy_none.go:49] "None policy: Start" Sep 12 17:23:47.534701 kubelet[3014]: I0912 17:23:47.534701 3014 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:23:47.534775 kubelet[3014]: I0912 17:23:47.534709 3014 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:23:47.542102 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 17:23:47.549483 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 17:23:47.552196 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 17:23:47.561983 kubelet[3014]: I0912 17:23:47.561657 3014 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 17:23:47.563404 kubelet[3014]: E0912 17:23:47.563384 3014 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 17:23:47.563528 kubelet[3014]: I0912 17:23:47.563512 3014 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:23:47.563567 kubelet[3014]: I0912 17:23:47.563525 3014 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:23:47.563910 kubelet[3014]: I0912 17:23:47.563887 3014 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:23:47.564939 kubelet[3014]: I0912 17:23:47.564652 3014 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 17:23:47.564939 kubelet[3014]: I0912 17:23:47.564672 3014 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 17:23:47.564939 kubelet[3014]: I0912 17:23:47.564687 3014 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
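Note: the nodeConfig dump a few entries back lists the kubelet's hard eviction thresholds as one flattened JSON blob. The sketch below restates just those signals in a readable form; the operators and values are copied from the log and match the kubelet's stock hard-eviction defaults:

```python
# Sketch: the HardEvictionThresholds from the nodeConfig dump above, restated
# for readability. Signals and values are copied from the log.
thresholds = [
    ("memory.available",   "LessThan", "100Mi"),
    ("nodefs.available",   "LessThan", "10%"),
    ("nodefs.inodesFree",  "LessThan", "5%"),
    ("imagefs.available",  "LessThan", "15%"),
    ("imagefs.inodesFree", "LessThan", "5%"),
]
for signal, op, value in thresholds:
    print(f"evict pods when {signal} {op} {value}")
```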
Sep 12 17:23:47.564939 kubelet[3014]: I0912 17:23:47.564692 3014 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 17:23:47.564939 kubelet[3014]: E0912 17:23:47.564722 3014 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Sep 12 17:23:47.566657 kubelet[3014]: E0912 17:23:47.566637 3014 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 17:23:47.566718 kubelet[3014]: E0912 17:23:47.566665 3014 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4426.1.0-a-dfa5c25729\" not found" Sep 12 17:23:47.566991 kubelet[3014]: E0912 17:23:47.566967 3014 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 17:23:47.664885 kubelet[3014]: I0912 17:23:47.664726 3014 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:47.665209 kubelet[3014]: E0912 17:23:47.665155 3014 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:47.674109 systemd[1]: Created slice kubepods-burstable-pod1188cc58b6f1eab357681080f724bff6.slice - libcontainer container kubepods-burstable-pod1188cc58b6f1eab357681080f724bff6.slice. Sep 12 17:23:47.680349 kubelet[3014]: E0912 17:23:47.680326 3014 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.1.0-a-dfa5c25729\" not found" node="ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:47.684136 systemd[1]: Created slice kubepods-burstable-podb57a61821d1f8d713c34170ae8730105.slice - libcontainer container kubepods-burstable-podb57a61821d1f8d713c34170ae8730105.slice. Sep 12 17:23:47.686045 kubelet[3014]: E0912 17:23:47.686030 3014 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.1.0-a-dfa5c25729\" not found" node="ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:47.697570 systemd[1]: Created slice kubepods-burstable-pod4c43a5ee9553d9985e9e0e304ac1102e.slice - libcontainer container kubepods-burstable-pod4c43a5ee9553d9985e9e0e304ac1102e.slice. 
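Note: every "connection refused" error in this stretch points at the same endpoint, https://10.200.20.38:6443, which is expected on a control-plane node that is still creating its own kube-apiserver static pod. A minimal reachability sketch; the address is taken from the log, and a plain TCP connect is enough to distinguish "refused" from "accepting connections":

```python
# Sketch: check whether the API server endpoint from the errors above accepts
# TCP connections yet. 10.200.20.38:6443 is taken from the log; while the
# kube-apiserver static pod is still being created this prints a refusal,
# matching the reflector/lease errors.
import socket

HOST, PORT = "10.200.20.38", 6443
try:
    with socket.create_connection((HOST, PORT), timeout=2):
        print(f"{HOST}:{PORT} is accepting connections")
except OSError as exc:
    print(f"{HOST}:{PORT} not reachable yet: {exc}")
```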
Sep 12 17:23:47.699016 kubelet[3014]: E0912 17:23:47.698957 3014 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.1.0-a-dfa5c25729\" not found" node="ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:47.703193 kubelet[3014]: I0912 17:23:47.703174 3014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b57a61821d1f8d713c34170ae8730105-ca-certs\") pod \"kube-apiserver-ci-4426.1.0-a-dfa5c25729\" (UID: \"b57a61821d1f8d713c34170ae8730105\") " pod="kube-system/kube-apiserver-ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:47.703445 kubelet[3014]: I0912 17:23:47.703428 3014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b57a61821d1f8d713c34170ae8730105-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4426.1.0-a-dfa5c25729\" (UID: \"b57a61821d1f8d713c34170ae8730105\") " pod="kube-system/kube-apiserver-ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:47.703552 kubelet[3014]: I0912 17:23:47.703541 3014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c43a5ee9553d9985e9e0e304ac1102e-k8s-certs\") pod \"kube-controller-manager-ci-4426.1.0-a-dfa5c25729\" (UID: \"4c43a5ee9553d9985e9e0e304ac1102e\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:47.703632 kubelet[3014]: I0912 17:23:47.703622 3014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c43a5ee9553d9985e9e0e304ac1102e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4426.1.0-a-dfa5c25729\" (UID: \"4c43a5ee9553d9985e9e0e304ac1102e\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:47.703713 kubelet[3014]: I0912 17:23:47.703703 3014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b57a61821d1f8d713c34170ae8730105-k8s-certs\") pod \"kube-apiserver-ci-4426.1.0-a-dfa5c25729\" (UID: \"b57a61821d1f8d713c34170ae8730105\") " pod="kube-system/kube-apiserver-ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:47.703802 kubelet[3014]: I0912 17:23:47.703791 3014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c43a5ee9553d9985e9e0e304ac1102e-ca-certs\") pod \"kube-controller-manager-ci-4426.1.0-a-dfa5c25729\" (UID: \"4c43a5ee9553d9985e9e0e304ac1102e\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:47.703884 kubelet[3014]: I0912 17:23:47.703873 3014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4c43a5ee9553d9985e9e0e304ac1102e-flexvolume-dir\") pod \"kube-controller-manager-ci-4426.1.0-a-dfa5c25729\" (UID: \"4c43a5ee9553d9985e9e0e304ac1102e\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:47.703961 kubelet[3014]: I0912 17:23:47.703952 3014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4c43a5ee9553d9985e9e0e304ac1102e-kubeconfig\") pod 
\"kube-controller-manager-ci-4426.1.0-a-dfa5c25729\" (UID: \"4c43a5ee9553d9985e9e0e304ac1102e\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:47.704010 kubelet[3014]: E0912 17:23:47.703373 3014 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.1.0-a-dfa5c25729?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="400ms" Sep 12 17:23:47.704089 kubelet[3014]: I0912 17:23:47.704070 3014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1188cc58b6f1eab357681080f724bff6-kubeconfig\") pod \"kube-scheduler-ci-4426.1.0-a-dfa5c25729\" (UID: \"1188cc58b6f1eab357681080f724bff6\") " pod="kube-system/kube-scheduler-ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:47.867138 kubelet[3014]: I0912 17:23:47.867120 3014 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:47.867500 kubelet[3014]: E0912 17:23:47.867481 3014 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:47.982690 containerd[1826]: time="2025-09-12T17:23:47.982610240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4426.1.0-a-dfa5c25729,Uid:1188cc58b6f1eab357681080f724bff6,Namespace:kube-system,Attempt:0,}" Sep 12 17:23:47.987533 containerd[1826]: time="2025-09-12T17:23:47.987507445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4426.1.0-a-dfa5c25729,Uid:b57a61821d1f8d713c34170ae8730105,Namespace:kube-system,Attempt:0,}" Sep 12 17:23:48.000434 containerd[1826]: time="2025-09-12T17:23:48.000412839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4426.1.0-a-dfa5c25729,Uid:4c43a5ee9553d9985e9e0e304ac1102e,Namespace:kube-system,Attempt:0,}" Sep 12 17:23:48.090300 containerd[1826]: time="2025-09-12T17:23:48.090274766Z" level=info msg="connecting to shim dc1a200d385d08e9e96b431229ef8276e446cedaeced3676c3db36c73aef079e" address="unix:///run/containerd/s/e6bb53c94a4170dcde87d21fd7c82c504fb925a59243cb5807ab3f4906f23e5e" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:23:48.096481 containerd[1826]: time="2025-09-12T17:23:48.096449786Z" level=info msg="connecting to shim af70118f1ebd975f07bbcf22372758236c26042cc38a93ed7cede73e42fe071e" address="unix:///run/containerd/s/4624d8110656f681a04b8c7ece3aaf5423f89a38160ee5f2fb0189edc3e667ea" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:23:48.102181 containerd[1826]: time="2025-09-12T17:23:48.102014130Z" level=info msg="connecting to shim f4b6f37f3f7859c453793abf6f85a26c7ca47917348817326933657e6544c50f" address="unix:///run/containerd/s/ea35335af505442c54e74f117a52ca3026045008df1e43bfa9592ce1af433be4" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:23:48.106080 kubelet[3014]: E0912 17:23:48.106054 3014 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.1.0-a-dfa5c25729?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="800ms" Sep 12 17:23:48.128900 systemd[1]: Started cri-containerd-af70118f1ebd975f07bbcf22372758236c26042cc38a93ed7cede73e42fe071e.scope - 
libcontainer container af70118f1ebd975f07bbcf22372758236c26042cc38a93ed7cede73e42fe071e. Sep 12 17:23:48.136486 systemd[1]: Started cri-containerd-dc1a200d385d08e9e96b431229ef8276e446cedaeced3676c3db36c73aef079e.scope - libcontainer container dc1a200d385d08e9e96b431229ef8276e446cedaeced3676c3db36c73aef079e. Sep 12 17:23:48.138218 systemd[1]: Started cri-containerd-f4b6f37f3f7859c453793abf6f85a26c7ca47917348817326933657e6544c50f.scope - libcontainer container f4b6f37f3f7859c453793abf6f85a26c7ca47917348817326933657e6544c50f. Sep 12 17:23:48.269785 kubelet[3014]: I0912 17:23:48.269736 3014 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:48.270088 kubelet[3014]: E0912 17:23:48.270060 3014 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:48.636162 kubelet[3014]: E0912 17:23:48.636066 3014 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 17:23:48.650511 kubelet[3014]: E0912 17:23:48.650486 3014 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 17:23:48.705229 kubelet[3014]: E0912 17:23:48.705201 3014 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4426.1.0-a-dfa5c25729&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 17:23:48.773392 containerd[1826]: time="2025-09-12T17:23:48.773281746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4426.1.0-a-dfa5c25729,Uid:1188cc58b6f1eab357681080f724bff6,Namespace:kube-system,Attempt:0,} returns sandbox id \"af70118f1ebd975f07bbcf22372758236c26042cc38a93ed7cede73e42fe071e\"" Sep 12 17:23:48.816105 containerd[1826]: time="2025-09-12T17:23:48.816045893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4426.1.0-a-dfa5c25729,Uid:4c43a5ee9553d9985e9e0e304ac1102e,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc1a200d385d08e9e96b431229ef8276e446cedaeced3676c3db36c73aef079e\"" Sep 12 17:23:48.819224 containerd[1826]: time="2025-09-12T17:23:48.819201048Z" level=info msg="CreateContainer within sandbox \"af70118f1ebd975f07bbcf22372758236c26042cc38a93ed7cede73e42fe071e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:23:48.825508 containerd[1826]: time="2025-09-12T17:23:48.825029094Z" level=info msg="CreateContainer within sandbox \"dc1a200d385d08e9e96b431229ef8276e446cedaeced3676c3db36c73aef079e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:23:48.825748 containerd[1826]: time="2025-09-12T17:23:48.825727091Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4426.1.0-a-dfa5c25729,Uid:b57a61821d1f8d713c34170ae8730105,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4b6f37f3f7859c453793abf6f85a26c7ca47917348817326933657e6544c50f\"" Sep 12 17:23:48.834321 containerd[1826]: time="2025-09-12T17:23:48.834294748Z" level=info msg="CreateContainer within sandbox \"f4b6f37f3f7859c453793abf6f85a26c7ca47917348817326933657e6544c50f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:23:48.861260 containerd[1826]: time="2025-09-12T17:23:48.861234005Z" level=info msg="Container bf7c7a9b7f5aa2766331e57662f8922697b8bfb0287dc4bd5318d61c029d613a: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:23:48.868264 containerd[1826]: time="2025-09-12T17:23:48.868239865Z" level=info msg="Container 6e7276c51322ca1ce4a252d59b7ce241d2ea9c12a5f8fa7da55bf07cf65e0d91: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:23:48.876300 containerd[1826]: time="2025-09-12T17:23:48.876269808Z" level=info msg="Container 98a384c832a253f2eef27b586a33a5a9455793270ddd8be1c5b57313abd59c10: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:23:48.907375 kubelet[3014]: E0912 17:23:48.907136 3014 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.1.0-a-dfa5c25729?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="1.6s" Sep 12 17:23:48.910854 containerd[1826]: time="2025-09-12T17:23:48.910829481Z" level=info msg="CreateContainer within sandbox \"af70118f1ebd975f07bbcf22372758236c26042cc38a93ed7cede73e42fe071e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bf7c7a9b7f5aa2766331e57662f8922697b8bfb0287dc4bd5318d61c029d613a\"" Sep 12 17:23:48.915052 containerd[1826]: time="2025-09-12T17:23:48.914981431Z" level=info msg="StartContainer for \"bf7c7a9b7f5aa2766331e57662f8922697b8bfb0287dc4bd5318d61c029d613a\"" Sep 12 17:23:48.915691 containerd[1826]: time="2025-09-12T17:23:48.915664195Z" level=info msg="connecting to shim bf7c7a9b7f5aa2766331e57662f8922697b8bfb0287dc4bd5318d61c029d613a" address="unix:///run/containerd/s/4624d8110656f681a04b8c7ece3aaf5423f89a38160ee5f2fb0189edc3e667ea" protocol=ttrpc version=3 Sep 12 17:23:48.918823 containerd[1826]: time="2025-09-12T17:23:48.918066464Z" level=info msg="CreateContainer within sandbox \"dc1a200d385d08e9e96b431229ef8276e446cedaeced3676c3db36c73aef079e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6e7276c51322ca1ce4a252d59b7ce241d2ea9c12a5f8fa7da55bf07cf65e0d91\"" Sep 12 17:23:48.919216 containerd[1826]: time="2025-09-12T17:23:48.919194606Z" level=info msg="StartContainer for \"6e7276c51322ca1ce4a252d59b7ce241d2ea9c12a5f8fa7da55bf07cf65e0d91\"" Sep 12 17:23:48.920873 containerd[1826]: time="2025-09-12T17:23:48.920846701Z" level=info msg="connecting to shim 6e7276c51322ca1ce4a252d59b7ce241d2ea9c12a5f8fa7da55bf07cf65e0d91" address="unix:///run/containerd/s/e6bb53c94a4170dcde87d21fd7c82c504fb925a59243cb5807ab3f4906f23e5e" protocol=ttrpc version=3 Sep 12 17:23:48.929866 containerd[1826]: time="2025-09-12T17:23:48.929823813Z" level=info msg="CreateContainer within sandbox \"f4b6f37f3f7859c453793abf6f85a26c7ca47917348817326933657e6544c50f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"98a384c832a253f2eef27b586a33a5a9455793270ddd8be1c5b57313abd59c10\"" Sep 12 17:23:48.930900 containerd[1826]: 
time="2025-09-12T17:23:48.930094354Z" level=info msg="StartContainer for \"98a384c832a253f2eef27b586a33a5a9455793270ddd8be1c5b57313abd59c10\"" Sep 12 17:23:48.931841 containerd[1826]: time="2025-09-12T17:23:48.931814122Z" level=info msg="connecting to shim 98a384c832a253f2eef27b586a33a5a9455793270ddd8be1c5b57313abd59c10" address="unix:///run/containerd/s/ea35335af505442c54e74f117a52ca3026045008df1e43bfa9592ce1af433be4" protocol=ttrpc version=3 Sep 12 17:23:48.932022 systemd[1]: Started cri-containerd-bf7c7a9b7f5aa2766331e57662f8922697b8bfb0287dc4bd5318d61c029d613a.scope - libcontainer container bf7c7a9b7f5aa2766331e57662f8922697b8bfb0287dc4bd5318d61c029d613a. Sep 12 17:23:48.948005 systemd[1]: Started cri-containerd-6e7276c51322ca1ce4a252d59b7ce241d2ea9c12a5f8fa7da55bf07cf65e0d91.scope - libcontainer container 6e7276c51322ca1ce4a252d59b7ce241d2ea9c12a5f8fa7da55bf07cf65e0d91. Sep 12 17:23:48.951547 systemd[1]: Started cri-containerd-98a384c832a253f2eef27b586a33a5a9455793270ddd8be1c5b57313abd59c10.scope - libcontainer container 98a384c832a253f2eef27b586a33a5a9455793270ddd8be1c5b57313abd59c10. Sep 12 17:23:48.997223 containerd[1826]: time="2025-09-12T17:23:48.997187118Z" level=info msg="StartContainer for \"6e7276c51322ca1ce4a252d59b7ce241d2ea9c12a5f8fa7da55bf07cf65e0d91\" returns successfully" Sep 12 17:23:49.013242 containerd[1826]: time="2025-09-12T17:23:49.013219563Z" level=info msg="StartContainer for \"bf7c7a9b7f5aa2766331e57662f8922697b8bfb0287dc4bd5318d61c029d613a\" returns successfully" Sep 12 17:23:49.013732 containerd[1826]: time="2025-09-12T17:23:49.013377773Z" level=info msg="StartContainer for \"98a384c832a253f2eef27b586a33a5a9455793270ddd8be1c5b57313abd59c10\" returns successfully" Sep 12 17:23:49.073141 kubelet[3014]: I0912 17:23:49.073118 3014 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:49.575794 kubelet[3014]: E0912 17:23:49.575314 3014 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.1.0-a-dfa5c25729\" not found" node="ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:49.579527 kubelet[3014]: E0912 17:23:49.579504 3014 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.1.0-a-dfa5c25729\" not found" node="ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:49.581963 kubelet[3014]: E0912 17:23:49.581939 3014 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.1.0-a-dfa5c25729\" not found" node="ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:50.421425 kubelet[3014]: I0912 17:23:50.421391 3014 kubelet_node_status.go:78] "Successfully registered node" node="ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:50.421425 kubelet[3014]: E0912 17:23:50.421425 3014 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4426.1.0-a-dfa5c25729\": node \"ci-4426.1.0-a-dfa5c25729\" not found" Sep 12 17:23:50.557947 kubelet[3014]: E0912 17:23:50.557915 3014 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.1.0-a-dfa5c25729\" not found" Sep 12 17:23:50.582733 kubelet[3014]: E0912 17:23:50.582546 3014 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.1.0-a-dfa5c25729\" not found" node="ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:50.582733 kubelet[3014]: E0912 17:23:50.582672 3014 kubelet.go:3305] "No need to create a mirror pod, since 
failed to get node info from the cluster" err="node \"ci-4426.1.0-a-dfa5c25729\" not found" node="ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:50.658695 kubelet[3014]: E0912 17:23:50.658669 3014 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.1.0-a-dfa5c25729\" not found" Sep 12 17:23:50.759251 kubelet[3014]: E0912 17:23:50.759222 3014 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.1.0-a-dfa5c25729\" not found" Sep 12 17:23:50.859759 kubelet[3014]: E0912 17:23:50.859729 3014 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.1.0-a-dfa5c25729\" not found" Sep 12 17:23:50.960320 kubelet[3014]: E0912 17:23:50.960287 3014 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.1.0-a-dfa5c25729\" not found" Sep 12 17:23:51.061507 kubelet[3014]: E0912 17:23:51.061404 3014 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.1.0-a-dfa5c25729\" not found" Sep 12 17:23:51.161530 kubelet[3014]: E0912 17:23:51.161500 3014 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.1.0-a-dfa5c25729\" not found" Sep 12 17:23:51.262075 kubelet[3014]: E0912 17:23:51.262047 3014 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.1.0-a-dfa5c25729\" not found" Sep 12 17:23:51.362863 kubelet[3014]: E0912 17:23:51.362548 3014 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.1.0-a-dfa5c25729\" not found" Sep 12 17:23:51.463634 kubelet[3014]: E0912 17:23:51.463603 3014 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.1.0-a-dfa5c25729\" not found" Sep 12 17:23:51.583345 kubelet[3014]: I0912 17:23:51.583113 3014 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:51.583345 kubelet[3014]: I0912 17:23:51.583203 3014 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:51.595953 kubelet[3014]: I0912 17:23:51.595928 3014 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:51.621922 kubelet[3014]: I0912 17:23:51.621819 3014 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 12 17:23:51.664000 kubelet[3014]: I0912 17:23:51.663966 3014 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 12 17:23:51.665075 kubelet[3014]: I0912 17:23:51.665059 3014 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 12 17:23:51.665227 kubelet[3014]: E0912 17:23:51.665213 3014 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4426.1.0-a-dfa5c25729\" already exists" pod="kube-system/kube-apiserver-ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:51.665298 kubelet[3014]: I0912 17:23:51.665290 3014 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:51.671957 kubelet[3014]: I0912 17:23:51.671930 
3014 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 12 17:23:51.672022 kubelet[3014]: I0912 17:23:51.671994 3014 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:51.759542 kubelet[3014]: I0912 17:23:51.759490 3014 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 12 17:23:51.759542 kubelet[3014]: E0912 17:23:51.759529 3014 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4426.1.0-a-dfa5c25729\" already exists" pod="kube-system/kube-scheduler-ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:51.969209 kubelet[3014]: I0912 17:23:51.968882 3014 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:52.014713 kubelet[3014]: I0912 17:23:52.014683 3014 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 12 17:23:52.014847 kubelet[3014]: E0912 17:23:52.014739 3014 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4426.1.0-a-dfa5c25729\" already exists" pod="kube-system/kube-controller-manager-ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:52.488411 kubelet[3014]: I0912 17:23:52.488376 3014 apiserver.go:52] "Watching apiserver" Sep 12 17:23:52.502750 kubelet[3014]: I0912 17:23:52.502717 3014 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:23:53.050710 systemd[1]: Reload requested from client PID 3289 ('systemctl') (unit session-9.scope)... Sep 12 17:23:53.050726 systemd[1]: Reloading... Sep 12 17:23:53.126789 zram_generator::config[3336]: No configuration found. Sep 12 17:23:53.287105 systemd[1]: Reloading finished in 236 ms. Sep 12 17:23:53.304399 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:23:53.319392 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:23:53.319786 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:23:53.319930 systemd[1]: kubelet.service: Consumed 691ms CPU time, 127.2M memory peak. Sep 12 17:23:53.321689 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:23:58.824464 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:23:58.832108 (kubelet)[3400]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:23:58.856454 kubelet[3400]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:23:58.856659 kubelet[3400]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 17:23:58.856700 kubelet[3400]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
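Note: the "Creating a mirror pod for static pod" and "already exists" messages a few entries back refer to the manifests under /etc/kubernetes/manifests (the static pod path logged when the kubelet started). The kubelet suffixes each static pod's name with the node name, which is why the mirror pods are called e.g. kube-apiserver-ci-4426.1.0-a-dfa5c25729. A sketch of that naming, assuming (as with kubeadm-written manifests) that each file's name matches the pod's metadata.name:

```python
# Sketch: list the static pod manifests the kubelet watches (path from the
# "Adding static pod path" entry) and the pod names it would derive.
# Assumption: the manifest file stem equals the pod's metadata.name, as in
# kubeadm-generated manifests; the node name is taken from the log.
from pathlib import Path

MANIFEST_DIR = Path("/etc/kubernetes/manifests")
NODE_NAME = "ci-4426.1.0-a-dfa5c25729"

if MANIFEST_DIR.is_dir():
    for manifest in sorted(MANIFEST_DIR.glob("*.yaml")):
        component = manifest.stem          # e.g. kube-apiserver
        print(f"{manifest.name} -> pod {component}-{NODE_NAME}")
else:
    print(f"{MANIFEST_DIR} not present on this host")
```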
Sep 12 17:23:58.856816 kubelet[3400]: I0912 17:23:58.856791 3400 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:23:58.861599 kubelet[3400]: I0912 17:23:58.861568 3400 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 17:23:58.861599 kubelet[3400]: I0912 17:23:58.861587 3400 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:23:58.861860 kubelet[3400]: I0912 17:23:58.861846 3400 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 17:23:58.863055 kubelet[3400]: I0912 17:23:58.863038 3400 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 12 17:23:58.865340 kubelet[3400]: I0912 17:23:58.865318 3400 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:23:58.867955 kubelet[3400]: I0912 17:23:58.867939 3400 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 17:23:58.870102 kubelet[3400]: I0912 17:23:58.870083 3400 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 17:23:58.870238 kubelet[3400]: I0912 17:23:58.870217 3400 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:23:58.870327 kubelet[3400]: I0912 17:23:58.870235 3400 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4426.1.0-a-dfa5c25729","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:23:58.870395 kubelet[3400]: I0912 17:23:58.870331 3400 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:23:58.870395 kubelet[3400]: I0912 17:23:58.870338 3400 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 17:23:58.870395 kubelet[3400]: I0912 17:23:58.870368 3400 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:23:58.870479 kubelet[3400]: 
I0912 17:23:58.870467 3400 kubelet.go:480] "Attempting to sync node with API server" Sep 12 17:23:58.870499 kubelet[3400]: I0912 17:23:58.870481 3400 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:23:58.870499 kubelet[3400]: I0912 17:23:58.870499 3400 kubelet.go:386] "Adding apiserver pod source" Sep 12 17:23:58.871643 kubelet[3400]: I0912 17:23:58.871623 3400 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:23:58.875423 kubelet[3400]: I0912 17:23:58.875330 3400 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 17:23:58.875827 kubelet[3400]: I0912 17:23:58.875815 3400 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 17:23:58.878298 kubelet[3400]: I0912 17:23:58.877648 3400 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:23:58.878298 kubelet[3400]: I0912 17:23:58.877676 3400 server.go:1289] "Started kubelet" Sep 12 17:23:58.880853 kubelet[3400]: I0912 17:23:58.880839 3400 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:23:58.882405 kubelet[3400]: I0912 17:23:58.882373 3400 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:23:58.884390 kubelet[3400]: I0912 17:23:58.883872 3400 server.go:317] "Adding debug handlers to kubelet server" Sep 12 17:23:58.889014 kubelet[3400]: I0912 17:23:58.888978 3400 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:23:58.889897 kubelet[3400]: I0912 17:23:58.889287 3400 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:23:58.890580 kubelet[3400]: I0912 17:23:58.890399 3400 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:23:58.891730 kubelet[3400]: I0912 17:23:58.891587 3400 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:23:58.891906 kubelet[3400]: E0912 17:23:58.891891 3400 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.1.0-a-dfa5c25729\" not found" Sep 12 17:23:58.892842 kubelet[3400]: I0912 17:23:58.892805 3400 factory.go:223] Registration of the systemd container factory successfully Sep 12 17:23:58.893004 kubelet[3400]: I0912 17:23:58.892989 3400 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:23:58.895353 kubelet[3400]: I0912 17:23:58.895164 3400 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:23:58.895502 kubelet[3400]: I0912 17:23:58.895294 3400 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 17:23:58.898901 kubelet[3400]: I0912 17:23:58.898753 3400 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 17:23:58.899343 kubelet[3400]: I0912 17:23:58.899246 3400 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 17:23:58.899525 kubelet[3400]: I0912 17:23:58.899512 3400 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
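The container manager NodeConfig dumped just above lists the default hard eviction thresholds: memory.available < 100Mi as an absolute quantity, and nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5% as percentages. A hedged sketch of how such thresholds are compared against observed capacity — absolute quantities compare directly, percentages resolve against capacity first. The observed values below are invented for illustration; only the threshold figures come from the log:

```go
package main

import "fmt"

// threshold models one entry from the HardEvictionThresholds list in the log:
// either an absolute quantity in bytes or a fraction of capacity.
type threshold struct {
	signal   string
	quantity int64   // bytes; used when non-zero
	fraction float64 // e.g. 0.10 for 10%; used otherwise
}

// breached reports whether the observed available amount is below the threshold.
func (t threshold) breached(available, capacity int64) bool {
	limit := t.quantity
	if limit == 0 {
		limit = int64(t.fraction * float64(capacity))
	}
	return available < limit
}

func main() {
	thresholds := []threshold{
		{signal: "memory.available", quantity: 100 << 20}, // 100Mi, as logged
		{signal: "nodefs.available", fraction: 0.10},      // 10%, as logged
		{signal: "imagefs.available", fraction: 0.15},     // 15%, as logged
	}
	// Hypothetical observations: available and capacity in bytes.
	obs := map[string][2]int64{
		"memory.available":  {80 << 20, 8 << 30},
		"nodefs.available":  {30 << 30, 100 << 30},
		"imagefs.available": {10 << 30, 100 << 30},
	}
	for _, t := range thresholds {
		o := obs[t.signal]
		fmt.Printf("%s breached=%v\n", t.signal, t.breached(o[0], o[1]))
	}
}
```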
Sep 12 17:23:58.900147 kubelet[3400]: I0912 17:23:58.900130 3400 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 17:23:58.901358 kubelet[3400]: E0912 17:23:58.900830 3400 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:23:58.902652 kubelet[3400]: E0912 17:23:58.902610 3400 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:23:58.903412 kubelet[3400]: I0912 17:23:58.903395 3400 factory.go:223] Registration of the containerd container factory successfully Sep 12 17:23:58.908993 kubelet[3400]: I0912 17:23:58.898780 3400 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:23:58.960134 kubelet[3400]: I0912 17:23:58.960080 3400 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:23:58.960262 kubelet[3400]: I0912 17:23:58.960248 3400 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:23:58.960349 kubelet[3400]: I0912 17:23:58.960339 3400 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:23:58.961461 kubelet[3400]: I0912 17:23:58.960472 3400 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:23:58.961461 kubelet[3400]: I0912 17:23:58.960482 3400 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:23:58.961461 kubelet[3400]: I0912 17:23:58.960495 3400 policy_none.go:49] "None policy: Start" Sep 12 17:23:58.961461 kubelet[3400]: I0912 17:23:58.960502 3400 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:23:58.961461 kubelet[3400]: I0912 17:23:58.960509 3400 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:23:58.961461 kubelet[3400]: I0912 17:23:58.960565 3400 state_mem.go:75] "Updated machine memory state" Sep 12 17:23:58.964957 kubelet[3400]: E0912 17:23:58.964938 3400 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 17:23:58.966281 kubelet[3400]: I0912 17:23:58.966251 3400 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:23:58.966281 kubelet[3400]: I0912 17:23:58.966273 3400 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:23:58.966992 kubelet[3400]: I0912 17:23:58.966440 3400 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:23:58.970042 kubelet[3400]: E0912 17:23:58.970024 3400 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 17:23:59.001556 kubelet[3400]: I0912 17:23:59.001527 3400 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:59.003205 kubelet[3400]: I0912 17:23:59.002878 3400 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:59.003205 kubelet[3400]: I0912 17:23:59.003106 3400 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:59.041117 sudo[3437]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 17:23:59.041647 sudo[3437]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 17:23:59.074205 kubelet[3400]: I0912 17:23:59.074184 3400 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:59.086898 kubelet[3400]: I0912 17:23:59.086758 3400 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 12 17:23:59.086898 kubelet[3400]: E0912 17:23:59.086816 3400 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4426.1.0-a-dfa5c25729\" already exists" pod="kube-system/kube-apiserver-ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:59.087784 kubelet[3400]: I0912 17:23:59.087094 3400 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 12 17:23:59.087784 kubelet[3400]: E0912 17:23:59.087136 3400 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4426.1.0-a-dfa5c25729\" already exists" pod="kube-system/kube-scheduler-ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:59.089907 kubelet[3400]: I0912 17:23:59.089883 3400 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 12 17:23:59.090048 kubelet[3400]: E0912 17:23:59.090029 3400 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4426.1.0-a-dfa5c25729\" already exists" pod="kube-system/kube-controller-manager-ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:59.093387 kubelet[3400]: I0912 17:23:59.093370 3400 kubelet_node_status.go:124] "Node was previously registered" node="ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:59.093511 kubelet[3400]: I0912 17:23:59.093425 3400 kubelet_node_status.go:78] "Successfully registered node" node="ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:59.093511 kubelet[3400]: I0912 17:23:59.093440 3400 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:23:59.093816 containerd[1826]: time="2025-09-12T17:23:59.093786872Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 12 17:23:59.094348 kubelet[3400]: I0912 17:23:59.093969 3400 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:23:59.110654 kubelet[3400]: I0912 17:23:59.110629 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c43a5ee9553d9985e9e0e304ac1102e-ca-certs\") pod \"kube-controller-manager-ci-4426.1.0-a-dfa5c25729\" (UID: \"4c43a5ee9553d9985e9e0e304ac1102e\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:59.110718 kubelet[3400]: I0912 17:23:59.110658 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4c43a5ee9553d9985e9e0e304ac1102e-kubeconfig\") pod \"kube-controller-manager-ci-4426.1.0-a-dfa5c25729\" (UID: \"4c43a5ee9553d9985e9e0e304ac1102e\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:59.110718 kubelet[3400]: I0912 17:23:59.110674 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c43a5ee9553d9985e9e0e304ac1102e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4426.1.0-a-dfa5c25729\" (UID: \"4c43a5ee9553d9985e9e0e304ac1102e\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:59.110718 kubelet[3400]: I0912 17:23:59.110686 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b57a61821d1f8d713c34170ae8730105-ca-certs\") pod \"kube-apiserver-ci-4426.1.0-a-dfa5c25729\" (UID: \"b57a61821d1f8d713c34170ae8730105\") " pod="kube-system/kube-apiserver-ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:59.110718 kubelet[3400]: I0912 17:23:59.110696 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b57a61821d1f8d713c34170ae8730105-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4426.1.0-a-dfa5c25729\" (UID: \"b57a61821d1f8d713c34170ae8730105\") " pod="kube-system/kube-apiserver-ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:59.110718 kubelet[3400]: I0912 17:23:59.110705 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4c43a5ee9553d9985e9e0e304ac1102e-flexvolume-dir\") pod \"kube-controller-manager-ci-4426.1.0-a-dfa5c25729\" (UID: \"4c43a5ee9553d9985e9e0e304ac1102e\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:59.110817 kubelet[3400]: I0912 17:23:59.110714 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c43a5ee9553d9985e9e0e304ac1102e-k8s-certs\") pod \"kube-controller-manager-ci-4426.1.0-a-dfa5c25729\" (UID: \"4c43a5ee9553d9985e9e0e304ac1102e\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:59.110817 kubelet[3400]: I0912 17:23:59.110723 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1188cc58b6f1eab357681080f724bff6-kubeconfig\") pod \"kube-scheduler-ci-4426.1.0-a-dfa5c25729\" (UID: \"1188cc58b6f1eab357681080f724bff6\") " 
pod="kube-system/kube-scheduler-ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:59.110817 kubelet[3400]: I0912 17:23:59.110732 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b57a61821d1f8d713c34170ae8730105-k8s-certs\") pod \"kube-apiserver-ci-4426.1.0-a-dfa5c25729\" (UID: \"b57a61821d1f8d713c34170ae8730105\") " pod="kube-system/kube-apiserver-ci-4426.1.0-a-dfa5c25729" Sep 12 17:23:59.285792 sudo[3437]: pam_unix(sudo:session): session closed for user root Sep 12 17:23:59.872601 kubelet[3400]: I0912 17:23:59.872574 3400 apiserver.go:52] "Watching apiserver" Sep 12 17:23:59.916130 kubelet[3400]: I0912 17:23:59.916099 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f8048efa-d411-4933-a828-7271729f9a7b-kube-proxy\") pod \"kube-proxy-ggk2c\" (UID: \"f8048efa-d411-4933-a828-7271729f9a7b\") " pod="kube-system/kube-proxy-ggk2c" Sep 12 17:23:59.916130 kubelet[3400]: I0912 17:23:59.916132 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8048efa-d411-4933-a828-7271729f9a7b-xtables-lock\") pod \"kube-proxy-ggk2c\" (UID: \"f8048efa-d411-4933-a828-7271729f9a7b\") " pod="kube-system/kube-proxy-ggk2c" Sep 12 17:23:59.916243 kubelet[3400]: I0912 17:23:59.916146 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8048efa-d411-4933-a828-7271729f9a7b-lib-modules\") pod \"kube-proxy-ggk2c\" (UID: \"f8048efa-d411-4933-a828-7271729f9a7b\") " pod="kube-system/kube-proxy-ggk2c" Sep 12 17:23:59.916243 kubelet[3400]: I0912 17:23:59.916158 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlhkl\" (UniqueName: \"kubernetes.io/projected/f8048efa-d411-4933-a828-7271729f9a7b-kube-api-access-mlhkl\") pod \"kube-proxy-ggk2c\" (UID: \"f8048efa-d411-4933-a828-7271729f9a7b\") " pod="kube-system/kube-proxy-ggk2c" Sep 12 17:24:00.017181 kubelet[3400]: E0912 17:24:00.017153 3400 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: object "kube-system"/"kube-proxy" not registered Sep 12 17:24:00.017263 kubelet[3400]: E0912 17:24:00.017216 3400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f8048efa-d411-4933-a828-7271729f9a7b-kube-proxy podName:f8048efa-d411-4933-a828-7271729f9a7b nodeName:}" failed. No retries permitted until 2025-09-12 17:24:00.517197272 +0000 UTC m=+1.682174839 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/f8048efa-d411-4933-a828-7271729f9a7b-kube-proxy") pod "kube-proxy-ggk2c" (UID: "f8048efa-d411-4933-a828-7271729f9a7b") : object "kube-system"/"kube-proxy" not registered Sep 12 17:24:00.023200 kubelet[3400]: E0912 17:24:00.023177 3400 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: object "kube-system"/"kube-root-ca.crt" not registered Sep 12 17:24:00.023200 kubelet[3400]: E0912 17:24:00.023200 3400 projected.go:194] Error preparing data for projected volume kube-api-access-mlhkl for pod kube-system/kube-proxy-ggk2c: object "kube-system"/"kube-root-ca.crt" not registered Sep 12 17:24:00.023317 kubelet[3400]: E0912 17:24:00.023243 3400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f8048efa-d411-4933-a828-7271729f9a7b-kube-api-access-mlhkl podName:f8048efa-d411-4933-a828-7271729f9a7b nodeName:}" failed. No retries permitted until 2025-09-12 17:24:00.523231923 +0000 UTC m=+1.688209482 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mlhkl" (UniqueName: "kubernetes.io/projected/f8048efa-d411-4933-a828-7271729f9a7b-kube-api-access-mlhkl") pod "kube-proxy-ggk2c" (UID: "f8048efa-d411-4933-a828-7271729f9a7b") : object "kube-system"/"kube-root-ca.crt" not registered Sep 12 17:24:00.519996 kubelet[3400]: E0912 17:24:00.519968 3400 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: object "kube-system"/"kube-proxy" not registered Sep 12 17:24:00.520104 kubelet[3400]: E0912 17:24:00.520027 3400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f8048efa-d411-4933-a828-7271729f9a7b-kube-proxy podName:f8048efa-d411-4933-a828-7271729f9a7b nodeName:}" failed. No retries permitted until 2025-09-12 17:24:01.520016662 +0000 UTC m=+2.684994221 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/f8048efa-d411-4933-a828-7271729f9a7b-kube-proxy") pod "kube-proxy-ggk2c" (UID: "f8048efa-d411-4933-a828-7271729f9a7b") : object "kube-system"/"kube-proxy" not registered Sep 12 17:24:00.620607 kubelet[3400]: E0912 17:24:00.620579 3400 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: object "kube-system"/"kube-root-ca.crt" not registered Sep 12 17:24:00.620607 kubelet[3400]: E0912 17:24:00.620610 3400 projected.go:194] Error preparing data for projected volume kube-api-access-mlhkl for pod kube-system/kube-proxy-ggk2c: object "kube-system"/"kube-root-ca.crt" not registered Sep 12 17:24:00.620747 kubelet[3400]: E0912 17:24:00.620643 3400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f8048efa-d411-4933-a828-7271729f9a7b-kube-api-access-mlhkl podName:f8048efa-d411-4933-a828-7271729f9a7b nodeName:}" failed. No retries permitted until 2025-09-12 17:24:01.620633721 +0000 UTC m=+2.785611280 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-mlhkl" (UniqueName: "kubernetes.io/projected/f8048efa-d411-4933-a828-7271729f9a7b-kube-api-access-mlhkl") pod "kube-proxy-ggk2c" (UID: "f8048efa-d411-4933-a828-7271729f9a7b") : object "kube-system"/"kube-root-ca.crt" not registered Sep 12 17:24:01.128348 kubelet[3400]: I0912 17:24:01.128045 3400 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4426.1.0-a-dfa5c25729" Sep 12 17:24:01.134411 systemd[1]: Created slice kubepods-besteffort-podf8048efa_d411_4933_a828_7271729f9a7b.slice - libcontainer container kubepods-besteffort-podf8048efa_d411_4933_a828_7271729f9a7b.slice. Sep 12 17:24:01.142529 kubelet[3400]: I0912 17:24:01.142500 3400 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 12 17:24:01.142605 kubelet[3400]: E0912 17:24:01.142547 3400 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4426.1.0-a-dfa5c25729\" already exists" pod="kube-system/kube-apiserver-ci-4426.1.0-a-dfa5c25729" Sep 12 17:24:01.147565 kubelet[3400]: I0912 17:24:01.147283 3400 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4426.1.0-a-dfa5c25729" podStartSLOduration=10.147264635 podStartE2EDuration="10.147264635s" podCreationTimestamp="2025-09-12 17:23:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:24:01.146695943 +0000 UTC m=+2.311673510" watchObservedRunningTime="2025-09-12 17:24:01.147264635 +0000 UTC m=+2.312242194" Sep 12 17:24:01.151563 systemd[1]: Created slice kubepods-burstable-podc7aba473_7601_4ebc_83c4_74c14847ce3a.slice - libcontainer container kubepods-burstable-podc7aba473_7601_4ebc_83c4_74c14847ce3a.slice. Sep 12 17:24:01.161294 systemd[1]: Created slice kubepods-besteffort-poda086ac3d_35ef_408f_8541_248f710d0583.slice - libcontainer container kubepods-besteffort-poda086ac3d_35ef_408f_8541_248f710d0583.slice. 
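The MountVolume.SetUp failures above are retried with an increasing delay: durationBeforeRetry is 500ms after the first failure and 1s after the next, and the retries stop once the kube-proxy ConfigMap and kube-root-ca.crt objects become visible after the node sync. A sketch of that doubling pattern; the cap is an assumption for illustration, not a value taken from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initialDelay = 500 * time.Millisecond // first durationBeforeRetry seen in the log
		maxDelay     = 2 * time.Minute        // illustrative cap, not from the log
	)
	delay := initialDelay
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: retry in %v\n", attempt, delay) // 500ms, 1s, 2s, ...
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```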
Sep 12 17:24:01.162926 kubelet[3400]: I0912 17:24:01.162791 3400 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4426.1.0-a-dfa5c25729" podStartSLOduration=10.162781306 podStartE2EDuration="10.162781306s" podCreationTimestamp="2025-09-12 17:23:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:24:01.162185261 +0000 UTC m=+2.327162820" watchObservedRunningTime="2025-09-12 17:24:01.162781306 +0000 UTC m=+2.327758865" Sep 12 17:24:01.182875 kubelet[3400]: I0912 17:24:01.182751 3400 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4426.1.0-a-dfa5c25729" podStartSLOduration=10.182636846 podStartE2EDuration="10.182636846s" podCreationTimestamp="2025-09-12 17:23:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:24:01.173435344 +0000 UTC m=+2.338412903" watchObservedRunningTime="2025-09-12 17:24:01.182636846 +0000 UTC m=+2.347614405" Sep 12 17:24:01.196223 kubelet[3400]: I0912 17:24:01.196188 3400 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:24:01.225062 kubelet[3400]: I0912 17:24:01.224786 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-bpf-maps\") pod \"cilium-5hp67\" (UID: \"c7aba473-7601-4ebc-83c4-74c14847ce3a\") " pod="kube-system/cilium-5hp67" Sep 12 17:24:01.225062 kubelet[3400]: I0912 17:24:01.224810 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-host-proc-sys-kernel\") pod \"cilium-5hp67\" (UID: \"c7aba473-7601-4ebc-83c4-74c14847ce3a\") " pod="kube-system/cilium-5hp67" Sep 12 17:24:01.225062 kubelet[3400]: I0912 17:24:01.224822 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c7aba473-7601-4ebc-83c4-74c14847ce3a-hubble-tls\") pod \"cilium-5hp67\" (UID: \"c7aba473-7601-4ebc-83c4-74c14847ce3a\") " pod="kube-system/cilium-5hp67" Sep 12 17:24:01.225062 kubelet[3400]: I0912 17:24:01.224832 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrjqm\" (UniqueName: \"kubernetes.io/projected/c7aba473-7601-4ebc-83c4-74c14847ce3a-kube-api-access-nrjqm\") pod \"cilium-5hp67\" (UID: \"c7aba473-7601-4ebc-83c4-74c14847ce3a\") " pod="kube-system/cilium-5hp67" Sep 12 17:24:01.225062 kubelet[3400]: I0912 17:24:01.224846 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a086ac3d-35ef-408f-8541-248f710d0583-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-jxjn4\" (UID: \"a086ac3d-35ef-408f-8541-248f710d0583\") " pod="kube-system/cilium-operator-6c4d7847fc-jxjn4" Sep 12 17:24:01.225228 kubelet[3400]: I0912 17:24:01.224857 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7przg\" (UniqueName: \"kubernetes.io/projected/a086ac3d-35ef-408f-8541-248f710d0583-kube-api-access-7przg\") pod 
\"cilium-operator-6c4d7847fc-jxjn4\" (UID: \"a086ac3d-35ef-408f-8541-248f710d0583\") " pod="kube-system/cilium-operator-6c4d7847fc-jxjn4" Sep 12 17:24:01.225228 kubelet[3400]: I0912 17:24:01.224866 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-cilium-run\") pod \"cilium-5hp67\" (UID: \"c7aba473-7601-4ebc-83c4-74c14847ce3a\") " pod="kube-system/cilium-5hp67" Sep 12 17:24:01.225228 kubelet[3400]: I0912 17:24:01.224875 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-hostproc\") pod \"cilium-5hp67\" (UID: \"c7aba473-7601-4ebc-83c4-74c14847ce3a\") " pod="kube-system/cilium-5hp67" Sep 12 17:24:01.225228 kubelet[3400]: I0912 17:24:01.224885 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-lib-modules\") pod \"cilium-5hp67\" (UID: \"c7aba473-7601-4ebc-83c4-74c14847ce3a\") " pod="kube-system/cilium-5hp67" Sep 12 17:24:01.225228 kubelet[3400]: I0912 17:24:01.224894 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-cni-path\") pod \"cilium-5hp67\" (UID: \"c7aba473-7601-4ebc-83c4-74c14847ce3a\") " pod="kube-system/cilium-5hp67" Sep 12 17:24:01.225228 kubelet[3400]: I0912 17:24:01.224902 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-xtables-lock\") pod \"cilium-5hp67\" (UID: \"c7aba473-7601-4ebc-83c4-74c14847ce3a\") " pod="kube-system/cilium-5hp67" Sep 12 17:24:01.225327 kubelet[3400]: I0912 17:24:01.224917 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c7aba473-7601-4ebc-83c4-74c14847ce3a-clustermesh-secrets\") pod \"cilium-5hp67\" (UID: \"c7aba473-7601-4ebc-83c4-74c14847ce3a\") " pod="kube-system/cilium-5hp67" Sep 12 17:24:01.225327 kubelet[3400]: I0912 17:24:01.224933 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-host-proc-sys-net\") pod \"cilium-5hp67\" (UID: \"c7aba473-7601-4ebc-83c4-74c14847ce3a\") " pod="kube-system/cilium-5hp67" Sep 12 17:24:01.225327 kubelet[3400]: I0912 17:24:01.224947 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-etc-cni-netd\") pod \"cilium-5hp67\" (UID: \"c7aba473-7601-4ebc-83c4-74c14847ce3a\") " pod="kube-system/cilium-5hp67" Sep 12 17:24:01.225327 kubelet[3400]: I0912 17:24:01.224956 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-cilium-cgroup\") pod \"cilium-5hp67\" (UID: \"c7aba473-7601-4ebc-83c4-74c14847ce3a\") " pod="kube-system/cilium-5hp67" Sep 12 17:24:01.225327 kubelet[3400]: I0912 17:24:01.224968 3400 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c7aba473-7601-4ebc-83c4-74c14847ce3a-cilium-config-path\") pod \"cilium-5hp67\" (UID: \"c7aba473-7601-4ebc-83c4-74c14847ce3a\") " pod="kube-system/cilium-5hp67" Sep 12 17:24:01.456830 containerd[1826]: time="2025-09-12T17:24:01.456717318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5hp67,Uid:c7aba473-7601-4ebc-83c4-74c14847ce3a,Namespace:kube-system,Attempt:0,}" Sep 12 17:24:01.464851 containerd[1826]: time="2025-09-12T17:24:01.464822373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jxjn4,Uid:a086ac3d-35ef-408f-8541-248f710d0583,Namespace:kube-system,Attempt:0,}" Sep 12 17:24:01.748616 containerd[1826]: time="2025-09-12T17:24:01.748523294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ggk2c,Uid:f8048efa-d411-4933-a828-7271729f9a7b,Namespace:kube-system,Attempt:0,}" Sep 12 17:24:02.322606 containerd[1826]: time="2025-09-12T17:24:02.322561773Z" level=info msg="connecting to shim 3340c7258545ac24a81fbd22dc156ce9dda054e33fe3d00b03e133102c768099" address="unix:///run/containerd/s/84e7b3e2c88f4669b3b99e29e6974689bed8cd4a782aa0f5e2edc9df6a160ced" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:24:02.383891 systemd[1]: Started cri-containerd-3340c7258545ac24a81fbd22dc156ce9dda054e33fe3d00b03e133102c768099.scope - libcontainer container 3340c7258545ac24a81fbd22dc156ce9dda054e33fe3d00b03e133102c768099. Sep 12 17:24:02.419116 containerd[1826]: time="2025-09-12T17:24:02.419042063Z" level=info msg="connecting to shim b454cc1e4948998445e831d06c190c75da1e6560bad602181024917f7ae92053" address="unix:///run/containerd/s/5be95c26c644234fdeaa9f7c76bb55d1a48d644c505d895525495b2c189f84bb" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:24:02.436896 systemd[1]: Started cri-containerd-b454cc1e4948998445e831d06c190c75da1e6560bad602181024917f7ae92053.scope - libcontainer container b454cc1e4948998445e831d06c190c75da1e6560bad602181024917f7ae92053. Sep 12 17:24:02.468005 containerd[1826]: time="2025-09-12T17:24:02.467788690Z" level=info msg="connecting to shim 1c277592bb85ad8bc3b98002a01cbbe844e46ad3e2a663588ffed84d6772d39a" address="unix:///run/containerd/s/51b1b3c32cb9c45f35d7dcaf3548d9e2104b8970dd7bced615789da83e9f182e" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:24:02.486878 systemd[1]: Started cri-containerd-1c277592bb85ad8bc3b98002a01cbbe844e46ad3e2a663588ffed84d6772d39a.scope - libcontainer container 1c277592bb85ad8bc3b98002a01cbbe844e46ad3e2a663588ffed84d6772d39a. 
Sep 12 17:24:02.508149 containerd[1826]: time="2025-09-12T17:24:02.508080095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5hp67,Uid:c7aba473-7601-4ebc-83c4-74c14847ce3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3340c7258545ac24a81fbd22dc156ce9dda054e33fe3d00b03e133102c768099\"" Sep 12 17:24:02.512079 containerd[1826]: time="2025-09-12T17:24:02.511565050Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 12 17:24:02.558536 containerd[1826]: time="2025-09-12T17:24:02.558502231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jxjn4,Uid:a086ac3d-35ef-408f-8541-248f710d0583,Namespace:kube-system,Attempt:0,} returns sandbox id \"b454cc1e4948998445e831d06c190c75da1e6560bad602181024917f7ae92053\"" Sep 12 17:24:02.604150 containerd[1826]: time="2025-09-12T17:24:02.604036469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ggk2c,Uid:f8048efa-d411-4933-a828-7271729f9a7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c277592bb85ad8bc3b98002a01cbbe844e46ad3e2a663588ffed84d6772d39a\"" Sep 12 17:24:02.668558 containerd[1826]: time="2025-09-12T17:24:02.668329960Z" level=info msg="CreateContainer within sandbox \"1c277592bb85ad8bc3b98002a01cbbe844e46ad3e2a663588ffed84d6772d39a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:24:02.903252 containerd[1826]: time="2025-09-12T17:24:02.903164730Z" level=info msg="Container e1d01e1f1ff264d45ee3f959b878749d0f59d2769ac3b12b8a231343d5556405: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:24:03.062304 containerd[1826]: time="2025-09-12T17:24:03.062266547Z" level=info msg="CreateContainer within sandbox \"1c277592bb85ad8bc3b98002a01cbbe844e46ad3e2a663588ffed84d6772d39a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e1d01e1f1ff264d45ee3f959b878749d0f59d2769ac3b12b8a231343d5556405\"" Sep 12 17:24:03.063003 containerd[1826]: time="2025-09-12T17:24:03.062982266Z" level=info msg="StartContainer for \"e1d01e1f1ff264d45ee3f959b878749d0f59d2769ac3b12b8a231343d5556405\"" Sep 12 17:24:03.064060 containerd[1826]: time="2025-09-12T17:24:03.064012456Z" level=info msg="connecting to shim e1d01e1f1ff264d45ee3f959b878749d0f59d2769ac3b12b8a231343d5556405" address="unix:///run/containerd/s/51b1b3c32cb9c45f35d7dcaf3548d9e2104b8970dd7bced615789da83e9f182e" protocol=ttrpc version=3 Sep 12 17:24:03.079879 systemd[1]: Started cri-containerd-e1d01e1f1ff264d45ee3f959b878749d0f59d2769ac3b12b8a231343d5556405.scope - libcontainer container e1d01e1f1ff264d45ee3f959b878749d0f59d2769ac3b12b8a231343d5556405. 
Sep 12 17:24:03.170484 containerd[1826]: time="2025-09-12T17:24:03.170392047Z" level=info msg="StartContainer for \"e1d01e1f1ff264d45ee3f959b878749d0f59d2769ac3b12b8a231343d5556405\" returns successfully" Sep 12 17:24:07.842195 kubelet[3400]: I0912 17:24:07.842107 3400 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ggk2c" podStartSLOduration=9.842093282 podStartE2EDuration="9.842093282s" podCreationTimestamp="2025-09-12 17:23:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:24:03.959993169 +0000 UTC m=+5.124970736" watchObservedRunningTime="2025-09-12 17:24:07.842093282 +0000 UTC m=+9.007070841" Sep 12 17:24:08.963026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount6411000.mount: Deactivated successfully. Sep 12 17:24:10.315301 containerd[1826]: time="2025-09-12T17:24:10.315253955Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:24:11.054862 containerd[1826]: time="2025-09-12T17:24:11.054802680Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 12 17:24:11.062792 containerd[1826]: time="2025-09-12T17:24:11.062733672Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:24:11.063762 containerd[1826]: time="2025-09-12T17:24:11.063678428Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.552086497s" Sep 12 17:24:11.063762 containerd[1826]: time="2025-09-12T17:24:11.063704301Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 12 17:24:11.065243 containerd[1826]: time="2025-09-12T17:24:11.065195133Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 12 17:24:11.072783 containerd[1826]: time="2025-09-12T17:24:11.072695644Z" level=info msg="CreateContainer within sandbox \"3340c7258545ac24a81fbd22dc156ce9dda054e33fe3d00b03e133102c768099\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 17:24:11.095781 containerd[1826]: time="2025-09-12T17:24:11.095742341Z" level=info msg="Container 9b6c4d51987cc36309e6a43ee8c709adadc4301a96cc6312ca673ea146ba6a18: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:24:11.114102 containerd[1826]: time="2025-09-12T17:24:11.114065106Z" level=info msg="CreateContainer within sandbox \"3340c7258545ac24a81fbd22dc156ce9dda054e33fe3d00b03e133102c768099\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9b6c4d51987cc36309e6a43ee8c709adadc4301a96cc6312ca673ea146ba6a18\"" Sep 12 17:24:11.114680 containerd[1826]: time="2025-09-12T17:24:11.114653039Z" 
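The cilium agent image above is referenced by tag plus digest (quay.io/cilium/cilium:v1.12.5@sha256:…), and once pulled the log records an empty repo tag with only the repo digest retained. A string-handling sketch of splitting such a reference; real clients use a proper reference parser, and this naive split assumes the registry host carries no port:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Image reference exactly as it appears in the PullImage entries above.
	ref := "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"

	name, digest, _ := strings.Cut(ref, "@") // digest pins the content
	repo, tag, _ := strings.Cut(name, ":")   // tag is informational once pulled by digest

	fmt.Println("repository:", repo)
	fmt.Println("tag:       ", tag)
	fmt.Println("digest:    ", digest)
}
```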
level=info msg="StartContainer for \"9b6c4d51987cc36309e6a43ee8c709adadc4301a96cc6312ca673ea146ba6a18\"" Sep 12 17:24:11.115316 containerd[1826]: time="2025-09-12T17:24:11.115293708Z" level=info msg="connecting to shim 9b6c4d51987cc36309e6a43ee8c709adadc4301a96cc6312ca673ea146ba6a18" address="unix:///run/containerd/s/84e7b3e2c88f4669b3b99e29e6974689bed8cd4a782aa0f5e2edc9df6a160ced" protocol=ttrpc version=3 Sep 12 17:24:11.130879 systemd[1]: Started cri-containerd-9b6c4d51987cc36309e6a43ee8c709adadc4301a96cc6312ca673ea146ba6a18.scope - libcontainer container 9b6c4d51987cc36309e6a43ee8c709adadc4301a96cc6312ca673ea146ba6a18. Sep 12 17:24:11.154285 containerd[1826]: time="2025-09-12T17:24:11.154258967Z" level=info msg="StartContainer for \"9b6c4d51987cc36309e6a43ee8c709adadc4301a96cc6312ca673ea146ba6a18\" returns successfully" Sep 12 17:24:11.159324 systemd[1]: cri-containerd-9b6c4d51987cc36309e6a43ee8c709adadc4301a96cc6312ca673ea146ba6a18.scope: Deactivated successfully. Sep 12 17:24:11.162743 containerd[1826]: time="2025-09-12T17:24:11.162716363Z" level=info msg="received exit event container_id:\"9b6c4d51987cc36309e6a43ee8c709adadc4301a96cc6312ca673ea146ba6a18\" id:\"9b6c4d51987cc36309e6a43ee8c709adadc4301a96cc6312ca673ea146ba6a18\" pid:3801 exited_at:{seconds:1757697851 nanos:162287002}" Sep 12 17:24:11.162839 containerd[1826]: time="2025-09-12T17:24:11.162820629Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9b6c4d51987cc36309e6a43ee8c709adadc4301a96cc6312ca673ea146ba6a18\" id:\"9b6c4d51987cc36309e6a43ee8c709adadc4301a96cc6312ca673ea146ba6a18\" pid:3801 exited_at:{seconds:1757697851 nanos:162287002}" Sep 12 17:24:11.177518 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b6c4d51987cc36309e6a43ee8c709adadc4301a96cc6312ca673ea146ba6a18-rootfs.mount: Deactivated successfully. Sep 12 17:24:12.966186 containerd[1826]: time="2025-09-12T17:24:12.966148281Z" level=info msg="CreateContainer within sandbox \"3340c7258545ac24a81fbd22dc156ce9dda054e33fe3d00b03e133102c768099\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 17:24:12.988866 containerd[1826]: time="2025-09-12T17:24:12.988465395Z" level=info msg="Container 744518f0a06910f1b94ba008166601d0ee6a4ddc32f4e41640e5df895c31abb7: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:24:13.009356 containerd[1826]: time="2025-09-12T17:24:13.008549797Z" level=info msg="CreateContainer within sandbox \"3340c7258545ac24a81fbd22dc156ce9dda054e33fe3d00b03e133102c768099\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"744518f0a06910f1b94ba008166601d0ee6a4ddc32f4e41640e5df895c31abb7\"" Sep 12 17:24:13.009912 containerd[1826]: time="2025-09-12T17:24:13.009887978Z" level=info msg="StartContainer for \"744518f0a06910f1b94ba008166601d0ee6a4ddc32f4e41640e5df895c31abb7\"" Sep 12 17:24:13.010970 containerd[1826]: time="2025-09-12T17:24:13.010947264Z" level=info msg="connecting to shim 744518f0a06910f1b94ba008166601d0ee6a4ddc32f4e41640e5df895c31abb7" address="unix:///run/containerd/s/84e7b3e2c88f4669b3b99e29e6974689bed8cd4a782aa0f5e2edc9df6a160ced" protocol=ttrpc version=3 Sep 12 17:24:13.028871 systemd[1]: Started cri-containerd-744518f0a06910f1b94ba008166601d0ee6a4ddc32f4e41640e5df895c31abb7.scope - libcontainer container 744518f0a06910f1b94ba008166601d0ee6a4ddc32f4e41640e5df895c31abb7. 
Sep 12 17:24:13.069475 containerd[1826]: time="2025-09-12T17:24:13.069415090Z" level=info msg="StartContainer for \"744518f0a06910f1b94ba008166601d0ee6a4ddc32f4e41640e5df895c31abb7\" returns successfully" Sep 12 17:24:13.072073 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:24:13.072429 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:24:13.072682 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:24:13.074939 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:24:13.076874 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 12 17:24:13.078894 systemd[1]: cri-containerd-744518f0a06910f1b94ba008166601d0ee6a4ddc32f4e41640e5df895c31abb7.scope: Deactivated successfully. Sep 12 17:24:13.079504 containerd[1826]: time="2025-09-12T17:24:13.079478383Z" level=info msg="TaskExit event in podsandbox handler container_id:\"744518f0a06910f1b94ba008166601d0ee6a4ddc32f4e41640e5df895c31abb7\" id:\"744518f0a06910f1b94ba008166601d0ee6a4ddc32f4e41640e5df895c31abb7\" pid:3843 exited_at:{seconds:1757697853 nanos:79048854}" Sep 12 17:24:13.079639 containerd[1826]: time="2025-09-12T17:24:13.079620546Z" level=info msg="received exit event container_id:\"744518f0a06910f1b94ba008166601d0ee6a4ddc32f4e41640e5df895c31abb7\" id:\"744518f0a06910f1b94ba008166601d0ee6a4ddc32f4e41640e5df895c31abb7\" pid:3843 exited_at:{seconds:1757697853 nanos:79048854}" Sep 12 17:24:13.092975 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:24:13.988049 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-744518f0a06910f1b94ba008166601d0ee6a4ddc32f4e41640e5df895c31abb7-rootfs.mount: Deactivated successfully. Sep 12 17:24:14.973362 containerd[1826]: time="2025-09-12T17:24:14.973260548Z" level=info msg="CreateContainer within sandbox \"3340c7258545ac24a81fbd22dc156ce9dda054e33fe3d00b03e133102c768099\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 17:24:15.816950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2893798248.mount: Deactivated successfully. Sep 12 17:24:15.958790 containerd[1826]: time="2025-09-12T17:24:15.958230805Z" level=info msg="Container df51837c16de93e41a2f38258481fb3d124e1482e0f42ed9aa742a988fceeda9: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:24:15.960339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2105438940.mount: Deactivated successfully. 
Sep 12 17:24:16.710880 containerd[1826]: time="2025-09-12T17:24:16.710837674Z" level=info msg="CreateContainer within sandbox \"3340c7258545ac24a81fbd22dc156ce9dda054e33fe3d00b03e133102c768099\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"df51837c16de93e41a2f38258481fb3d124e1482e0f42ed9aa742a988fceeda9\"" Sep 12 17:24:16.711599 containerd[1826]: time="2025-09-12T17:24:16.711570889Z" level=info msg="StartContainer for \"df51837c16de93e41a2f38258481fb3d124e1482e0f42ed9aa742a988fceeda9\"" Sep 12 17:24:16.712705 containerd[1826]: time="2025-09-12T17:24:16.712681577Z" level=info msg="connecting to shim df51837c16de93e41a2f38258481fb3d124e1482e0f42ed9aa742a988fceeda9" address="unix:///run/containerd/s/84e7b3e2c88f4669b3b99e29e6974689bed8cd4a782aa0f5e2edc9df6a160ced" protocol=ttrpc version=3 Sep 12 17:24:16.727889 systemd[1]: Started cri-containerd-df51837c16de93e41a2f38258481fb3d124e1482e0f42ed9aa742a988fceeda9.scope - libcontainer container df51837c16de93e41a2f38258481fb3d124e1482e0f42ed9aa742a988fceeda9. Sep 12 17:24:16.750129 systemd[1]: cri-containerd-df51837c16de93e41a2f38258481fb3d124e1482e0f42ed9aa742a988fceeda9.scope: Deactivated successfully. Sep 12 17:24:16.751464 containerd[1826]: time="2025-09-12T17:24:16.751438093Z" level=info msg="TaskExit event in podsandbox handler container_id:\"df51837c16de93e41a2f38258481fb3d124e1482e0f42ed9aa742a988fceeda9\" id:\"df51837c16de93e41a2f38258481fb3d124e1482e0f42ed9aa742a988fceeda9\" pid:3893 exited_at:{seconds:1757697856 nanos:751235617}" Sep 12 17:24:16.913022 containerd[1826]: time="2025-09-12T17:24:16.912987898Z" level=info msg="received exit event container_id:\"df51837c16de93e41a2f38258481fb3d124e1482e0f42ed9aa742a988fceeda9\" id:\"df51837c16de93e41a2f38258481fb3d124e1482e0f42ed9aa742a988fceeda9\" pid:3893 exited_at:{seconds:1757697856 nanos:751235617}" Sep 12 17:24:16.918302 containerd[1826]: time="2025-09-12T17:24:16.918282203Z" level=info msg="StartContainer for \"df51837c16de93e41a2f38258481fb3d124e1482e0f42ed9aa742a988fceeda9\" returns successfully" Sep 12 17:24:16.929281 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df51837c16de93e41a2f38258481fb3d124e1482e0f42ed9aa742a988fceeda9-rootfs.mount: Deactivated successfully. 
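The cilium-5hp67 pod is working through its chain of one-shot init containers (mount-cgroup, apply-sysctl-overwrites, now mount-bpf-fs), which is why each StartContainer above is followed almost immediately by a scope deactivation and a TaskExit event. Conventionally, a container named mount-bpf-fs mounts the BPF filesystem at /sys/fs/bpf; the Linux-only sketch below illustrates that single step as an assumption about its purpose, not as cilium's actual implementation:

```go
//go:build linux

package main

import (
	"fmt"
	"syscall"
)

func main() {
	// Mount a BPF filesystem at the conventional path. Requires CAP_SYS_ADMIN;
	// fails with EPERM when run unprivileged.
	if err := syscall.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
		fmt.Println("mount failed:", err)
		return
	}
	fmt.Println("bpffs mounted at /sys/fs/bpf")
}
```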
Sep 12 17:24:19.108033 containerd[1826]: time="2025-09-12T17:24:19.107990718Z" level=info msg="CreateContainer within sandbox \"3340c7258545ac24a81fbd22dc156ce9dda054e33fe3d00b03e133102c768099\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 17:24:19.311327 containerd[1826]: time="2025-09-12T17:24:19.311287519Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:24:19.406211 containerd[1826]: time="2025-09-12T17:24:19.406122321Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 12 17:24:19.454495 containerd[1826]: time="2025-09-12T17:24:19.454435650Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:24:19.518520 containerd[1826]: time="2025-09-12T17:24:19.518486442Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 8.453260749s" Sep 12 17:24:19.518520 containerd[1826]: time="2025-09-12T17:24:19.518522203Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 12 17:24:19.524880 containerd[1826]: time="2025-09-12T17:24:19.524844786Z" level=info msg="Container 019b5a70974ea1a5f6491702944b2c6a1383116421d06bd9826e121cf784b4fc: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:24:19.563406 containerd[1826]: time="2025-09-12T17:24:19.563350177Z" level=info msg="CreateContainer within sandbox \"b454cc1e4948998445e831d06c190c75da1e6560bad602181024917f7ae92053\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 12 17:24:19.753547 containerd[1826]: time="2025-09-12T17:24:19.753484120Z" level=info msg="CreateContainer within sandbox \"3340c7258545ac24a81fbd22dc156ce9dda054e33fe3d00b03e133102c768099\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"019b5a70974ea1a5f6491702944b2c6a1383116421d06bd9826e121cf784b4fc\"" Sep 12 17:24:19.755175 containerd[1826]: time="2025-09-12T17:24:19.754855765Z" level=info msg="StartContainer for \"019b5a70974ea1a5f6491702944b2c6a1383116421d06bd9826e121cf784b4fc\"" Sep 12 17:24:19.755562 containerd[1826]: time="2025-09-12T17:24:19.755539388Z" level=info msg="connecting to shim 019b5a70974ea1a5f6491702944b2c6a1383116421d06bd9826e121cf784b4fc" address="unix:///run/containerd/s/84e7b3e2c88f4669b3b99e29e6974689bed8cd4a782aa0f5e2edc9df6a160ced" protocol=ttrpc version=3 Sep 12 17:24:19.778892 systemd[1]: Started cri-containerd-019b5a70974ea1a5f6491702944b2c6a1383116421d06bd9826e121cf784b4fc.scope - libcontainer container 019b5a70974ea1a5f6491702944b2c6a1383116421d06bd9826e121cf784b4fc. Sep 12 17:24:19.795002 systemd[1]: cri-containerd-019b5a70974ea1a5f6491702944b2c6a1383116421d06bd9826e121cf784b4fc.scope: Deactivated successfully. 
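The two image pulls recorded so far give an easy effective-throughput comparison: the operator-generic pull above reads 17,135,306 bytes in about 8.45s, while the earlier cilium agent pull read 157,646,710 bytes in about 8.55s. A tiny sketch of that arithmetic using only the figures from the log (effective rate, since other work overlaps the pulls):

```go
package main

import "fmt"

func main() {
	pulls := []struct {
		name    string
		bytes   float64 // "bytes read" from the log
		seconds float64 // pull duration reported by containerd
	}{
		{"quay.io/cilium/cilium:v1.12.5", 157646710, 8.552086497},
		{"quay.io/cilium/operator-generic:v1.12.5", 17135306, 8.453260749},
	}
	for _, p := range pulls {
		mibps := p.bytes / p.seconds / (1 << 20)
		fmt.Printf("%-42s %.1f MiB/s\n", p.name, mibps)
	}
}
```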
Sep 12 17:24:19.797579 containerd[1826]: time="2025-09-12T17:24:19.797556710Z" level=info msg="TaskExit event in podsandbox handler container_id:\"019b5a70974ea1a5f6491702944b2c6a1383116421d06bd9826e121cf784b4fc\" id:\"019b5a70974ea1a5f6491702944b2c6a1383116421d06bd9826e121cf784b4fc\" pid:3946 exited_at:{seconds:1757697859 nanos:797372866}" Sep 12 17:24:19.863670 containerd[1826]: time="2025-09-12T17:24:19.863564192Z" level=info msg="received exit event container_id:\"019b5a70974ea1a5f6491702944b2c6a1383116421d06bd9826e121cf784b4fc\" id:\"019b5a70974ea1a5f6491702944b2c6a1383116421d06bd9826e121cf784b4fc\" pid:3946 exited_at:{seconds:1757697859 nanos:797372866}" Sep 12 17:24:19.873209 containerd[1826]: time="2025-09-12T17:24:19.873142413Z" level=info msg="StartContainer for \"019b5a70974ea1a5f6491702944b2c6a1383116421d06bd9826e121cf784b4fc\" returns successfully" Sep 12 17:24:19.881460 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-019b5a70974ea1a5f6491702944b2c6a1383116421d06bd9826e121cf784b4fc-rootfs.mount: Deactivated successfully. Sep 12 17:24:22.651553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4104191954.mount: Deactivated successfully. Sep 12 17:24:22.653784 containerd[1826]: time="2025-09-12T17:24:22.652313274Z" level=info msg="Container dbd1bb050aae9b404848ad2c0215f12734cce96ccca29cb3d2a878f8527bdf30: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:24:23.515790 containerd[1826]: time="2025-09-12T17:24:23.515723057Z" level=info msg="CreateContainer within sandbox \"b454cc1e4948998445e831d06c190c75da1e6560bad602181024917f7ae92053\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"dbd1bb050aae9b404848ad2c0215f12734cce96ccca29cb3d2a878f8527bdf30\"" Sep 12 17:24:23.516363 containerd[1826]: time="2025-09-12T17:24:23.516251684Z" level=info msg="StartContainer for \"dbd1bb050aae9b404848ad2c0215f12734cce96ccca29cb3d2a878f8527bdf30\"" Sep 12 17:24:23.518050 containerd[1826]: time="2025-09-12T17:24:23.518025946Z" level=info msg="connecting to shim dbd1bb050aae9b404848ad2c0215f12734cce96ccca29cb3d2a878f8527bdf30" address="unix:///run/containerd/s/5be95c26c644234fdeaa9f7c76bb55d1a48d644c505d895525495b2c189f84bb" protocol=ttrpc version=3 Sep 12 17:24:23.538878 systemd[1]: Started cri-containerd-dbd1bb050aae9b404848ad2c0215f12734cce96ccca29cb3d2a878f8527bdf30.scope - libcontainer container dbd1bb050aae9b404848ad2c0215f12734cce96ccca29cb3d2a878f8527bdf30. 
Sep 12 17:24:23.562960 containerd[1826]: time="2025-09-12T17:24:23.562936627Z" level=info msg="StartContainer for \"dbd1bb050aae9b404848ad2c0215f12734cce96ccca29cb3d2a878f8527bdf30\" returns successfully" Sep 12 17:24:23.998357 containerd[1826]: time="2025-09-12T17:24:23.997671727Z" level=info msg="CreateContainer within sandbox \"3340c7258545ac24a81fbd22dc156ce9dda054e33fe3d00b03e133102c768099\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 17:24:24.035144 kubelet[3400]: I0912 17:24:24.035096 3400 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-jxjn4" podStartSLOduration=8.073442995 podStartE2EDuration="25.035084162s" podCreationTimestamp="2025-09-12 17:23:59 +0000 UTC" firstStartedPulling="2025-09-12 17:24:02.559430835 +0000 UTC m=+3.724408394" lastFinishedPulling="2025-09-12 17:24:19.521072002 +0000 UTC m=+20.686049561" observedRunningTime="2025-09-12 17:24:24.035064586 +0000 UTC m=+25.200042153" watchObservedRunningTime="2025-09-12 17:24:24.035084162 +0000 UTC m=+25.200061721" Sep 12 17:24:24.173410 containerd[1826]: time="2025-09-12T17:24:24.172894729Z" level=info msg="Container ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:24:24.175011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount939166316.mount: Deactivated successfully. Sep 12 17:24:24.317839 containerd[1826]: time="2025-09-12T17:24:24.317743651Z" level=info msg="CreateContainer within sandbox \"3340c7258545ac24a81fbd22dc156ce9dda054e33fe3d00b03e133102c768099\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367\"" Sep 12 17:24:24.318991 containerd[1826]: time="2025-09-12T17:24:24.318363640Z" level=info msg="StartContainer for \"ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367\"" Sep 12 17:24:24.320508 containerd[1826]: time="2025-09-12T17:24:24.320399059Z" level=info msg="connecting to shim ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367" address="unix:///run/containerd/s/84e7b3e2c88f4669b3b99e29e6974689bed8cd4a782aa0f5e2edc9df6a160ced" protocol=ttrpc version=3 Sep 12 17:24:24.338884 systemd[1]: Started cri-containerd-ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367.scope - libcontainer container ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367. Sep 12 17:24:24.367585 containerd[1826]: time="2025-09-12T17:24:24.367560780Z" level=info msg="StartContainer for \"ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367\" returns successfully" Sep 12 17:24:24.428233 containerd[1826]: time="2025-09-12T17:24:24.428202265Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367\" id:\"2a211171079e2afbc1b006ac0245196f1f59e48ff4420c65b6613b51b54b0355\" pid:4045 exited_at:{seconds:1757697864 nanos:427871394}" Sep 12 17:24:24.452507 kubelet[3400]: I0912 17:24:24.452485 3400 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 17:24:24.493796 systemd[1]: Created slice kubepods-burstable-pod65303b69_d784_440e_8edb_614297835fb6.slice - libcontainer container kubepods-burstable-pod65303b69_d784_440e_8edb_614297835fb6.slice. 
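The pod_startup_latency_tracker entry for cilium-operator just below is a worked example of how the SLO metric relates to the raw timestamps: the logged numbers are consistent with podStartSLOduration = podStartE2EDuration − (lastFinishedPulling − firstStartedPulling), i.e. roughly 25.035s end-to-end minus a ~16.96s image-pull window leaves the ~8.073s SLO figure. The same arithmetic with the logged timestamps:

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the cilium-operator pod_startup_latency_tracker entry.
	created := mustParse("2025-09-12 17:23:59 +0000 UTC")
	running := mustParse("2025-09-12 17:24:24.035064586 +0000 UTC")
	pullStart := mustParse("2025-09-12 17:24:02.559430835 +0000 UTC")
	pullEnd := mustParse("2025-09-12 17:24:19.521072002 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - pullEnd.Sub(pullStart)
	fmt.Println("podStartE2EDuration:", e2e) // ~25.035s, as logged
	fmt.Println("podStartSLOduration:", slo) // ~8.073s, as logged
}
```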
Sep 12 17:24:24.501185 systemd[1]: Created slice kubepods-burstable-podb8ddf37f_2789_4f0c_9ca3_f02542ba523e.slice - libcontainer container kubepods-burstable-podb8ddf37f_2789_4f0c_9ca3_f02542ba523e.slice. Sep 12 17:24:24.585590 kubelet[3400]: I0912 17:24:24.585115 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8ddf37f-2789-4f0c-9ca3-f02542ba523e-config-volume\") pod \"coredns-674b8bbfcf-qgp29\" (UID: \"b8ddf37f-2789-4f0c-9ca3-f02542ba523e\") " pod="kube-system/coredns-674b8bbfcf-qgp29" Sep 12 17:24:24.585590 kubelet[3400]: I0912 17:24:24.585427 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpnng\" (UniqueName: \"kubernetes.io/projected/65303b69-d784-440e-8edb-614297835fb6-kube-api-access-hpnng\") pod \"coredns-674b8bbfcf-g6j8g\" (UID: \"65303b69-d784-440e-8edb-614297835fb6\") " pod="kube-system/coredns-674b8bbfcf-g6j8g" Sep 12 17:24:24.585590 kubelet[3400]: I0912 17:24:24.585454 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rstfg\" (UniqueName: \"kubernetes.io/projected/b8ddf37f-2789-4f0c-9ca3-f02542ba523e-kube-api-access-rstfg\") pod \"coredns-674b8bbfcf-qgp29\" (UID: \"b8ddf37f-2789-4f0c-9ca3-f02542ba523e\") " pod="kube-system/coredns-674b8bbfcf-qgp29" Sep 12 17:24:24.585878 kubelet[3400]: I0912 17:24:24.585728 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65303b69-d784-440e-8edb-614297835fb6-config-volume\") pod \"coredns-674b8bbfcf-g6j8g\" (UID: \"65303b69-d784-440e-8edb-614297835fb6\") " pod="kube-system/coredns-674b8bbfcf-g6j8g" Sep 12 17:24:24.797752 containerd[1826]: time="2025-09-12T17:24:24.797705919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-g6j8g,Uid:65303b69-d784-440e-8edb-614297835fb6,Namespace:kube-system,Attempt:0,}" Sep 12 17:24:24.804421 containerd[1826]: time="2025-09-12T17:24:24.804398748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qgp29,Uid:b8ddf37f-2789-4f0c-9ca3-f02542ba523e,Namespace:kube-system,Attempt:0,}" Sep 12 17:24:25.016147 kubelet[3400]: I0912 17:24:25.016095 3400 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5hp67" podStartSLOduration=17.462931262 podStartE2EDuration="26.016080214s" podCreationTimestamp="2025-09-12 17:23:59 +0000 UTC" firstStartedPulling="2025-09-12 17:24:02.511282564 +0000 UTC m=+3.676260123" lastFinishedPulling="2025-09-12 17:24:11.064431516 +0000 UTC m=+12.229409075" observedRunningTime="2025-09-12 17:24:25.015050144 +0000 UTC m=+26.180027711" watchObservedRunningTime="2025-09-12 17:24:25.016080214 +0000 UTC m=+26.181057773" Sep 12 17:24:26.007470 containerd[1826]: time="2025-09-12T17:24:26.007434074Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367\" id:\"14cebac3b254ff300d3751985b8b7e6296004c667a6b205556c5a1804006b0cd\" pid:4151 exit_status:1 exited_at:{seconds:1757697866 nanos:7079011}" Sep 12 17:24:27.438488 systemd-networkd[1566]: cilium_host: Link UP Sep 12 17:24:27.439358 systemd-networkd[1566]: cilium_net: Link UP Sep 12 17:24:27.439780 systemd-networkd[1566]: cilium_net: Gained carrier Sep 12 17:24:27.439873 systemd-networkd[1566]: cilium_host: Gained carrier Sep 12 
17:24:27.591315 systemd-networkd[1566]: cilium_vxlan: Link UP Sep 12 17:24:27.591470 systemd-networkd[1566]: cilium_vxlan: Gained carrier Sep 12 17:24:27.854788 kernel: NET: Registered PF_ALG protocol family Sep 12 17:24:27.989264 systemd-networkd[1566]: cilium_net: Gained IPv6LL Sep 12 17:24:28.084879 containerd[1826]: time="2025-09-12T17:24:28.084830782Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367\" id:\"af833194271d312e6891753293ec68d037668f0502481489e823bf579ccda918\" pid:4292 exit_status:1 exited_at:{seconds:1757697868 nanos:84204785}" Sep 12 17:24:28.307852 systemd-networkd[1566]: cilium_host: Gained IPv6LL Sep 12 17:24:28.456215 systemd-networkd[1566]: lxc_health: Link UP Sep 12 17:24:28.456357 systemd-networkd[1566]: lxc_health: Gained carrier Sep 12 17:24:28.836064 kernel: eth0: renamed from tmp12967 Sep 12 17:24:28.836794 systemd-networkd[1566]: lxc8b6c22827329: Link UP Sep 12 17:24:28.836990 systemd-networkd[1566]: lxc8b6c22827329: Gained carrier Sep 12 17:24:28.883894 systemd-networkd[1566]: cilium_vxlan: Gained IPv6LL Sep 12 17:24:28.932925 kernel: eth0: renamed from tmpae375 Sep 12 17:24:28.932691 systemd-networkd[1566]: lxc2c7b976b4518: Link UP Sep 12 17:24:28.932868 systemd-networkd[1566]: lxc2c7b976b4518: Gained carrier Sep 12 17:24:30.159497 containerd[1826]: time="2025-09-12T17:24:30.159451984Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367\" id:\"f38a67b6508030dfc032e45b7a9d7746f0d88649d73cf000b5429c89a6874b4d\" pid:4567 exited_at:{seconds:1757697870 nanos:159028231}" Sep 12 17:24:30.419896 systemd-networkd[1566]: lxc_health: Gained IPv6LL Sep 12 17:24:30.548873 systemd-networkd[1566]: lxc2c7b976b4518: Gained IPv6LL Sep 12 17:24:30.803931 systemd-networkd[1566]: lxc8b6c22827329: Gained IPv6LL Sep 12 17:24:31.378662 containerd[1826]: time="2025-09-12T17:24:31.378571890Z" level=info msg="connecting to shim ae3751d3989b573da69658b5156f78d32fc91bf4edcf00894aaa5953c529e728" address="unix:///run/containerd/s/f3e16a2b3deb82b62f63b7d1d02c8d6cdc8f34aa6478a5e4a22a6d1193b97f0e" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:24:31.393150 containerd[1826]: time="2025-09-12T17:24:31.393120559Z" level=info msg="connecting to shim 1296746adecb07a3a1794898870a4e6b207492c862635caf9f3265efc35f62f4" address="unix:///run/containerd/s/62b19321e3320b065c866c78760750f73b1a3504579a4533fd34c58f0fcd2f2f" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:24:31.408232 systemd[1]: Started cri-containerd-ae3751d3989b573da69658b5156f78d32fc91bf4edcf00894aaa5953c529e728.scope - libcontainer container ae3751d3989b573da69658b5156f78d32fc91bf4edcf00894aaa5953c529e728. Sep 12 17:24:31.413716 systemd[1]: Started cri-containerd-1296746adecb07a3a1794898870a4e6b207492c862635caf9f3265efc35f62f4.scope - libcontainer container 1296746adecb07a3a1794898870a4e6b207492c862635caf9f3265efc35f62f4. 
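The "connecting to shim" entries above give each shim endpoint as a unix:// address under /run/containerd/s/ with protocol=ttrpc. A small sketch, assuming only that the address names an ordinary Unix domain socket as the scheme indicates, of stripping the scheme and dialing it; the task API spoken over the socket is containerd's ttrpc protocol and is not reproduced here:

package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	// Address copied from one of the containerd "connecting to shim" entries above;
	// the socket only exists on the node itself, so the dial is expected to fail elsewhere.
	addr := "unix:///run/containerd/s/62b19321e3320b065c866c78760750f73b1a3504579a4533fd34c58f0fcd2f2f"
	path := strings.TrimPrefix(addr, "unix://")

	conn, err := net.Dial("unix", path)
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to shim socket at", path)
}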
Sep 12 17:24:31.452772 containerd[1826]: time="2025-09-12T17:24:31.452736412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qgp29,Uid:b8ddf37f-2789-4f0c-9ca3-f02542ba523e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae3751d3989b573da69658b5156f78d32fc91bf4edcf00894aaa5953c529e728\"" Sep 12 17:24:31.460387 containerd[1826]: time="2025-09-12T17:24:31.460359740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-g6j8g,Uid:65303b69-d784-440e-8edb-614297835fb6,Namespace:kube-system,Attempt:0,} returns sandbox id \"1296746adecb07a3a1794898870a4e6b207492c862635caf9f3265efc35f62f4\"" Sep 12 17:24:31.462498 containerd[1826]: time="2025-09-12T17:24:31.462173509Z" level=info msg="CreateContainer within sandbox \"ae3751d3989b573da69658b5156f78d32fc91bf4edcf00894aaa5953c529e728\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:24:31.480391 containerd[1826]: time="2025-09-12T17:24:31.479817785Z" level=info msg="CreateContainer within sandbox \"1296746adecb07a3a1794898870a4e6b207492c862635caf9f3265efc35f62f4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:24:31.485122 containerd[1826]: time="2025-09-12T17:24:31.485091144Z" level=info msg="Container 54e3f95fb9a5b65f45e254e10a4adde35860c1fa97dd4baa02aa17018c737f02: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:24:31.512308 containerd[1826]: time="2025-09-12T17:24:31.512278775Z" level=info msg="CreateContainer within sandbox \"ae3751d3989b573da69658b5156f78d32fc91bf4edcf00894aaa5953c529e728\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"54e3f95fb9a5b65f45e254e10a4adde35860c1fa97dd4baa02aa17018c737f02\"" Sep 12 17:24:31.512733 containerd[1826]: time="2025-09-12T17:24:31.512710791Z" level=info msg="StartContainer for \"54e3f95fb9a5b65f45e254e10a4adde35860c1fa97dd4baa02aa17018c737f02\"" Sep 12 17:24:31.513576 containerd[1826]: time="2025-09-12T17:24:31.513530901Z" level=info msg="connecting to shim 54e3f95fb9a5b65f45e254e10a4adde35860c1fa97dd4baa02aa17018c737f02" address="unix:///run/containerd/s/f3e16a2b3deb82b62f63b7d1d02c8d6cdc8f34aa6478a5e4a22a6d1193b97f0e" protocol=ttrpc version=3 Sep 12 17:24:31.517800 containerd[1826]: time="2025-09-12T17:24:31.517563902Z" level=info msg="Container 4a61080e66772406f909b3f55b53b236dbd3714a31cb9343de32d3f22c4c77a2: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:24:31.527884 systemd[1]: Started cri-containerd-54e3f95fb9a5b65f45e254e10a4adde35860c1fa97dd4baa02aa17018c737f02.scope - libcontainer container 54e3f95fb9a5b65f45e254e10a4adde35860c1fa97dd4baa02aa17018c737f02. 
Sep 12 17:24:31.534056 containerd[1826]: time="2025-09-12T17:24:31.534010037Z" level=info msg="CreateContainer within sandbox \"1296746adecb07a3a1794898870a4e6b207492c862635caf9f3265efc35f62f4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4a61080e66772406f909b3f55b53b236dbd3714a31cb9343de32d3f22c4c77a2\"" Sep 12 17:24:31.534860 containerd[1826]: time="2025-09-12T17:24:31.534504629Z" level=info msg="StartContainer for \"4a61080e66772406f909b3f55b53b236dbd3714a31cb9343de32d3f22c4c77a2\"" Sep 12 17:24:31.535280 containerd[1826]: time="2025-09-12T17:24:31.535255923Z" level=info msg="connecting to shim 4a61080e66772406f909b3f55b53b236dbd3714a31cb9343de32d3f22c4c77a2" address="unix:///run/containerd/s/62b19321e3320b065c866c78760750f73b1a3504579a4533fd34c58f0fcd2f2f" protocol=ttrpc version=3 Sep 12 17:24:31.556220 systemd[1]: Started cri-containerd-4a61080e66772406f909b3f55b53b236dbd3714a31cb9343de32d3f22c4c77a2.scope - libcontainer container 4a61080e66772406f909b3f55b53b236dbd3714a31cb9343de32d3f22c4c77a2. Sep 12 17:24:31.565927 containerd[1826]: time="2025-09-12T17:24:31.565880536Z" level=info msg="StartContainer for \"54e3f95fb9a5b65f45e254e10a4adde35860c1fa97dd4baa02aa17018c737f02\" returns successfully" Sep 12 17:24:31.595181 containerd[1826]: time="2025-09-12T17:24:31.595159965Z" level=info msg="StartContainer for \"4a61080e66772406f909b3f55b53b236dbd3714a31cb9343de32d3f22c4c77a2\" returns successfully" Sep 12 17:24:32.037523 kubelet[3400]: I0912 17:24:32.037468 3400 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-g6j8g" podStartSLOduration=34.037453781 podStartE2EDuration="34.037453781s" podCreationTimestamp="2025-09-12 17:23:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:24:32.025039142 +0000 UTC m=+33.190016701" watchObservedRunningTime="2025-09-12 17:24:32.037453781 +0000 UTC m=+33.202431340" Sep 12 17:24:32.049995 kubelet[3400]: I0912 17:24:32.049942 3400 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-qgp29" podStartSLOduration=34.049932573 podStartE2EDuration="34.049932573s" podCreationTimestamp="2025-09-12 17:23:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:24:32.049070861 +0000 UTC m=+33.214048420" watchObservedRunningTime="2025-09-12 17:24:32.049932573 +0000 UTC m=+33.214910132" Sep 12 17:24:32.228406 containerd[1826]: time="2025-09-12T17:24:32.228364211Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367\" id:\"1653e82cb7d976cbb4ac5191471295033e252f8400b1c6e306a37156fae9a648\" pid:4763 exited_at:{seconds:1757697872 nanos:227619582}" Sep 12 17:24:32.230525 kubelet[3400]: E0912 17:24:32.230491 3400 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:48442->127.0.0.1:43539: write tcp 127.0.0.1:48442->127.0.0.1:43539: write: broken pipe Sep 12 17:24:32.373296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1575103622.mount: Deactivated successfully. 
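The TaskExit events above encode exited_at as a {seconds, nanos} pair. A short sketch, assuming the pair is a Unix epoch timestamp (which matches the surrounding wall-clock times), converting one value quoted above back to UTC:

package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at values copied from the TaskExit entry above.
	exitedAt := time.Unix(1757697872, 227619582).UTC()
	fmt.Println(exitedAt) // 2025-09-12 17:24:32.227619582 +0000 UTC, matching the log timestamp
}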
Sep 12 17:24:34.294067 containerd[1826]: time="2025-09-12T17:24:34.294029176Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367\" id:\"06c0f7bd47128298dea87922b76da36912d175f2e48d80c933b96c6010933369\" pid:4788 exited_at:{seconds:1757697874 nanos:293540759}" Sep 12 17:24:34.296308 kubelet[3400]: E0912 17:24:34.296277 3400 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:48452->127.0.0.1:43539: write tcp 127.0.0.1:48452->127.0.0.1:43539: write: broken pipe Sep 12 17:24:34.397831 containerd[1826]: time="2025-09-12T17:24:34.397762283Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367\" id:\"b8252dfbf01fae5c1a1819d3fb3125eb8a5caad6ff34de0ef665cc0a7feabbf0\" pid:4819 exited_at:{seconds:1757697874 nanos:396924028}" Sep 12 17:24:34.697892 sudo[2330]: pam_unix(sudo:session): session closed for user root Sep 12 17:24:34.769806 sshd[2329]: Connection closed by 10.200.16.10 port 44660 Sep 12 17:24:34.770284 sshd-session[2326]: pam_unix(sshd:session): session closed for user core Sep 12 17:24:34.773295 systemd[1]: sshd@6-10.200.20.38:22-10.200.16.10:44660.service: Deactivated successfully. Sep 12 17:24:34.775019 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:24:34.775208 systemd[1]: session-9.scope: Consumed 3.985s CPU time, 264.8M memory peak. Sep 12 17:24:34.776423 systemd-logind[1809]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:24:34.777543 systemd-logind[1809]: Removed session 9. Sep 12 17:24:42.388282 update_engine[1812]: I20250912 17:24:42.387841 1812 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 12 17:24:42.388282 update_engine[1812]: I20250912 17:24:42.387887 1812 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 12 17:24:42.388282 update_engine[1812]: I20250912 17:24:42.388038 1812 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 12 17:24:42.388878 update_engine[1812]: I20250912 17:24:42.388346 1812 omaha_request_params.cc:62] Current group set to beta Sep 12 17:24:42.388878 update_engine[1812]: I20250912 17:24:42.388662 1812 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 12 17:24:42.388878 update_engine[1812]: I20250912 17:24:42.388677 1812 update_attempter.cc:643] Scheduling an action processor start. 
Sep 12 17:24:42.388878 update_engine[1812]: I20250912 17:24:42.388695 1812 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 12 17:24:42.388878 update_engine[1812]: I20250912 17:24:42.388720 1812 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 12 17:24:42.388878 update_engine[1812]: I20250912 17:24:42.388762 1812 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 12 17:24:42.388878 update_engine[1812]: I20250912 17:24:42.388785 1812 omaha_request_action.cc:272] Request: Sep 12 17:24:42.388878 update_engine[1812]: Sep 12 17:24:42.388878 update_engine[1812]: Sep 12 17:24:42.388878 update_engine[1812]: Sep 12 17:24:42.388878 update_engine[1812]: Sep 12 17:24:42.388878 update_engine[1812]: Sep 12 17:24:42.388878 update_engine[1812]: Sep 12 17:24:42.388878 update_engine[1812]: Sep 12 17:24:42.388878 update_engine[1812]: Sep 12 17:24:42.388878 update_engine[1812]: I20250912 17:24:42.388790 1812 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 12 17:24:42.389257 locksmithd[1952]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 12 17:24:42.389721 update_engine[1812]: I20250912 17:24:42.389698 1812 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 12 17:24:42.389988 update_engine[1812]: I20250912 17:24:42.389966 1812 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 12 17:24:42.493449 update_engine[1812]: E20250912 17:24:42.493413 1812 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 12 17:24:42.493524 update_engine[1812]: I20250912 17:24:42.493474 1812 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 12 17:24:52.381343 update_engine[1812]: I20250912 17:24:52.381284 1812 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 12 17:24:52.381700 update_engine[1812]: I20250912 17:24:52.381490 1812 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 12 17:24:52.381721 update_engine[1812]: I20250912 17:24:52.381696 1812 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 12 17:24:52.452873 update_engine[1812]: E20250912 17:24:52.452836 1812 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 12 17:24:52.452952 update_engine[1812]: I20250912 17:24:52.452888 1812 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 12 17:25:02.382812 update_engine[1812]: I20250912 17:25:02.382737 1812 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 12 17:25:02.383139 update_engine[1812]: I20250912 17:25:02.382959 1812 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 12 17:25:02.383180 update_engine[1812]: I20250912 17:25:02.383154 1812 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 12 17:25:02.423780 update_engine[1812]: E20250912 17:25:02.422876 1812 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 12 17:25:02.423780 update_engine[1812]: I20250912 17:25:02.422930 1812 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 12 17:25:12.389957 update_engine[1812]: I20250912 17:25:12.389894 1812 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 12 17:25:12.390276 update_engine[1812]: I20250912 17:25:12.390092 1812 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 12 17:25:12.390305 update_engine[1812]: I20250912 17:25:12.390287 1812 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 12 17:25:12.493172 update_engine[1812]: E20250912 17:25:12.493132 1812 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 12 17:25:12.493257 update_engine[1812]: I20250912 17:25:12.493186 1812 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 12 17:25:12.493257 update_engine[1812]: I20250912 17:25:12.493192 1812 omaha_request_action.cc:617] Omaha request response: Sep 12 17:25:12.493292 update_engine[1812]: E20250912 17:25:12.493264 1812 omaha_request_action.cc:636] Omaha request network transfer failed. Sep 12 17:25:12.493292 update_engine[1812]: I20250912 17:25:12.493279 1812 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Sep 12 17:25:12.493292 update_engine[1812]: I20250912 17:25:12.493283 1812 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 12 17:25:12.493292 update_engine[1812]: I20250912 17:25:12.493286 1812 update_attempter.cc:306] Processing Done. Sep 12 17:25:12.493346 update_engine[1812]: E20250912 17:25:12.493297 1812 update_attempter.cc:619] Update failed. Sep 12 17:25:12.493346 update_engine[1812]: I20250912 17:25:12.493301 1812 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 12 17:25:12.493346 update_engine[1812]: I20250912 17:25:12.493305 1812 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 12 17:25:12.493346 update_engine[1812]: I20250912 17:25:12.493309 1812 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Sep 12 17:25:12.493397 update_engine[1812]: I20250912 17:25:12.493368 1812 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 12 17:25:12.493397 update_engine[1812]: I20250912 17:25:12.493384 1812 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 12 17:25:12.493397 update_engine[1812]: I20250912 17:25:12.493387 1812 omaha_request_action.cc:272] Request: Sep 12 17:25:12.493397 update_engine[1812]: Sep 12 17:25:12.493397 update_engine[1812]: Sep 12 17:25:12.493397 update_engine[1812]: Sep 12 17:25:12.493397 update_engine[1812]: Sep 12 17:25:12.493397 update_engine[1812]: Sep 12 17:25:12.493397 update_engine[1812]: Sep 12 17:25:12.493397 update_engine[1812]: I20250912 17:25:12.493391 1812 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 12 17:25:12.493519 update_engine[1812]: I20250912 17:25:12.493491 1812 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 12 17:25:12.493722 update_engine[1812]: I20250912 17:25:12.493643 1812 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
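The update_engine entries above show an Omaha check posted to a host literally named "disabled", so name resolution fails; libcurl is retried three times at roughly ten-second intervals before the transfer is declared failed and an error event is posted. A rough Go sketch of that bounded-retry shape, purely illustrative rather than update_engine's actual C++ implementation, with the host name and interval taken from the log:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// fetchWithRetries mirrors the retry shape in the log: up to maxAttempts
// attempts with a fixed pause between them, returning the last error.
func fetchWithRetries(url string, maxAttempts int, interval time.Duration) error {
	var lastErr error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil
		}
		lastErr = err
		if attempt < maxAttempts {
			fmt.Printf("no HTTP response, retry %d\n", attempt)
			time.Sleep(interval)
		}
	}
	return lastErr
}

func main() {
	// "disabled" is the placeholder host from the log; name resolution is expected to fail.
	if err := fetchWithRetries("https://disabled/", 4, 10*time.Second); err != nil {
		fmt.Println("update check failed:", err)
	}
}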
Sep 12 17:25:12.493896 locksmithd[1952]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 12 17:25:12.498006 update_engine[1812]: E20250912 17:25:12.497980 1812 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 12 17:25:12.498066 update_engine[1812]: I20250912 17:25:12.498018 1812 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 12 17:25:12.498066 update_engine[1812]: I20250912 17:25:12.498023 1812 omaha_request_action.cc:617] Omaha request response: Sep 12 17:25:12.498066 update_engine[1812]: I20250912 17:25:12.498026 1812 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 12 17:25:12.498066 update_engine[1812]: I20250912 17:25:12.498030 1812 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 12 17:25:12.498066 update_engine[1812]: I20250912 17:25:12.498034 1812 update_attempter.cc:306] Processing Done. Sep 12 17:25:12.498066 update_engine[1812]: I20250912 17:25:12.498037 1812 update_attempter.cc:310] Error event sent. Sep 12 17:25:12.498066 update_engine[1812]: I20250912 17:25:12.498044 1812 update_check_scheduler.cc:74] Next update check in 44m48s Sep 12 17:25:12.498271 locksmithd[1952]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 12 17:27:06.746586 systemd[1]: Started sshd@7-10.200.20.38:22-10.200.16.10:38412.service - OpenSSH per-connection server daemon (10.200.16.10:38412). Sep 12 17:27:07.203022 sshd[4874]: Accepted publickey for core from 10.200.16.10 port 38412 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E Sep 12 17:27:07.203962 sshd-session[4874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:27:07.207370 systemd-logind[1809]: New session 10 of user core. Sep 12 17:27:07.217889 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:27:07.577291 sshd[4878]: Connection closed by 10.200.16.10 port 38412 Sep 12 17:27:07.577749 sshd-session[4874]: pam_unix(sshd:session): session closed for user core Sep 12 17:27:07.581197 systemd[1]: sshd@7-10.200.20.38:22-10.200.16.10:38412.service: Deactivated successfully. Sep 12 17:27:07.582992 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:27:07.583658 systemd-logind[1809]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:27:07.585109 systemd-logind[1809]: Removed session 10. Sep 12 17:27:12.667307 systemd[1]: Started sshd@8-10.200.20.38:22-10.200.16.10:41786.service - OpenSSH per-connection server daemon (10.200.16.10:41786). Sep 12 17:27:13.123034 sshd[4892]: Accepted publickey for core from 10.200.16.10 port 41786 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E Sep 12 17:27:13.123981 sshd-session[4892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:27:13.127179 systemd-logind[1809]: New session 11 of user core. Sep 12 17:27:13.136040 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:27:13.487149 sshd[4895]: Connection closed by 10.200.16.10 port 41786 Sep 12 17:27:13.487728 sshd-session[4892]: pam_unix(sshd:session): session closed for user core Sep 12 17:27:13.490883 systemd[1]: sshd@8-10.200.20.38:22-10.200.16.10:41786.service: Deactivated successfully. Sep 12 17:27:13.492583 systemd[1]: session-11.scope: Deactivated successfully. 
Sep 12 17:27:13.493413 systemd-logind[1809]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:27:13.494874 systemd-logind[1809]: Removed session 11. Sep 12 17:27:18.569293 systemd[1]: Started sshd@9-10.200.20.38:22-10.200.16.10:41796.service - OpenSSH per-connection server daemon (10.200.16.10:41796). Sep 12 17:27:19.019517 sshd[4908]: Accepted publickey for core from 10.200.16.10 port 41796 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E Sep 12 17:27:19.020419 sshd-session[4908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:27:19.023794 systemd-logind[1809]: New session 12 of user core. Sep 12 17:27:19.036071 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:27:19.389972 sshd[4911]: Connection closed by 10.200.16.10 port 41796 Sep 12 17:27:19.389837 sshd-session[4908]: pam_unix(sshd:session): session closed for user core Sep 12 17:27:19.392714 systemd[1]: sshd@9-10.200.20.38:22-10.200.16.10:41796.service: Deactivated successfully. Sep 12 17:27:19.393190 systemd-logind[1809]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:27:19.394456 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:27:19.398799 systemd-logind[1809]: Removed session 12. Sep 12 17:27:24.475529 systemd[1]: Started sshd@10-10.200.20.38:22-10.200.16.10:37794.service - OpenSSH per-connection server daemon (10.200.16.10:37794). Sep 12 17:27:24.929339 sshd[4928]: Accepted publickey for core from 10.200.16.10 port 37794 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E Sep 12 17:27:24.930294 sshd-session[4928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:27:24.933674 systemd-logind[1809]: New session 13 of user core. Sep 12 17:27:24.941883 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:27:25.293481 sshd[4931]: Connection closed by 10.200.16.10 port 37794 Sep 12 17:27:25.293965 sshd-session[4928]: pam_unix(sshd:session): session closed for user core Sep 12 17:27:25.297128 systemd-logind[1809]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:27:25.297668 systemd[1]: sshd@10-10.200.20.38:22-10.200.16.10:37794.service: Deactivated successfully. Sep 12 17:27:25.299113 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:27:25.300279 systemd-logind[1809]: Removed session 13. Sep 12 17:27:25.369943 systemd[1]: Started sshd@11-10.200.20.38:22-10.200.16.10:37804.service - OpenSSH per-connection server daemon (10.200.16.10:37804). Sep 12 17:27:25.780436 sshd[4944]: Accepted publickey for core from 10.200.16.10 port 37804 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E Sep 12 17:27:25.781376 sshd-session[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:27:25.784741 systemd-logind[1809]: New session 14 of user core. Sep 12 17:27:25.793051 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 17:27:26.150817 sshd[4947]: Connection closed by 10.200.16.10 port 37804 Sep 12 17:27:26.151039 sshd-session[4944]: pam_unix(sshd:session): session closed for user core Sep 12 17:27:26.154721 systemd[1]: sshd@11-10.200.20.38:22-10.200.16.10:37804.service: Deactivated successfully. Sep 12 17:27:26.156553 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:27:26.158075 systemd-logind[1809]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:27:26.159128 systemd-logind[1809]: Removed session 14. 
Sep 12 17:27:26.245085 systemd[1]: Started sshd@12-10.200.20.38:22-10.200.16.10:37818.service - OpenSSH per-connection server daemon (10.200.16.10:37818). Sep 12 17:27:26.697536 sshd[4957]: Accepted publickey for core from 10.200.16.10 port 37818 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E Sep 12 17:27:26.698892 sshd-session[4957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:27:26.702514 systemd-logind[1809]: New session 15 of user core. Sep 12 17:27:26.709888 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 17:27:27.061227 sshd[4960]: Connection closed by 10.200.16.10 port 37818 Sep 12 17:27:27.061667 sshd-session[4957]: pam_unix(sshd:session): session closed for user core Sep 12 17:27:27.064305 systemd[1]: sshd@12-10.200.20.38:22-10.200.16.10:37818.service: Deactivated successfully. Sep 12 17:27:27.065816 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:27:27.066708 systemd-logind[1809]: Session 15 logged out. Waiting for processes to exit. Sep 12 17:27:27.067626 systemd-logind[1809]: Removed session 15. Sep 12 17:27:32.147223 systemd[1]: Started sshd@13-10.200.20.38:22-10.200.16.10:55326.service - OpenSSH per-connection server daemon (10.200.16.10:55326). Sep 12 17:27:32.595371 sshd[4971]: Accepted publickey for core from 10.200.16.10 port 55326 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E Sep 12 17:27:32.596328 sshd-session[4971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:27:32.599572 systemd-logind[1809]: New session 16 of user core. Sep 12 17:27:32.605866 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 17:27:32.955992 sshd[4974]: Connection closed by 10.200.16.10 port 55326 Sep 12 17:27:32.956393 sshd-session[4971]: pam_unix(sshd:session): session closed for user core Sep 12 17:27:32.959236 systemd[1]: sshd@13-10.200.20.38:22-10.200.16.10:55326.service: Deactivated successfully. Sep 12 17:27:32.960599 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:27:32.961269 systemd-logind[1809]: Session 16 logged out. Waiting for processes to exit. Sep 12 17:27:32.962214 systemd-logind[1809]: Removed session 16. Sep 12 17:27:33.038846 systemd[1]: Started sshd@14-10.200.20.38:22-10.200.16.10:55336.service - OpenSSH per-connection server daemon (10.200.16.10:55336). Sep 12 17:27:33.497452 sshd[4987]: Accepted publickey for core from 10.200.16.10 port 55336 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E Sep 12 17:27:33.498640 sshd-session[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:27:33.502549 systemd-logind[1809]: New session 17 of user core. Sep 12 17:27:33.511894 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 17:27:33.898785 sshd[4992]: Connection closed by 10.200.16.10 port 55336 Sep 12 17:27:33.899362 sshd-session[4987]: pam_unix(sshd:session): session closed for user core Sep 12 17:27:33.902681 systemd-logind[1809]: Session 17 logged out. Waiting for processes to exit. Sep 12 17:27:33.903002 systemd[1]: sshd@14-10.200.20.38:22-10.200.16.10:55336.service: Deactivated successfully. Sep 12 17:27:33.904465 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 17:27:33.906281 systemd-logind[1809]: Removed session 17. Sep 12 17:27:33.977296 systemd[1]: Started sshd@15-10.200.20.38:22-10.200.16.10:55340.service - OpenSSH per-connection server daemon (10.200.16.10:55340). 
Sep 12 17:27:34.390292 sshd[5002]: Accepted publickey for core from 10.200.16.10 port 55340 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E Sep 12 17:27:34.391274 sshd-session[5002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:27:34.394812 systemd-logind[1809]: New session 18 of user core. Sep 12 17:27:34.399875 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 17:27:35.484378 sshd[5005]: Connection closed by 10.200.16.10 port 55340 Sep 12 17:27:35.484914 sshd-session[5002]: pam_unix(sshd:session): session closed for user core Sep 12 17:27:35.487960 systemd[1]: sshd@15-10.200.20.38:22-10.200.16.10:55340.service: Deactivated successfully. Sep 12 17:27:35.489338 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 17:27:35.491290 systemd-logind[1809]: Session 18 logged out. Waiting for processes to exit. Sep 12 17:27:35.492542 systemd-logind[1809]: Removed session 18. Sep 12 17:27:35.576966 systemd[1]: Started sshd@16-10.200.20.38:22-10.200.16.10:55352.service - OpenSSH per-connection server daemon (10.200.16.10:55352). Sep 12 17:27:36.028953 sshd[5022]: Accepted publickey for core from 10.200.16.10 port 55352 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E Sep 12 17:27:36.029927 sshd-session[5022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:27:36.033609 systemd-logind[1809]: New session 19 of user core. Sep 12 17:27:36.042891 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 17:27:36.479614 sshd[5025]: Connection closed by 10.200.16.10 port 55352 Sep 12 17:27:36.480176 sshd-session[5022]: pam_unix(sshd:session): session closed for user core Sep 12 17:27:36.483106 systemd[1]: sshd@16-10.200.20.38:22-10.200.16.10:55352.service: Deactivated successfully. Sep 12 17:27:36.484556 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 17:27:36.486292 systemd-logind[1809]: Session 19 logged out. Waiting for processes to exit. Sep 12 17:27:36.487510 systemd-logind[1809]: Removed session 19. Sep 12 17:27:36.550939 systemd[1]: Started sshd@17-10.200.20.38:22-10.200.16.10:55354.service - OpenSSH per-connection server daemon (10.200.16.10:55354). Sep 12 17:27:36.969926 sshd[5035]: Accepted publickey for core from 10.200.16.10 port 55354 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E Sep 12 17:27:36.971280 sshd-session[5035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:27:36.974563 systemd-logind[1809]: New session 20 of user core. Sep 12 17:27:36.981862 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 17:27:37.325929 sshd[5038]: Connection closed by 10.200.16.10 port 55354 Sep 12 17:27:37.326323 sshd-session[5035]: pam_unix(sshd:session): session closed for user core Sep 12 17:27:37.329270 systemd[1]: sshd@17-10.200.20.38:22-10.200.16.10:55354.service: Deactivated successfully. Sep 12 17:27:37.332161 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 17:27:37.333111 systemd-logind[1809]: Session 20 logged out. Waiting for processes to exit. Sep 12 17:27:37.335003 systemd-logind[1809]: Removed session 20. Sep 12 17:27:42.408043 systemd[1]: Started sshd@18-10.200.20.38:22-10.200.16.10:57568.service - OpenSSH per-connection server daemon (10.200.16.10:57568). 
Sep 12 17:27:42.861412 sshd[5052]: Accepted publickey for core from 10.200.16.10 port 57568 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E Sep 12 17:27:42.862441 sshd-session[5052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:27:42.865885 systemd-logind[1809]: New session 21 of user core. Sep 12 17:27:42.872987 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 17:27:43.223870 sshd[5055]: Connection closed by 10.200.16.10 port 57568 Sep 12 17:27:43.224316 sshd-session[5052]: pam_unix(sshd:session): session closed for user core Sep 12 17:27:43.227573 systemd[1]: sshd@18-10.200.20.38:22-10.200.16.10:57568.service: Deactivated successfully. Sep 12 17:27:43.229120 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 17:27:43.229778 systemd-logind[1809]: Session 21 logged out. Waiting for processes to exit. Sep 12 17:27:43.232076 systemd-logind[1809]: Removed session 21. Sep 12 17:27:48.300949 systemd[1]: Started sshd@19-10.200.20.38:22-10.200.16.10:57576.service - OpenSSH per-connection server daemon (10.200.16.10:57576). Sep 12 17:27:48.717009 sshd[5067]: Accepted publickey for core from 10.200.16.10 port 57576 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E Sep 12 17:27:48.717960 sshd-session[5067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:27:48.721306 systemd-logind[1809]: New session 22 of user core. Sep 12 17:27:48.732856 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 17:27:49.065946 sshd[5070]: Connection closed by 10.200.16.10 port 57576 Sep 12 17:27:49.066388 sshd-session[5067]: pam_unix(sshd:session): session closed for user core Sep 12 17:27:49.069108 systemd[1]: sshd@19-10.200.20.38:22-10.200.16.10:57576.service: Deactivated successfully. Sep 12 17:27:49.070486 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 17:27:49.071244 systemd-logind[1809]: Session 22 logged out. Waiting for processes to exit. Sep 12 17:27:49.072379 systemd-logind[1809]: Removed session 22. Sep 12 17:27:49.147074 systemd[1]: Started sshd@20-10.200.20.38:22-10.200.16.10:57586.service - OpenSSH per-connection server daemon (10.200.16.10:57586). Sep 12 17:27:49.604518 sshd[5082]: Accepted publickey for core from 10.200.16.10 port 57586 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E Sep 12 17:27:49.605421 sshd-session[5082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:27:49.608440 systemd-logind[1809]: New session 23 of user core. Sep 12 17:27:49.618868 systemd[1]: Started session-23.scope - Session 23 of User core. 
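The sshd entries above repeat a fixed "Accepted publickey" format carrying the user, source address, port, and key fingerprint. A small log-parsing sketch, assuming only the field layout visible in those lines, that extracts these fields with a regular expression:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Field layout taken from the sshd "Accepted publickey" lines above.
	re := regexp.MustCompile(`Accepted publickey for (\S+) from (\S+) port (\d+) ssh2: RSA (SHA256:\S+)`)

	line := `sshd[5082]: Accepted publickey for core from 10.200.16.10 port 57586 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E`
	if m := re.FindStringSubmatch(line); m != nil {
		fmt.Printf("user=%s src=%s port=%s key=%s\n", m[1], m[2], m[3], m[4])
	}
}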
Sep 12 17:27:51.145788 containerd[1826]: time="2025-09-12T17:27:51.145652387Z" level=info msg="StopContainer for \"dbd1bb050aae9b404848ad2c0215f12734cce96ccca29cb3d2a878f8527bdf30\" with timeout 30 (s)" Sep 12 17:27:51.147136 containerd[1826]: time="2025-09-12T17:27:51.147115628Z" level=info msg="Stop container \"dbd1bb050aae9b404848ad2c0215f12734cce96ccca29cb3d2a878f8527bdf30\" with signal terminated" Sep 12 17:27:51.153206 containerd[1826]: time="2025-09-12T17:27:51.153164252Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:27:51.156824 containerd[1826]: time="2025-09-12T17:27:51.156789618Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367\" id:\"d93b4619db7f6c124e074e58f313d53c5d666911daa7c2d7b1a7793ad19b39f7\" pid:5105 exited_at:{seconds:1757698071 nanos:156604431}" Sep 12 17:27:51.159615 containerd[1826]: time="2025-09-12T17:27:51.159592642Z" level=info msg="StopContainer for \"ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367\" with timeout 2 (s)" Sep 12 17:27:51.159957 containerd[1826]: time="2025-09-12T17:27:51.159891199Z" level=info msg="Stop container \"ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367\" with signal terminated" Sep 12 17:27:51.161223 systemd[1]: cri-containerd-dbd1bb050aae9b404848ad2c0215f12734cce96ccca29cb3d2a878f8527bdf30.scope: Deactivated successfully. Sep 12 17:27:51.163429 containerd[1826]: time="2025-09-12T17:27:51.163380027Z" level=info msg="received exit event container_id:\"dbd1bb050aae9b404848ad2c0215f12734cce96ccca29cb3d2a878f8527bdf30\" id:\"dbd1bb050aae9b404848ad2c0215f12734cce96ccca29cb3d2a878f8527bdf30\" pid:3984 exited_at:{seconds:1757698071 nanos:163212440}" Sep 12 17:27:51.163584 containerd[1826]: time="2025-09-12T17:27:51.163535638Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dbd1bb050aae9b404848ad2c0215f12734cce96ccca29cb3d2a878f8527bdf30\" id:\"dbd1bb050aae9b404848ad2c0215f12734cce96ccca29cb3d2a878f8527bdf30\" pid:3984 exited_at:{seconds:1757698071 nanos:163212440}" Sep 12 17:27:51.168846 systemd-networkd[1566]: lxc_health: Link DOWN Sep 12 17:27:51.168853 systemd-networkd[1566]: lxc_health: Lost carrier Sep 12 17:27:51.185475 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbd1bb050aae9b404848ad2c0215f12734cce96ccca29cb3d2a878f8527bdf30-rootfs.mount: Deactivated successfully. Sep 12 17:27:51.187486 systemd[1]: cri-containerd-ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367.scope: Deactivated successfully. Sep 12 17:27:51.188110 systemd[1]: cri-containerd-ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367.scope: Consumed 4.445s CPU time, 140.5M memory peak, 144K read from disk, 12.9M written to disk. 
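The StopContainer entries above show each container being signalled with SIGTERM ("signal terminated") under a per-call grace period, 30 s and 2 s in the two calls, after which a container runtime is expected to fall back to SIGKILL. A generic sketch of that terminate-then-kill pattern for a local Unix process, not containerd's implementation:

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace sends SIGTERM, waits up to grace for the process to exit,
// then falls back to SIGKILL - the same shape as the StopContainer calls above.
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err
	case <-time.After(grace):
		fmt.Println("grace period expired, sending SIGKILL")
		_ = cmd.Process.Kill()
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60") // stand-in workload for illustration
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println(stopWithGrace(cmd, 2*time.Second))
}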
Sep 12 17:27:51.189055 containerd[1826]: time="2025-09-12T17:27:51.188915898Z" level=info msg="received exit event container_id:\"ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367\" id:\"ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367\" pid:4016 exited_at:{seconds:1757698071 nanos:188710278}" Sep 12 17:27:51.189411 containerd[1826]: time="2025-09-12T17:27:51.188944474Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367\" id:\"ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367\" pid:4016 exited_at:{seconds:1757698071 nanos:188710278}" Sep 12 17:27:51.207102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367-rootfs.mount: Deactivated successfully. Sep 12 17:27:51.241013 containerd[1826]: time="2025-09-12T17:27:51.240989280Z" level=info msg="StopContainer for \"dbd1bb050aae9b404848ad2c0215f12734cce96ccca29cb3d2a878f8527bdf30\" returns successfully" Sep 12 17:27:51.241649 containerd[1826]: time="2025-09-12T17:27:51.241627499Z" level=info msg="StopPodSandbox for \"b454cc1e4948998445e831d06c190c75da1e6560bad602181024917f7ae92053\"" Sep 12 17:27:51.241706 containerd[1826]: time="2025-09-12T17:27:51.241668804Z" level=info msg="Container to stop \"dbd1bb050aae9b404848ad2c0215f12734cce96ccca29cb3d2a878f8527bdf30\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:27:51.245623 systemd[1]: cri-containerd-b454cc1e4948998445e831d06c190c75da1e6560bad602181024917f7ae92053.scope: Deactivated successfully. Sep 12 17:27:51.248206 containerd[1826]: time="2025-09-12T17:27:51.248179700Z" level=info msg="StopContainer for \"ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367\" returns successfully" Sep 12 17:27:51.248534 containerd[1826]: time="2025-09-12T17:27:51.248517113Z" level=info msg="StopPodSandbox for \"3340c7258545ac24a81fbd22dc156ce9dda054e33fe3d00b03e133102c768099\"" Sep 12 17:27:51.249948 containerd[1826]: time="2025-09-12T17:27:51.249898249Z" level=info msg="Container to stop \"ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:27:51.249948 containerd[1826]: time="2025-09-12T17:27:51.249920138Z" level=info msg="Container to stop \"9b6c4d51987cc36309e6a43ee8c709adadc4301a96cc6312ca673ea146ba6a18\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:27:51.249948 containerd[1826]: time="2025-09-12T17:27:51.249927610Z" level=info msg="Container to stop \"744518f0a06910f1b94ba008166601d0ee6a4ddc32f4e41640e5df895c31abb7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:27:51.249948 containerd[1826]: time="2025-09-12T17:27:51.249933370Z" level=info msg="Container to stop \"df51837c16de93e41a2f38258481fb3d124e1482e0f42ed9aa742a988fceeda9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:27:51.250148 containerd[1826]: time="2025-09-12T17:27:51.249938498Z" level=info msg="Container to stop \"019b5a70974ea1a5f6491702944b2c6a1383116421d06bd9826e121cf784b4fc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:27:51.252593 containerd[1826]: time="2025-09-12T17:27:51.252567263Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b454cc1e4948998445e831d06c190c75da1e6560bad602181024917f7ae92053\" 
id:\"b454cc1e4948998445e831d06c190c75da1e6560bad602181024917f7ae92053\" pid:3534 exit_status:137 exited_at:{seconds:1757698071 nanos:252388100}" Sep 12 17:27:51.255114 systemd[1]: cri-containerd-3340c7258545ac24a81fbd22dc156ce9dda054e33fe3d00b03e133102c768099.scope: Deactivated successfully. Sep 12 17:27:51.275325 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b454cc1e4948998445e831d06c190c75da1e6560bad602181024917f7ae92053-rootfs.mount: Deactivated successfully. Sep 12 17:27:51.281228 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3340c7258545ac24a81fbd22dc156ce9dda054e33fe3d00b03e133102c768099-rootfs.mount: Deactivated successfully. Sep 12 17:27:51.301296 containerd[1826]: time="2025-09-12T17:27:51.301205890Z" level=info msg="shim disconnected" id=b454cc1e4948998445e831d06c190c75da1e6560bad602181024917f7ae92053 namespace=k8s.io Sep 12 17:27:51.301296 containerd[1826]: time="2025-09-12T17:27:51.301260851Z" level=warning msg="cleaning up after shim disconnected" id=b454cc1e4948998445e831d06c190c75da1e6560bad602181024917f7ae92053 namespace=k8s.io Sep 12 17:27:51.301296 containerd[1826]: time="2025-09-12T17:27:51.301285148Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:27:51.301784 containerd[1826]: time="2025-09-12T17:27:51.301471527Z" level=info msg="shim disconnected" id=3340c7258545ac24a81fbd22dc156ce9dda054e33fe3d00b03e133102c768099 namespace=k8s.io Sep 12 17:27:51.301784 containerd[1826]: time="2025-09-12T17:27:51.301491895Z" level=warning msg="cleaning up after shim disconnected" id=3340c7258545ac24a81fbd22dc156ce9dda054e33fe3d00b03e133102c768099 namespace=k8s.io Sep 12 17:27:51.301784 containerd[1826]: time="2025-09-12T17:27:51.301508352Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:27:51.315217 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3340c7258545ac24a81fbd22dc156ce9dda054e33fe3d00b03e133102c768099-shm.mount: Deactivated successfully. 
Sep 12 17:27:51.315879 containerd[1826]: time="2025-09-12T17:27:51.315531888Z" level=info msg="received exit event sandbox_id:\"3340c7258545ac24a81fbd22dc156ce9dda054e33fe3d00b03e133102c768099\" exit_status:137 exited_at:{seconds:1757698071 nanos:262955401}" Sep 12 17:27:51.316218 containerd[1826]: time="2025-09-12T17:27:51.316193892Z" level=info msg="received exit event sandbox_id:\"b454cc1e4948998445e831d06c190c75da1e6560bad602181024917f7ae92053\" exit_status:137 exited_at:{seconds:1757698071 nanos:252388100}" Sep 12 17:27:51.316583 containerd[1826]: time="2025-09-12T17:27:51.316401271Z" level=info msg="TearDown network for sandbox \"3340c7258545ac24a81fbd22dc156ce9dda054e33fe3d00b03e133102c768099\" successfully" Sep 12 17:27:51.316583 containerd[1826]: time="2025-09-12T17:27:51.316418696Z" level=info msg="StopPodSandbox for \"3340c7258545ac24a81fbd22dc156ce9dda054e33fe3d00b03e133102c768099\" returns successfully" Sep 12 17:27:51.316583 containerd[1826]: time="2025-09-12T17:27:51.316514009Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3340c7258545ac24a81fbd22dc156ce9dda054e33fe3d00b03e133102c768099\" id:\"3340c7258545ac24a81fbd22dc156ce9dda054e33fe3d00b03e133102c768099\" pid:3489 exit_status:137 exited_at:{seconds:1757698071 nanos:262955401}" Sep 12 17:27:51.316707 containerd[1826]: time="2025-09-12T17:27:51.316672868Z" level=info msg="TearDown network for sandbox \"b454cc1e4948998445e831d06c190c75da1e6560bad602181024917f7ae92053\" successfully" Sep 12 17:27:51.316707 containerd[1826]: time="2025-09-12T17:27:51.316685468Z" level=info msg="StopPodSandbox for \"b454cc1e4948998445e831d06c190c75da1e6560bad602181024917f7ae92053\" returns successfully" Sep 12 17:27:51.329776 kubelet[3400]: I0912 17:27:51.329716 3400 scope.go:117] "RemoveContainer" containerID="ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367" Sep 12 17:27:51.331396 containerd[1826]: time="2025-09-12T17:27:51.331364456Z" level=info msg="RemoveContainer for \"ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367\"" Sep 12 17:27:51.345892 containerd[1826]: time="2025-09-12T17:27:51.345817616Z" level=info msg="RemoveContainer for \"ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367\" returns successfully" Sep 12 17:27:51.346420 kubelet[3400]: I0912 17:27:51.346397 3400 scope.go:117] "RemoveContainer" containerID="019b5a70974ea1a5f6491702944b2c6a1383116421d06bd9826e121cf784b4fc" Sep 12 17:27:51.347794 containerd[1826]: time="2025-09-12T17:27:51.347610359Z" level=info msg="RemoveContainer for \"019b5a70974ea1a5f6491702944b2c6a1383116421d06bd9826e121cf784b4fc\"" Sep 12 17:27:51.355166 containerd[1826]: time="2025-09-12T17:27:51.355141721Z" level=info msg="RemoveContainer for \"019b5a70974ea1a5f6491702944b2c6a1383116421d06bd9826e121cf784b4fc\" returns successfully" Sep 12 17:27:51.355349 kubelet[3400]: I0912 17:27:51.355269 3400 scope.go:117] "RemoveContainer" containerID="df51837c16de93e41a2f38258481fb3d124e1482e0f42ed9aa742a988fceeda9" Sep 12 17:27:51.357270 containerd[1826]: time="2025-09-12T17:27:51.357248805Z" level=info msg="RemoveContainer for \"df51837c16de93e41a2f38258481fb3d124e1482e0f42ed9aa742a988fceeda9\"" Sep 12 17:27:51.367672 containerd[1826]: time="2025-09-12T17:27:51.367638871Z" level=info msg="RemoveContainer for \"df51837c16de93e41a2f38258481fb3d124e1482e0f42ed9aa742a988fceeda9\" returns successfully" Sep 12 17:27:51.367843 kubelet[3400]: I0912 17:27:51.367812 3400 scope.go:117] "RemoveContainer" 
containerID="744518f0a06910f1b94ba008166601d0ee6a4ddc32f4e41640e5df895c31abb7" Sep 12 17:27:51.371895 containerd[1826]: time="2025-09-12T17:27:51.371867160Z" level=info msg="RemoveContainer for \"744518f0a06910f1b94ba008166601d0ee6a4ddc32f4e41640e5df895c31abb7\"" Sep 12 17:27:51.379472 containerd[1826]: time="2025-09-12T17:27:51.379447314Z" level=info msg="RemoveContainer for \"744518f0a06910f1b94ba008166601d0ee6a4ddc32f4e41640e5df895c31abb7\" returns successfully" Sep 12 17:27:51.379613 kubelet[3400]: I0912 17:27:51.379594 3400 scope.go:117] "RemoveContainer" containerID="9b6c4d51987cc36309e6a43ee8c709adadc4301a96cc6312ca673ea146ba6a18" Sep 12 17:27:51.381887 containerd[1826]: time="2025-09-12T17:27:51.381860867Z" level=info msg="RemoveContainer for \"9b6c4d51987cc36309e6a43ee8c709adadc4301a96cc6312ca673ea146ba6a18\"" Sep 12 17:27:51.390192 containerd[1826]: time="2025-09-12T17:27:51.390168730Z" level=info msg="RemoveContainer for \"9b6c4d51987cc36309e6a43ee8c709adadc4301a96cc6312ca673ea146ba6a18\" returns successfully" Sep 12 17:27:51.390367 kubelet[3400]: I0912 17:27:51.390313 3400 scope.go:117] "RemoveContainer" containerID="ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367" Sep 12 17:27:51.390781 kubelet[3400]: E0912 17:27:51.390623 3400 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367\": not found" containerID="ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367" Sep 12 17:27:51.390781 kubelet[3400]: I0912 17:27:51.390649 3400 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367"} err="failed to get container status \"ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367\": rpc error: code = NotFound desc = an error occurred when try to find container \"ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367\": not found" Sep 12 17:27:51.390781 kubelet[3400]: I0912 17:27:51.390678 3400 scope.go:117] "RemoveContainer" containerID="019b5a70974ea1a5f6491702944b2c6a1383116421d06bd9826e121cf784b4fc" Sep 12 17:27:51.390865 containerd[1826]: time="2025-09-12T17:27:51.390521960Z" level=error msg="ContainerStatus for \"ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ed73e5f4a6541cf009c09fadf73f3e97c3159f4ce3e3442faf03217df4b2c367\": not found" Sep 12 17:27:51.390865 containerd[1826]: time="2025-09-12T17:27:51.390822077Z" level=error msg="ContainerStatus for \"019b5a70974ea1a5f6491702944b2c6a1383116421d06bd9826e121cf784b4fc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"019b5a70974ea1a5f6491702944b2c6a1383116421d06bd9826e121cf784b4fc\": not found" Sep 12 17:27:51.390957 kubelet[3400]: E0912 17:27:51.390919 3400 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"019b5a70974ea1a5f6491702944b2c6a1383116421d06bd9826e121cf784b4fc\": not found" containerID="019b5a70974ea1a5f6491702944b2c6a1383116421d06bd9826e121cf784b4fc" Sep 12 17:27:51.390991 kubelet[3400]: I0912 17:27:51.390960 3400 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"019b5a70974ea1a5f6491702944b2c6a1383116421d06bd9826e121cf784b4fc"} err="failed to get container status \"019b5a70974ea1a5f6491702944b2c6a1383116421d06bd9826e121cf784b4fc\": rpc error: code = NotFound desc = an error occurred when try to find container \"019b5a70974ea1a5f6491702944b2c6a1383116421d06bd9826e121cf784b4fc\": not found" Sep 12 17:27:51.390991 kubelet[3400]: I0912 17:27:51.390973 3400 scope.go:117] "RemoveContainer" containerID="df51837c16de93e41a2f38258481fb3d124e1482e0f42ed9aa742a988fceeda9" Sep 12 17:27:51.392673 containerd[1826]: time="2025-09-12T17:27:51.392621628Z" level=error msg="ContainerStatus for \"df51837c16de93e41a2f38258481fb3d124e1482e0f42ed9aa742a988fceeda9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"df51837c16de93e41a2f38258481fb3d124e1482e0f42ed9aa742a988fceeda9\": not found" Sep 12 17:27:51.392752 kubelet[3400]: E0912 17:27:51.392734 3400 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"df51837c16de93e41a2f38258481fb3d124e1482e0f42ed9aa742a988fceeda9\": not found" containerID="df51837c16de93e41a2f38258481fb3d124e1482e0f42ed9aa742a988fceeda9" Sep 12 17:27:51.392798 kubelet[3400]: I0912 17:27:51.392754 3400 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"df51837c16de93e41a2f38258481fb3d124e1482e0f42ed9aa742a988fceeda9"} err="failed to get container status \"df51837c16de93e41a2f38258481fb3d124e1482e0f42ed9aa742a988fceeda9\": rpc error: code = NotFound desc = an error occurred when try to find container \"df51837c16de93e41a2f38258481fb3d124e1482e0f42ed9aa742a988fceeda9\": not found" Sep 12 17:27:51.392798 kubelet[3400]: I0912 17:27:51.392784 3400 scope.go:117] "RemoveContainer" containerID="744518f0a06910f1b94ba008166601d0ee6a4ddc32f4e41640e5df895c31abb7" Sep 12 17:27:51.393008 containerd[1826]: time="2025-09-12T17:27:51.392979730Z" level=error msg="ContainerStatus for \"744518f0a06910f1b94ba008166601d0ee6a4ddc32f4e41640e5df895c31abb7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"744518f0a06910f1b94ba008166601d0ee6a4ddc32f4e41640e5df895c31abb7\": not found" Sep 12 17:27:51.393121 kubelet[3400]: E0912 17:27:51.393107 3400 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"744518f0a06910f1b94ba008166601d0ee6a4ddc32f4e41640e5df895c31abb7\": not found" containerID="744518f0a06910f1b94ba008166601d0ee6a4ddc32f4e41640e5df895c31abb7" Sep 12 17:27:51.393187 kubelet[3400]: I0912 17:27:51.393174 3400 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"744518f0a06910f1b94ba008166601d0ee6a4ddc32f4e41640e5df895c31abb7"} err="failed to get container status \"744518f0a06910f1b94ba008166601d0ee6a4ddc32f4e41640e5df895c31abb7\": rpc error: code = NotFound desc = an error occurred when try to find container \"744518f0a06910f1b94ba008166601d0ee6a4ddc32f4e41640e5df895c31abb7\": not found" Sep 12 17:27:51.393240 kubelet[3400]: I0912 17:27:51.393229 3400 scope.go:117] "RemoveContainer" containerID="9b6c4d51987cc36309e6a43ee8c709adadc4301a96cc6312ca673ea146ba6a18" Sep 12 17:27:51.394878 containerd[1826]: time="2025-09-12T17:27:51.393497931Z" level=error msg="ContainerStatus for \"9b6c4d51987cc36309e6a43ee8c709adadc4301a96cc6312ca673ea146ba6a18\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"9b6c4d51987cc36309e6a43ee8c709adadc4301a96cc6312ca673ea146ba6a18\": not found" Sep 12 17:27:51.395008 kubelet[3400]: E0912 17:27:51.394981 3400 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9b6c4d51987cc36309e6a43ee8c709adadc4301a96cc6312ca673ea146ba6a18\": not found" containerID="9b6c4d51987cc36309e6a43ee8c709adadc4301a96cc6312ca673ea146ba6a18" Sep 12 17:27:51.395047 kubelet[3400]: I0912 17:27:51.395002 3400 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9b6c4d51987cc36309e6a43ee8c709adadc4301a96cc6312ca673ea146ba6a18"} err="failed to get container status \"9b6c4d51987cc36309e6a43ee8c709adadc4301a96cc6312ca673ea146ba6a18\": rpc error: code = NotFound desc = an error occurred when try to find container \"9b6c4d51987cc36309e6a43ee8c709adadc4301a96cc6312ca673ea146ba6a18\": not found" Sep 12 17:27:51.395047 kubelet[3400]: I0912 17:27:51.395025 3400 scope.go:117] "RemoveContainer" containerID="dbd1bb050aae9b404848ad2c0215f12734cce96ccca29cb3d2a878f8527bdf30" Sep 12 17:27:51.396644 containerd[1826]: time="2025-09-12T17:27:51.396306292Z" level=info msg="RemoveContainer for \"dbd1bb050aae9b404848ad2c0215f12734cce96ccca29cb3d2a878f8527bdf30\"" Sep 12 17:27:51.408650 containerd[1826]: time="2025-09-12T17:27:51.408625551Z" level=info msg="RemoveContainer for \"dbd1bb050aae9b404848ad2c0215f12734cce96ccca29cb3d2a878f8527bdf30\" returns successfully" Sep 12 17:27:51.408788 kubelet[3400]: I0912 17:27:51.408760 3400 scope.go:117] "RemoveContainer" containerID="dbd1bb050aae9b404848ad2c0215f12734cce96ccca29cb3d2a878f8527bdf30" Sep 12 17:27:51.409075 containerd[1826]: time="2025-09-12T17:27:51.409047510Z" level=error msg="ContainerStatus for \"dbd1bb050aae9b404848ad2c0215f12734cce96ccca29cb3d2a878f8527bdf30\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dbd1bb050aae9b404848ad2c0215f12734cce96ccca29cb3d2a878f8527bdf30\": not found" Sep 12 17:27:51.409201 kubelet[3400]: E0912 17:27:51.409181 3400 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dbd1bb050aae9b404848ad2c0215f12734cce96ccca29cb3d2a878f8527bdf30\": not found" containerID="dbd1bb050aae9b404848ad2c0215f12734cce96ccca29cb3d2a878f8527bdf30" Sep 12 17:27:51.409237 kubelet[3400]: I0912 17:27:51.409202 3400 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dbd1bb050aae9b404848ad2c0215f12734cce96ccca29cb3d2a878f8527bdf30"} err="failed to get container status \"dbd1bb050aae9b404848ad2c0215f12734cce96ccca29cb3d2a878f8527bdf30\": rpc error: code = NotFound desc = an error occurred when try to find container \"dbd1bb050aae9b404848ad2c0215f12734cce96ccca29cb3d2a878f8527bdf30\": not found" Sep 12 17:27:51.451320 kubelet[3400]: I0912 17:27:51.451289 3400 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7przg\" (UniqueName: \"kubernetes.io/projected/a086ac3d-35ef-408f-8541-248f710d0583-kube-api-access-7przg\") pod \"a086ac3d-35ef-408f-8541-248f710d0583\" (UID: \"a086ac3d-35ef-408f-8541-248f710d0583\") " Sep 12 17:27:51.451320 kubelet[3400]: I0912 17:27:51.451323 3400 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/a086ac3d-35ef-408f-8541-248f710d0583-cilium-config-path\") pod \"a086ac3d-35ef-408f-8541-248f710d0583\" (UID: \"a086ac3d-35ef-408f-8541-248f710d0583\") " Sep 12 17:27:51.451412 kubelet[3400]: I0912 17:27:51.451337 3400 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-cilium-run\") pod \"c7aba473-7601-4ebc-83c4-74c14847ce3a\" (UID: \"c7aba473-7601-4ebc-83c4-74c14847ce3a\") " Sep 12 17:27:51.451412 kubelet[3400]: I0912 17:27:51.451346 3400 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-hostproc\") pod \"c7aba473-7601-4ebc-83c4-74c14847ce3a\" (UID: \"c7aba473-7601-4ebc-83c4-74c14847ce3a\") " Sep 12 17:27:51.451412 kubelet[3400]: I0912 17:27:51.451354 3400 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-etc-cni-netd\") pod \"c7aba473-7601-4ebc-83c4-74c14847ce3a\" (UID: \"c7aba473-7601-4ebc-83c4-74c14847ce3a\") " Sep 12 17:27:51.451412 kubelet[3400]: I0912 17:27:51.451364 3400 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-bpf-maps\") pod \"c7aba473-7601-4ebc-83c4-74c14847ce3a\" (UID: \"c7aba473-7601-4ebc-83c4-74c14847ce3a\") " Sep 12 17:27:51.451412 kubelet[3400]: I0912 17:27:51.451392 3400 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-lib-modules\") pod \"c7aba473-7601-4ebc-83c4-74c14847ce3a\" (UID: \"c7aba473-7601-4ebc-83c4-74c14847ce3a\") " Sep 12 17:27:51.451412 kubelet[3400]: I0912 17:27:51.451400 3400 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-xtables-lock\") pod \"c7aba473-7601-4ebc-83c4-74c14847ce3a\" (UID: \"c7aba473-7601-4ebc-83c4-74c14847ce3a\") " Sep 12 17:27:51.451507 kubelet[3400]: I0912 17:27:51.451413 3400 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c7aba473-7601-4ebc-83c4-74c14847ce3a-clustermesh-secrets\") pod \"c7aba473-7601-4ebc-83c4-74c14847ce3a\" (UID: \"c7aba473-7601-4ebc-83c4-74c14847ce3a\") " Sep 12 17:27:51.451507 kubelet[3400]: I0912 17:27:51.451423 3400 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c7aba473-7601-4ebc-83c4-74c14847ce3a-cilium-config-path\") pod \"c7aba473-7601-4ebc-83c4-74c14847ce3a\" (UID: \"c7aba473-7601-4ebc-83c4-74c14847ce3a\") " Sep 12 17:27:51.451507 kubelet[3400]: I0912 17:27:51.451432 3400 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-cni-path\") pod \"c7aba473-7601-4ebc-83c4-74c14847ce3a\" (UID: \"c7aba473-7601-4ebc-83c4-74c14847ce3a\") " Sep 12 17:27:51.451507 kubelet[3400]: I0912 17:27:51.451441 3400 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c7aba473-7601-4ebc-83c4-74c14847ce3a-hubble-tls\") pod 
\"c7aba473-7601-4ebc-83c4-74c14847ce3a\" (UID: \"c7aba473-7601-4ebc-83c4-74c14847ce3a\") " Sep 12 17:27:51.451507 kubelet[3400]: I0912 17:27:51.451449 3400 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-host-proc-sys-net\") pod \"c7aba473-7601-4ebc-83c4-74c14847ce3a\" (UID: \"c7aba473-7601-4ebc-83c4-74c14847ce3a\") " Sep 12 17:27:51.451507 kubelet[3400]: I0912 17:27:51.451457 3400 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-cilium-cgroup\") pod \"c7aba473-7601-4ebc-83c4-74c14847ce3a\" (UID: \"c7aba473-7601-4ebc-83c4-74c14847ce3a\") " Sep 12 17:27:51.451590 kubelet[3400]: I0912 17:27:51.451468 3400 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-host-proc-sys-kernel\") pod \"c7aba473-7601-4ebc-83c4-74c14847ce3a\" (UID: \"c7aba473-7601-4ebc-83c4-74c14847ce3a\") " Sep 12 17:27:51.451590 kubelet[3400]: I0912 17:27:51.451479 3400 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrjqm\" (UniqueName: \"kubernetes.io/projected/c7aba473-7601-4ebc-83c4-74c14847ce3a-kube-api-access-nrjqm\") pod \"c7aba473-7601-4ebc-83c4-74c14847ce3a\" (UID: \"c7aba473-7601-4ebc-83c4-74c14847ce3a\") " Sep 12 17:27:51.451900 kubelet[3400]: I0912 17:27:51.451664 3400 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c7aba473-7601-4ebc-83c4-74c14847ce3a" (UID: "c7aba473-7601-4ebc-83c4-74c14847ce3a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:27:51.452975 kubelet[3400]: I0912 17:27:51.452943 3400 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c7aba473-7601-4ebc-83c4-74c14847ce3a" (UID: "c7aba473-7601-4ebc-83c4-74c14847ce3a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:27:51.452975 kubelet[3400]: I0912 17:27:51.452976 3400 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-hostproc" (OuterVolumeSpecName: "hostproc") pod "c7aba473-7601-4ebc-83c4-74c14847ce3a" (UID: "c7aba473-7601-4ebc-83c4-74c14847ce3a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:27:51.453049 kubelet[3400]: I0912 17:27:51.452987 3400 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c7aba473-7601-4ebc-83c4-74c14847ce3a" (UID: "c7aba473-7601-4ebc-83c4-74c14847ce3a"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:27:51.453049 kubelet[3400]: I0912 17:27:51.453002 3400 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c7aba473-7601-4ebc-83c4-74c14847ce3a" (UID: "c7aba473-7601-4ebc-83c4-74c14847ce3a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:27:51.453049 kubelet[3400]: I0912 17:27:51.453011 3400 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c7aba473-7601-4ebc-83c4-74c14847ce3a" (UID: "c7aba473-7601-4ebc-83c4-74c14847ce3a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:27:51.453528 kubelet[3400]: I0912 17:27:51.453502 3400 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c7aba473-7601-4ebc-83c4-74c14847ce3a" (UID: "c7aba473-7601-4ebc-83c4-74c14847ce3a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:27:51.453528 kubelet[3400]: I0912 17:27:51.453530 3400 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-cni-path" (OuterVolumeSpecName: "cni-path") pod "c7aba473-7601-4ebc-83c4-74c14847ce3a" (UID: "c7aba473-7601-4ebc-83c4-74c14847ce3a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:27:51.454654 kubelet[3400]: I0912 17:27:51.454631 3400 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7aba473-7601-4ebc-83c4-74c14847ce3a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c7aba473-7601-4ebc-83c4-74c14847ce3a" (UID: "c7aba473-7601-4ebc-83c4-74c14847ce3a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:27:51.455255 kubelet[3400]: I0912 17:27:51.455233 3400 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c7aba473-7601-4ebc-83c4-74c14847ce3a" (UID: "c7aba473-7601-4ebc-83c4-74c14847ce3a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:27:51.455345 kubelet[3400]: I0912 17:27:51.455332 3400 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c7aba473-7601-4ebc-83c4-74c14847ce3a" (UID: "c7aba473-7601-4ebc-83c4-74c14847ce3a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:27:51.456208 kubelet[3400]: I0912 17:27:51.456190 3400 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a086ac3d-35ef-408f-8541-248f710d0583-kube-api-access-7przg" (OuterVolumeSpecName: "kube-api-access-7przg") pod "a086ac3d-35ef-408f-8541-248f710d0583" (UID: "a086ac3d-35ef-408f-8541-248f710d0583"). InnerVolumeSpecName "kube-api-access-7przg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:27:51.456465 kubelet[3400]: I0912 17:27:51.456438 3400 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7aba473-7601-4ebc-83c4-74c14847ce3a-kube-api-access-nrjqm" (OuterVolumeSpecName: "kube-api-access-nrjqm") pod "c7aba473-7601-4ebc-83c4-74c14847ce3a" (UID: "c7aba473-7601-4ebc-83c4-74c14847ce3a"). InnerVolumeSpecName "kube-api-access-nrjqm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:27:51.456674 kubelet[3400]: I0912 17:27:51.456649 3400 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7aba473-7601-4ebc-83c4-74c14847ce3a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c7aba473-7601-4ebc-83c4-74c14847ce3a" (UID: "c7aba473-7601-4ebc-83c4-74c14847ce3a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 17:27:51.456841 kubelet[3400]: I0912 17:27:51.456825 3400 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a086ac3d-35ef-408f-8541-248f710d0583-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a086ac3d-35ef-408f-8541-248f710d0583" (UID: "a086ac3d-35ef-408f-8541-248f710d0583"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:27:51.456984 kubelet[3400]: I0912 17:27:51.456959 3400 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7aba473-7601-4ebc-83c4-74c14847ce3a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c7aba473-7601-4ebc-83c4-74c14847ce3a" (UID: "c7aba473-7601-4ebc-83c4-74c14847ce3a"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:27:51.552107 kubelet[3400]: I0912 17:27:51.551978 3400 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a086ac3d-35ef-408f-8541-248f710d0583-cilium-config-path\") on node \"ci-4426.1.0-a-dfa5c25729\" DevicePath \"\"" Sep 12 17:27:51.552107 kubelet[3400]: I0912 17:27:51.552000 3400 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-cilium-run\") on node \"ci-4426.1.0-a-dfa5c25729\" DevicePath \"\"" Sep 12 17:27:51.552107 kubelet[3400]: I0912 17:27:51.552009 3400 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-hostproc\") on node \"ci-4426.1.0-a-dfa5c25729\" DevicePath \"\"" Sep 12 17:27:51.552107 kubelet[3400]: I0912 17:27:51.552015 3400 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-etc-cni-netd\") on node \"ci-4426.1.0-a-dfa5c25729\" DevicePath \"\"" Sep 12 17:27:51.552107 kubelet[3400]: I0912 17:27:51.552020 3400 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-bpf-maps\") on node \"ci-4426.1.0-a-dfa5c25729\" DevicePath \"\"" Sep 12 17:27:51.552107 kubelet[3400]: I0912 17:27:51.552025 3400 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-lib-modules\") on node \"ci-4426.1.0-a-dfa5c25729\" DevicePath \"\"" Sep 12 17:27:51.552107 kubelet[3400]: I0912 17:27:51.552032 3400 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-xtables-lock\") on node \"ci-4426.1.0-a-dfa5c25729\" DevicePath \"\"" Sep 12 17:27:51.552107 kubelet[3400]: I0912 17:27:51.552037 3400 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c7aba473-7601-4ebc-83c4-74c14847ce3a-clustermesh-secrets\") on node \"ci-4426.1.0-a-dfa5c25729\" DevicePath \"\"" Sep 12 17:27:51.552300 kubelet[3400]: I0912 17:27:51.552042 3400 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c7aba473-7601-4ebc-83c4-74c14847ce3a-cilium-config-path\") on node \"ci-4426.1.0-a-dfa5c25729\" DevicePath \"\"" Sep 12 17:27:51.552300 kubelet[3400]: I0912 17:27:51.552047 3400 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-cni-path\") on node \"ci-4426.1.0-a-dfa5c25729\" DevicePath \"\"" Sep 12 17:27:51.552300 kubelet[3400]: I0912 17:27:51.552052 3400 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c7aba473-7601-4ebc-83c4-74c14847ce3a-hubble-tls\") on node \"ci-4426.1.0-a-dfa5c25729\" DevicePath \"\"" Sep 12 17:27:51.552300 kubelet[3400]: I0912 17:27:51.552059 3400 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-host-proc-sys-net\") on node \"ci-4426.1.0-a-dfa5c25729\" DevicePath \"\"" Sep 12 17:27:51.552300 kubelet[3400]: I0912 17:27:51.552064 3400 reconciler_common.go:299] "Volume 
detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-cilium-cgroup\") on node \"ci-4426.1.0-a-dfa5c25729\" DevicePath \"\"" Sep 12 17:27:51.552300 kubelet[3400]: I0912 17:27:51.552081 3400 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c7aba473-7601-4ebc-83c4-74c14847ce3a-host-proc-sys-kernel\") on node \"ci-4426.1.0-a-dfa5c25729\" DevicePath \"\"" Sep 12 17:27:51.552300 kubelet[3400]: I0912 17:27:51.552086 3400 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nrjqm\" (UniqueName: \"kubernetes.io/projected/c7aba473-7601-4ebc-83c4-74c14847ce3a-kube-api-access-nrjqm\") on node \"ci-4426.1.0-a-dfa5c25729\" DevicePath \"\"" Sep 12 17:27:51.552300 kubelet[3400]: I0912 17:27:51.552093 3400 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7przg\" (UniqueName: \"kubernetes.io/projected/a086ac3d-35ef-408f-8541-248f710d0583-kube-api-access-7przg\") on node \"ci-4426.1.0-a-dfa5c25729\" DevicePath \"\"" Sep 12 17:27:51.637789 systemd[1]: Removed slice kubepods-besteffort-poda086ac3d_35ef_408f_8541_248f710d0583.slice - libcontainer container kubepods-besteffort-poda086ac3d_35ef_408f_8541_248f710d0583.slice. Sep 12 17:27:52.185580 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b454cc1e4948998445e831d06c190c75da1e6560bad602181024917f7ae92053-shm.mount: Deactivated successfully. Sep 12 17:27:52.186027 systemd[1]: var-lib-kubelet-pods-a086ac3d\x2d35ef\x2d408f\x2d8541\x2d248f710d0583-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7przg.mount: Deactivated successfully. Sep 12 17:27:52.186085 systemd[1]: var-lib-kubelet-pods-c7aba473\x2d7601\x2d4ebc\x2d83c4\x2d74c14847ce3a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnrjqm.mount: Deactivated successfully. Sep 12 17:27:52.186127 systemd[1]: var-lib-kubelet-pods-c7aba473\x2d7601\x2d4ebc\x2d83c4\x2d74c14847ce3a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 12 17:27:52.186162 systemd[1]: var-lib-kubelet-pods-c7aba473\x2d7601\x2d4ebc\x2d83c4\x2d74c14847ce3a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 12 17:27:52.341287 systemd[1]: Removed slice kubepods-burstable-podc7aba473_7601_4ebc_83c4_74c14847ce3a.slice - libcontainer container kubepods-burstable-podc7aba473_7601_4ebc_83c4_74c14847ce3a.slice. Sep 12 17:27:52.341469 systemd[1]: kubepods-burstable-podc7aba473_7601_4ebc_83c4_74c14847ce3a.slice: Consumed 4.498s CPU time, 140.9M memory peak, 144K read from disk, 12.9M written to disk. Sep 12 17:27:52.903492 kubelet[3400]: I0912 17:27:52.902880 3400 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a086ac3d-35ef-408f-8541-248f710d0583" path="/var/lib/kubelet/pods/a086ac3d-35ef-408f-8541-248f710d0583/volumes" Sep 12 17:27:52.903492 kubelet[3400]: I0912 17:27:52.903217 3400 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7aba473-7601-4ebc-83c4-74c14847ce3a" path="/var/lib/kubelet/pods/c7aba473-7601-4ebc-83c4-74c14847ce3a/volumes" Sep 12 17:27:53.157858 sshd[5085]: Connection closed by 10.200.16.10 port 57586 Sep 12 17:27:53.158335 sshd-session[5082]: pam_unix(sshd:session): session closed for user core Sep 12 17:27:53.161942 systemd[1]: sshd@20-10.200.20.38:22-10.200.16.10:57586.service: Deactivated successfully. Sep 12 17:27:53.164097 systemd[1]: session-23.scope: Deactivated successfully. 
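
The burst of NotFound errors earlier in this passage follows one pattern: kubelet removes a container, then re-queries ContainerStatus for the same ID, and containerd reports it as already gone, so the errors are an expected part of tearing down the old Cilium pods rather than a fault. A minimal, illustrative Python sketch (assuming this journal excerpt has been saved to a plain-text file; journal.txt is a hypothetical name) that pairs the RemoveContainer requests with the later not-found responses:

    import re
    import sys

    # Match the 64-hex-digit container IDs as they appear in the kubelet lines above.
    removed = re.compile(r'"RemoveContainer" containerID="([0-9a-f]{64})"')
    notfound = re.compile(r'failed to get container status \\?"([0-9a-f]{64})')

    requested, missing = set(), set()
    with open(sys.argv[1] if len(sys.argv) > 1 else "journal.txt") as fh:
        for line in fh:
            requested.update(removed.findall(line))
            missing.update(notfound.findall(line))

    # IDs kubelet asked to remove and then could no longer look up: expected during teardown.
    for cid in sorted(requested & missing):
        print("removed, then reported NotFound:", cid[:12])
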
Sep 12 17:27:53.164816 systemd-logind[1809]: Session 23 logged out. Waiting for processes to exit. Sep 12 17:27:53.166013 systemd-logind[1809]: Removed session 23. Sep 12 17:27:53.231071 systemd[1]: Started sshd@21-10.200.20.38:22-10.200.16.10:49718.service - OpenSSH per-connection server daemon (10.200.16.10:49718). Sep 12 17:27:53.641465 sshd[5239]: Accepted publickey for core from 10.200.16.10 port 49718 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E Sep 12 17:27:53.642461 sshd-session[5239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:27:53.647704 systemd-logind[1809]: New session 24 of user core. Sep 12 17:27:53.650887 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 17:27:54.018827 kubelet[3400]: E0912 17:27:54.018791 3400 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 17:27:54.530886 systemd[1]: Created slice kubepods-burstable-podae641c74_c9ea_4c29_97f1_3d43be66f774.slice - libcontainer container kubepods-burstable-podae641c74_c9ea_4c29_97f1_3d43be66f774.slice. Sep 12 17:27:54.568289 sshd[5242]: Connection closed by 10.200.16.10 port 49718 Sep 12 17:27:54.569023 sshd-session[5239]: pam_unix(sshd:session): session closed for user core Sep 12 17:27:54.571907 systemd-logind[1809]: Session 24 logged out. Waiting for processes to exit. Sep 12 17:27:54.572403 systemd[1]: sshd@21-10.200.20.38:22-10.200.16.10:49718.service: Deactivated successfully. Sep 12 17:27:54.574755 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 17:27:54.576916 systemd-logind[1809]: Removed session 24. Sep 12 17:27:54.649643 systemd[1]: Started sshd@22-10.200.20.38:22-10.200.16.10:49724.service - OpenSSH per-connection server daemon (10.200.16.10:49724). 
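
The kubelet error above ("Container runtime network not ready ... cni plugin not initialized") surfaces in the node's Ready condition and persists until the replacement cilium-agent, started further below, writes a CNI configuration. A sketch of how that condition could be read with the official Kubernetes Python client, assuming the kubernetes package is installed and a kubeconfig with access to this cluster; the node name is taken from the volume-detach lines earlier in this log:

    from kubernetes import client, config  # assumes the 'kubernetes' package is installed

    config.load_kube_config()              # or config.load_incluster_config() when run in-cluster
    node = client.CoreV1Api().read_node("ci-4426.1.0-a-dfa5c25729")  # node name from this log

    for cond in node.status.conditions:
        if cond.type == "Ready":
            # While the CNI is uninitialized, the message carries the NetworkReady=false detail.
            print(cond.status, cond.reason, cond.message)
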
Sep 12 17:27:54.669986 kubelet[3400]: I0912 17:27:54.669947 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ae641c74-c9ea-4c29-97f1-3d43be66f774-hostproc\") pod \"cilium-zc9r7\" (UID: \"ae641c74-c9ea-4c29-97f1-3d43be66f774\") " pod="kube-system/cilium-zc9r7" Sep 12 17:27:54.669986 kubelet[3400]: I0912 17:27:54.669975 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ae641c74-c9ea-4c29-97f1-3d43be66f774-cni-path\") pod \"cilium-zc9r7\" (UID: \"ae641c74-c9ea-4c29-97f1-3d43be66f774\") " pod="kube-system/cilium-zc9r7" Sep 12 17:27:54.669986 kubelet[3400]: I0912 17:27:54.669990 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae641c74-c9ea-4c29-97f1-3d43be66f774-cilium-config-path\") pod \"cilium-zc9r7\" (UID: \"ae641c74-c9ea-4c29-97f1-3d43be66f774\") " pod="kube-system/cilium-zc9r7" Sep 12 17:27:54.670200 kubelet[3400]: I0912 17:27:54.670000 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ae641c74-c9ea-4c29-97f1-3d43be66f774-hubble-tls\") pod \"cilium-zc9r7\" (UID: \"ae641c74-c9ea-4c29-97f1-3d43be66f774\") " pod="kube-system/cilium-zc9r7" Sep 12 17:27:54.670200 kubelet[3400]: I0912 17:27:54.670009 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfrfp\" (UniqueName: \"kubernetes.io/projected/ae641c74-c9ea-4c29-97f1-3d43be66f774-kube-api-access-vfrfp\") pod \"cilium-zc9r7\" (UID: \"ae641c74-c9ea-4c29-97f1-3d43be66f774\") " pod="kube-system/cilium-zc9r7" Sep 12 17:27:54.670200 kubelet[3400]: I0912 17:27:54.670018 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ae641c74-c9ea-4c29-97f1-3d43be66f774-clustermesh-secrets\") pod \"cilium-zc9r7\" (UID: \"ae641c74-c9ea-4c29-97f1-3d43be66f774\") " pod="kube-system/cilium-zc9r7" Sep 12 17:27:54.670200 kubelet[3400]: I0912 17:27:54.670028 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ae641c74-c9ea-4c29-97f1-3d43be66f774-host-proc-sys-kernel\") pod \"cilium-zc9r7\" (UID: \"ae641c74-c9ea-4c29-97f1-3d43be66f774\") " pod="kube-system/cilium-zc9r7" Sep 12 17:27:54.670200 kubelet[3400]: I0912 17:27:54.670039 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ae641c74-c9ea-4c29-97f1-3d43be66f774-cilium-run\") pod \"cilium-zc9r7\" (UID: \"ae641c74-c9ea-4c29-97f1-3d43be66f774\") " pod="kube-system/cilium-zc9r7" Sep 12 17:27:54.670200 kubelet[3400]: I0912 17:27:54.670048 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae641c74-c9ea-4c29-97f1-3d43be66f774-xtables-lock\") pod \"cilium-zc9r7\" (UID: \"ae641c74-c9ea-4c29-97f1-3d43be66f774\") " pod="kube-system/cilium-zc9r7" Sep 12 17:27:54.670319 kubelet[3400]: I0912 17:27:54.670058 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/ae641c74-c9ea-4c29-97f1-3d43be66f774-etc-cni-netd\") pod \"cilium-zc9r7\" (UID: \"ae641c74-c9ea-4c29-97f1-3d43be66f774\") " pod="kube-system/cilium-zc9r7" Sep 12 17:27:54.670319 kubelet[3400]: I0912 17:27:54.670067 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae641c74-c9ea-4c29-97f1-3d43be66f774-lib-modules\") pod \"cilium-zc9r7\" (UID: \"ae641c74-c9ea-4c29-97f1-3d43be66f774\") " pod="kube-system/cilium-zc9r7" Sep 12 17:27:54.670319 kubelet[3400]: I0912 17:27:54.670077 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ae641c74-c9ea-4c29-97f1-3d43be66f774-cilium-ipsec-secrets\") pod \"cilium-zc9r7\" (UID: \"ae641c74-c9ea-4c29-97f1-3d43be66f774\") " pod="kube-system/cilium-zc9r7" Sep 12 17:27:54.670319 kubelet[3400]: I0912 17:27:54.670092 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ae641c74-c9ea-4c29-97f1-3d43be66f774-bpf-maps\") pod \"cilium-zc9r7\" (UID: \"ae641c74-c9ea-4c29-97f1-3d43be66f774\") " pod="kube-system/cilium-zc9r7" Sep 12 17:27:54.670319 kubelet[3400]: I0912 17:27:54.670103 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ae641c74-c9ea-4c29-97f1-3d43be66f774-cilium-cgroup\") pod \"cilium-zc9r7\" (UID: \"ae641c74-c9ea-4c29-97f1-3d43be66f774\") " pod="kube-system/cilium-zc9r7" Sep 12 17:27:54.670319 kubelet[3400]: I0912 17:27:54.670112 3400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ae641c74-c9ea-4c29-97f1-3d43be66f774-host-proc-sys-net\") pod \"cilium-zc9r7\" (UID: \"ae641c74-c9ea-4c29-97f1-3d43be66f774\") " pod="kube-system/cilium-zc9r7" Sep 12 17:27:54.835420 containerd[1826]: time="2025-09-12T17:27:54.835132583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zc9r7,Uid:ae641c74-c9ea-4c29-97f1-3d43be66f774,Namespace:kube-system,Attempt:0,}" Sep 12 17:27:54.872225 containerd[1826]: time="2025-09-12T17:27:54.872200499Z" level=info msg="connecting to shim e041560e42699732b3914805a78d6c9c862d764d15144d53650fce78fe5debb0" address="unix:///run/containerd/s/59a4dc5417daeb71911228a0be095ec3feb7f06491663db4b04a5d7923286198" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:27:54.888210 systemd[1]: Started cri-containerd-e041560e42699732b3914805a78d6c9c862d764d15144d53650fce78fe5debb0.scope - libcontainer container e041560e42699732b3914805a78d6c9c862d764d15144d53650fce78fe5debb0. 
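
At this point all of the cilium-zc9r7 volumes have been verified and the pause sandbox e041560e... has been started as a cri-containerd scope. A hedged sketch of how the sandbox could be cross-checked on the node by driving crictl from Python; it assumes crictl is installed on the host and configured for the containerd CRI socket, and that its JSON output option is available:

    import json
    import subprocess

    # List pod sandboxes whose name matches the new Cilium pod; crictl can emit JSON with -o json.
    out = subprocess.run(
        ["crictl", "pods", "--name", "cilium-zc9r7", "-o", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

    for pod in json.loads(out).get("items", []):
        print(pod["id"][:13], pod["metadata"]["name"], pod["state"])
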
Sep 12 17:27:54.908967 containerd[1826]: time="2025-09-12T17:27:54.908922234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zc9r7,Uid:ae641c74-c9ea-4c29-97f1-3d43be66f774,Namespace:kube-system,Attempt:0,} returns sandbox id \"e041560e42699732b3914805a78d6c9c862d764d15144d53650fce78fe5debb0\"" Sep 12 17:27:54.917561 containerd[1826]: time="2025-09-12T17:27:54.917536318Z" level=info msg="CreateContainer within sandbox \"e041560e42699732b3914805a78d6c9c862d764d15144d53650fce78fe5debb0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 17:27:54.932650 containerd[1826]: time="2025-09-12T17:27:54.932622633Z" level=info msg="Container 205880a82931c1ffb03812a9dc6000c640f82156b959aef73d630331186fe35f: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:27:54.945735 containerd[1826]: time="2025-09-12T17:27:54.945699737Z" level=info msg="CreateContainer within sandbox \"e041560e42699732b3914805a78d6c9c862d764d15144d53650fce78fe5debb0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"205880a82931c1ffb03812a9dc6000c640f82156b959aef73d630331186fe35f\"" Sep 12 17:27:54.946555 containerd[1826]: time="2025-09-12T17:27:54.946476967Z" level=info msg="StartContainer for \"205880a82931c1ffb03812a9dc6000c640f82156b959aef73d630331186fe35f\"" Sep 12 17:27:54.947503 containerd[1826]: time="2025-09-12T17:27:54.947341293Z" level=info msg="connecting to shim 205880a82931c1ffb03812a9dc6000c640f82156b959aef73d630331186fe35f" address="unix:///run/containerd/s/59a4dc5417daeb71911228a0be095ec3feb7f06491663db4b04a5d7923286198" protocol=ttrpc version=3 Sep 12 17:27:54.965898 systemd[1]: Started cri-containerd-205880a82931c1ffb03812a9dc6000c640f82156b959aef73d630331186fe35f.scope - libcontainer container 205880a82931c1ffb03812a9dc6000c640f82156b959aef73d630331186fe35f. Sep 12 17:27:54.988850 containerd[1826]: time="2025-09-12T17:27:54.988754701Z" level=info msg="StartContainer for \"205880a82931c1ffb03812a9dc6000c640f82156b959aef73d630331186fe35f\" returns successfully" Sep 12 17:27:54.992850 systemd[1]: cri-containerd-205880a82931c1ffb03812a9dc6000c640f82156b959aef73d630331186fe35f.scope: Deactivated successfully. Sep 12 17:27:54.995359 containerd[1826]: time="2025-09-12T17:27:54.995327486Z" level=info msg="received exit event container_id:\"205880a82931c1ffb03812a9dc6000c640f82156b959aef73d630331186fe35f\" id:\"205880a82931c1ffb03812a9dc6000c640f82156b959aef73d630331186fe35f\" pid:5316 exited_at:{seconds:1757698074 nanos:994976728}" Sep 12 17:27:54.995724 containerd[1826]: time="2025-09-12T17:27:54.995693844Z" level=info msg="TaskExit event in podsandbox handler container_id:\"205880a82931c1ffb03812a9dc6000c640f82156b959aef73d630331186fe35f\" id:\"205880a82931c1ffb03812a9dc6000c640f82156b959aef73d630331186fe35f\" pid:5316 exited_at:{seconds:1757698074 nanos:994976728}" Sep 12 17:27:55.105683 sshd[5253]: Accepted publickey for core from 10.200.16.10 port 49724 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E Sep 12 17:27:55.106597 sshd-session[5253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:27:55.110048 systemd-logind[1809]: New session 25 of user core. Sep 12 17:27:55.115874 systemd[1]: Started session-25.scope - Session 25 of User core. 
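
The exit event above records when the mount-cgroup init container finished as a seconds/nanos pair (exited_at:{seconds:1757698074 nanos:994976728}). A small standard-library Python check that converts that pair back to a wall-clock UTC time and confirms it lines up with the surrounding 17:27:54.99 timestamps:

    from datetime import datetime, timezone

    seconds, nanos = 1757698074, 994976728   # exited_at fields for the mount-cgroup container
    exited = datetime.fromtimestamp(seconds, tz=timezone.utc).replace(microsecond=nanos // 1000)
    print(exited.isoformat())                # 2025-09-12T17:27:54.994976+00:00
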
Sep 12 17:27:55.350479 containerd[1826]: time="2025-09-12T17:27:55.350450688Z" level=info msg="CreateContainer within sandbox \"e041560e42699732b3914805a78d6c9c862d764d15144d53650fce78fe5debb0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 17:27:55.373283 containerd[1826]: time="2025-09-12T17:27:55.373114942Z" level=info msg="Container 51c588a697460f94ba0921d92b6ea1896a2876b0a8ab3e549aedbb24dfdeb991: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:27:55.387802 containerd[1826]: time="2025-09-12T17:27:55.387752321Z" level=info msg="CreateContainer within sandbox \"e041560e42699732b3914805a78d6c9c862d764d15144d53650fce78fe5debb0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"51c588a697460f94ba0921d92b6ea1896a2876b0a8ab3e549aedbb24dfdeb991\"" Sep 12 17:27:55.388687 containerd[1826]: time="2025-09-12T17:27:55.388667585Z" level=info msg="StartContainer for \"51c588a697460f94ba0921d92b6ea1896a2876b0a8ab3e549aedbb24dfdeb991\"" Sep 12 17:27:55.389234 containerd[1826]: time="2025-09-12T17:27:55.389209922Z" level=info msg="connecting to shim 51c588a697460f94ba0921d92b6ea1896a2876b0a8ab3e549aedbb24dfdeb991" address="unix:///run/containerd/s/59a4dc5417daeb71911228a0be095ec3feb7f06491663db4b04a5d7923286198" protocol=ttrpc version=3 Sep 12 17:27:55.408870 systemd[1]: Started cri-containerd-51c588a697460f94ba0921d92b6ea1896a2876b0a8ab3e549aedbb24dfdeb991.scope - libcontainer container 51c588a697460f94ba0921d92b6ea1896a2876b0a8ab3e549aedbb24dfdeb991. Sep 12 17:27:55.431313 systemd[1]: cri-containerd-51c588a697460f94ba0921d92b6ea1896a2876b0a8ab3e549aedbb24dfdeb991.scope: Deactivated successfully. Sep 12 17:27:55.433224 containerd[1826]: time="2025-09-12T17:27:55.433197125Z" level=info msg="received exit event container_id:\"51c588a697460f94ba0921d92b6ea1896a2876b0a8ab3e549aedbb24dfdeb991\" id:\"51c588a697460f94ba0921d92b6ea1896a2876b0a8ab3e549aedbb24dfdeb991\" pid:5366 exited_at:{seconds:1757698075 nanos:431962720}" Sep 12 17:27:55.433559 containerd[1826]: time="2025-09-12T17:27:55.433538243Z" level=info msg="TaskExit event in podsandbox handler container_id:\"51c588a697460f94ba0921d92b6ea1896a2876b0a8ab3e549aedbb24dfdeb991\" id:\"51c588a697460f94ba0921d92b6ea1896a2876b0a8ab3e549aedbb24dfdeb991\" pid:5366 exited_at:{seconds:1757698075 nanos:431962720}" Sep 12 17:27:55.433914 containerd[1826]: time="2025-09-12T17:27:55.433894217Z" level=info msg="StartContainer for \"51c588a697460f94ba0921d92b6ea1896a2876b0a8ab3e549aedbb24dfdeb991\" returns successfully" Sep 12 17:27:55.434469 sshd[5351]: Connection closed by 10.200.16.10 port 49724 Sep 12 17:27:55.435460 sshd-session[5253]: pam_unix(sshd:session): session closed for user core Sep 12 17:27:55.439398 systemd[1]: sshd@22-10.200.20.38:22-10.200.16.10:49724.service: Deactivated successfully. Sep 12 17:27:55.439884 systemd-logind[1809]: Session 25 logged out. Waiting for processes to exit. Sep 12 17:27:55.441207 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 17:27:55.445175 systemd-logind[1809]: Removed session 25. Sep 12 17:27:55.512905 systemd[1]: Started sshd@23-10.200.20.38:22-10.200.16.10:49734.service - OpenSSH per-connection server daemon (10.200.16.10:49734). 
Sep 12 17:27:55.933921 sshd[5402]: Accepted publickey for core from 10.200.16.10 port 49734 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E Sep 12 17:27:55.934943 sshd-session[5402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:27:55.938666 systemd-logind[1809]: New session 26 of user core. Sep 12 17:27:55.947883 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 12 17:27:56.354366 containerd[1826]: time="2025-09-12T17:27:56.354333337Z" level=info msg="CreateContainer within sandbox \"e041560e42699732b3914805a78d6c9c862d764d15144d53650fce78fe5debb0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 17:27:56.380785 containerd[1826]: time="2025-09-12T17:27:56.379004393Z" level=info msg="Container 4ac305db297662a820dc1f34033745d7eab697948639bf9870f739c98aa30320: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:27:56.403626 containerd[1826]: time="2025-09-12T17:27:56.403595719Z" level=info msg="CreateContainer within sandbox \"e041560e42699732b3914805a78d6c9c862d764d15144d53650fce78fe5debb0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4ac305db297662a820dc1f34033745d7eab697948639bf9870f739c98aa30320\"" Sep 12 17:27:56.404247 containerd[1826]: time="2025-09-12T17:27:56.404226010Z" level=info msg="StartContainer for \"4ac305db297662a820dc1f34033745d7eab697948639bf9870f739c98aa30320\"" Sep 12 17:27:56.405423 containerd[1826]: time="2025-09-12T17:27:56.405401702Z" level=info msg="connecting to shim 4ac305db297662a820dc1f34033745d7eab697948639bf9870f739c98aa30320" address="unix:///run/containerd/s/59a4dc5417daeb71911228a0be095ec3feb7f06491663db4b04a5d7923286198" protocol=ttrpc version=3 Sep 12 17:27:56.426883 systemd[1]: Started cri-containerd-4ac305db297662a820dc1f34033745d7eab697948639bf9870f739c98aa30320.scope - libcontainer container 4ac305db297662a820dc1f34033745d7eab697948639bf9870f739c98aa30320. Sep 12 17:27:56.450683 systemd[1]: cri-containerd-4ac305db297662a820dc1f34033745d7eab697948639bf9870f739c98aa30320.scope: Deactivated successfully. Sep 12 17:27:56.451816 containerd[1826]: time="2025-09-12T17:27:56.451416636Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4ac305db297662a820dc1f34033745d7eab697948639bf9870f739c98aa30320\" id:\"4ac305db297662a820dc1f34033745d7eab697948639bf9870f739c98aa30320\" pid:5424 exited_at:{seconds:1757698076 nanos:451251497}" Sep 12 17:27:56.455071 containerd[1826]: time="2025-09-12T17:27:56.454986074Z" level=info msg="received exit event container_id:\"4ac305db297662a820dc1f34033745d7eab697948639bf9870f739c98aa30320\" id:\"4ac305db297662a820dc1f34033745d7eab697948639bf9870f739c98aa30320\" pid:5424 exited_at:{seconds:1757698076 nanos:451251497}" Sep 12 17:27:56.461110 containerd[1826]: time="2025-09-12T17:27:56.461086226Z" level=info msg="StartContainer for \"4ac305db297662a820dc1f34033745d7eab697948639bf9870f739c98aa30320\" returns successfully" Sep 12 17:27:56.471048 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ac305db297662a820dc1f34033745d7eab697948639bf9870f739c98aa30320-rootfs.mount: Deactivated successfully. 
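
Each of the short-lived Cilium init containers in this log goes through the same lifecycle: CreateContainer inside sandbox e041560e..., StartContainer, the cri-containerd scope deactivates when the process exits, and the container's rootfs mount is cleaned up. A minimal sketch, reusing the hypothetical journal.txt excerpt from earlier, that recovers the ordered container names and IDs from the "returns container id" messages:

    import re

    pattern = re.compile(
        r'ContainerMetadata\{Name:([\w-]+),Attempt:\d+,\} returns container id \\?"([0-9a-f]{64})'
    )

    with open("journal.txt") as fh:          # the same hypothetical saved excerpt as above
        for line in fh:
            for name, cid in pattern.findall(line):
                print(f"{name:24s} {cid[:12]}")
    # Expected order in this log: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs,
    # clean-cilium-state, cilium-agent
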
Sep 12 17:27:57.356825 containerd[1826]: time="2025-09-12T17:27:57.356152790Z" level=info msg="CreateContainer within sandbox \"e041560e42699732b3914805a78d6c9c862d764d15144d53650fce78fe5debb0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 17:27:57.379672 containerd[1826]: time="2025-09-12T17:27:57.379645810Z" level=info msg="Container edb064c85bf98fb7e3edc0aa3554d813ef71c475ff0b32109b357348b12bd2a1: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:27:57.393609 containerd[1826]: time="2025-09-12T17:27:57.393583041Z" level=info msg="CreateContainer within sandbox \"e041560e42699732b3914805a78d6c9c862d764d15144d53650fce78fe5debb0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"edb064c85bf98fb7e3edc0aa3554d813ef71c475ff0b32109b357348b12bd2a1\"" Sep 12 17:27:57.394056 containerd[1826]: time="2025-09-12T17:27:57.394039401Z" level=info msg="StartContainer for \"edb064c85bf98fb7e3edc0aa3554d813ef71c475ff0b32109b357348b12bd2a1\"" Sep 12 17:27:57.394792 containerd[1826]: time="2025-09-12T17:27:57.394714900Z" level=info msg="connecting to shim edb064c85bf98fb7e3edc0aa3554d813ef71c475ff0b32109b357348b12bd2a1" address="unix:///run/containerd/s/59a4dc5417daeb71911228a0be095ec3feb7f06491663db4b04a5d7923286198" protocol=ttrpc version=3 Sep 12 17:27:57.417893 systemd[1]: Started cri-containerd-edb064c85bf98fb7e3edc0aa3554d813ef71c475ff0b32109b357348b12bd2a1.scope - libcontainer container edb064c85bf98fb7e3edc0aa3554d813ef71c475ff0b32109b357348b12bd2a1. Sep 12 17:27:57.434940 systemd[1]: cri-containerd-edb064c85bf98fb7e3edc0aa3554d813ef71c475ff0b32109b357348b12bd2a1.scope: Deactivated successfully. Sep 12 17:27:57.436012 containerd[1826]: time="2025-09-12T17:27:57.435980313Z" level=info msg="TaskExit event in podsandbox handler container_id:\"edb064c85bf98fb7e3edc0aa3554d813ef71c475ff0b32109b357348b12bd2a1\" id:\"edb064c85bf98fb7e3edc0aa3554d813ef71c475ff0b32109b357348b12bd2a1\" pid:5463 exited_at:{seconds:1757698077 nanos:435752237}" Sep 12 17:27:57.441018 containerd[1826]: time="2025-09-12T17:27:57.440875701Z" level=info msg="received exit event container_id:\"edb064c85bf98fb7e3edc0aa3554d813ef71c475ff0b32109b357348b12bd2a1\" id:\"edb064c85bf98fb7e3edc0aa3554d813ef71c475ff0b32109b357348b12bd2a1\" pid:5463 exited_at:{seconds:1757698077 nanos:435752237}" Sep 12 17:27:57.441490 containerd[1826]: time="2025-09-12T17:27:57.441452911Z" level=info msg="StartContainer for \"edb064c85bf98fb7e3edc0aa3554d813ef71c475ff0b32109b357348b12bd2a1\" returns successfully" Sep 12 17:27:57.455856 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-edb064c85bf98fb7e3edc0aa3554d813ef71c475ff0b32109b357348b12bd2a1-rootfs.mount: Deactivated successfully. Sep 12 17:27:57.901381 kubelet[3400]: E0912 17:27:57.901154 3400 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-qgp29" podUID="b8ddf37f-2789-4f0c-9ca3-f02542ba523e" Sep 12 17:27:58.362747 containerd[1826]: time="2025-09-12T17:27:58.362711101Z" level=info msg="CreateContainer within sandbox \"e041560e42699732b3914805a78d6c9c862d764d15144d53650fce78fe5debb0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 17:27:58.388532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount596488010.mount: Deactivated successfully. 
Sep 12 17:27:58.393946 containerd[1826]: time="2025-09-12T17:27:58.392656103Z" level=info msg="Container 327ae1ede91932c25913ff7231b62bdfea6d579dd84d9548b957260472eae299: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:27:58.393617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2505982336.mount: Deactivated successfully. Sep 12 17:27:58.409131 containerd[1826]: time="2025-09-12T17:27:58.409056129Z" level=info msg="CreateContainer within sandbox \"e041560e42699732b3914805a78d6c9c862d764d15144d53650fce78fe5debb0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"327ae1ede91932c25913ff7231b62bdfea6d579dd84d9548b957260472eae299\"" Sep 12 17:27:58.409408 containerd[1826]: time="2025-09-12T17:27:58.409386206Z" level=info msg="StartContainer for \"327ae1ede91932c25913ff7231b62bdfea6d579dd84d9548b957260472eae299\"" Sep 12 17:27:58.409990 containerd[1826]: time="2025-09-12T17:27:58.409966936Z" level=info msg="connecting to shim 327ae1ede91932c25913ff7231b62bdfea6d579dd84d9548b957260472eae299" address="unix:///run/containerd/s/59a4dc5417daeb71911228a0be095ec3feb7f06491663db4b04a5d7923286198" protocol=ttrpc version=3 Sep 12 17:27:58.427869 systemd[1]: Started cri-containerd-327ae1ede91932c25913ff7231b62bdfea6d579dd84d9548b957260472eae299.scope - libcontainer container 327ae1ede91932c25913ff7231b62bdfea6d579dd84d9548b957260472eae299. Sep 12 17:27:58.459055 containerd[1826]: time="2025-09-12T17:27:58.459027659Z" level=info msg="StartContainer for \"327ae1ede91932c25913ff7231b62bdfea6d579dd84d9548b957260472eae299\" returns successfully" Sep 12 17:27:58.513063 containerd[1826]: time="2025-09-12T17:27:58.513019458Z" level=info msg="TaskExit event in podsandbox handler container_id:\"327ae1ede91932c25913ff7231b62bdfea6d579dd84d9548b957260472eae299\" id:\"c9f2cfa4dd97856ba2b56333d535b41178f44b3ffe6b40f3dc0bb262e7fdbc97\" pid:5531 exited_at:{seconds:1757698078 nanos:512843727}" Sep 12 17:27:58.819802 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 12 17:27:58.919054 containerd[1826]: time="2025-09-12T17:27:58.918920964Z" level=info msg="StopPodSandbox for \"3340c7258545ac24a81fbd22dc156ce9dda054e33fe3d00b03e133102c768099\"" Sep 12 17:27:58.919054 containerd[1826]: time="2025-09-12T17:27:58.919029526Z" level=info msg="TearDown network for sandbox \"3340c7258545ac24a81fbd22dc156ce9dda054e33fe3d00b03e133102c768099\" successfully" Sep 12 17:27:58.919054 containerd[1826]: time="2025-09-12T17:27:58.919036574Z" level=info msg="StopPodSandbox for \"3340c7258545ac24a81fbd22dc156ce9dda054e33fe3d00b03e133102c768099\" returns successfully" Sep 12 17:27:58.919541 containerd[1826]: time="2025-09-12T17:27:58.919518439Z" level=info msg="RemovePodSandbox for \"3340c7258545ac24a81fbd22dc156ce9dda054e33fe3d00b03e133102c768099\"" Sep 12 17:27:58.919606 containerd[1826]: time="2025-09-12T17:27:58.919544280Z" level=info msg="Forcibly stopping sandbox \"3340c7258545ac24a81fbd22dc156ce9dda054e33fe3d00b03e133102c768099\"" Sep 12 17:27:58.919606 containerd[1826]: time="2025-09-12T17:27:58.919603569Z" level=info msg="TearDown network for sandbox \"3340c7258545ac24a81fbd22dc156ce9dda054e33fe3d00b03e133102c768099\" successfully" Sep 12 17:27:58.921612 containerd[1826]: time="2025-09-12T17:27:58.921588589Z" level=info msg="Ensure that sandbox 3340c7258545ac24a81fbd22dc156ce9dda054e33fe3d00b03e133102c768099 in task-service has been cleanup successfully" Sep 12 17:27:58.938311 containerd[1826]: time="2025-09-12T17:27:58.938279886Z" level=info 
msg="RemovePodSandbox \"3340c7258545ac24a81fbd22dc156ce9dda054e33fe3d00b03e133102c768099\" returns successfully" Sep 12 17:27:58.938735 containerd[1826]: time="2025-09-12T17:27:58.938589691Z" level=info msg="StopPodSandbox for \"b454cc1e4948998445e831d06c190c75da1e6560bad602181024917f7ae92053\"" Sep 12 17:27:58.938735 containerd[1826]: time="2025-09-12T17:27:58.938687029Z" level=info msg="TearDown network for sandbox \"b454cc1e4948998445e831d06c190c75da1e6560bad602181024917f7ae92053\" successfully" Sep 12 17:27:58.938735 containerd[1826]: time="2025-09-12T17:27:58.938696669Z" level=info msg="StopPodSandbox for \"b454cc1e4948998445e831d06c190c75da1e6560bad602181024917f7ae92053\" returns successfully" Sep 12 17:27:58.938974 containerd[1826]: time="2025-09-12T17:27:58.938937514Z" level=info msg="RemovePodSandbox for \"b454cc1e4948998445e831d06c190c75da1e6560bad602181024917f7ae92053\"" Sep 12 17:27:58.938974 containerd[1826]: time="2025-09-12T17:27:58.938956786Z" level=info msg="Forcibly stopping sandbox \"b454cc1e4948998445e831d06c190c75da1e6560bad602181024917f7ae92053\"" Sep 12 17:27:58.939846 containerd[1826]: time="2025-09-12T17:27:58.939098845Z" level=info msg="TearDown network for sandbox \"b454cc1e4948998445e831d06c190c75da1e6560bad602181024917f7ae92053\" successfully" Sep 12 17:27:58.939846 containerd[1826]: time="2025-09-12T17:27:58.939819074Z" level=info msg="Ensure that sandbox b454cc1e4948998445e831d06c190c75da1e6560bad602181024917f7ae92053 in task-service has been cleanup successfully" Sep 12 17:27:58.949855 containerd[1826]: time="2025-09-12T17:27:58.949829393Z" level=info msg="RemovePodSandbox \"b454cc1e4948998445e831d06c190c75da1e6560bad602181024917f7ae92053\" returns successfully" Sep 12 17:28:00.398100 containerd[1826]: time="2025-09-12T17:28:00.398049893Z" level=info msg="TaskExit event in podsandbox handler container_id:\"327ae1ede91932c25913ff7231b62bdfea6d579dd84d9548b957260472eae299\" id:\"4199db14c02e467ab9388772b4aa6786fd019b248841829115d8b6f7ada8da02\" pid:5689 exit_status:1 exited_at:{seconds:1757698080 nanos:397798289}" Sep 12 17:28:01.177844 systemd-networkd[1566]: lxc_health: Link UP Sep 12 17:28:01.191937 systemd-networkd[1566]: lxc_health: Gained carrier Sep 12 17:28:02.323908 systemd-networkd[1566]: lxc_health: Gained IPv6LL Sep 12 17:28:02.484795 containerd[1826]: time="2025-09-12T17:28:02.484053676Z" level=info msg="TaskExit event in podsandbox handler container_id:\"327ae1ede91932c25913ff7231b62bdfea6d579dd84d9548b957260472eae299\" id:\"c7a96be1694c229b48b3c285631075ea5e15c359356a2a17edbb768800b90b16\" pid:6060 exited_at:{seconds:1757698082 nanos:483578339}" Sep 12 17:28:02.853347 kubelet[3400]: I0912 17:28:02.852659 3400 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zc9r7" podStartSLOduration=8.852646597 podStartE2EDuration="8.852646597s" podCreationTimestamp="2025-09-12 17:27:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:27:59.375563917 +0000 UTC m=+240.540541476" watchObservedRunningTime="2025-09-12 17:28:02.852646597 +0000 UTC m=+244.017624156" Sep 12 17:28:04.564614 containerd[1826]: time="2025-09-12T17:28:04.564508846Z" level=info msg="TaskExit event in podsandbox handler container_id:\"327ae1ede91932c25913ff7231b62bdfea6d579dd84d9548b957260472eae299\" id:\"2782a372fd1a9d9eb0965e34948750d6589b70d56dacf506edeeb39d79c23490\" pid:6094 exited_at:{seconds:1757698084 nanos:564304002}" Sep 12 17:28:06.641422 
containerd[1826]: time="2025-09-12T17:28:06.641287140Z" level=info msg="TaskExit event in podsandbox handler container_id:\"327ae1ede91932c25913ff7231b62bdfea6d579dd84d9548b957260472eae299\" id:\"767a32b308eeba37b512290aebe012445471fa000c1c84fb148b4c3747dbc71f\" pid:6115 exited_at:{seconds:1757698086 nanos:640814148}" Sep 12 17:28:06.727693 sshd[5405]: Connection closed by 10.200.16.10 port 49734 Sep 12 17:28:06.727627 sshd-session[5402]: pam_unix(sshd:session): session closed for user core Sep 12 17:28:06.730706 systemd[1]: sshd@23-10.200.20.38:22-10.200.16.10:49734.service: Deactivated successfully. Sep 12 17:28:06.732569 systemd[1]: session-26.scope: Deactivated successfully. Sep 12 17:28:06.733316 systemd-logind[1809]: Session 26 logged out. Waiting for processes to exit. Sep 12 17:28:06.734409 systemd-logind[1809]: Removed session 26.
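
The pod_startup_latency_tracker line earlier reports podStartSLOduration=8.852646597s for cilium-zc9r7, which is exactly the gap between podCreationTimestamp (2025-09-12 17:27:54 UTC) and observedRunningTime (2025-09-12 17:28:02.852646597 UTC). A quick arithmetic check in Python:

    from datetime import datetime, timezone

    created = datetime(2025, 9, 12, 17, 27, 54, tzinfo=timezone.utc)          # podCreationTimestamp
    running = datetime(2025, 9, 12, 17, 28, 2, 852646, tzinfo=timezone.utc)   # observedRunningTime (to µs)
    leftover_nanos = 597e-9                                                   # trailing nanoseconds

    print(f"{(running - created).total_seconds() + leftover_nanos:.9f}s")     # 8.852646597s
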