Sep 16 04:26:24.038591 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490] Sep 16 04:26:24.038611 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Tue Sep 16 03:05:48 -00 2025 Sep 16 04:26:24.038618 kernel: KASLR enabled Sep 16 04:26:24.038622 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Sep 16 04:26:24.038627 kernel: printk: legacy bootconsole [pl11] enabled Sep 16 04:26:24.038631 kernel: efi: EFI v2.7 by EDK II Sep 16 04:26:24.038636 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20f698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598 Sep 16 04:26:24.038640 kernel: random: crng init done Sep 16 04:26:24.038644 kernel: secureboot: Secure boot disabled Sep 16 04:26:24.038648 kernel: ACPI: Early table checksum verification disabled Sep 16 04:26:24.038652 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Sep 16 04:26:24.038656 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 16 04:26:24.038660 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 16 04:26:24.038665 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Sep 16 04:26:24.038670 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 16 04:26:24.038674 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 16 04:26:24.038679 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 16 04:26:24.038683 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 16 04:26:24.038688 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 16 04:26:24.038692 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 16 04:26:24.038696 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Sep 16 04:26:24.038701 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 16 04:26:24.038705 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Sep 16 04:26:24.038709 kernel: ACPI: Use ACPI SPCR as default console: No Sep 16 04:26:24.038713 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Sep 16 04:26:24.038718 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug Sep 16 04:26:24.038722 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug Sep 16 04:26:24.038726 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Sep 16 04:26:24.038730 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Sep 16 04:26:24.038735 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Sep 16 04:26:24.038740 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Sep 16 04:26:24.038744 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Sep 16 04:26:24.038748 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Sep 16 04:26:24.038752 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Sep 16 04:26:24.038771 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Sep 16 04:26:24.038775 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x800000000000-0xffffffffffff] hotplug Sep 16 04:26:24.038779 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff] Sep 16 04:26:24.038784 kernel: NODE_DATA(0) allocated [mem 0x1bf7fda00-0x1bf804fff] Sep 16 04:26:24.038788 kernel: Zone ranges: Sep 16 04:26:24.038792 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Sep 16 04:26:24.038800 kernel: DMA32 empty Sep 16 04:26:24.038804 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Sep 16 04:26:24.038808 kernel: Device empty Sep 16 04:26:24.038813 kernel: Movable zone start for each node Sep 16 04:26:24.038817 kernel: Early memory node ranges Sep 16 04:26:24.038822 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Sep 16 04:26:24.038827 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff] Sep 16 04:26:24.038831 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff] Sep 16 04:26:24.038835 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff] Sep 16 04:26:24.038840 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Sep 16 04:26:24.038844 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Sep 16 04:26:24.038848 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Sep 16 04:26:24.038853 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Sep 16 04:26:24.038857 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Sep 16 04:26:24.038861 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Sep 16 04:26:24.038866 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Sep 16 04:26:24.038870 kernel: cma: Reserved 16 MiB at 0x000000003d400000 on node -1 Sep 16 04:26:24.038875 kernel: psci: probing for conduit method from ACPI. Sep 16 04:26:24.038880 kernel: psci: PSCIv1.1 detected in firmware. Sep 16 04:26:24.038884 kernel: psci: Using standard PSCI v0.2 function IDs Sep 16 04:26:24.038888 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Sep 16 04:26:24.038893 kernel: psci: SMC Calling Convention v1.4 Sep 16 04:26:24.038897 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Sep 16 04:26:24.038901 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Sep 16 04:26:24.038906 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Sep 16 04:26:24.038910 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Sep 16 04:26:24.038915 kernel: pcpu-alloc: [0] 0 [0] 1 Sep 16 04:26:24.038919 kernel: Detected PIPT I-cache on CPU0 Sep 16 04:26:24.038924 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) Sep 16 04:26:24.038929 kernel: CPU features: detected: GIC system register CPU interface Sep 16 04:26:24.038933 kernel: CPU features: detected: Spectre-v4 Sep 16 04:26:24.038937 kernel: CPU features: detected: Spectre-BHB Sep 16 04:26:24.038942 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 16 04:26:24.038946 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 16 04:26:24.038951 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 Sep 16 04:26:24.038955 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 16 04:26:24.038959 kernel: alternatives: applying boot alternatives Sep 16 04:26:24.038965 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=eff5cc3c399cf6fc52e3071751a09276871b099078da6d1b1a498405d04a9313 Sep 16 04:26:24.038969 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 16 04:26:24.038975 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 16 04:26:24.038979 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 16 04:26:24.038984 kernel: Fallback order for Node 0: 0 Sep 16 04:26:24.038988 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 Sep 16 04:26:24.038992 kernel: Policy zone: Normal Sep 16 04:26:24.038997 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 16 04:26:24.039001 kernel: software IO TLB: area num 2. Sep 16 04:26:24.039006 kernel: software IO TLB: mapped [mem 0x0000000036280000-0x000000003a280000] (64MB) Sep 16 04:26:24.039010 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 16 04:26:24.039014 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 16 04:26:24.039019 kernel: rcu: RCU event tracing is enabled. Sep 16 04:26:24.039025 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 16 04:26:24.039029 kernel: Trampoline variant of Tasks RCU enabled. Sep 16 04:26:24.039034 kernel: Tracing variant of Tasks RCU enabled. Sep 16 04:26:24.039038 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 16 04:26:24.039043 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 16 04:26:24.039047 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 16 04:26:24.039052 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Sep 16 04:26:24.039056 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 16 04:26:24.039061 kernel: GICv3: 960 SPIs implemented Sep 16 04:26:24.039065 kernel: GICv3: 0 Extended SPIs implemented Sep 16 04:26:24.039069 kernel: Root IRQ handler: gic_handle_irq Sep 16 04:26:24.039074 kernel: GICv3: GICv3 features: 16 PPIs, RSS Sep 16 04:26:24.039079 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 Sep 16 04:26:24.039083 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Sep 16 04:26:24.039088 kernel: ITS: No ITS available, not enabling LPIs Sep 16 04:26:24.039092 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 16 04:26:24.039097 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). Sep 16 04:26:24.039101 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 16 04:26:24.039106 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns Sep 16 04:26:24.039110 kernel: Console: colour dummy device 80x25 Sep 16 04:26:24.039115 kernel: printk: legacy console [tty1] enabled Sep 16 04:26:24.039119 kernel: ACPI: Core revision 20240827 Sep 16 04:26:24.039124 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) Sep 16 04:26:24.039129 kernel: pid_max: default: 32768 minimum: 301 Sep 16 04:26:24.039134 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 16 04:26:24.039138 kernel: landlock: Up and running. Sep 16 04:26:24.039143 kernel: SELinux: Initializing. Sep 16 04:26:24.039148 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 16 04:26:24.039155 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 16 04:26:24.039161 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x1a0000e, misc 0x31e1 Sep 16 04:26:24.039166 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0 Sep 16 04:26:24.039171 kernel: Hyper-V: enabling crash_kexec_post_notifiers Sep 16 04:26:24.039175 kernel: rcu: Hierarchical SRCU implementation. Sep 16 04:26:24.039180 kernel: rcu: Max phase no-delay instances is 400. Sep 16 04:26:24.039185 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 16 04:26:24.039190 kernel: Remapping and enabling EFI services. Sep 16 04:26:24.039195 kernel: smp: Bringing up secondary CPUs ... Sep 16 04:26:24.039200 kernel: Detected PIPT I-cache on CPU1 Sep 16 04:26:24.039205 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Sep 16 04:26:24.039210 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] Sep 16 04:26:24.039215 kernel: smp: Brought up 1 node, 2 CPUs Sep 16 04:26:24.039220 kernel: SMP: Total of 2 processors activated. 
Sep 16 04:26:24.039224 kernel: CPU: All CPU(s) started at EL1 Sep 16 04:26:24.039229 kernel: CPU features: detected: 32-bit EL0 Support Sep 16 04:26:24.039234 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Sep 16 04:26:24.039239 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 16 04:26:24.039243 kernel: CPU features: detected: Common not Private translations Sep 16 04:26:24.039248 kernel: CPU features: detected: CRC32 instructions Sep 16 04:26:24.039254 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) Sep 16 04:26:24.039259 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 16 04:26:24.039263 kernel: CPU features: detected: LSE atomic instructions Sep 16 04:26:24.039268 kernel: CPU features: detected: Privileged Access Never Sep 16 04:26:24.039273 kernel: CPU features: detected: Speculation barrier (SB) Sep 16 04:26:24.039278 kernel: CPU features: detected: TLB range maintenance instructions Sep 16 04:26:24.039283 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 16 04:26:24.039287 kernel: CPU features: detected: Scalable Vector Extension Sep 16 04:26:24.039292 kernel: alternatives: applying system-wide alternatives Sep 16 04:26:24.039298 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Sep 16 04:26:24.039302 kernel: SVE: maximum available vector length 16 bytes per vector Sep 16 04:26:24.039307 kernel: SVE: default vector length 16 bytes per vector Sep 16 04:26:24.039312 kernel: Memory: 3959604K/4194160K available (11136K kernel code, 2440K rwdata, 9068K rodata, 38976K init, 1038K bss, 213368K reserved, 16384K cma-reserved) Sep 16 04:26:24.039317 kernel: devtmpfs: initialized Sep 16 04:26:24.039322 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 16 04:26:24.039327 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 16 04:26:24.039331 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 16 04:26:24.039336 kernel: 0 pages in range for non-PLT usage Sep 16 04:26:24.039342 kernel: 508560 pages in range for PLT usage Sep 16 04:26:24.039346 kernel: pinctrl core: initialized pinctrl subsystem Sep 16 04:26:24.039351 kernel: SMBIOS 3.1.0 present. Sep 16 04:26:24.039356 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Sep 16 04:26:24.039361 kernel: DMI: Memory slots populated: 2/2 Sep 16 04:26:24.039366 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 16 04:26:24.039370 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 16 04:26:24.039375 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 16 04:26:24.039380 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 16 04:26:24.039386 kernel: audit: initializing netlink subsys (disabled) Sep 16 04:26:24.039390 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1 Sep 16 04:26:24.039395 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 16 04:26:24.039400 kernel: cpuidle: using governor menu Sep 16 04:26:24.039405 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Sep 16 04:26:24.039409 kernel: ASID allocator initialised with 32768 entries Sep 16 04:26:24.039414 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 16 04:26:24.039419 kernel: Serial: AMBA PL011 UART driver Sep 16 04:26:24.039424 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 16 04:26:24.039429 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 16 04:26:24.039434 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 16 04:26:24.039439 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 16 04:26:24.039443 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 16 04:26:24.039448 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 16 04:26:24.039453 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 16 04:26:24.039458 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 16 04:26:24.039462 kernel: ACPI: Added _OSI(Module Device) Sep 16 04:26:24.039467 kernel: ACPI: Added _OSI(Processor Device) Sep 16 04:26:24.039473 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 16 04:26:24.039477 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 16 04:26:24.039482 kernel: ACPI: Interpreter enabled Sep 16 04:26:24.039487 kernel: ACPI: Using GIC for interrupt routing Sep 16 04:26:24.039492 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Sep 16 04:26:24.039496 kernel: printk: legacy console [ttyAMA0] enabled Sep 16 04:26:24.039501 kernel: printk: legacy bootconsole [pl11] disabled Sep 16 04:26:24.039506 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Sep 16 04:26:24.039511 kernel: ACPI: CPU0 has been hot-added Sep 16 04:26:24.039516 kernel: ACPI: CPU1 has been hot-added Sep 16 04:26:24.039521 kernel: iommu: Default domain type: Translated Sep 16 04:26:24.039526 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 16 04:26:24.039530 kernel: efivars: Registered efivars operations Sep 16 04:26:24.039535 kernel: vgaarb: loaded Sep 16 04:26:24.039540 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 16 04:26:24.039544 kernel: VFS: Disk quotas dquot_6.6.0 Sep 16 04:26:24.039549 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 16 04:26:24.039554 kernel: pnp: PnP ACPI init Sep 16 04:26:24.039559 kernel: pnp: PnP ACPI: found 0 devices Sep 16 04:26:24.039564 kernel: NET: Registered PF_INET protocol family Sep 16 04:26:24.039569 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 16 04:26:24.039574 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 16 04:26:24.039578 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 16 04:26:24.039583 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 16 04:26:24.039588 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 16 04:26:24.039593 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 16 04:26:24.039598 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 16 04:26:24.039603 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 16 04:26:24.039608 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 16 04:26:24.039613 kernel: PCI: CLS 0 bytes, default 64 Sep 16 04:26:24.039617 kernel: kvm [1]: HYP mode not available Sep 
16 04:26:24.039622 kernel: Initialise system trusted keyrings Sep 16 04:26:24.039627 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 16 04:26:24.039631 kernel: Key type asymmetric registered Sep 16 04:26:24.039636 kernel: Asymmetric key parser 'x509' registered Sep 16 04:26:24.039641 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 16 04:26:24.039646 kernel: io scheduler mq-deadline registered Sep 16 04:26:24.039651 kernel: io scheduler kyber registered Sep 16 04:26:24.039656 kernel: io scheduler bfq registered Sep 16 04:26:24.039661 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 16 04:26:24.039665 kernel: thunder_xcv, ver 1.0 Sep 16 04:26:24.039670 kernel: thunder_bgx, ver 1.0 Sep 16 04:26:24.039675 kernel: nicpf, ver 1.0 Sep 16 04:26:24.039679 kernel: nicvf, ver 1.0 Sep 16 04:26:24.039805 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 16 04:26:24.039859 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-16T04:26:23 UTC (1757996783) Sep 16 04:26:24.039866 kernel: efifb: probing for efifb Sep 16 04:26:24.039870 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Sep 16 04:26:24.039875 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Sep 16 04:26:24.039880 kernel: efifb: scrolling: redraw Sep 16 04:26:24.039885 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 16 04:26:24.039890 kernel: Console: switching to colour frame buffer device 128x48 Sep 16 04:26:24.039894 kernel: fb0: EFI VGA frame buffer device Sep 16 04:26:24.039900 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Sep 16 04:26:24.039905 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 16 04:26:24.039910 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Sep 16 04:26:24.039915 kernel: watchdog: NMI not fully supported Sep 16 04:26:24.039920 kernel: watchdog: Hard watchdog permanently disabled Sep 16 04:26:24.039924 kernel: NET: Registered PF_INET6 protocol family Sep 16 04:26:24.039929 kernel: Segment Routing with IPv6 Sep 16 04:26:24.039934 kernel: In-situ OAM (IOAM) with IPv6 Sep 16 04:26:24.039939 kernel: NET: Registered PF_PACKET protocol family Sep 16 04:26:24.039944 kernel: Key type dns_resolver registered Sep 16 04:26:24.039949 kernel: registered taskstats version 1 Sep 16 04:26:24.039954 kernel: Loading compiled-in X.509 certificates Sep 16 04:26:24.039959 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: 99eb88579c3d58869b2224a85ec8efa5647af805' Sep 16 04:26:24.039964 kernel: Demotion targets for Node 0: null Sep 16 04:26:24.039968 kernel: Key type .fscrypt registered Sep 16 04:26:24.039973 kernel: Key type fscrypt-provisioning registered Sep 16 04:26:24.039978 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 16 04:26:24.039983 kernel: ima: Allocated hash algorithm: sha1 Sep 16 04:26:24.039988 kernel: ima: No architecture policies found Sep 16 04:26:24.039993 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 16 04:26:24.039998 kernel: clk: Disabling unused clocks Sep 16 04:26:24.040003 kernel: PM: genpd: Disabling unused power domains Sep 16 04:26:24.040007 kernel: Warning: unable to open an initial console. 
Sep 16 04:26:24.040012 kernel: Freeing unused kernel memory: 38976K Sep 16 04:26:24.040017 kernel: Run /init as init process Sep 16 04:26:24.040022 kernel: with arguments: Sep 16 04:26:24.040026 kernel: /init Sep 16 04:26:24.040032 kernel: with environment: Sep 16 04:26:24.040036 kernel: HOME=/ Sep 16 04:26:24.040041 kernel: TERM=linux Sep 16 04:26:24.040046 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 16 04:26:24.040052 systemd[1]: Successfully made /usr/ read-only. Sep 16 04:26:24.040059 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 16 04:26:24.040064 systemd[1]: Detected virtualization microsoft. Sep 16 04:26:24.040070 systemd[1]: Detected architecture arm64. Sep 16 04:26:24.040075 systemd[1]: Running in initrd. Sep 16 04:26:24.040080 systemd[1]: No hostname configured, using default hostname. Sep 16 04:26:24.040086 systemd[1]: Hostname set to . Sep 16 04:26:24.040091 systemd[1]: Initializing machine ID from random generator. Sep 16 04:26:24.040096 systemd[1]: Queued start job for default target initrd.target. Sep 16 04:26:24.040101 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 16 04:26:24.040106 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 16 04:26:24.040112 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 16 04:26:24.040118 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 16 04:26:24.040123 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 16 04:26:24.040129 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 16 04:26:24.040135 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 16 04:26:24.040140 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 16 04:26:24.040146 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 16 04:26:24.040152 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 16 04:26:24.040157 systemd[1]: Reached target paths.target - Path Units. Sep 16 04:26:24.040162 systemd[1]: Reached target slices.target - Slice Units. Sep 16 04:26:24.040167 systemd[1]: Reached target swap.target - Swaps. Sep 16 04:26:24.040172 systemd[1]: Reached target timers.target - Timer Units. Sep 16 04:26:24.040178 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 16 04:26:24.040183 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 16 04:26:24.040188 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 16 04:26:24.040193 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 16 04:26:24.040200 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 16 04:26:24.040205 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Sep 16 04:26:24.040210 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 16 04:26:24.040216 systemd[1]: Reached target sockets.target - Socket Units. Sep 16 04:26:24.040221 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 16 04:26:24.040226 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 16 04:26:24.040232 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 16 04:26:24.040237 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 16 04:26:24.040243 systemd[1]: Starting systemd-fsck-usr.service... Sep 16 04:26:24.040249 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 16 04:26:24.040254 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 16 04:26:24.040271 systemd-journald[225]: Collecting audit messages is disabled. Sep 16 04:26:24.040285 systemd-journald[225]: Journal started Sep 16 04:26:24.040299 systemd-journald[225]: Runtime Journal (/run/log/journal/e5bd382e8a7845c6b749bd0fe4a03fe2) is 8M, max 78.5M, 70.5M free. Sep 16 04:26:24.047802 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:26:24.053270 systemd-modules-load[227]: Inserted module 'overlay' Sep 16 04:26:24.074186 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 16 04:26:24.074227 systemd[1]: Started systemd-journald.service - Journal Service. Sep 16 04:26:24.081212 systemd-modules-load[227]: Inserted module 'br_netfilter' Sep 16 04:26:24.089873 kernel: Bridge firewalling registered Sep 16 04:26:24.081310 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 16 04:26:24.094368 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 16 04:26:24.100104 systemd[1]: Finished systemd-fsck-usr.service. Sep 16 04:26:24.107830 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 16 04:26:24.119793 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:26:24.130311 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 16 04:26:24.135076 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 16 04:26:24.154253 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 16 04:26:24.169080 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 16 04:26:24.179185 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:26:24.182445 systemd-tmpfiles[247]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 16 04:26:24.186798 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 16 04:26:24.194278 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 16 04:26:24.205023 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 16 04:26:24.216997 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 16 04:26:24.236686 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 16 04:26:24.243485 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 16 04:26:24.263919 dracut-cmdline[260]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=eff5cc3c399cf6fc52e3071751a09276871b099078da6d1b1a498405d04a9313 Sep 16 04:26:24.293843 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 16 04:26:24.309228 systemd-resolved[261]: Positive Trust Anchors: Sep 16 04:26:24.309244 systemd-resolved[261]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 16 04:26:24.309263 systemd-resolved[261]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 16 04:26:24.310940 systemd-resolved[261]: Defaulting to hostname 'linux'. Sep 16 04:26:24.312595 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 16 04:26:24.322754 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 16 04:26:24.409780 kernel: SCSI subsystem initialized Sep 16 04:26:24.414767 kernel: Loading iSCSI transport class v2.0-870. Sep 16 04:26:24.421767 kernel: iscsi: registered transport (tcp) Sep 16 04:26:24.435128 kernel: iscsi: registered transport (qla4xxx) Sep 16 04:26:24.435182 kernel: QLogic iSCSI HBA Driver Sep 16 04:26:24.448857 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 16 04:26:24.480694 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 16 04:26:24.488313 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 16 04:26:24.539791 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 16 04:26:24.546890 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 16 04:26:24.624792 kernel: raid6: neonx8 gen() 18550 MB/s Sep 16 04:26:24.633774 kernel: raid6: neonx4 gen() 18566 MB/s Sep 16 04:26:24.647767 kernel: raid6: neonx2 gen() 17087 MB/s Sep 16 04:26:24.666764 kernel: raid6: neonx1 gen() 15052 MB/s Sep 16 04:26:24.686765 kernel: raid6: int64x8 gen() 10543 MB/s Sep 16 04:26:24.705844 kernel: raid6: int64x4 gen() 10614 MB/s Sep 16 04:26:24.724845 kernel: raid6: int64x2 gen() 8979 MB/s Sep 16 04:26:24.746875 kernel: raid6: int64x1 gen() 7000 MB/s Sep 16 04:26:24.746910 kernel: raid6: using algorithm neonx4 gen() 18566 MB/s Sep 16 04:26:24.768700 kernel: raid6: .... 
xor() 15148 MB/s, rmw enabled Sep 16 04:26:24.768707 kernel: raid6: using neon recovery algorithm Sep 16 04:26:24.776462 kernel: xor: measuring software checksum speed Sep 16 04:26:24.776469 kernel: 8regs : 28679 MB/sec Sep 16 04:26:24.778858 kernel: 32regs : 28842 MB/sec Sep 16 04:26:24.781215 kernel: arm64_neon : 37718 MB/sec Sep 16 04:26:24.784049 kernel: xor: using function: arm64_neon (37718 MB/sec) Sep 16 04:26:24.822773 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 16 04:26:24.828343 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 16 04:26:24.839899 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 16 04:26:24.862013 systemd-udevd[472]: Using default interface naming scheme 'v255'. Sep 16 04:26:24.866804 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 04:26:24.874677 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 16 04:26:24.916249 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation Sep 16 04:26:24.936076 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 16 04:26:24.946095 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 16 04:26:24.990372 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 16 04:26:25.000889 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 16 04:26:25.058849 kernel: hv_vmbus: Vmbus version:5.3 Sep 16 04:26:25.063434 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 04:26:25.067878 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:26:25.080123 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:26:25.113500 kernel: hv_vmbus: registering driver hid_hyperv Sep 16 04:26:25.113521 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 16 04:26:25.113538 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 16 04:26:25.113545 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 16 04:26:25.113552 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Sep 16 04:26:25.113560 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Sep 16 04:26:25.113566 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 16 04:26:25.094981 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:26:25.441463 kernel: PTP clock support registered Sep 16 04:26:25.441483 kernel: hv_utils: Registering HyperV Utility Driver Sep 16 04:26:25.441490 kernel: hv_vmbus: registering driver hv_utils Sep 16 04:26:25.441496 kernel: hv_utils: Heartbeat IC version 3.0 Sep 16 04:26:25.441502 kernel: hv_vmbus: registering driver hv_storvsc Sep 16 04:26:25.441508 kernel: hv_utils: TimeSync IC version 4.0 Sep 16 04:26:25.441514 kernel: hv_vmbus: registering driver hv_netvsc Sep 16 04:26:25.441527 kernel: hv_utils: Shutdown IC version 3.2 Sep 16 04:26:25.441534 kernel: scsi host0: storvsc_host_t Sep 16 04:26:25.441670 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Sep 16 04:26:25.430717 systemd-resolved[261]: Clock change detected. Flushing caches. 
Sep 16 04:26:25.466684 kernel: scsi host1: storvsc_host_t Sep 16 04:26:25.466828 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Sep 16 04:26:25.438007 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 16 04:26:25.447609 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 04:26:25.509074 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Sep 16 04:26:25.509253 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 16 04:26:25.509320 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 16 04:26:25.509380 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Sep 16 04:26:25.509499 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Sep 16 04:26:25.509560 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#61 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Sep 16 04:26:25.510018 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#4 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Sep 16 04:26:25.447690 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:26:25.465156 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:26:25.518758 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:26:25.545516 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 16 04:26:25.545540 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 16 04:26:25.545667 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Sep 16 04:26:25.545736 kernel: hv_netvsc 002248c2-3600-0022-48c2-3600002248c2 eth0: VF slot 1 added Sep 16 04:26:25.545802 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 16 04:26:25.546437 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Sep 16 04:26:25.557522 kernel: hv_vmbus: registering driver hv_pci Sep 16 04:26:25.557550 kernel: hv_pci 395a9698-49ff-49ef-863c-8931b62f98c8: PCI VMBus probing: Using version 0x10004 Sep 16 04:26:25.573611 kernel: hv_pci 395a9698-49ff-49ef-863c-8931b62f98c8: PCI host bridge to bus 49ff:00 Sep 16 04:26:25.573791 kernel: pci_bus 49ff:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Sep 16 04:26:25.573874 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#84 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 16 04:26:25.573937 kernel: pci_bus 49ff:00: No busn resource found for root bus, will use [bus 00-ff] Sep 16 04:26:25.580437 kernel: pci 49ff:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Sep 16 04:26:25.590559 kernel: pci 49ff:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Sep 16 04:26:25.596502 kernel: pci 49ff:00:02.0: enabling Extended Tags Sep 16 04:26:25.609477 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 16 04:26:25.609663 kernel: pci 49ff:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 49ff:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Sep 16 04:26:25.626568 kernel: pci_bus 49ff:00: busn_res: [bus 00-ff] end is updated to 00 Sep 16 04:26:25.626745 kernel: pci 49ff:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Sep 16 04:26:25.689043 kernel: mlx5_core 49ff:00:02.0: enabling device (0000 -> 0002) Sep 16 04:26:25.696995 kernel: mlx5_core 49ff:00:02.0: PTM is not supported by PCIe Sep 16 04:26:25.697157 kernel: mlx5_core 49ff:00:02.0: firmware version: 16.30.5006 Sep 16 04:26:25.869383 kernel: hv_netvsc 
002248c2-3600-0022-48c2-3600002248c2 eth0: VF registering: eth1 Sep 16 04:26:25.869606 kernel: mlx5_core 49ff:00:02.0 eth1: joined to eth0 Sep 16 04:26:25.874161 kernel: mlx5_core 49ff:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Sep 16 04:26:25.883520 kernel: mlx5_core 49ff:00:02.0 enP18943s1: renamed from eth1 Sep 16 04:26:26.499362 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Sep 16 04:26:26.528014 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 16 04:26:26.546428 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Sep 16 04:26:26.572781 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Sep 16 04:26:26.578086 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Sep 16 04:26:26.583501 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 16 04:26:26.592728 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 16 04:26:26.603571 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 16 04:26:26.612186 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 16 04:26:26.621685 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 16 04:26:26.642125 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 16 04:26:26.661434 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#22 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Sep 16 04:26:26.668789 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 16 04:26:26.681501 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 16 04:26:27.682033 disk-uuid[653]: Warning: The kernel is still using the old partition table. Sep 16 04:26:27.682033 disk-uuid[653]: The new table will be used at the next reboot or after you Sep 16 04:26:27.682033 disk-uuid[653]: run partprobe(8) or kpartx(8) Sep 16 04:26:27.682033 disk-uuid[653]: The operation has completed successfully. Sep 16 04:26:27.915308 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 16 04:26:27.915393 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 16 04:26:27.931315 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 16 04:26:27.948859 sh[767]: Success Sep 16 04:26:27.979984 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 16 04:26:27.980039 kernel: device-mapper: uevent: version 1.0.3 Sep 16 04:26:27.984794 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 16 04:26:27.994440 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Sep 16 04:26:28.416215 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 16 04:26:28.422554 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 16 04:26:28.440495 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 16 04:26:28.466168 kernel: BTRFS: device fsid 782b6948-7aaa-439e-9946-c8fdb4d8f287 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (785) Sep 16 04:26:28.466214 kernel: BTRFS info (device dm-0): first mount of filesystem 782b6948-7aaa-439e-9946-c8fdb4d8f287 Sep 16 04:26:28.470430 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 16 04:26:28.817892 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 16 04:26:28.817950 kernel: BTRFS info (device dm-0): enabling free space tree Sep 16 04:26:28.845550 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 16 04:26:28.849617 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 16 04:26:28.856514 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 16 04:26:28.857302 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 16 04:26:28.882101 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 16 04:26:28.908504 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (808) Sep 16 04:26:28.918155 kernel: BTRFS info (device sda6): first mount of filesystem a546938e-7af2-44ea-b88d-218d567c463b Sep 16 04:26:28.918209 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 16 04:26:28.967433 kernel: BTRFS info (device sda6): turning on async discard Sep 16 04:26:28.967496 kernel: BTRFS info (device sda6): enabling free space tree Sep 16 04:26:28.976621 kernel: BTRFS info (device sda6): last unmount of filesystem a546938e-7af2-44ea-b88d-218d567c463b Sep 16 04:26:28.977486 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 16 04:26:28.986170 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 16 04:26:29.005036 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 16 04:26:29.015777 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 16 04:26:29.046553 systemd-networkd[954]: lo: Link UP Sep 16 04:26:29.046560 systemd-networkd[954]: lo: Gained carrier Sep 16 04:26:29.047278 systemd-networkd[954]: Enumeration completed Sep 16 04:26:29.049638 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 16 04:26:29.054519 systemd-networkd[954]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:26:29.054522 systemd-networkd[954]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 16 04:26:29.055576 systemd[1]: Reached target network.target - Network. Sep 16 04:26:29.132301 kernel: mlx5_core 49ff:00:02.0 enP18943s1: Link up Sep 16 04:26:29.132581 kernel: buffer_size[0]=0 is not enough for lossless buffer Sep 16 04:26:29.166269 systemd-networkd[954]: enP18943s1: Link UP Sep 16 04:26:29.169362 kernel: hv_netvsc 002248c2-3600-0022-48c2-3600002248c2 eth0: Data path switched to VF: enP18943s1 Sep 16 04:26:29.166333 systemd-networkd[954]: eth0: Link UP Sep 16 04:26:29.166479 systemd-networkd[954]: eth0: Gained carrier Sep 16 04:26:29.166494 systemd-networkd[954]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 16 04:26:29.184625 systemd-networkd[954]: enP18943s1: Gained carrier Sep 16 04:26:29.196452 systemd-networkd[954]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 16 04:26:30.332995 ignition[943]: Ignition 2.22.0 Sep 16 04:26:30.333010 ignition[943]: Stage: fetch-offline Sep 16 04:26:30.336628 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 16 04:26:30.333100 ignition[943]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:26:30.342952 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 16 04:26:30.333106 ignition[943]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 16 04:26:30.333173 ignition[943]: parsed url from cmdline: "" Sep 16 04:26:30.333175 ignition[943]: no config URL provided Sep 16 04:26:30.333178 ignition[943]: reading system config file "/usr/lib/ignition/user.ign" Sep 16 04:26:30.333184 ignition[943]: no config at "/usr/lib/ignition/user.ign" Sep 16 04:26:30.333187 ignition[943]: failed to fetch config: resource requires networking Sep 16 04:26:30.333476 ignition[943]: Ignition finished successfully Sep 16 04:26:30.380228 ignition[965]: Ignition 2.22.0 Sep 16 04:26:30.380233 ignition[965]: Stage: fetch Sep 16 04:26:30.380398 ignition[965]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:26:30.380405 ignition[965]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 16 04:26:30.380494 ignition[965]: parsed url from cmdline: "" Sep 16 04:26:30.380496 ignition[965]: no config URL provided Sep 16 04:26:30.380500 ignition[965]: reading system config file "/usr/lib/ignition/user.ign" Sep 16 04:26:30.380506 ignition[965]: no config at "/usr/lib/ignition/user.ign" Sep 16 04:26:30.380520 ignition[965]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 16 04:26:30.475245 ignition[965]: GET result: OK Sep 16 04:26:30.475320 ignition[965]: config has been read from IMDS userdata Sep 16 04:26:30.475341 ignition[965]: parsing config with SHA512: e11d60f5cfd3c384651fc8acf381ad9569f7f4d8a00a576a980a7822a3a351d538a5045ebceec3accde9376624d2f8eca087ad2524c97fdc6b2858194533b0a2 Sep 16 04:26:30.482576 unknown[965]: fetched base config from "system" Sep 16 04:26:30.485534 unknown[965]: fetched base config from "system" Sep 16 04:26:30.485783 ignition[965]: fetch: fetch complete Sep 16 04:26:30.485540 unknown[965]: fetched user config from "azure" Sep 16 04:26:30.485787 ignition[965]: fetch: fetch passed Sep 16 04:26:30.487823 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 16 04:26:30.485829 ignition[965]: Ignition finished successfully Sep 16 04:26:30.495178 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 16 04:26:30.533842 ignition[971]: Ignition 2.22.0 Sep 16 04:26:30.536467 ignition[971]: Stage: kargs Sep 16 04:26:30.536670 ignition[971]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:26:30.539207 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 16 04:26:30.536678 ignition[971]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 16 04:26:30.546600 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 16 04:26:30.537280 ignition[971]: kargs: kargs passed Sep 16 04:26:30.537332 ignition[971]: Ignition finished successfully Sep 16 04:26:30.576542 ignition[977]: Ignition 2.22.0 Sep 16 04:26:30.576551 ignition[977]: Stage: disks Sep 16 04:26:30.580104 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Sep 16 04:26:30.576795 ignition[977]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:26:30.587242 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 16 04:26:30.576804 ignition[977]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 16 04:26:30.595078 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 16 04:26:30.577272 ignition[977]: disks: disks passed Sep 16 04:26:30.602921 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 16 04:26:30.577311 ignition[977]: Ignition finished successfully Sep 16 04:26:30.610706 systemd[1]: Reached target sysinit.target - System Initialization. Sep 16 04:26:30.619464 systemd[1]: Reached target basic.target - Basic System. Sep 16 04:26:30.629116 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 16 04:26:30.703552 systemd-networkd[954]: eth0: Gained IPv6LL Sep 16 04:26:30.707580 systemd-fsck[986]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Sep 16 04:26:30.715037 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 16 04:26:30.721273 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 16 04:26:32.653437 kernel: EXT4-fs (sda9): mounted filesystem a00d22d9-68b1-4a84-acfc-9fae1fca53dd r/w with ordered data mode. Quota mode: none. Sep 16 04:26:32.654299 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 16 04:26:32.657878 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 16 04:26:32.699271 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 16 04:26:32.716129 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 16 04:26:32.722591 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 16 04:26:32.733892 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 16 04:26:32.733941 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 16 04:26:32.749115 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 16 04:26:32.760866 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 16 04:26:32.781445 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1000) Sep 16 04:26:32.791294 kernel: BTRFS info (device sda6): first mount of filesystem a546938e-7af2-44ea-b88d-218d567c463b Sep 16 04:26:32.791338 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 16 04:26:32.801650 kernel: BTRFS info (device sda6): turning on async discard Sep 16 04:26:32.801683 kernel: BTRFS info (device sda6): enabling free space tree Sep 16 04:26:32.803930 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 16 04:26:33.265014 coreos-metadata[1002]: Sep 16 04:26:33.264 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 16 04:26:33.272858 coreos-metadata[1002]: Sep 16 04:26:33.272 INFO Fetch successful Sep 16 04:26:33.272858 coreos-metadata[1002]: Sep 16 04:26:33.272 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 16 04:26:33.284684 coreos-metadata[1002]: Sep 16 04:26:33.284 INFO Fetch successful Sep 16 04:26:33.297862 coreos-metadata[1002]: Sep 16 04:26:33.297 INFO wrote hostname ci-4459.0.0-n-c6becb1dff to /sysroot/etc/hostname Sep 16 04:26:33.299608 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 16 04:26:33.544654 initrd-setup-root[1030]: cut: /sysroot/etc/passwd: No such file or directory Sep 16 04:26:33.580652 initrd-setup-root[1037]: cut: /sysroot/etc/group: No such file or directory Sep 16 04:26:33.599817 initrd-setup-root[1044]: cut: /sysroot/etc/shadow: No such file or directory Sep 16 04:26:33.619708 initrd-setup-root[1051]: cut: /sysroot/etc/gshadow: No such file or directory Sep 16 04:26:34.629031 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 16 04:26:34.634805 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 16 04:26:34.651126 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 16 04:26:34.663107 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 16 04:26:34.673434 kernel: BTRFS info (device sda6): last unmount of filesystem a546938e-7af2-44ea-b88d-218d567c463b Sep 16 04:26:34.694488 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 16 04:26:34.704705 ignition[1119]: INFO : Ignition 2.22.0 Sep 16 04:26:34.704705 ignition[1119]: INFO : Stage: mount Sep 16 04:26:34.711157 ignition[1119]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 16 04:26:34.711157 ignition[1119]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 16 04:26:34.711157 ignition[1119]: INFO : mount: mount passed Sep 16 04:26:34.711157 ignition[1119]: INFO : Ignition finished successfully Sep 16 04:26:34.709096 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 16 04:26:34.716266 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 16 04:26:34.743581 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 16 04:26:34.777559 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1130) Sep 16 04:26:34.777622 kernel: BTRFS info (device sda6): first mount of filesystem a546938e-7af2-44ea-b88d-218d567c463b Sep 16 04:26:34.782011 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 16 04:26:34.790351 kernel: BTRFS info (device sda6): turning on async discard Sep 16 04:26:34.790394 kernel: BTRFS info (device sda6): enabling free space tree Sep 16 04:26:34.791925 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 16 04:26:34.824456 ignition[1148]: INFO : Ignition 2.22.0
Sep 16 04:26:34.824456 ignition[1148]: INFO : Stage: files
Sep 16 04:26:34.824456 ignition[1148]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 16 04:26:34.824456 ignition[1148]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 16 04:26:34.838142 ignition[1148]: DEBUG : files: compiled without relabeling support, skipping
Sep 16 04:26:34.842631 ignition[1148]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 16 04:26:34.842631 ignition[1148]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 16 04:26:34.884094 ignition[1148]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 16 04:26:34.889453 ignition[1148]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 16 04:26:34.889453 ignition[1148]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 16 04:26:34.885798 unknown[1148]: wrote ssh authorized keys file for user: core
Sep 16 04:26:34.938130 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 16 04:26:34.945331 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 16 04:26:35.085668 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 16 04:26:35.329734 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 16 04:26:35.337605 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 16 04:26:35.337605 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 16 04:26:35.493281 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 16 04:26:35.580318 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 16 04:26:35.580318 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 16 04:26:35.593553 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 16 04:26:35.593553 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 16 04:26:35.593553 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 16 04:26:35.593553 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 16 04:26:35.593553 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 16 04:26:35.593553 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 16 04:26:35.593553 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 16 04:26:35.666021 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 16 04:26:35.672688 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 16 04:26:35.672688 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 16 04:26:35.672688 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 16 04:26:35.672688 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 16 04:26:35.672688 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 16 04:26:35.868521 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 16 04:26:36.118709 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 16 04:26:36.118709 ignition[1148]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 16 04:26:38.066574 ignition[1148]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 16 04:26:39.210118 ignition[1148]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 16 04:26:39.210118 ignition[1148]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 16 04:26:39.210118 ignition[1148]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Sep 16 04:26:39.210118 ignition[1148]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Sep 16 04:26:39.243737 ignition[1148]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 16 04:26:39.243737 ignition[1148]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 16 04:26:39.243737 ignition[1148]: INFO : files: files passed
Sep 16 04:26:39.243737 ignition[1148]: INFO : Ignition finished successfully
Sep 16 04:26:39.217733 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 16 04:26:39.228506 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 16 04:26:39.259300 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 16 04:26:39.282875 initrd-setup-root-after-ignition[1176]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 16 04:26:39.282875 initrd-setup-root-after-ignition[1176]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 16 04:26:39.299375 initrd-setup-root-after-ignition[1180]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 16 04:26:39.284879 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 16 04:26:39.294000 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 16 04:26:39.304333 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 16 04:26:39.355858 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 16 04:26:39.355944 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 16 04:26:39.366058 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 16 04:26:39.375644 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 16 04:26:40.457648 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 16 04:26:40.457737 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 16 04:26:40.462392 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 16 04:26:40.471011 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 16 04:26:40.504726 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 16 04:26:40.515366 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 16 04:26:40.533647 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 16 04:26:40.538423 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 16 04:26:40.547395 systemd[1]: Stopped target timers.target - Timer Units.
Sep 16 04:26:40.555201 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 16 04:26:40.555307 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 16 04:26:40.566384 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 16 04:26:40.570612 systemd[1]: Stopped target basic.target - Basic System.
Sep 16 04:26:40.578634 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 16 04:26:40.586888 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 16 04:26:40.594567 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 16 04:26:40.603079 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 16 04:26:40.611896 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 16 04:26:40.620462 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 16 04:26:40.629245 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 16 04:26:40.637339 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 16 04:26:40.645939 systemd[1]: Stopped target swap.target - Swaps.
Sep 16 04:26:40.652878 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 16 04:26:40.652993 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 16 04:26:40.663815 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 16 04:26:40.668375 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 16 04:26:40.677031 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 16 04:26:40.680764 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 16 04:26:40.685837 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 16 04:26:40.685933 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 16 04:26:40.698154 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 16 04:26:40.698240 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 16 04:26:40.703029 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 16 04:26:40.703101 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 16 04:26:40.710693 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 16 04:26:40.710759 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 16 04:26:40.721167 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 16 04:26:40.748608 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 16 04:26:40.780596 ignition[1201]: INFO : Ignition 2.22.0
Sep 16 04:26:40.780596 ignition[1201]: INFO : Stage: umount
Sep 16 04:26:40.780596 ignition[1201]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 16 04:26:40.780596 ignition[1201]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 16 04:26:40.780596 ignition[1201]: INFO : umount: umount passed
Sep 16 04:26:40.780596 ignition[1201]: INFO : Ignition finished successfully
Sep 16 04:26:40.759559 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 16 04:26:40.759703 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 16 04:26:40.771555 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 16 04:26:40.771661 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 16 04:26:40.788148 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 16 04:26:40.789442 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 16 04:26:40.794947 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 16 04:26:40.795218 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 16 04:26:40.803873 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 16 04:26:40.803932 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 16 04:26:40.811942 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 16 04:26:40.811988 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 16 04:26:40.818848 systemd[1]: Stopped target network.target - Network.
Sep 16 04:26:40.825611 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 16 04:26:40.825676 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 16 04:26:40.835003 systemd[1]: Stopped target paths.target - Path Units.
Sep 16 04:26:40.842666 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 16 04:26:40.846090 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 16 04:26:40.851476 systemd[1]: Stopped target slices.target - Slice Units.
Sep 16 04:26:40.859170 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 16 04:26:40.866937 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 16 04:26:40.866979 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 16 04:26:40.874493 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 16 04:26:40.874521 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 16 04:26:40.882029 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 16 04:26:40.882084 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 16 04:26:40.889504 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 16 04:26:40.889535 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 16 04:26:40.897194 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 16 04:26:40.906498 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 16 04:26:40.914757 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 16 04:26:40.915258 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 16 04:26:40.916457 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 16 04:26:40.924221 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 16 04:26:40.924668 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 16 04:26:40.924747 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 16 04:26:40.931796 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 16 04:26:41.107191 kernel: hv_netvsc 002248c2-3600-0022-48c2-3600002248c2 eth0: Data path switched from VF: enP18943s1
Sep 16 04:26:40.931881 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 16 04:26:40.944856 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 16 04:26:40.945075 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 16 04:26:40.945147 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 16 04:26:40.953886 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 16 04:26:40.960052 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 16 04:26:40.960096 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 16 04:26:40.970451 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 16 04:26:40.970521 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 16 04:26:40.978899 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 16 04:26:40.987334 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 16 04:26:40.987400 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 16 04:26:40.995216 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 16 04:26:40.995270 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 16 04:26:41.005916 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 16 04:26:41.005959 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 16 04:26:41.010515 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 16 04:26:41.010561 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 16 04:26:41.023874 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 16 04:26:41.029213 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 16 04:26:41.029291 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 16 04:26:41.049751 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 16 04:26:41.052899 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 16 04:26:41.058670 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 16 04:26:41.058719 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 16 04:26:41.067810 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 16 04:26:41.067840 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 16 04:26:41.076158 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 16 04:26:41.076204 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 16 04:26:41.088978 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 16 04:26:41.089051 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 16 04:26:41.107277 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 16 04:26:41.107341 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 16 04:26:41.122001 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 16 04:26:41.135897 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 16 04:26:41.135971 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 16 04:26:41.148874 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 16 04:26:41.148921 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 16 04:26:41.163616 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 16 04:26:41.163674 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 16 04:26:41.172989 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 16 04:26:41.350647 systemd-journald[225]: Received SIGTERM from PID 1 (systemd).
Sep 16 04:26:41.173034 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 16 04:26:41.178207 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 16 04:26:41.178247 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:26:41.191457 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 16 04:26:41.191502 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Sep 16 04:26:41.191523 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 16 04:26:41.191553 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 16 04:26:41.191826 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 16 04:26:41.192031 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 16 04:26:41.212360 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 16 04:26:41.212572 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 16 04:26:41.220723 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 16 04:26:41.228943 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 16 04:26:41.262473 systemd[1]: Switching root.
Sep 16 04:26:41.415471 systemd-journald[225]: Journal stopped
Sep 16 04:27:03.306485 kernel: SELinux: policy capability network_peer_controls=1
Sep 16 04:27:03.306504 kernel: SELinux: policy capability open_perms=1
Sep 16 04:27:03.306512 kernel: SELinux: policy capability extended_socket_class=1
Sep 16 04:27:03.306517 kernel: SELinux: policy capability always_check_network=0
Sep 16 04:27:03.306523 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 16 04:27:03.306528 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 16 04:27:03.306534 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 16 04:27:03.306540 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 16 04:27:03.306545 kernel: SELinux: policy capability userspace_initial_context=0
Sep 16 04:27:03.306550 kernel: audit: type=1403 audit(1757996802.380:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 16 04:27:03.306557 systemd[1]: Successfully loaded SELinux policy in 169.094ms.
Sep 16 04:27:03.306565 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.351ms.
Sep 16 04:27:03.306572 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 16 04:27:03.306578 systemd[1]: Detected virtualization microsoft.
Sep 16 04:27:03.306584 systemd[1]: Detected architecture arm64.
Sep 16 04:27:03.306591 systemd[1]: Detected first boot.
Sep 16 04:27:03.306598 systemd[1]: Hostname set to .
Sep 16 04:27:03.306604 systemd[1]: Initializing machine ID from random generator.
Sep 16 04:27:03.306609 zram_generator::config[1243]: No configuration found.
Sep 16 04:27:03.306616 kernel: NET: Registered PF_VSOCK protocol family
Sep 16 04:27:03.306621 systemd[1]: Populated /etc with preset unit settings.
Sep 16 04:27:03.306627 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 16 04:27:03.306636 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 16 04:27:03.306641 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 16 04:27:03.306647 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 16 04:27:03.306653 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 16 04:27:03.306660 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 16 04:27:03.306666 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 16 04:27:03.306672 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 16 04:27:03.306679 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 16 04:27:03.306685 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 16 04:27:03.306691 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 16 04:27:03.306697 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 16 04:27:03.306703 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 16 04:27:03.306709 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 16 04:27:03.306715 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 16 04:27:03.306721 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 16 04:27:03.306727 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 16 04:27:03.306734 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 16 04:27:03.306740 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 16 04:27:03.306748 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 16 04:27:03.306754 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 16 04:27:03.306760 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 16 04:27:03.306767 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 16 04:27:03.306773 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 16 04:27:03.306780 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 16 04:27:03.306786 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 16 04:27:03.306792 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 16 04:27:03.306798 systemd[1]: Reached target slices.target - Slice Units.
Sep 16 04:27:03.306804 systemd[1]: Reached target swap.target - Swaps.
Sep 16 04:27:03.306811 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 16 04:27:03.306817 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 16 04:27:03.306824 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 16 04:27:03.306830 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 16 04:27:03.306837 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 16 04:27:03.306843 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 16 04:27:03.306849 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 16 04:27:03.306855 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 16 04:27:03.306862 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 16 04:27:03.306868 systemd[1]: Mounting media.mount - External Media Directory...
Sep 16 04:27:03.306875 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 16 04:27:03.306881 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 16 04:27:03.306887 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 16 04:27:03.306893 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 16 04:27:03.306900 systemd[1]: Reached target machines.target - Containers.
Sep 16 04:27:03.306907 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 16 04:27:03.306914 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 16 04:27:03.306920 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 16 04:27:03.306926 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 16 04:27:03.306932 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 16 04:27:03.306939 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 16 04:27:03.306945 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 16 04:27:03.306951 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 16 04:27:03.306957 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 16 04:27:03.306963 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 16 04:27:03.306971 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 16 04:27:03.306977 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 16 04:27:03.306983 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 16 04:27:03.306989 kernel: fuse: init (API version 7.41)
Sep 16 04:27:03.306995 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 16 04:27:03.307001 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 16 04:27:03.307008 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 16 04:27:03.307014 kernel: loop: module loaded
Sep 16 04:27:03.307020 kernel: ACPI: bus type drm_connector registered
Sep 16 04:27:03.307026 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 16 04:27:03.307033 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 16 04:27:03.307039 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 16 04:27:03.307060 systemd-journald[1332]: Collecting audit messages is disabled.
Sep 16 04:27:03.307076 systemd-journald[1332]: Journal started
Sep 16 04:27:03.307091 systemd-journald[1332]: Runtime Journal (/run/log/journal/771f5a2ab4be4833a57f6bc279ca9339) is 8M, max 78.5M, 70.5M free.
Sep 16 04:27:02.550049 systemd[1]: Queued start job for default target multi-user.target.
Sep 16 04:27:02.555830 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 16 04:27:02.556191 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 16 04:27:02.556471 systemd[1]: systemd-journald.service: Consumed 2.364s CPU time.
Sep 16 04:27:03.323934 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 16 04:27:03.332105 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 16 04:27:03.342124 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 16 04:27:03.342165 systemd[1]: Stopped verity-setup.service.
Sep 16 04:27:03.354762 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 16 04:27:03.355366 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 16 04:27:03.359644 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 16 04:27:03.364321 systemd[1]: Mounted media.mount - External Media Directory.
Sep 16 04:27:03.368241 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 16 04:27:03.372696 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 16 04:27:03.377127 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 16 04:27:03.380853 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 16 04:27:03.385349 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 16 04:27:03.390363 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 16 04:27:03.390510 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 16 04:27:03.395204 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 16 04:27:03.395327 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 16 04:27:03.399886 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 16 04:27:03.400009 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 16 04:27:03.404283 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 16 04:27:03.404403 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 16 04:27:03.410675 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 16 04:27:03.410812 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 16 04:27:03.415277 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 16 04:27:03.415400 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 16 04:27:03.419987 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 16 04:27:03.424669 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 16 04:27:03.429611 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 16 04:27:03.434850 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 16 04:27:03.447700 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 16 04:27:03.453142 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 16 04:27:03.459045 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 16 04:27:03.468135 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 16 04:27:03.472893 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 16 04:27:03.472922 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 16 04:27:03.477718 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 16 04:27:03.483322 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 16 04:27:03.487891 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 16 04:27:03.518195 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 16 04:27:03.529027 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 16 04:27:03.533337 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 16 04:27:03.534120 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 16 04:27:03.538652 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 16 04:27:03.539386 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 16 04:27:03.545548 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 16 04:27:03.560144 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 16 04:27:03.566222 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 16 04:27:03.571006 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 16 04:27:03.576559 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 16 04:27:03.582592 systemd-journald[1332]: Time spent on flushing to /var/log/journal/771f5a2ab4be4833a57f6bc279ca9339 is 11.224ms for 948 entries.
Sep 16 04:27:03.582592 systemd-journald[1332]: System Journal (/var/log/journal/771f5a2ab4be4833a57f6bc279ca9339) is 8M, max 2.6G, 2.6G free.
Sep 16 04:27:03.623241 systemd-journald[1332]: Received client request to flush runtime journal.
Sep 16 04:27:03.582138 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 16 04:27:03.594644 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 16 04:27:03.624814 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 16 04:27:03.649790 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 16 04:27:03.652458 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 16 04:27:03.677447 kernel: loop0: detected capacity change from 0 to 100632
Sep 16 04:27:03.722880 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 16 04:27:03.796108 systemd-tmpfiles[1384]: ACLs are not supported, ignoring.
Sep 16 04:27:03.796121 systemd-tmpfiles[1384]: ACLs are not supported, ignoring.
Sep 16 04:27:03.798947 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 16 04:27:03.805451 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 16 04:27:04.203457 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 16 04:27:04.267486 kernel: loop1: detected capacity change from 0 to 27936
Sep 16 04:27:04.545584 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 16 04:27:04.551753 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 16 04:27:04.572607 systemd-tmpfiles[1403]: ACLs are not supported, ignoring.
Sep 16 04:27:04.572617 systemd-tmpfiles[1403]: ACLs are not supported, ignoring.
Sep 16 04:27:04.575036 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 16 04:27:04.745451 kernel: loop2: detected capacity change from 0 to 203944
Sep 16 04:27:04.812447 kernel: loop3: detected capacity change from 0 to 119368
Sep 16 04:27:05.262441 kernel: loop4: detected capacity change from 0 to 100632
Sep 16 04:27:05.275579 kernel: loop5: detected capacity change from 0 to 27936
Sep 16 04:27:05.276299 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 16 04:27:05.283604 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 16 04:27:05.296469 kernel: loop6: detected capacity change from 0 to 203944
Sep 16 04:27:05.311253 systemd-udevd[1411]: Using default interface naming scheme 'v255'.
Sep 16 04:27:05.316432 kernel: loop7: detected capacity change from 0 to 119368
Sep 16 04:27:05.322797 (sd-merge)[1409]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Sep 16 04:27:05.323157 (sd-merge)[1409]: Merged extensions into '/usr'.
Sep 16 04:27:05.326672 systemd[1]: Reload requested from client PID 1382 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 16 04:27:05.326770 systemd[1]: Reloading...
Sep 16 04:27:05.385445 zram_generator::config[1454]: No configuration found.
Sep 16 04:27:05.519398 systemd[1]: Reloading finished in 192 ms.
Sep 16 04:27:05.536528 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 16 04:27:05.547350 systemd[1]: Starting ensure-sysext.service...
Sep 16 04:27:05.552554 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 16 04:27:05.582056 systemd[1]: Reload requested from client PID 1492 ('systemctl') (unit ensure-sysext.service)...
Sep 16 04:27:05.582069 systemd[1]: Reloading...
Sep 16 04:27:05.592027 systemd-tmpfiles[1494]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 16 04:27:05.592050 systemd-tmpfiles[1494]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 16 04:27:05.592277 systemd-tmpfiles[1494]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 16 04:27:05.592408 systemd-tmpfiles[1494]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 16 04:27:05.592866 systemd-tmpfiles[1494]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 16 04:27:05.593008 systemd-tmpfiles[1494]: ACLs are not supported, ignoring.
Sep 16 04:27:05.593039 systemd-tmpfiles[1494]: ACLs are not supported, ignoring.
Sep 16 04:27:05.640672 systemd-tmpfiles[1494]: Detected autofs mount point /boot during canonicalization of boot.
Sep 16 04:27:05.640685 systemd-tmpfiles[1494]: Skipping /boot
Sep 16 04:27:05.641466 zram_generator::config[1528]: No configuration found.
Sep 16 04:27:05.645820 systemd-tmpfiles[1494]: Detected autofs mount point /boot during canonicalization of boot.
Sep 16 04:27:05.645834 systemd-tmpfiles[1494]: Skipping /boot
Sep 16 04:27:05.778830 systemd[1]: Reloading finished in 196 ms.
Sep 16 04:27:05.797443 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 16 04:27:05.814560 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 16 04:27:05.861454 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 16 04:27:05.875076 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 16 04:27:05.882565 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 16 04:27:05.887503 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 16 04:27:05.894436 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 16 04:27:05.897192 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 16 04:27:05.903629 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 16 04:27:05.909041 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 16 04:27:05.914869 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 16 04:27:05.915059 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 16 04:27:05.916460 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 16 04:27:05.916649 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 16 04:27:05.922216 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 16 04:27:05.922342 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 16 04:27:05.928231 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 16 04:27:05.928365 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 16 04:27:05.939609 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 16 04:27:05.940594 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 16 04:27:05.946587 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 16 04:27:05.956625 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 16 04:27:05.960811 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 16 04:27:05.960905 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 16 04:27:05.962587 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 16 04:27:05.968487 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 16 04:27:05.974138 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 16 04:27:05.974361 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 16 04:27:05.979887 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 16 04:27:05.980115 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 16 04:27:05.985549 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 16 04:27:05.985790 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 16 04:27:06.000174 systemd[1]: Finished ensure-sysext.service.
Sep 16 04:27:06.004363 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv...
Sep 16 04:27:06.009150 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 16 04:27:06.012595 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 16 04:27:06.019958 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 16 04:27:06.024847 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 16 04:27:06.040181 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 16 04:27:06.045122 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 16 04:27:06.045161 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 16 04:27:06.045206 systemd[1]: Reached target time-set.target - System Time Set.
Sep 16 04:27:06.052520 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 16 04:27:06.056993 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 16 04:27:06.063627 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 16 04:27:06.069619 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 16 04:27:06.069761 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 16 04:27:06.074144 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 16 04:27:06.074269 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 16 04:27:06.079939 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 16 04:27:06.080078 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 16 04:27:06.086512 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 16 04:27:06.086590 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 16 04:27:06.125152 systemd-resolved[1582]: Positive Trust Anchors:
Sep 16 04:27:06.125165 systemd-resolved[1582]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 16 04:27:06.125184 systemd-resolved[1582]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 16 04:27:06.169640 systemd-resolved[1582]: Using system hostname 'ci-4459.0.0-n-c6becb1dff'.
Sep 16 04:27:06.183554 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 16 04:27:06.188454 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 16 04:27:06.197019 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 16 04:27:06.206875 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 16 04:27:06.237485 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 16 04:27:06.281607 augenrules[1663]: No rules
Sep 16 04:27:06.283837 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 16 04:27:06.285473 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 16 04:27:06.337550 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 16 04:27:06.410530 kernel: mousedev: PS/2 mouse device common for all mice
Sep 16 04:27:06.410596 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#35 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 16 04:27:06.442187 kernel: hv_vmbus: registering driver hv_balloon
Sep 16 04:27:06.442270 kernel: hv_vmbus: registering driver hyperv_fb
Sep 16 04:27:06.442283 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Sep 16 04:27:06.446724 kernel: hv_balloon: Memory hot add disabled on ARM64
Sep 16 04:27:06.461562 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped.
Sep 16 04:27:06.463704 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Sep 16 04:27:06.470429 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Sep 16 04:27:06.475408 kernel: Console: switching to colour dummy device 80x25
Sep 16 04:27:06.476815 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:27:06.483437 kernel: Console: switching to colour frame buffer device 128x48
Sep 16 04:27:06.516918 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 16 04:27:06.519169 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:27:06.525791 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 16 04:27:06.528989 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:27:06.536885 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 16 04:27:06.537060 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:27:06.546517 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:27:06.593930 systemd-networkd[1639]: lo: Link UP
Sep 16 04:27:06.593938 systemd-networkd[1639]: lo: Gained carrier
Sep 16 04:27:06.596325 systemd-networkd[1639]: Enumeration completed
Sep 16 04:27:06.596453 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 16 04:27:06.601603 systemd-networkd[1639]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 04:27:06.601611 systemd-networkd[1639]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 16 04:27:06.604459 systemd[1]: Reached target network.target - Network.
Sep 16 04:27:06.611558 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 16 04:27:06.619798 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 16 04:27:06.636085 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 16 04:27:06.641372 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 16 04:27:06.665371 kernel: mlx5_core 49ff:00:02.0 enP18943s1: Link up
Sep 16 04:27:06.666509 kernel: buffer_size[0]=0 is not enough for lossless buffer
Sep 16 04:27:06.690710 kernel: hv_netvsc 002248c2-3600-0022-48c2-3600002248c2 eth0: Data path switched to VF: enP18943s1
Sep 16 04:27:06.690843 systemd-networkd[1639]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 04:27:06.691552 systemd-networkd[1639]: enP18943s1: Link UP
Sep 16 04:27:06.691694 systemd-networkd[1639]: eth0: Link UP
Sep 16 04:27:06.691698 systemd-networkd[1639]: eth0: Gained carrier
Sep 16 04:27:06.691711 systemd-networkd[1639]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 04:27:06.697303 systemd-networkd[1639]: enP18943s1: Gained carrier
Sep 16 04:27:06.708502 systemd-networkd[1639]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 16 04:27:06.713664 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 16 04:27:06.743463 kernel: MACsec IEEE 802.1AE
Sep 16 04:27:06.744993 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 16 04:27:07.809149 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:27:08.207547 systemd-networkd[1639]: eth0: Gained IPv6LL
Sep 16 04:27:08.209736 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 16 04:27:08.215278 systemd[1]: Reached target network-online.target - Network is Online.
Sep 16 04:27:08.667854 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 16 04:27:08.674200 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 16 04:27:12.026453 ldconfig[1377]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 16 04:27:12.034620 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 16 04:27:12.041232 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 16 04:27:12.073565 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 16 04:27:12.079766 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 16 04:27:12.084580 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 16 04:27:12.089942 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 16 04:27:12.096331 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 16 04:27:12.101571 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 16 04:27:12.107197 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 16 04:27:12.113925 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 16 04:27:12.113953 systemd[1]: Reached target paths.target - Path Units.
Sep 16 04:27:12.117692 systemd[1]: Reached target timers.target - Timer Units.
Sep 16 04:27:12.148794 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 16 04:27:12.154752 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 16 04:27:12.161288 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 16 04:27:12.166913 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 16 04:27:12.173111 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 16 04:27:12.179326 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 16 04:27:12.184022 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 16 04:27:12.189978 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 16 04:27:12.194647 systemd[1]: Reached target sockets.target - Socket Units.
Sep 16 04:27:12.198632 systemd[1]: Reached target basic.target - Basic System.
Sep 16 04:27:12.203817 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 16 04:27:12.203844 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 16 04:27:12.241833 systemd[1]: Starting chronyd.service - NTP client/server...
Sep 16 04:27:12.254534 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 16 04:27:12.263970 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 16 04:27:12.272824 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 16 04:27:12.280749 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 16 04:27:12.293527 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 16 04:27:12.306135 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 16 04:27:12.310698 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 16 04:27:12.313953 chronyd[1786]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Sep 16 04:27:12.314957 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Sep 16 04:27:12.319589 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Sep 16 04:27:12.320940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 16 04:27:12.326733 jq[1794]: false
Sep 16 04:27:12.329516 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 16 04:27:12.330513 KVP[1796]: KVP starting; pid is:1796
Sep 16 04:27:12.336577 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 16 04:27:12.346453 KVP[1796]: KVP LIC Version: 3.1
Sep 16 04:27:12.347670 kernel: hv_utils: KVP IC version 4.0
Sep 16 04:27:12.348633 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 16 04:27:12.354410 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 16 04:27:12.367692 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 16 04:27:12.372929 chronyd[1786]: Timezone right/UTC failed leap second check, ignoring
Sep 16 04:27:12.373117 chronyd[1786]: Loaded seccomp filter (level 2)
Sep 16 04:27:12.377771 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 16 04:27:12.384415 extend-filesystems[1795]: Found /dev/sda6
Sep 16 04:27:12.384722 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 16 04:27:12.390742 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 16 04:27:12.391511 systemd[1]: Starting update-engine.service - Update Engine...
Sep 16 04:27:12.400610 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 16 04:27:12.408536 extend-filesystems[1795]: Found /dev/sda9
Sep 16 04:27:12.409359 systemd[1]: Started chronyd.service - NTP client/server.
Sep 16 04:27:12.416417 extend-filesystems[1795]: Checking size of /dev/sda9
Sep 16 04:27:12.422925 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 16 04:27:12.423472 jq[1818]: true
Sep 16 04:27:12.430961 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 16 04:27:12.431653 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 16 04:27:12.433281 systemd[1]: motdgen.service: Deactivated successfully.
Sep 16 04:27:12.433960 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 16 04:27:12.442026 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 16 04:27:12.442502 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 16 04:27:12.456877 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 16 04:27:12.468540 extend-filesystems[1795]: Old size kept for /dev/sda9
Sep 16 04:27:12.481805 update_engine[1815]: I20250916 04:27:12.479882 1815 main.cc:92] Flatcar Update Engine starting
Sep 16 04:27:12.482015 jq[1829]: true
Sep 16 04:27:12.472075 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 16 04:27:12.473510 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 16 04:27:12.475766 (ntainerd)[1831]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 16 04:27:12.489695 systemd-logind[1808]: New seat seat0.
Sep 16 04:27:12.495981 systemd-logind[1808]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Sep 16 04:27:12.498154 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 16 04:27:12.510459 tar[1826]: linux-arm64/helm
Sep 16 04:27:12.659375 bash[1879]: Updated "/home/core/.ssh/authorized_keys"
Sep 16 04:27:12.661476 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 16 04:27:12.670902 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 16 04:27:12.826072 sshd_keygen[1825]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 16 04:27:12.851465 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 16 04:27:12.859562 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 16 04:27:12.865698 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Sep 16 04:27:12.878679 systemd[1]: issuegen.service: Deactivated successfully.
Sep 16 04:27:12.878867 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 16 04:27:12.883485 tar[1826]: linux-arm64/LICENSE
Sep 16 04:27:12.883550 tar[1826]: linux-arm64/README.md
Sep 16 04:27:12.884712 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 16 04:27:12.903537 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 16 04:27:12.910666 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Sep 16 04:27:12.915691 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 16 04:27:12.925090 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 16 04:27:12.932578 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 16 04:27:12.938136 systemd[1]: Reached target getty.target - Login Prompts. Sep 16 04:27:12.942545 dbus-daemon[1789]: [system] SELinux support is enabled Sep 16 04:27:12.943059 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 16 04:27:12.952682 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 16 04:27:12.952711 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 16 04:27:12.958545 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 16 04:27:12.958566 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 16 04:27:12.964942 update_engine[1815]: I20250916 04:27:12.964520 1815 update_check_scheduler.cc:74] Next update check in 8m16s Sep 16 04:27:12.965550 systemd[1]: Started update-engine.service - Update Engine. Sep 16 04:27:12.969819 dbus-daemon[1789]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 16 04:27:12.972630 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 16 04:27:13.047794 coreos-metadata[1788]: Sep 16 04:27:13.047 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 16 04:27:13.052051 coreos-metadata[1788]: Sep 16 04:27:13.052 INFO Fetch successful Sep 16 04:27:13.052051 coreos-metadata[1788]: Sep 16 04:27:13.052 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Sep 16 04:27:13.056192 coreos-metadata[1788]: Sep 16 04:27:13.056 INFO Fetch successful Sep 16 04:27:13.056484 coreos-metadata[1788]: Sep 16 04:27:13.056 INFO Fetching http://168.63.129.16/machine/f2910224-487b-438b-8be6-d5a9a754969f/621d02f6%2Dc22a%2D43a5%2Db1ed%2Dce6f62fa47b3.%5Fci%2D4459.0.0%2Dn%2Dc6becb1dff?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Sep 16 04:27:13.057511 coreos-metadata[1788]: Sep 16 04:27:13.057 INFO Fetch successful Sep 16 04:27:13.057727 coreos-metadata[1788]: Sep 16 04:27:13.057 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Sep 16 04:27:13.064819 coreos-metadata[1788]: Sep 16 04:27:13.064 INFO Fetch successful Sep 16 04:27:13.092460 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 16 04:27:13.097178 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 16 04:27:13.135269 locksmithd[1959]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 16 04:27:13.299835 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
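[Editor's note] The metadata fetches logged above can be reproduced with a short script. This is a minimal sketch, not the coreos-metadata agent itself; it assumes it runs inside the Azure VM, where the WireServer (168.63.129.16) is reachable and IMDS requires the "Metadata: true" header:

    import urllib.request

    def get(url, headers=None):
        req = urllib.request.Request(url, headers=headers or {})
        return urllib.request.urlopen(req, timeout=5).read().decode()

    # WireServer version probe (the first fetch in the log above)
    print(get("http://168.63.129.16/?comp=versions")[:200])

    # IMDS query for the VM size (the last fetch above)
    print(get("http://169.254.169.254/metadata/instance/compute/vmSize"
              "?api-version=2017-08-01&format=text",
              headers={"Metadata": "true"}))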
Sep 16 04:27:13.305772 containerd[1831]: time="2025-09-16T04:27:13Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 16 04:27:13.307315 containerd[1831]: time="2025-09-16T04:27:13.307271780Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 16 04:27:13.312799 containerd[1831]: time="2025-09-16T04:27:13.312765548Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.984µs" Sep 16 04:27:13.312868 containerd[1831]: time="2025-09-16T04:27:13.312855380Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 16 04:27:13.312930 containerd[1831]: time="2025-09-16T04:27:13.312919012Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 16 04:27:13.313088 containerd[1831]: time="2025-09-16T04:27:13.313072812Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 16 04:27:13.313142 containerd[1831]: time="2025-09-16T04:27:13.313130820Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 16 04:27:13.313200 containerd[1831]: time="2025-09-16T04:27:13.313189324Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 16 04:27:13.313297 containerd[1831]: time="2025-09-16T04:27:13.313281548Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 16 04:27:13.313353 containerd[1831]: time="2025-09-16T04:27:13.313339988Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 16 04:27:13.313607 containerd[1831]: time="2025-09-16T04:27:13.313585684Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 16 04:27:13.313673 containerd[1831]: time="2025-09-16T04:27:13.313660204Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 16 04:27:13.313723 containerd[1831]: time="2025-09-16T04:27:13.313711428Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 16 04:27:13.313760 containerd[1831]: time="2025-09-16T04:27:13.313751420Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 16 04:27:13.313876 containerd[1831]: time="2025-09-16T04:27:13.313861940Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 16 04:27:13.314102 containerd[1831]: time="2025-09-16T04:27:13.314081340Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 16 04:27:13.314170 containerd[1831]: time="2025-09-16T04:27:13.314159588Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Sep 16 04:27:13.314217 containerd[1831]: time="2025-09-16T04:27:13.314205404Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 16 04:27:13.314320 containerd[1831]: time="2025-09-16T04:27:13.314307604Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 16 04:27:13.314538 containerd[1831]: time="2025-09-16T04:27:13.314521924Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 16 04:27:13.314645 containerd[1831]: time="2025-09-16T04:27:13.314631348Z" level=info msg="metadata content store policy set" policy=shared Sep 16 04:27:13.324914 containerd[1831]: time="2025-09-16T04:27:13.324875804Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 16 04:27:13.325069 containerd[1831]: time="2025-09-16T04:27:13.325055756Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 16 04:27:13.325623 containerd[1831]: time="2025-09-16T04:27:13.325139244Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 16 04:27:13.325623 containerd[1831]: time="2025-09-16T04:27:13.325154924Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 16 04:27:13.325623 containerd[1831]: time="2025-09-16T04:27:13.325163812Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 16 04:27:13.325623 containerd[1831]: time="2025-09-16T04:27:13.325173076Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 16 04:27:13.325623 containerd[1831]: time="2025-09-16T04:27:13.325181628Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 16 04:27:13.325623 containerd[1831]: time="2025-09-16T04:27:13.325203596Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 16 04:27:13.325623 containerd[1831]: time="2025-09-16T04:27:13.325211972Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 16 04:27:13.325623 containerd[1831]: time="2025-09-16T04:27:13.325218604Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 16 04:27:13.325623 containerd[1831]: time="2025-09-16T04:27:13.325225148Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 16 04:27:13.325623 containerd[1831]: time="2025-09-16T04:27:13.325233812Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 16 04:27:13.325623 containerd[1831]: time="2025-09-16T04:27:13.325368820Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 16 04:27:13.325623 containerd[1831]: time="2025-09-16T04:27:13.325384844Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 16 04:27:13.325623 containerd[1831]: time="2025-09-16T04:27:13.325396052Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 16 04:27:13.325623 containerd[1831]: time="2025-09-16T04:27:13.325448940Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 Sep 16 04:27:13.325821 containerd[1831]: time="2025-09-16T04:27:13.325459556Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 16 04:27:13.325821 containerd[1831]: time="2025-09-16T04:27:13.325467324Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 16 04:27:13.325821 containerd[1831]: time="2025-09-16T04:27:13.325474668Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 16 04:27:13.325821 containerd[1831]: time="2025-09-16T04:27:13.325483628Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 16 04:27:13.325821 containerd[1831]: time="2025-09-16T04:27:13.325490620Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 16 04:27:13.325821 containerd[1831]: time="2025-09-16T04:27:13.325497332Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 16 04:27:13.325821 containerd[1831]: time="2025-09-16T04:27:13.325504308Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 16 04:27:13.325821 containerd[1831]: time="2025-09-16T04:27:13.325580804Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 16 04:27:13.325821 containerd[1831]: time="2025-09-16T04:27:13.325592220Z" level=info msg="Start snapshots syncer" Sep 16 04:27:13.325970 containerd[1831]: time="2025-09-16T04:27:13.325953892Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 16 04:27:13.326255 containerd[1831]: time="2025-09-16T04:27:13.326223772Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 16 04:27:13.326462 containerd[1831]: time="2025-09-16T04:27:13.326443028Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 16 04:27:13.326592 containerd[1831]: time="2025-09-16T04:27:13.326578916Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 16 04:27:13.326810 containerd[1831]: time="2025-09-16T04:27:13.326790132Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 16 04:27:13.326876 containerd[1831]: time="2025-09-16T04:27:13.326863660Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 16 04:27:13.326921 containerd[1831]: time="2025-09-16T04:27:13.326910548Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 16 04:27:13.326963 containerd[1831]: time="2025-09-16T04:27:13.326951356Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 16 04:27:13.327004 containerd[1831]: time="2025-09-16T04:27:13.326993780Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 16 04:27:13.327061 containerd[1831]: time="2025-09-16T04:27:13.327050724Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 16 04:27:13.327107 containerd[1831]: time="2025-09-16T04:27:13.327098028Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 16 04:27:13.327167 containerd[1831]: time="2025-09-16T04:27:13.327156844Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 16 04:27:13.327223 containerd[1831]: 
time="2025-09-16T04:27:13.327212884Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 16 04:27:13.327275 containerd[1831]: time="2025-09-16T04:27:13.327263756Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 16 04:27:13.327404 containerd[1831]: time="2025-09-16T04:27:13.327346692Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 04:27:13.327404 containerd[1831]: time="2025-09-16T04:27:13.327364092Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 04:27:13.327404 containerd[1831]: time="2025-09-16T04:27:13.327371420Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 04:27:13.327404 containerd[1831]: time="2025-09-16T04:27:13.327377412Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 04:27:13.327404 containerd[1831]: time="2025-09-16T04:27:13.327382308Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 16 04:27:13.327404 containerd[1831]: time="2025-09-16T04:27:13.327388692Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 16 04:27:13.327625 containerd[1831]: time="2025-09-16T04:27:13.327608172Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 16 04:27:13.327694 containerd[1831]: time="2025-09-16T04:27:13.327682836Z" level=info msg="runtime interface created" Sep 16 04:27:13.327730 containerd[1831]: time="2025-09-16T04:27:13.327720276Z" level=info msg="created NRI interface" Sep 16 04:27:13.327768 containerd[1831]: time="2025-09-16T04:27:13.327757820Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 16 04:27:13.327808 containerd[1831]: time="2025-09-16T04:27:13.327799244Z" level=info msg="Connect containerd service" Sep 16 04:27:13.327889 containerd[1831]: time="2025-09-16T04:27:13.327875836Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 16 04:27:13.330905 containerd[1831]: time="2025-09-16T04:27:13.329664188Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 16 04:27:13.652841 (kubelet)[1979]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:27:13.950985 containerd[1831]: time="2025-09-16T04:27:13.950855388Z" level=info msg="Start subscribing containerd event" Sep 16 04:27:13.951241 containerd[1831]: time="2025-09-16T04:27:13.951101020Z" level=info msg="Start recovering state" Sep 16 04:27:13.951440 containerd[1831]: time="2025-09-16T04:27:13.951369860Z" level=info msg="Start event monitor" Sep 16 04:27:13.951440 containerd[1831]: time="2025-09-16T04:27:13.951393316Z" level=info msg="Start cni network conf syncer for default" Sep 16 04:27:13.951440 containerd[1831]: time="2025-09-16T04:27:13.951401796Z" level=info msg="Start streaming server" Sep 16 04:27:13.951440 
containerd[1831]: time="2025-09-16T04:27:13.951408540Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 16 04:27:13.951440 containerd[1831]: time="2025-09-16T04:27:13.951413532Z" level=info msg="runtime interface starting up..." Sep 16 04:27:13.951706 containerd[1831]: time="2025-09-16T04:27:13.951417220Z" level=info msg="starting plugins..." Sep 16 04:27:13.951706 containerd[1831]: time="2025-09-16T04:27:13.951656100Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 16 04:27:13.951808 containerd[1831]: time="2025-09-16T04:27:13.951795260Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 16 04:27:13.951880 containerd[1831]: time="2025-09-16T04:27:13.951870052Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 16 04:27:13.952329 systemd[1]: Started containerd.service - containerd container runtime. Sep 16 04:27:13.959367 containerd[1831]: time="2025-09-16T04:27:13.958959028Z" level=info msg="containerd successfully booted in 0.653821s" Sep 16 04:27:13.959865 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 16 04:27:13.969512 systemd[1]: Startup finished in 1.608s (kernel) + 18.316s (initrd) + 31.755s (userspace) = 51.681s. Sep 16 04:27:14.011367 kubelet[1979]: E0916 04:27:14.011302 1979 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:27:14.013147 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:27:14.013250 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:27:14.013841 systemd[1]: kubelet.service: Consumed 546ms CPU time, 256.2M memory peak. Sep 16 04:27:14.504794 login[1955]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:27:14.505215 login[1956]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:27:14.514015 systemd-logind[1808]: New session 1 of user core. Sep 16 04:27:14.515946 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 16 04:27:14.517104 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 16 04:27:14.519703 systemd-logind[1808]: New session 2 of user core. Sep 16 04:27:14.546875 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 16 04:27:14.549113 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 16 04:27:14.575207 (systemd)[2006]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 16 04:27:14.577551 systemd-logind[1808]: New session c1 of user core. Sep 16 04:27:14.940762 systemd[2006]: Queued start job for default target default.target. Sep 16 04:27:14.950571 systemd[2006]: Created slice app.slice - User Application Slice. Sep 16 04:27:14.950597 systemd[2006]: Reached target paths.target - Paths. Sep 16 04:27:14.950628 systemd[2006]: Reached target timers.target - Timers. Sep 16 04:27:14.951571 systemd[2006]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 16 04:27:14.958832 systemd[2006]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 16 04:27:14.958875 systemd[2006]: Reached target sockets.target - Sockets. 
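[Editor's note] The kubelet failure above is simply a missing /var/lib/kubelet/config.yaml; on a kubeadm-provisioned node that file is typically written during 'kubeadm init' or 'kubeadm join', so the unit keeps failing until the node is joined. A minimal pre-check sketch that reports the same condition (not kubelet's own code):

    import os
    import sys

    CONFIG = "/var/lib/kubelet/config.yaml"   # path taken from the error above
    if not os.path.isfile(CONFIG):
        sys.exit(f"kubelet config missing: {CONFIG}; "
                 "expected to be created by 'kubeadm init' or 'kubeadm join'")
    print(f"{CONFIG} present ({os.path.getsize(CONFIG)} bytes)")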
Sep 16 04:27:14.958906 systemd[2006]: Reached target basic.target - Basic System. Sep 16 04:27:14.958928 systemd[2006]: Reached target default.target - Main User Target. Sep 16 04:27:14.958949 systemd[2006]: Startup finished in 376ms. Sep 16 04:27:14.959205 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 16 04:27:14.971800 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 16 04:27:14.973003 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 16 04:27:15.186171 waagent[1952]: 2025-09-16T04:27:15.186091Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Sep 16 04:27:15.194079 waagent[1952]: 2025-09-16T04:27:15.190726Z INFO Daemon Daemon OS: flatcar 4459.0.0 Sep 16 04:27:15.194251 waagent[1952]: 2025-09-16T04:27:15.194209Z INFO Daemon Daemon Python: 3.11.13 Sep 16 04:27:15.198286 waagent[1952]: 2025-09-16T04:27:15.198237Z INFO Daemon Daemon Run daemon Sep 16 04:27:15.203423 waagent[1952]: 2025-09-16T04:27:15.201555Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.0.0' Sep 16 04:27:15.207980 waagent[1952]: 2025-09-16T04:27:15.207942Z INFO Daemon Daemon Using waagent for provisioning Sep 16 04:27:15.212050 waagent[1952]: 2025-09-16T04:27:15.212017Z INFO Daemon Daemon Activate resource disk Sep 16 04:27:15.215763 waagent[1952]: 2025-09-16T04:27:15.215733Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 16 04:27:15.223771 waagent[1952]: 2025-09-16T04:27:15.223736Z INFO Daemon Daemon Found device: None Sep 16 04:27:15.227237 waagent[1952]: 2025-09-16T04:27:15.227211Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 16 04:27:15.233152 waagent[1952]: 2025-09-16T04:27:15.233129Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 16 04:27:15.241554 waagent[1952]: 2025-09-16T04:27:15.241516Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 16 04:27:15.245608 waagent[1952]: 2025-09-16T04:27:15.245580Z INFO Daemon Daemon Running default provisioning handler Sep 16 04:27:15.254530 waagent[1952]: 2025-09-16T04:27:15.254490Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Sep 16 04:27:15.264413 waagent[1952]: 2025-09-16T04:27:15.264376Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 16 04:27:15.271304 waagent[1952]: 2025-09-16T04:27:15.271275Z INFO Daemon Daemon cloud-init is enabled: False Sep 16 04:27:15.275108 waagent[1952]: 2025-09-16T04:27:15.275087Z INFO Daemon Daemon Copying ovf-env.xml Sep 16 04:27:15.357217 waagent[1952]: 2025-09-16T04:27:15.356986Z INFO Daemon Daemon Successfully mounted dvd Sep 16 04:27:15.381640 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 16 04:27:15.386446 waagent[1952]: 2025-09-16T04:27:15.383388Z INFO Daemon Daemon Detect protocol endpoint Sep 16 04:27:15.386847 waagent[1952]: 2025-09-16T04:27:15.386813Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 16 04:27:15.390993 waagent[1952]: 2025-09-16T04:27:15.390961Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Sep 16 04:27:15.395586 waagent[1952]: 2025-09-16T04:27:15.395560Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 16 04:27:15.400288 waagent[1952]: 2025-09-16T04:27:15.400243Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 16 04:27:15.403886 waagent[1952]: 2025-09-16T04:27:15.403853Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 16 04:27:15.447355 waagent[1952]: 2025-09-16T04:27:15.447275Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 16 04:27:15.451998 waagent[1952]: 2025-09-16T04:27:15.451980Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 16 04:27:15.455659 waagent[1952]: 2025-09-16T04:27:15.455638Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 16 04:27:15.563464 waagent[1952]: 2025-09-16T04:27:15.562779Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 16 04:27:15.567621 waagent[1952]: 2025-09-16T04:27:15.567577Z INFO Daemon Daemon Forcing an update of the goal state. Sep 16 04:27:15.574090 waagent[1952]: 2025-09-16T04:27:15.574054Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 16 04:27:15.626914 waagent[1952]: 2025-09-16T04:27:15.626876Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Sep 16 04:27:15.631141 waagent[1952]: 2025-09-16T04:27:15.631106Z INFO Daemon Sep 16 04:27:15.633097 waagent[1952]: 2025-09-16T04:27:15.633065Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 4eb9bd00-df7b-4350-809b-5c10ee23e0fe eTag: 15295275492287171785 source: Fabric] Sep 16 04:27:15.641168 waagent[1952]: 2025-09-16T04:27:15.641134Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Sep 16 04:27:15.645967 waagent[1952]: 2025-09-16T04:27:15.645938Z INFO Daemon Sep 16 04:27:15.647955 waagent[1952]: 2025-09-16T04:27:15.647933Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Sep 16 04:27:15.655857 waagent[1952]: 2025-09-16T04:27:15.655828Z INFO Daemon Daemon Downloading artifacts profile blob Sep 16 04:27:15.713218 waagent[1952]: 2025-09-16T04:27:15.713114Z INFO Daemon Downloaded certificate {'thumbprint': '95385900AFEBCC69D137720B00D3CC270B4DD507', 'hasPrivateKey': True} Sep 16 04:27:15.720782 waagent[1952]: 2025-09-16T04:27:15.720743Z INFO Daemon Fetch goal state completed Sep 16 04:27:15.729310 waagent[1952]: 2025-09-16T04:27:15.729282Z INFO Daemon Daemon Starting provisioning Sep 16 04:27:15.733025 waagent[1952]: 2025-09-16T04:27:15.732996Z INFO Daemon Daemon Handle ovf-env.xml. Sep 16 04:27:15.736876 waagent[1952]: 2025-09-16T04:27:15.736853Z INFO Daemon Daemon Set hostname [ci-4459.0.0-n-c6becb1dff] Sep 16 04:27:15.766728 waagent[1952]: 2025-09-16T04:27:15.766682Z INFO Daemon Daemon Publish hostname [ci-4459.0.0-n-c6becb1dff] Sep 16 04:27:15.773434 waagent[1952]: 2025-09-16T04:27:15.771401Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 16 04:27:15.775992 waagent[1952]: 2025-09-16T04:27:15.775958Z INFO Daemon Daemon Primary interface is [eth0] Sep 16 04:27:15.798670 systemd-networkd[1639]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:27:15.798675 systemd-networkd[1639]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
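[Editor's note] The "Test for route to 168.63.129.16" step above can be approximated by scanning /proc/net/route, whose Destination and Mask columns are little-endian hex. A rough sketch under that assumption, not waagent's actual implementation:

    import socket
    import struct

    def covers(dest_hex, mask_hex, ip):
        # /proc/net/route stores addresses little-endian, so decode the probe IP the same way
        ip_n = struct.unpack("<I", socket.inet_aton(ip))[0]
        return (ip_n & int(mask_hex, 16)) == int(dest_hex, 16)

    WIRESERVER = "168.63.129.16"
    with open("/proc/net/route") as f:
        rows = [line.split() for line in f.readlines()[1:] if line.strip()]
    # columns: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
    reachable = any(covers(dest, mask, WIRESERVER)
                    for _, dest, _, _, _, _, _, mask, *_ in rows)
    print("route to", WIRESERVER, "exists" if reachable else "missing")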
Sep 16 04:27:15.798707 systemd-networkd[1639]: eth0: DHCP lease lost Sep 16 04:27:15.799672 waagent[1952]: 2025-09-16T04:27:15.799617Z INFO Daemon Daemon Create user account if not exists Sep 16 04:27:15.803799 waagent[1952]: 2025-09-16T04:27:15.803762Z INFO Daemon Daemon User core already exists, skip useradd Sep 16 04:27:15.807860 waagent[1952]: 2025-09-16T04:27:15.807821Z INFO Daemon Daemon Configure sudoer Sep 16 04:27:15.814988 waagent[1952]: 2025-09-16T04:27:15.814944Z INFO Daemon Daemon Configure sshd Sep 16 04:27:15.818494 systemd-networkd[1639]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 16 04:27:15.821667 waagent[1952]: 2025-09-16T04:27:15.821622Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Sep 16 04:27:15.830608 waagent[1952]: 2025-09-16T04:27:15.830578Z INFO Daemon Daemon Deploy ssh public key. Sep 16 04:27:16.945209 waagent[1952]: 2025-09-16T04:27:16.945165Z INFO Daemon Daemon Provisioning complete Sep 16 04:27:16.958783 waagent[1952]: 2025-09-16T04:27:16.958744Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 16 04:27:16.963500 waagent[1952]: 2025-09-16T04:27:16.963463Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Sep 16 04:27:16.970483 waagent[1952]: 2025-09-16T04:27:16.970453Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Sep 16 04:27:17.069457 waagent[2056]: 2025-09-16T04:27:17.069313Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Sep 16 04:27:17.070453 waagent[2056]: 2025-09-16T04:27:17.069822Z INFO ExtHandler ExtHandler OS: flatcar 4459.0.0 Sep 16 04:27:17.070453 waagent[2056]: 2025-09-16T04:27:17.069896Z INFO ExtHandler ExtHandler Python: 3.11.13 Sep 16 04:27:17.070453 waagent[2056]: 2025-09-16T04:27:17.069937Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Sep 16 04:27:17.127167 waagent[2056]: 2025-09-16T04:27:17.127099Z INFO ExtHandler ExtHandler Distro: flatcar-4459.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Sep 16 04:27:17.127536 waagent[2056]: 2025-09-16T04:27:17.127498Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 16 04:27:17.127665 waagent[2056]: 2025-09-16T04:27:17.127638Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 16 04:27:17.133101 waagent[2056]: 2025-09-16T04:27:17.133051Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 16 04:27:17.137765 waagent[2056]: 2025-09-16T04:27:17.137730Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 16 04:27:17.138232 waagent[2056]: 2025-09-16T04:27:17.138197Z INFO ExtHandler Sep 16 04:27:17.138348 waagent[2056]: 2025-09-16T04:27:17.138328Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 307cf4a1-5f19-45b3-9e61-6a83452c220b eTag: 15295275492287171785 source: Fabric] Sep 16 04:27:17.138678 waagent[2056]: 2025-09-16T04:27:17.138647Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Sep 16 04:27:17.139181 waagent[2056]: 2025-09-16T04:27:17.139147Z INFO ExtHandler Sep 16 04:27:17.139305 waagent[2056]: 2025-09-16T04:27:17.139281Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 16 04:27:17.141941 waagent[2056]: 2025-09-16T04:27:17.141910Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 16 04:27:17.190096 waagent[2056]: 2025-09-16T04:27:17.190042Z INFO ExtHandler Downloaded certificate {'thumbprint': '95385900AFEBCC69D137720B00D3CC270B4DD507', 'hasPrivateKey': True} Sep 16 04:27:17.190650 waagent[2056]: 2025-09-16T04:27:17.190616Z INFO ExtHandler Fetch goal state completed Sep 16 04:27:17.201476 waagent[2056]: 2025-09-16T04:27:17.201385Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Sep 16 04:27:17.204875 waagent[2056]: 2025-09-16T04:27:17.204829Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2056 Sep 16 04:27:17.205068 waagent[2056]: 2025-09-16T04:27:17.205039Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Sep 16 04:27:17.205406 waagent[2056]: 2025-09-16T04:27:17.205376Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Sep 16 04:27:17.206654 waagent[2056]: 2025-09-16T04:27:17.206620Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.0.0', '', 'Flatcar Container Linux by Kinvolk'] Sep 16 04:27:17.206964 waagent[2056]: 2025-09-16T04:27:17.206935Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Sep 16 04:27:17.207074 waagent[2056]: 2025-09-16T04:27:17.207055Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Sep 16 04:27:17.207541 waagent[2056]: 2025-09-16T04:27:17.207513Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 16 04:27:17.293783 waagent[2056]: 2025-09-16T04:27:17.293743Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 16 04:27:17.293953 waagent[2056]: 2025-09-16T04:27:17.293927Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 16 04:27:17.298465 waagent[2056]: 2025-09-16T04:27:17.298303Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 16 04:27:17.303045 systemd[1]: Reload requested from client PID 2071 ('systemctl') (unit waagent.service)... Sep 16 04:27:17.303057 systemd[1]: Reloading... Sep 16 04:27:17.375486 zram_generator::config[2106]: No configuration found. Sep 16 04:27:17.533356 systemd[1]: Reloading finished in 230 ms. Sep 16 04:27:17.549213 waagent[2056]: 2025-09-16T04:27:17.548558Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 16 04:27:17.549213 waagent[2056]: 2025-09-16T04:27:17.548701Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 16 04:27:18.614467 waagent[2056]: 2025-09-16T04:27:18.614290Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Sep 16 04:27:18.614785 waagent[2056]: 2025-09-16T04:27:18.614633Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Sep 16 04:27:18.615301 waagent[2056]: 2025-09-16T04:27:18.615258Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 16 04:27:18.615573 waagent[2056]: 2025-09-16T04:27:18.615537Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 16 04:27:18.616350 waagent[2056]: 2025-09-16T04:27:18.615767Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 16 04:27:18.616350 waagent[2056]: 2025-09-16T04:27:18.615841Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 16 04:27:18.616350 waagent[2056]: 2025-09-16T04:27:18.615998Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Sep 16 04:27:18.616350 waagent[2056]: 2025-09-16T04:27:18.616128Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 16 04:27:18.616350 waagent[2056]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 16 04:27:18.616350 waagent[2056]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Sep 16 04:27:18.616350 waagent[2056]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 16 04:27:18.616350 waagent[2056]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 16 04:27:18.616350 waagent[2056]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 16 04:27:18.616350 waagent[2056]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 16 04:27:18.616661 waagent[2056]: 2025-09-16T04:27:18.616620Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 16 04:27:18.616742 waagent[2056]: 2025-09-16T04:27:18.616708Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 16 04:27:18.616795 waagent[2056]: 2025-09-16T04:27:18.616776Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 16 04:27:18.616900 waagent[2056]: 2025-09-16T04:27:18.616871Z INFO EnvHandler ExtHandler Configure routes Sep 16 04:27:18.616939 waagent[2056]: 2025-09-16T04:27:18.616921Z INFO EnvHandler ExtHandler Gateway:None Sep 16 04:27:18.616964 waagent[2056]: 2025-09-16T04:27:18.616950Z INFO EnvHandler ExtHandler Routes:None Sep 16 04:27:18.617193 waagent[2056]: 2025-09-16T04:27:18.617165Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 16 04:27:18.617476 waagent[2056]: 2025-09-16T04:27:18.617409Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 16 04:27:18.617652 waagent[2056]: 2025-09-16T04:27:18.617577Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Sep 16 04:27:18.617786 waagent[2056]: 2025-09-16T04:27:18.617753Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 16 04:27:18.622667 waagent[2056]: 2025-09-16T04:27:18.622605Z INFO ExtHandler ExtHandler Sep 16 04:27:18.622726 waagent[2056]: 2025-09-16T04:27:18.622689Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: e1d8d077-1c7e-465f-8161-628f81d0eb4b correlation 81bf0cc6-3ab5-4814-8e68-e2fcf6f11a4c created: 2025-09-16T04:25:40.327530Z] Sep 16 04:27:18.623218 waagent[2056]: 2025-09-16T04:27:18.623180Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
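[Editor's note] The hex fields in the routing table printed above are little-endian IPv4 addresses; decoding them recovers the subnet, the gateway, and the host routes to the WireServer and IMDS endpoints. A small decoding sketch:

    import socket
    import struct

    def hex_to_ip(h):
        # /proc/net/route prints addresses as little-endian hex
        return socket.inet_ntoa(struct.pack("<I", int(h, 16)))

    for h in ("0014C80A", "0114C80A", "10813FA8", "FEA9FEA9"):
        print(h, "->", hex_to_ip(h))
    # 0014C80A -> 10.200.20.0, 0114C80A -> 10.200.20.1,
    # 10813FA8 -> 168.63.129.16, FEA9FEA9 -> 169.254.169.254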
Sep 16 04:27:18.623617 waagent[2056]: 2025-09-16T04:27:18.623590Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Sep 16 04:27:18.651055 waagent[2056]: 2025-09-16T04:27:18.650611Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Sep 16 04:27:18.651055 waagent[2056]: Try `iptables -h' or 'iptables --help' for more information.) Sep 16 04:27:18.651055 waagent[2056]: 2025-09-16T04:27:18.650970Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 4B13DC70-3142-4147-AB71-3E0DF83A896F;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Sep 16 04:27:18.741483 waagent[2056]: 2025-09-16T04:27:18.741298Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Sep 16 04:27:18.741483 waagent[2056]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 16 04:27:18.741483 waagent[2056]: pkts bytes target prot opt in out source destination Sep 16 04:27:18.741483 waagent[2056]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 16 04:27:18.741483 waagent[2056]: pkts bytes target prot opt in out source destination Sep 16 04:27:18.741483 waagent[2056]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 16 04:27:18.741483 waagent[2056]: pkts bytes target prot opt in out source destination Sep 16 04:27:18.741483 waagent[2056]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 16 04:27:18.741483 waagent[2056]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 16 04:27:18.741483 waagent[2056]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 16 04:27:18.743797 waagent[2056]: 2025-09-16T04:27:18.743749Z INFO EnvHandler ExtHandler Current Firewall rules: Sep 16 04:27:18.743797 waagent[2056]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 16 04:27:18.743797 waagent[2056]: pkts bytes target prot opt in out source destination Sep 16 04:27:18.743797 waagent[2056]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 16 04:27:18.743797 waagent[2056]: pkts bytes target prot opt in out source destination Sep 16 04:27:18.743797 waagent[2056]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 16 04:27:18.743797 waagent[2056]: pkts bytes target prot opt in out source destination Sep 16 04:27:18.743797 waagent[2056]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 16 04:27:18.743797 waagent[2056]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 16 04:27:18.743797 waagent[2056]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 16 04:27:18.743975 waagent[2056]: 2025-09-16T04:27:18.743955Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 16 04:27:18.764301 waagent[2056]: 2025-09-16T04:27:18.763968Z INFO MonitorHandler ExtHandler Network interfaces: Sep 16 04:27:18.764301 waagent[2056]: Executing ['ip', '-a', '-o', 'link']: Sep 16 04:27:18.764301 waagent[2056]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 16 04:27:18.764301 waagent[2056]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:c2:36:00 brd ff:ff:ff:ff:ff:ff Sep 16 04:27:18.764301 waagent[2056]: 3: enP18943s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ 
link/ether 00:22:48:c2:36:00 brd ff:ff:ff:ff:ff:ff\ altname enP18943p0s2 Sep 16 04:27:18.764301 waagent[2056]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 16 04:27:18.764301 waagent[2056]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 16 04:27:18.764301 waagent[2056]: 2: eth0 inet 10.200.20.14/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 16 04:27:18.764301 waagent[2056]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 16 04:27:18.764301 waagent[2056]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Sep 16 04:27:18.764301 waagent[2056]: 2: eth0 inet6 fe80::222:48ff:fec2:3600/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 16 04:27:24.064530 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 16 04:27:24.065721 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:27:24.384272 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:27:24.391764 (kubelet)[2205]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:27:24.424854 kubelet[2205]: E0916 04:27:24.424781 2205 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:27:24.427540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:27:24.427653 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:27:24.428152 systemd[1]: kubelet.service: Consumed 109ms CPU time, 106.1M memory peak. Sep 16 04:27:27.171735 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 16 04:27:27.173610 systemd[1]: Started sshd@0-10.200.20.14:22-10.200.16.10:53234.service - OpenSSH per-connection server daemon (10.200.16.10:53234). Sep 16 04:27:27.743327 sshd[2213]: Accepted publickey for core from 10.200.16.10 port 53234 ssh2: RSA SHA256:I71fjGTKGCyypT9ALVqAOHTk+maJkjWBdnFioZ0bBCo Sep 16 04:27:27.744383 sshd-session[2213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:27:27.747810 systemd-logind[1808]: New session 3 of user core. Sep 16 04:27:27.755545 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 16 04:27:28.138922 systemd[1]: Started sshd@1-10.200.20.14:22-10.200.16.10:53240.service - OpenSSH per-connection server daemon (10.200.16.10:53240). Sep 16 04:27:28.592717 sshd[2219]: Accepted publickey for core from 10.200.16.10 port 53240 ssh2: RSA SHA256:I71fjGTKGCyypT9ALVqAOHTk+maJkjWBdnFioZ0bBCo Sep 16 04:27:28.593761 sshd-session[2219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:27:28.597333 systemd-logind[1808]: New session 4 of user core. Sep 16 04:27:28.607547 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 16 04:27:28.930564 sshd[2222]: Connection closed by 10.200.16.10 port 53240 Sep 16 04:27:28.931203 sshd-session[2219]: pam_unix(sshd:session): session closed for user core Sep 16 04:27:28.934680 systemd[1]: sshd@1-10.200.20.14:22-10.200.16.10:53240.service: Deactivated successfully. Sep 16 04:27:28.936057 systemd[1]: session-4.scope: Deactivated successfully. 
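[Editor's note] The three security-table rules waagent printed above (DNS to the WireServer allowed, root-owned traffic allowed, other new connections to it dropped) correspond roughly to the commands below. This is a hypothetical sketch, not waagent's own code, and it needs root:

    import subprocess

    WIRESERVER = "168.63.129.16"
    rules = [
        ["-p", "tcp", "--dport", "53", "-j", "ACCEPT"],                    # DNS to the WireServer
        ["-p", "tcp", "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],  # traffic owned by UID 0
        ["-p", "tcp", "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]
    for extra in rules:
        subprocess.run(["iptables", "-w", "-t", "security", "-A", "OUTPUT",
                        "-d", WIRESERVER] + extra, check=True)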
Sep 16 04:27:28.936658 systemd-logind[1808]: Session 4 logged out. Waiting for processes to exit. Sep 16 04:27:28.937762 systemd-logind[1808]: Removed session 4. Sep 16 04:27:29.004928 systemd[1]: Started sshd@2-10.200.20.14:22-10.200.16.10:53248.service - OpenSSH per-connection server daemon (10.200.16.10:53248). Sep 16 04:27:29.418429 sshd[2228]: Accepted publickey for core from 10.200.16.10 port 53248 ssh2: RSA SHA256:I71fjGTKGCyypT9ALVqAOHTk+maJkjWBdnFioZ0bBCo Sep 16 04:27:29.419484 sshd-session[2228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:27:29.422838 systemd-logind[1808]: New session 5 of user core. Sep 16 04:27:29.430553 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 16 04:27:29.720690 sshd[2231]: Connection closed by 10.200.16.10 port 53248 Sep 16 04:27:29.720532 sshd-session[2228]: pam_unix(sshd:session): session closed for user core Sep 16 04:27:29.723595 systemd-logind[1808]: Session 5 logged out. Waiting for processes to exit. Sep 16 04:27:29.723846 systemd[1]: sshd@2-10.200.20.14:22-10.200.16.10:53248.service: Deactivated successfully. Sep 16 04:27:29.725361 systemd[1]: session-5.scope: Deactivated successfully. Sep 16 04:27:29.727097 systemd-logind[1808]: Removed session 5. Sep 16 04:27:29.797897 systemd[1]: Started sshd@3-10.200.20.14:22-10.200.16.10:53254.service - OpenSSH per-connection server daemon (10.200.16.10:53254). Sep 16 04:27:30.211473 sshd[2237]: Accepted publickey for core from 10.200.16.10 port 53254 ssh2: RSA SHA256:I71fjGTKGCyypT9ALVqAOHTk+maJkjWBdnFioZ0bBCo Sep 16 04:27:30.212525 sshd-session[2237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:27:30.215963 systemd-logind[1808]: New session 6 of user core. Sep 16 04:27:30.224536 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 16 04:27:30.516105 sshd[2240]: Connection closed by 10.200.16.10 port 53254 Sep 16 04:27:30.516724 sshd-session[2237]: pam_unix(sshd:session): session closed for user core Sep 16 04:27:30.519808 systemd[1]: sshd@3-10.200.20.14:22-10.200.16.10:53254.service: Deactivated successfully. Sep 16 04:27:30.521156 systemd[1]: session-6.scope: Deactivated successfully. Sep 16 04:27:30.521757 systemd-logind[1808]: Session 6 logged out. Waiting for processes to exit. Sep 16 04:27:30.522932 systemd-logind[1808]: Removed session 6. Sep 16 04:27:30.595312 systemd[1]: Started sshd@4-10.200.20.14:22-10.200.16.10:46296.service - OpenSSH per-connection server daemon (10.200.16.10:46296). Sep 16 04:27:31.015232 sshd[2246]: Accepted publickey for core from 10.200.16.10 port 46296 ssh2: RSA SHA256:I71fjGTKGCyypT9ALVqAOHTk+maJkjWBdnFioZ0bBCo Sep 16 04:27:31.016300 sshd-session[2246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:27:31.019677 systemd-logind[1808]: New session 7 of user core. Sep 16 04:27:31.025554 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 16 04:27:31.425583 sudo[2250]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 16 04:27:31.425803 sudo[2250]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:27:31.448770 sudo[2250]: pam_unix(sudo:session): session closed for user root Sep 16 04:27:31.520657 sshd[2249]: Connection closed by 10.200.16.10 port 46296 Sep 16 04:27:31.521291 sshd-session[2246]: pam_unix(sshd:session): session closed for user core Sep 16 04:27:31.524674 systemd[1]: sshd@4-10.200.20.14:22-10.200.16.10:46296.service: Deactivated successfully. Sep 16 04:27:31.526057 systemd[1]: session-7.scope: Deactivated successfully. Sep 16 04:27:31.526720 systemd-logind[1808]: Session 7 logged out. Waiting for processes to exit. Sep 16 04:27:31.527942 systemd-logind[1808]: Removed session 7. Sep 16 04:27:31.596072 systemd[1]: Started sshd@5-10.200.20.14:22-10.200.16.10:46300.service - OpenSSH per-connection server daemon (10.200.16.10:46300). Sep 16 04:27:32.013859 sshd[2256]: Accepted publickey for core from 10.200.16.10 port 46300 ssh2: RSA SHA256:I71fjGTKGCyypT9ALVqAOHTk+maJkjWBdnFioZ0bBCo Sep 16 04:27:32.014993 sshd-session[2256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:27:32.018408 systemd-logind[1808]: New session 8 of user core. Sep 16 04:27:32.025776 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 16 04:27:32.250988 sudo[2261]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 16 04:27:32.251192 sudo[2261]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:27:32.256577 sudo[2261]: pam_unix(sudo:session): session closed for user root Sep 16 04:27:32.260191 sudo[2260]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 16 04:27:32.260382 sudo[2260]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:27:32.267902 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 16 04:27:32.296565 augenrules[2283]: No rules Sep 16 04:27:32.297722 systemd[1]: audit-rules.service: Deactivated successfully. Sep 16 04:27:32.299460 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 16 04:27:32.300618 sudo[2260]: pam_unix(sudo:session): session closed for user root Sep 16 04:27:32.381957 sshd[2259]: Connection closed by 10.200.16.10 port 46300 Sep 16 04:27:32.381862 sshd-session[2256]: pam_unix(sshd:session): session closed for user core Sep 16 04:27:32.384715 systemd-logind[1808]: Session 8 logged out. Waiting for processes to exit. Sep 16 04:27:32.384830 systemd[1]: sshd@5-10.200.20.14:22-10.200.16.10:46300.service: Deactivated successfully. Sep 16 04:27:32.386167 systemd[1]: session-8.scope: Deactivated successfully. Sep 16 04:27:32.388052 systemd-logind[1808]: Removed session 8. Sep 16 04:27:32.455905 systemd[1]: Started sshd@6-10.200.20.14:22-10.200.16.10:46314.service - OpenSSH per-connection server daemon (10.200.16.10:46314). Sep 16 04:27:32.867848 sshd[2292]: Accepted publickey for core from 10.200.16.10 port 46314 ssh2: RSA SHA256:I71fjGTKGCyypT9ALVqAOHTk+maJkjWBdnFioZ0bBCo Sep 16 04:27:32.868926 sshd-session[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:27:32.872320 systemd-logind[1808]: New session 9 of user core. Sep 16 04:27:32.879561 systemd[1]: Started session-9.scope - Session 9 of User core. 
Sep 16 04:27:33.102603 sudo[2296]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 16 04:27:33.102797 sudo[2296]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:27:34.564499 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 16 04:27:34.566635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:27:34.739031 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 16 04:27:34.747672 (dockerd)[2317]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 16 04:27:35.091485 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:27:35.101658 (kubelet)[2323]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:27:35.131416 kubelet[2323]: E0916 04:27:35.131359 2323 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:27:35.133631 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:27:35.133844 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:27:35.134352 systemd[1]: kubelet.service: Consumed 108ms CPU time, 105.8M memory peak. Sep 16 04:27:36.097519 dockerd[2317]: time="2025-09-16T04:27:36.097467276Z" level=info msg="Starting up" Sep 16 04:27:36.098110 dockerd[2317]: time="2025-09-16T04:27:36.098089620Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 16 04:27:36.106145 dockerd[2317]: time="2025-09-16T04:27:36.106118084Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 16 04:27:36.162757 chronyd[1786]: Selected source PHC0 Sep 16 04:27:36.231430 dockerd[2317]: time="2025-09-16T04:27:36.231385101Z" level=info msg="Loading containers: start." Sep 16 04:27:36.297463 kernel: Initializing XFRM netlink socket Sep 16 04:27:36.730550 systemd-networkd[1639]: docker0: Link UP Sep 16 04:27:36.746459 dockerd[2317]: time="2025-09-16T04:27:36.746037387Z" level=info msg="Loading containers: done." Sep 16 04:27:36.757746 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1658281157-merged.mount: Deactivated successfully. 
Sep 16 04:27:36.763785 dockerd[2317]: time="2025-09-16T04:27:36.763751742Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 16 04:27:36.763858 dockerd[2317]: time="2025-09-16T04:27:36.763836352Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 16 04:27:36.763925 dockerd[2317]: time="2025-09-16T04:27:36.763909194Z" level=info msg="Initializing buildkit" Sep 16 04:27:36.804340 dockerd[2317]: time="2025-09-16T04:27:36.804288176Z" level=info msg="Completed buildkit initialization" Sep 16 04:27:36.809669 dockerd[2317]: time="2025-09-16T04:27:36.809611613Z" level=info msg="Daemon has completed initialization" Sep 16 04:27:36.809972 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 16 04:27:36.810544 dockerd[2317]: time="2025-09-16T04:27:36.809999737Z" level=info msg="API listen on /run/docker.sock" Sep 16 04:27:37.645192 containerd[1831]: time="2025-09-16T04:27:37.645149857Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 16 04:27:38.678905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1261808594.mount: Deactivated successfully. Sep 16 04:27:39.615155 containerd[1831]: time="2025-09-16T04:27:39.614516719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:27:39.617329 containerd[1831]: time="2025-09-16T04:27:39.617299279Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=25687325" Sep 16 04:27:39.620058 containerd[1831]: time="2025-09-16T04:27:39.620033887Z" level=info msg="ImageCreate event name:\"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:27:39.624238 containerd[1831]: time="2025-09-16T04:27:39.624208343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:27:39.624794 containerd[1831]: time="2025-09-16T04:27:39.624689879Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"25683924\" in 1.979503094s" Sep 16 04:27:39.624869 containerd[1831]: time="2025-09-16T04:27:39.624856855Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\"" Sep 16 04:27:39.626214 containerd[1831]: time="2025-09-16T04:27:39.626165071Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 16 04:27:40.695609 containerd[1831]: time="2025-09-16T04:27:40.695556719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:27:40.698688 containerd[1831]: time="2025-09-16T04:27:40.698659447Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=22459767" Sep 16 04:27:40.701390 containerd[1831]: time="2025-09-16T04:27:40.701351015Z" level=info msg="ImageCreate event name:\"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:27:40.705025 containerd[1831]: time="2025-09-16T04:27:40.704982031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:27:40.705561 containerd[1831]: time="2025-09-16T04:27:40.705439711Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"24028542\" in 1.07925024s" Sep 16 04:27:40.705561 containerd[1831]: time="2025-09-16T04:27:40.705472375Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\"" Sep 16 04:27:40.705893 containerd[1831]: time="2025-09-16T04:27:40.705841327Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 16 04:27:41.674437 containerd[1831]: time="2025-09-16T04:27:41.674359839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:27:41.676819 containerd[1831]: time="2025-09-16T04:27:41.676789191Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=17127506" Sep 16 04:27:41.680493 containerd[1831]: time="2025-09-16T04:27:41.680456319Z" level=info msg="ImageCreate event name:\"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:27:41.685447 containerd[1831]: time="2025-09-16T04:27:41.684934223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:27:41.685572 containerd[1831]: time="2025-09-16T04:27:41.685550207Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"18696299\" in 979.679528ms" Sep 16 04:27:41.685630 containerd[1831]: time="2025-09-16T04:27:41.685619127Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\"" Sep 16 04:27:41.686070 containerd[1831]: time="2025-09-16T04:27:41.686052759Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 16 04:27:42.508227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2003035245.mount: Deactivated successfully. 
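The PullImage / ImageCreate / "stop pulling image" entries above are containerd resolving and unpacking each control-plane image. A rough sketch of the same operation through the containerd Go client; the socket path and the "k8s.io" namespace are assumptions for a CRI-managed containerd and are not taken from this log:

```go
// Sketch: pull and unpack one of the images seen above via the containerd client.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock") // assumed socket path
	if err != nil {
		log.Fatalf("connect to containerd: %v", err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "registry.k8s.io/kube-proxy:v1.31.13", containerd.WithPullUnpack)
	if err != nil {
		log.Fatalf("pull failed: %v", err)
	}
	log.Printf("pulled %s (target digest %s)", img.Name(), img.Target().Digest)
}
```

The "bytes read" and image-size figures in the log correspond to the content fetched and unpacked for each image's target descriptor.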
Sep 16 04:27:42.745001 containerd[1831]: time="2025-09-16T04:27:42.744540279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:27:42.746953 containerd[1831]: time="2025-09-16T04:27:42.746924655Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=26954907" Sep 16 04:27:42.750057 containerd[1831]: time="2025-09-16T04:27:42.750029183Z" level=info msg="ImageCreate event name:\"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:27:42.753191 containerd[1831]: time="2025-09-16T04:27:42.753139255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:27:42.753400 containerd[1831]: time="2025-09-16T04:27:42.753374199Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"26953926\" in 1.067238472s" Sep 16 04:27:42.753554 containerd[1831]: time="2025-09-16T04:27:42.753403663Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\"" Sep 16 04:27:42.753945 containerd[1831]: time="2025-09-16T04:27:42.753922807Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 16 04:27:43.962217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1701746497.mount: Deactivated successfully. 
Sep 16 04:27:44.673126 containerd[1831]: time="2025-09-16T04:27:44.673005427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:27:44.675659 containerd[1831]: time="2025-09-16T04:27:44.675632534Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Sep 16 04:27:44.678485 containerd[1831]: time="2025-09-16T04:27:44.678460446Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:27:44.681923 containerd[1831]: time="2025-09-16T04:27:44.681885874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:27:44.682648 containerd[1831]: time="2025-09-16T04:27:44.682616799Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.928664166s" Sep 16 04:27:44.682747 containerd[1831]: time="2025-09-16T04:27:44.682730634Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 16 04:27:44.683194 containerd[1831]: time="2025-09-16T04:27:44.683172383Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 16 04:27:45.204688 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 16 04:27:45.206218 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:27:45.209467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3686162846.mount: Deactivated successfully. 
Sep 16 04:27:45.229266 containerd[1831]: time="2025-09-16T04:27:45.228832112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:27:45.233374 containerd[1831]: time="2025-09-16T04:27:45.233351958Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Sep 16 04:27:45.239902 containerd[1831]: time="2025-09-16T04:27:45.239878850Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:27:45.258509 containerd[1831]: time="2025-09-16T04:27:45.258445998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:27:45.258925 containerd[1831]: time="2025-09-16T04:27:45.258893692Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 575.692935ms" Sep 16 04:27:45.258925 containerd[1831]: time="2025-09-16T04:27:45.258922029Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 16 04:27:45.259707 containerd[1831]: time="2025-09-16T04:27:45.259527592Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 16 04:27:45.304704 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:27:45.308661 (kubelet)[2675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:27:45.421840 kubelet[2675]: E0916 04:27:45.421802 2675 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:27:45.423803 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:27:45.423907 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:27:45.424392 systemd[1]: kubelet.service: Consumed 101ms CPU time, 107M memory peak. Sep 16 04:27:46.341550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3792301370.mount: Deactivated successfully. 
Sep 16 04:27:48.588643 containerd[1831]: time="2025-09-16T04:27:48.588445916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:27:48.590979 containerd[1831]: time="2025-09-16T04:27:48.590951898Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537161" Sep 16 04:27:48.592972 containerd[1831]: time="2025-09-16T04:27:48.592949008Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:27:48.597003 containerd[1831]: time="2025-09-16T04:27:48.596973646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:27:48.597472 containerd[1831]: time="2025-09-16T04:27:48.597326649Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.337776536s" Sep 16 04:27:48.597472 containerd[1831]: time="2025-09-16T04:27:48.597350898Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 16 04:27:50.990980 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:27:50.991163 systemd[1]: kubelet.service: Consumed 101ms CPU time, 107M memory peak. Sep 16 04:27:50.992842 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:27:51.014462 systemd[1]: Reload requested from client PID 2762 ('systemctl') (unit session-9.scope)... Sep 16 04:27:51.014584 systemd[1]: Reloading... Sep 16 04:27:51.098464 zram_generator::config[2815]: No configuration found. Sep 16 04:27:51.249928 systemd[1]: Reloading finished in 235 ms. Sep 16 04:27:51.303890 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 16 04:27:51.303971 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 16 04:27:51.304185 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:27:51.304237 systemd[1]: kubelet.service: Consumed 62ms CPU time, 89.4M memory peak. Sep 16 04:27:51.307365 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:27:51.486145 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:27:51.496674 (kubelet)[2873]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 04:27:51.520186 kubelet[2873]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:27:51.520484 kubelet[2873]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 16 04:27:51.520522 kubelet[2873]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:27:51.520654 kubelet[2873]: I0916 04:27:51.520631 2873 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 04:27:51.715925 kubelet[2873]: I0916 04:27:51.715882 2873 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 16 04:27:51.715925 kubelet[2873]: I0916 04:27:51.715916 2873 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 04:27:51.716115 kubelet[2873]: I0916 04:27:51.716099 2873 server.go:934] "Client rotation is on, will bootstrap in background" Sep 16 04:27:51.733610 kubelet[2873]: I0916 04:27:51.733445 2873 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 04:27:51.734001 kubelet[2873]: E0916 04:27:51.733983 2873 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:27:51.738556 kubelet[2873]: I0916 04:27:51.738540 2873 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 04:27:51.741747 kubelet[2873]: I0916 04:27:51.741727 2873 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 16 04:27:51.741835 kubelet[2873]: I0916 04:27:51.741820 2873 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 16 04:27:51.741923 kubelet[2873]: I0916 04:27:51.741903 2873 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 04:27:51.742052 kubelet[2873]: I0916 04:27:51.741920 2873 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.0.0-n-c6becb1dff","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 16 04:27:51.742129 kubelet[2873]: I0916 04:27:51.742056 2873 topology_manager.go:138] "Creating topology manager with none policy" Sep 16 04:27:51.742129 kubelet[2873]: I0916 04:27:51.742063 2873 container_manager_linux.go:300] "Creating device plugin manager" Sep 16 04:27:51.742175 kubelet[2873]: I0916 04:27:51.742164 2873 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:27:51.744177 kubelet[2873]: I0916 04:27:51.744157 2873 kubelet.go:408] "Attempting to sync node with API server" Sep 16 04:27:51.744201 kubelet[2873]: I0916 04:27:51.744181 2873 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 04:27:51.744201 kubelet[2873]: I0916 04:27:51.744200 2873 kubelet.go:314] "Adding apiserver pod source" Sep 16 04:27:51.744229 kubelet[2873]: I0916 04:27:51.744211 2873 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 04:27:51.744791 kubelet[2873]: W0916 04:27:51.744750 2873 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.0.0-n-c6becb1dff&limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Sep 16 04:27:51.744814 kubelet[2873]: E0916 04:27:51.744801 2873 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.0.0-n-c6becb1dff&limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:27:51.751113 kubelet[2873]: W0916 04:27:51.751073 2873 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Sep 16 04:27:51.751143 kubelet[2873]: E0916 04:27:51.751127 2873 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:27:51.751424 kubelet[2873]: I0916 04:27:51.751407 2873 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 04:27:51.752228 kubelet[2873]: I0916 04:27:51.751756 2873 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 16 04:27:51.752228 kubelet[2873]: W0916 04:27:51.751794 2873 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 16 04:27:51.752296 kubelet[2873]: I0916 04:27:51.752259 2873 server.go:1274] "Started kubelet" Sep 16 04:27:51.758397 kubelet[2873]: I0916 04:27:51.758177 2873 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 16 04:27:51.758944 kubelet[2873]: E0916 04:27:51.758067 2873 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.14:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.14:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.0.0-n-c6becb1dff.1865a8d7310a593b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.0.0-n-c6becb1dff,UID:ci-4459.0.0-n-c6becb1dff,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.0.0-n-c6becb1dff,},FirstTimestamp:2025-09-16 04:27:51.752243515 +0000 UTC m=+0.252988030,LastTimestamp:2025-09-16 04:27:51.752243515 +0000 UTC m=+0.252988030,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.0.0-n-c6becb1dff,}" Sep 16 04:27:51.760161 kubelet[2873]: E0916 04:27:51.760145 2873 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 16 04:27:51.760317 kubelet[2873]: I0916 04:27:51.760308 2873 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 04:27:51.760493 kubelet[2873]: I0916 04:27:51.760470 2873 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 16 04:27:51.761133 kubelet[2873]: I0916 04:27:51.761112 2873 server.go:449] "Adding debug handlers to kubelet server" Sep 16 04:27:51.761845 kubelet[2873]: I0916 04:27:51.761809 2873 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 04:27:51.762078 kubelet[2873]: I0916 04:27:51.762066 2873 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 04:27:51.762738 kubelet[2873]: I0916 04:27:51.762556 2873 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 16 04:27:51.762738 kubelet[2873]: E0916 04:27:51.762715 2873 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4459.0.0-n-c6becb1dff\" not found" Sep 16 04:27:51.763132 kubelet[2873]: E0916 04:27:51.763106 2873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.0.0-n-c6becb1dff?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="200ms" Sep 16 04:27:51.763334 kubelet[2873]: I0916 04:27:51.763321 2873 factory.go:221] Registration of the systemd container factory successfully Sep 16 04:27:51.763457 kubelet[2873]: I0916 04:27:51.763442 2873 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 16 04:27:51.764436 kubelet[2873]: I0916 04:27:51.764402 2873 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 16 04:27:51.764492 kubelet[2873]: I0916 04:27:51.764459 2873 reconciler.go:26] "Reconciler: start to sync state" Sep 16 04:27:51.765048 kubelet[2873]: W0916 04:27:51.765019 2873 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Sep 16 04:27:51.765132 kubelet[2873]: E0916 04:27:51.765118 2873 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:27:51.765375 kubelet[2873]: I0916 04:27:51.765361 2873 factory.go:221] Registration of the containerd container factory successfully Sep 16 04:27:51.781076 kubelet[2873]: I0916 04:27:51.780984 2873 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 16 04:27:51.781076 kubelet[2873]: I0916 04:27:51.780997 2873 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 16 04:27:51.781076 kubelet[2873]: I0916 04:27:51.781013 2873 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:27:51.862955 kubelet[2873]: E0916 04:27:51.862922 2873 kubelet_node_status.go:453] "Error getting the current node from 
lister" err="node \"ci-4459.0.0-n-c6becb1dff\" not found" Sep 16 04:27:51.963327 kubelet[2873]: E0916 04:27:51.963282 2873 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4459.0.0-n-c6becb1dff\" not found" Sep 16 04:27:51.963783 kubelet[2873]: E0916 04:27:51.963757 2873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.0.0-n-c6becb1dff?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="400ms" Sep 16 04:27:52.019305 kubelet[2873]: I0916 04:27:52.019253 2873 policy_none.go:49] "None policy: Start" Sep 16 04:27:52.020169 kubelet[2873]: I0916 04:27:52.020108 2873 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 16 04:27:52.020169 kubelet[2873]: I0916 04:27:52.020131 2873 state_mem.go:35] "Initializing new in-memory state store" Sep 16 04:27:52.029269 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 16 04:27:52.043506 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 16 04:27:52.046764 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 16 04:27:52.057092 kubelet[2873]: I0916 04:27:52.057073 2873 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 16 04:27:52.057253 kubelet[2873]: I0916 04:27:52.057233 2873 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 04:27:52.057389 kubelet[2873]: I0916 04:27:52.057251 2873 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 04:27:52.057664 kubelet[2873]: I0916 04:27:52.057650 2873 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 04:27:52.058995 kubelet[2873]: E0916 04:27:52.058981 2873 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.0.0-n-c6becb1dff\" not found" Sep 16 04:27:52.072488 kubelet[2873]: I0916 04:27:52.072466 2873 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 16 04:27:52.073328 kubelet[2873]: I0916 04:27:52.073315 2873 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 16 04:27:52.073584 kubelet[2873]: I0916 04:27:52.073570 2873 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 16 04:27:52.073738 kubelet[2873]: I0916 04:27:52.073665 2873 kubelet.go:2321] "Starting kubelet main sync loop" Sep 16 04:27:52.073738 kubelet[2873]: E0916 04:27:52.073696 2873 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Sep 16 04:27:52.074628 kubelet[2873]: W0916 04:27:52.074602 2873 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Sep 16 04:27:52.074628 kubelet[2873]: E0916 04:27:52.074636 2873 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:27:52.159539 kubelet[2873]: I0916 04:27:52.159472 2873 kubelet_node_status.go:72] "Attempting to register node" node="ci-4459.0.0-n-c6becb1dff" Sep 16 04:27:52.159820 kubelet[2873]: E0916 04:27:52.159793 2873 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4459.0.0-n-c6becb1dff" Sep 16 04:27:52.181931 systemd[1]: Created slice kubepods-burstable-podb860a30c467e26c3a76ebf4fc66a4708.slice - libcontainer container kubepods-burstable-podb860a30c467e26c3a76ebf4fc66a4708.slice. Sep 16 04:27:52.206637 systemd[1]: Created slice kubepods-burstable-pod860aa7892eb4e0ea8ffcb5d247d2acb2.slice - libcontainer container kubepods-burstable-pod860aa7892eb4e0ea8ffcb5d247d2acb2.slice. Sep 16 04:27:52.223509 systemd[1]: Created slice kubepods-burstable-podc520c1aceea8d64911e9e94f3942ada9.slice - libcontainer container kubepods-burstable-podc520c1aceea8d64911e9e94f3942ada9.slice. 
Sep 16 04:27:52.266110 kubelet[2873]: I0916 04:27:52.266068 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/860aa7892eb4e0ea8ffcb5d247d2acb2-k8s-certs\") pod \"kube-apiserver-ci-4459.0.0-n-c6becb1dff\" (UID: \"860aa7892eb4e0ea8ffcb5d247d2acb2\") " pod="kube-system/kube-apiserver-ci-4459.0.0-n-c6becb1dff" Sep 16 04:27:52.266236 kubelet[2873]: I0916 04:27:52.266158 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/860aa7892eb4e0ea8ffcb5d247d2acb2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.0.0-n-c6becb1dff\" (UID: \"860aa7892eb4e0ea8ffcb5d247d2acb2\") " pod="kube-system/kube-apiserver-ci-4459.0.0-n-c6becb1dff" Sep 16 04:27:52.266236 kubelet[2873]: I0916 04:27:52.266177 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c520c1aceea8d64911e9e94f3942ada9-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.0.0-n-c6becb1dff\" (UID: \"c520c1aceea8d64911e9e94f3942ada9\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-c6becb1dff" Sep 16 04:27:52.266236 kubelet[2873]: I0916 04:27:52.266188 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c520c1aceea8d64911e9e94f3942ada9-kubeconfig\") pod \"kube-controller-manager-ci-4459.0.0-n-c6becb1dff\" (UID: \"c520c1aceea8d64911e9e94f3942ada9\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-c6becb1dff" Sep 16 04:27:52.266236 kubelet[2873]: I0916 04:27:52.266198 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c520c1aceea8d64911e9e94f3942ada9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.0.0-n-c6becb1dff\" (UID: \"c520c1aceea8d64911e9e94f3942ada9\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-c6becb1dff" Sep 16 04:27:52.266308 kubelet[2873]: I0916 04:27:52.266243 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b860a30c467e26c3a76ebf4fc66a4708-kubeconfig\") pod \"kube-scheduler-ci-4459.0.0-n-c6becb1dff\" (UID: \"b860a30c467e26c3a76ebf4fc66a4708\") " pod="kube-system/kube-scheduler-ci-4459.0.0-n-c6becb1dff" Sep 16 04:27:52.266308 kubelet[2873]: I0916 04:27:52.266253 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/860aa7892eb4e0ea8ffcb5d247d2acb2-ca-certs\") pod \"kube-apiserver-ci-4459.0.0-n-c6becb1dff\" (UID: \"860aa7892eb4e0ea8ffcb5d247d2acb2\") " pod="kube-system/kube-apiserver-ci-4459.0.0-n-c6becb1dff" Sep 16 04:27:52.266308 kubelet[2873]: I0916 04:27:52.266262 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c520c1aceea8d64911e9e94f3942ada9-ca-certs\") pod \"kube-controller-manager-ci-4459.0.0-n-c6becb1dff\" (UID: \"c520c1aceea8d64911e9e94f3942ada9\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-c6becb1dff" Sep 16 04:27:52.266308 kubelet[2873]: I0916 04:27:52.266285 2873 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c520c1aceea8d64911e9e94f3942ada9-k8s-certs\") pod \"kube-controller-manager-ci-4459.0.0-n-c6becb1dff\" (UID: \"c520c1aceea8d64911e9e94f3942ada9\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-c6becb1dff" Sep 16 04:27:52.361606 kubelet[2873]: I0916 04:27:52.361573 2873 kubelet_node_status.go:72] "Attempting to register node" node="ci-4459.0.0-n-c6becb1dff" Sep 16 04:27:52.361961 kubelet[2873]: E0916 04:27:52.361933 2873 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4459.0.0-n-c6becb1dff" Sep 16 04:27:52.364259 kubelet[2873]: E0916 04:27:52.364223 2873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.0.0-n-c6becb1dff?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="800ms" Sep 16 04:27:52.504850 containerd[1831]: time="2025-09-16T04:27:52.504607361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.0.0-n-c6becb1dff,Uid:b860a30c467e26c3a76ebf4fc66a4708,Namespace:kube-system,Attempt:0,}" Sep 16 04:27:52.522107 containerd[1831]: time="2025-09-16T04:27:52.522071883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.0.0-n-c6becb1dff,Uid:860aa7892eb4e0ea8ffcb5d247d2acb2,Namespace:kube-system,Attempt:0,}" Sep 16 04:27:52.527819 containerd[1831]: time="2025-09-16T04:27:52.527736004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.0.0-n-c6becb1dff,Uid:c520c1aceea8d64911e9e94f3942ada9,Namespace:kube-system,Attempt:0,}" Sep 16 04:27:52.549644 containerd[1831]: time="2025-09-16T04:27:52.549619828Z" level=info msg="connecting to shim cbf46dbbb5183c82f2197bdc52326cf997950d4dca5769ee170193c372d2ba69" address="unix:///run/containerd/s/c2a3ef16ecb36807336b437d4e085f237623526624e25cbf58e3de7b0b411a45" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:27:52.571560 systemd[1]: Started cri-containerd-cbf46dbbb5183c82f2197bdc52326cf997950d4dca5769ee170193c372d2ba69.scope - libcontainer container cbf46dbbb5183c82f2197bdc52326cf997950d4dca5769ee170193c372d2ba69. Sep 16 04:27:52.592435 containerd[1831]: time="2025-09-16T04:27:52.591524104Z" level=info msg="connecting to shim 40d4811044e47a918e981badf3633428aa8f25391f47c4cdd6571bc68c8c7618" address="unix:///run/containerd/s/a3b063bafe4005bca14e007dd4778ca8bab6e9a803af52905bd5ab238fa185b6" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:27:52.612832 containerd[1831]: time="2025-09-16T04:27:52.612748096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.0.0-n-c6becb1dff,Uid:b860a30c467e26c3a76ebf4fc66a4708,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbf46dbbb5183c82f2197bdc52326cf997950d4dca5769ee170193c372d2ba69\"" Sep 16 04:27:52.615156 containerd[1831]: time="2025-09-16T04:27:52.615128045Z" level=info msg="CreateContainer within sandbox \"cbf46dbbb5183c82f2197bdc52326cf997950d4dca5769ee170193c372d2ba69\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 16 04:27:52.615662 systemd[1]: Started cri-containerd-40d4811044e47a918e981badf3633428aa8f25391f47c4cdd6571bc68c8c7618.scope - libcontainer container 40d4811044e47a918e981badf3633428aa8f25391f47c4cdd6571bc68c8c7618. 
Sep 16 04:27:52.628202 containerd[1831]: time="2025-09-16T04:27:52.628170795Z" level=info msg="connecting to shim c1e8c4b9a2cce82b2fc70d571529e2903e56663dd99bd1edaf962a40a48ad03f" address="unix:///run/containerd/s/3819593c6d2b6622a7b8a2bdae86c095e917651fd3d11389f385330e0111b32c" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:27:52.641766 containerd[1831]: time="2025-09-16T04:27:52.641724003Z" level=info msg="Container 69c8c46e71b55e5f346dd26de79be2d80409c6448a974dd6cf1e7d46458baf2a: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:27:52.660661 containerd[1831]: time="2025-09-16T04:27:52.660625401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.0.0-n-c6becb1dff,Uid:c520c1aceea8d64911e9e94f3942ada9,Namespace:kube-system,Attempt:0,} returns sandbox id \"40d4811044e47a918e981badf3633428aa8f25391f47c4cdd6571bc68c8c7618\"" Sep 16 04:27:52.662705 containerd[1831]: time="2025-09-16T04:27:52.662464858Z" level=info msg="CreateContainer within sandbox \"40d4811044e47a918e981badf3633428aa8f25391f47c4cdd6571bc68c8c7618\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 16 04:27:52.662568 systemd[1]: Started cri-containerd-c1e8c4b9a2cce82b2fc70d571529e2903e56663dd99bd1edaf962a40a48ad03f.scope - libcontainer container c1e8c4b9a2cce82b2fc70d571529e2903e56663dd99bd1edaf962a40a48ad03f. Sep 16 04:27:52.682375 containerd[1831]: time="2025-09-16T04:27:52.681633313Z" level=info msg="CreateContainer within sandbox \"cbf46dbbb5183c82f2197bdc52326cf997950d4dca5769ee170193c372d2ba69\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"69c8c46e71b55e5f346dd26de79be2d80409c6448a974dd6cf1e7d46458baf2a\"" Sep 16 04:27:52.683193 containerd[1831]: time="2025-09-16T04:27:52.683174392Z" level=info msg="StartContainer for \"69c8c46e71b55e5f346dd26de79be2d80409c6448a974dd6cf1e7d46458baf2a\"" Sep 16 04:27:52.684467 containerd[1831]: time="2025-09-16T04:27:52.684446157Z" level=info msg="connecting to shim 69c8c46e71b55e5f346dd26de79be2d80409c6448a974dd6cf1e7d46458baf2a" address="unix:///run/containerd/s/c2a3ef16ecb36807336b437d4e085f237623526624e25cbf58e3de7b0b411a45" protocol=ttrpc version=3 Sep 16 04:27:52.687762 containerd[1831]: time="2025-09-16T04:27:52.687571452Z" level=info msg="Container 55dc696a938438544e54ab69cac90c4a6ab0e7a276ffae28a2c6db28230f791a: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:27:52.700914 containerd[1831]: time="2025-09-16T04:27:52.700852442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.0.0-n-c6becb1dff,Uid:860aa7892eb4e0ea8ffcb5d247d2acb2,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1e8c4b9a2cce82b2fc70d571529e2903e56663dd99bd1edaf962a40a48ad03f\"" Sep 16 04:27:52.701577 systemd[1]: Started cri-containerd-69c8c46e71b55e5f346dd26de79be2d80409c6448a974dd6cf1e7d46458baf2a.scope - libcontainer container 69c8c46e71b55e5f346dd26de79be2d80409c6448a974dd6cf1e7d46458baf2a. 
Sep 16 04:27:52.705532 containerd[1831]: time="2025-09-16T04:27:52.705494383Z" level=info msg="CreateContainer within sandbox \"c1e8c4b9a2cce82b2fc70d571529e2903e56663dd99bd1edaf962a40a48ad03f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 16 04:27:52.708362 containerd[1831]: time="2025-09-16T04:27:52.708333331Z" level=info msg="CreateContainer within sandbox \"40d4811044e47a918e981badf3633428aa8f25391f47c4cdd6571bc68c8c7618\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"55dc696a938438544e54ab69cac90c4a6ab0e7a276ffae28a2c6db28230f791a\"" Sep 16 04:27:52.708864 containerd[1831]: time="2025-09-16T04:27:52.708818981Z" level=info msg="StartContainer for \"55dc696a938438544e54ab69cac90c4a6ab0e7a276ffae28a2c6db28230f791a\"" Sep 16 04:27:52.710038 containerd[1831]: time="2025-09-16T04:27:52.709985814Z" level=info msg="connecting to shim 55dc696a938438544e54ab69cac90c4a6ab0e7a276ffae28a2c6db28230f791a" address="unix:///run/containerd/s/a3b063bafe4005bca14e007dd4778ca8bab6e9a803af52905bd5ab238fa185b6" protocol=ttrpc version=3 Sep 16 04:27:52.725563 containerd[1831]: time="2025-09-16T04:27:52.725527957Z" level=info msg="Container 14deffa55e0228f32fda688aec26dfaffd50702d46c03cfe652a980a7e524dbd: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:27:52.726558 systemd[1]: Started cri-containerd-55dc696a938438544e54ab69cac90c4a6ab0e7a276ffae28a2c6db28230f791a.scope - libcontainer container 55dc696a938438544e54ab69cac90c4a6ab0e7a276ffae28a2c6db28230f791a. Sep 16 04:27:52.741035 containerd[1831]: time="2025-09-16T04:27:52.740979656Z" level=info msg="CreateContainer within sandbox \"c1e8c4b9a2cce82b2fc70d571529e2903e56663dd99bd1edaf962a40a48ad03f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"14deffa55e0228f32fda688aec26dfaffd50702d46c03cfe652a980a7e524dbd\"" Sep 16 04:27:52.742090 containerd[1831]: time="2025-09-16T04:27:52.742051374Z" level=info msg="StartContainer for \"14deffa55e0228f32fda688aec26dfaffd50702d46c03cfe652a980a7e524dbd\"" Sep 16 04:27:52.743363 containerd[1831]: time="2025-09-16T04:27:52.743323875Z" level=info msg="connecting to shim 14deffa55e0228f32fda688aec26dfaffd50702d46c03cfe652a980a7e524dbd" address="unix:///run/containerd/s/3819593c6d2b6622a7b8a2bdae86c095e917651fd3d11389f385330e0111b32c" protocol=ttrpc version=3 Sep 16 04:27:52.758062 containerd[1831]: time="2025-09-16T04:27:52.757892984Z" level=info msg="StartContainer for \"69c8c46e71b55e5f346dd26de79be2d80409c6448a974dd6cf1e7d46458baf2a\" returns successfully" Sep 16 04:27:52.765748 systemd[1]: Started cri-containerd-14deffa55e0228f32fda688aec26dfaffd50702d46c03cfe652a980a7e524dbd.scope - libcontainer container 14deffa55e0228f32fda688aec26dfaffd50702d46c03cfe652a980a7e524dbd. 
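The RunPodSandbox, CreateContainer, and StartContainer lines above are the kubelet driving containerd over CRI for the static control-plane pods. A condensed sketch of that gRPC flow; the socket path and the minimal configs are assumptions, and a real request carries much more (mounts, log paths, security context):

```go
// Sketch of the CRI calls behind the sandbox/container lines above.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock", // assumed socket path
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Sandbox metadata mirrors the PodSandboxMetadata printed in the log.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-scheduler-ci-4459.0.0-n-c6becb1dff",
			Uid:       "b860a30c467e26c3a76ebf4fc66a4708",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatalf("RunPodSandbox: %v", err)
	}

	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-scheduler", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-scheduler:v1.31.13"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatalf("CreateContainer: %v", err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatalf("StartContainer: %v", err)
	}
	log.Printf("started container %s in sandbox %s", ctr.ContainerId, sb.PodSandboxId)
}
```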
Sep 16 04:27:52.766281 kubelet[2873]: I0916 04:27:52.766150 2873 kubelet_node_status.go:72] "Attempting to register node" node="ci-4459.0.0-n-c6becb1dff" Sep 16 04:27:52.768935 kubelet[2873]: E0916 04:27:52.768912 2873 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4459.0.0-n-c6becb1dff" Sep 16 04:27:52.788024 containerd[1831]: time="2025-09-16T04:27:52.787139980Z" level=info msg="StartContainer for \"55dc696a938438544e54ab69cac90c4a6ab0e7a276ffae28a2c6db28230f791a\" returns successfully" Sep 16 04:27:52.827784 containerd[1831]: time="2025-09-16T04:27:52.827747051Z" level=info msg="StartContainer for \"14deffa55e0228f32fda688aec26dfaffd50702d46c03cfe652a980a7e524dbd\" returns successfully" Sep 16 04:27:53.571164 kubelet[2873]: I0916 04:27:53.571132 2873 kubelet_node_status.go:72] "Attempting to register node" node="ci-4459.0.0-n-c6becb1dff" Sep 16 04:27:53.895581 kubelet[2873]: E0916 04:27:53.895466 2873 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.0.0-n-c6becb1dff\" not found" node="ci-4459.0.0-n-c6becb1dff" Sep 16 04:27:54.101441 kubelet[2873]: I0916 04:27:54.100980 2873 kubelet_node_status.go:75] "Successfully registered node" node="ci-4459.0.0-n-c6becb1dff" Sep 16 04:27:54.101441 kubelet[2873]: E0916 04:27:54.101007 2873 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4459.0.0-n-c6becb1dff\": node \"ci-4459.0.0-n-c6becb1dff\" not found" Sep 16 04:27:54.591194 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Sep 16 04:27:54.747838 kubelet[2873]: I0916 04:27:54.747800 2873 apiserver.go:52] "Watching apiserver" Sep 16 04:27:54.764923 kubelet[2873]: I0916 04:27:54.764874 2873 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 16 04:27:56.034546 systemd[1]: Reload requested from client PID 3149 ('systemctl') (unit session-9.scope)... Sep 16 04:27:56.034824 systemd[1]: Reloading... Sep 16 04:27:56.120452 zram_generator::config[3199]: No configuration found. Sep 16 04:27:56.277363 systemd[1]: Reloading finished in 242 ms. Sep 16 04:27:56.306332 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:27:56.318264 systemd[1]: kubelet.service: Deactivated successfully. Sep 16 04:27:56.318693 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:27:56.318748 systemd[1]: kubelet.service: Consumed 504ms CPU time, 125.3M memory peak. Sep 16 04:27:56.322569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:27:56.410154 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:27:56.418821 (kubelet)[3260]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 04:27:56.447054 kubelet[3260]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:27:56.447054 kubelet[3260]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 16 04:27:56.447054 kubelet[3260]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:27:56.447054 kubelet[3260]: I0916 04:27:56.447009 3260 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 04:27:56.452460 kubelet[3260]: I0916 04:27:56.451893 3260 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 16 04:27:56.452460 kubelet[3260]: I0916 04:27:56.451919 3260 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 04:27:56.452460 kubelet[3260]: I0916 04:27:56.452080 3260 server.go:934] "Client rotation is on, will bootstrap in background" Sep 16 04:27:56.453301 kubelet[3260]: I0916 04:27:56.453273 3260 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 16 04:27:56.454956 kubelet[3260]: I0916 04:27:56.454928 3260 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 04:27:56.458712 kubelet[3260]: I0916 04:27:56.458649 3260 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 04:27:56.462442 kubelet[3260]: I0916 04:27:56.461652 3260 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 16 04:27:56.462442 kubelet[3260]: I0916 04:27:56.461774 3260 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 16 04:27:56.462442 kubelet[3260]: I0916 04:27:56.461846 3260 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 04:27:56.462442 kubelet[3260]: I0916 04:27:56.461863 3260 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4459.0.0-n-c6becb1dff","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 16 04:27:56.462611 kubelet[3260]: I0916 04:27:56.462036 3260 topology_manager.go:138] "Creating topology manager with none policy" Sep 16 04:27:56.462611 kubelet[3260]: I0916 04:27:56.462043 3260 container_manager_linux.go:300] "Creating device plugin manager" Sep 16 04:27:56.462611 kubelet[3260]: I0916 04:27:56.462073 3260 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:27:56.462611 kubelet[3260]: I0916 04:27:56.462153 3260 kubelet.go:408] "Attempting to sync node with API server" Sep 16 04:27:56.462611 kubelet[3260]: I0916 04:27:56.462161 3260 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 04:27:56.462611 kubelet[3260]: I0916 04:27:56.462176 3260 kubelet.go:314] "Adding apiserver pod source" Sep 16 04:27:56.462611 kubelet[3260]: I0916 04:27:56.462183 3260 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 04:27:56.465058 kubelet[3260]: I0916 04:27:56.465025 3260 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 04:27:56.465357 kubelet[3260]: I0916 04:27:56.465336 3260 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 16 04:27:56.466080 kubelet[3260]: I0916 04:27:56.466043 3260 server.go:1274] "Started kubelet" Sep 16 04:27:56.467214 kubelet[3260]: I0916 04:27:56.467183 3260 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 16 04:27:56.467411 kubelet[3260]: I0916 04:27:56.467260 3260 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 04:27:56.467723 kubelet[3260]: I0916 04:27:56.467689 3260 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 04:27:56.468132 kubelet[3260]: I0916 04:27:56.468112 3260 server.go:449] "Adding debug handlers to kubelet server" Sep 16 04:27:56.472112 kubelet[3260]: I0916 04:27:56.472088 3260 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 16 04:27:56.476250 kubelet[3260]: I0916 04:27:56.476226 3260 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 04:27:56.477087 kubelet[3260]: I0916 04:27:56.477070 3260 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 16 04:27:56.477331 kubelet[3260]: E0916 04:27:56.477307 3260 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4459.0.0-n-c6becb1dff\" not found" Sep 16 04:27:56.478787 kubelet[3260]: I0916 04:27:56.478766 3260 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 16 04:27:56.479020 kubelet[3260]: I0916 04:27:56.478963 3260 reconciler.go:26] "Reconciler: start to sync state" Sep 16 04:27:56.480379 kubelet[3260]: I0916 04:27:56.480347 3260 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 16 04:27:56.482552 kubelet[3260]: I0916 04:27:56.482530 3260 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 16 04:27:56.482643 kubelet[3260]: I0916 04:27:56.482636 3260 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 16 04:27:56.482707 kubelet[3260]: I0916 04:27:56.482700 3260 kubelet.go:2321] "Starting kubelet main sync loop" Sep 16 04:27:56.482809 kubelet[3260]: E0916 04:27:56.482780 3260 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 16 04:27:56.485277 kubelet[3260]: I0916 04:27:56.485239 3260 factory.go:221] Registration of the systemd container factory successfully Sep 16 04:27:56.485360 kubelet[3260]: I0916 04:27:56.485341 3260 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 16 04:27:56.489721 kubelet[3260]: I0916 04:27:56.489695 3260 factory.go:221] Registration of the containerd container factory successfully Sep 16 04:27:56.528117 kubelet[3260]: I0916 04:27:56.528087 3260 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 16 04:27:56.528279 kubelet[3260]: I0916 04:27:56.528266 3260 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 16 04:27:56.528335 kubelet[3260]: I0916 04:27:56.528327 3260 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:27:56.528529 kubelet[3260]: I0916 04:27:56.528512 3260 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 16 04:27:56.528614 kubelet[3260]: I0916 04:27:56.528590 3260 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 16 04:27:56.528659 kubelet[3260]: I0916 04:27:56.528652 3260 policy_none.go:49] "None policy: Start" Sep 16 04:27:56.529183 kubelet[3260]: I0916 04:27:56.529167 3260 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 16 04:27:56.529294 kubelet[3260]: I0916 04:27:56.529284 3260 state_mem.go:35] "Initializing new in-memory state store" Sep 16 04:27:56.529501 kubelet[3260]: I0916 04:27:56.529485 3260 state_mem.go:75] "Updated machine memory state" Sep 16 04:27:56.534170 kubelet[3260]: I0916 04:27:56.534144 3260 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 16 04:27:56.534313 kubelet[3260]: I0916 04:27:56.534296 3260 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 
04:27:56.534333 kubelet[3260]: I0916 04:27:56.534311 3260 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 04:27:56.535571 kubelet[3260]: I0916 04:27:56.534824 3260 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 04:27:56.594041 kubelet[3260]: W0916 04:27:56.594000 3260 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 16 04:27:56.598521 kubelet[3260]: W0916 04:27:56.598489 3260 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 16 04:27:56.599272 kubelet[3260]: W0916 04:27:56.599253 3260 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 16 04:27:56.638803 kubelet[3260]: I0916 04:27:56.638607 3260 kubelet_node_status.go:72] "Attempting to register node" node="ci-4459.0.0-n-c6becb1dff" Sep 16 04:27:56.650475 kubelet[3260]: I0916 04:27:56.650229 3260 kubelet_node_status.go:111] "Node was previously registered" node="ci-4459.0.0-n-c6becb1dff" Sep 16 04:27:56.650475 kubelet[3260]: I0916 04:27:56.650299 3260 kubelet_node_status.go:75] "Successfully registered node" node="ci-4459.0.0-n-c6becb1dff" Sep 16 04:27:56.680111 kubelet[3260]: I0916 04:27:56.680076 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/860aa7892eb4e0ea8ffcb5d247d2acb2-ca-certs\") pod \"kube-apiserver-ci-4459.0.0-n-c6becb1dff\" (UID: \"860aa7892eb4e0ea8ffcb5d247d2acb2\") " pod="kube-system/kube-apiserver-ci-4459.0.0-n-c6becb1dff" Sep 16 04:27:56.680111 kubelet[3260]: I0916 04:27:56.680107 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c520c1aceea8d64911e9e94f3942ada9-ca-certs\") pod \"kube-controller-manager-ci-4459.0.0-n-c6becb1dff\" (UID: \"c520c1aceea8d64911e9e94f3942ada9\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-c6becb1dff" Sep 16 04:27:56.680111 kubelet[3260]: I0916 04:27:56.680121 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c520c1aceea8d64911e9e94f3942ada9-k8s-certs\") pod \"kube-controller-manager-ci-4459.0.0-n-c6becb1dff\" (UID: \"c520c1aceea8d64911e9e94f3942ada9\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-c6becb1dff" Sep 16 04:27:56.680280 kubelet[3260]: I0916 04:27:56.680130 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c520c1aceea8d64911e9e94f3942ada9-kubeconfig\") pod \"kube-controller-manager-ci-4459.0.0-n-c6becb1dff\" (UID: \"c520c1aceea8d64911e9e94f3942ada9\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-c6becb1dff" Sep 16 04:27:56.680280 kubelet[3260]: I0916 04:27:56.680141 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c520c1aceea8d64911e9e94f3942ada9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.0.0-n-c6becb1dff\" (UID: \"c520c1aceea8d64911e9e94f3942ada9\") " 
pod="kube-system/kube-controller-manager-ci-4459.0.0-n-c6becb1dff" Sep 16 04:27:56.680280 kubelet[3260]: I0916 04:27:56.680165 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b860a30c467e26c3a76ebf4fc66a4708-kubeconfig\") pod \"kube-scheduler-ci-4459.0.0-n-c6becb1dff\" (UID: \"b860a30c467e26c3a76ebf4fc66a4708\") " pod="kube-system/kube-scheduler-ci-4459.0.0-n-c6becb1dff" Sep 16 04:27:56.680280 kubelet[3260]: I0916 04:27:56.680176 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/860aa7892eb4e0ea8ffcb5d247d2acb2-k8s-certs\") pod \"kube-apiserver-ci-4459.0.0-n-c6becb1dff\" (UID: \"860aa7892eb4e0ea8ffcb5d247d2acb2\") " pod="kube-system/kube-apiserver-ci-4459.0.0-n-c6becb1dff" Sep 16 04:27:56.680280 kubelet[3260]: I0916 04:27:56.680188 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/860aa7892eb4e0ea8ffcb5d247d2acb2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.0.0-n-c6becb1dff\" (UID: \"860aa7892eb4e0ea8ffcb5d247d2acb2\") " pod="kube-system/kube-apiserver-ci-4459.0.0-n-c6becb1dff" Sep 16 04:27:56.680359 kubelet[3260]: I0916 04:27:56.680196 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c520c1aceea8d64911e9e94f3942ada9-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.0.0-n-c6becb1dff\" (UID: \"c520c1aceea8d64911e9e94f3942ada9\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-c6becb1dff" Sep 16 04:27:57.062578 sudo[3291]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 16 04:27:57.062784 sudo[3291]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 16 04:27:57.291149 sudo[3291]: pam_unix(sudo:session): session closed for user root Sep 16 04:27:57.465501 kubelet[3260]: I0916 04:27:57.465454 3260 apiserver.go:52] "Watching apiserver" Sep 16 04:27:57.479789 kubelet[3260]: I0916 04:27:57.479756 3260 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 16 04:27:57.528025 kubelet[3260]: W0916 04:27:57.527990 3260 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 16 04:27:57.528203 kubelet[3260]: E0916 04:27:57.528042 3260 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4459.0.0-n-c6becb1dff\" already exists" pod="kube-system/kube-apiserver-ci-4459.0.0-n-c6becb1dff" Sep 16 04:27:57.539701 kubelet[3260]: I0916 04:27:57.539357 3260 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.0.0-n-c6becb1dff" podStartSLOduration=1.539345935 podStartE2EDuration="1.539345935s" podCreationTimestamp="2025-09-16 04:27:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:27:57.538515865 +0000 UTC m=+1.117182452" watchObservedRunningTime="2025-09-16 04:27:57.539345935 +0000 UTC m=+1.118012506" Sep 16 04:27:57.550471 kubelet[3260]: I0916 04:27:57.550415 3260 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ci-4459.0.0-n-c6becb1dff" podStartSLOduration=1.550402423 podStartE2EDuration="1.550402423s" podCreationTimestamp="2025-09-16 04:27:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:27:57.549512599 +0000 UTC m=+1.128179170" watchObservedRunningTime="2025-09-16 04:27:57.550402423 +0000 UTC m=+1.129068986" Sep 16 04:27:58.274785 sudo[2296]: pam_unix(sudo:session): session closed for user root Sep 16 04:27:58.289508 update_engine[1815]: I20250916 04:27:58.289449 1815 update_attempter.cc:509] Updating boot flags... Sep 16 04:27:58.349041 sshd[2295]: Connection closed by 10.200.16.10 port 46314 Sep 16 04:27:58.349595 sshd-session[2292]: pam_unix(sshd:session): session closed for user core Sep 16 04:27:58.353313 systemd[1]: sshd@6-10.200.20.14:22-10.200.16.10:46314.service: Deactivated successfully. Sep 16 04:27:58.355940 systemd[1]: session-9.scope: Deactivated successfully. Sep 16 04:27:58.356345 systemd[1]: session-9.scope: Consumed 2.863s CPU time, 258.5M memory peak. Sep 16 04:27:58.358609 systemd-logind[1808]: Session 9 logged out. Waiting for processes to exit. Sep 16 04:27:58.359963 systemd-logind[1808]: Removed session 9. Sep 16 04:28:02.418321 kubelet[3260]: I0916 04:28:02.418280 3260 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 16 04:28:02.419286 containerd[1831]: time="2025-09-16T04:28:02.419056491Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 16 04:28:02.420029 kubelet[3260]: I0916 04:28:02.419648 3260 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 16 04:28:03.448168 kubelet[3260]: I0916 04:28:03.447441 3260 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.0.0-n-c6becb1dff" podStartSLOduration=7.447413241 podStartE2EDuration="7.447413241s" podCreationTimestamp="2025-09-16 04:27:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:27:57.560551158 +0000 UTC m=+1.139217721" watchObservedRunningTime="2025-09-16 04:28:03.447413241 +0000 UTC m=+7.026079812" Sep 16 04:28:03.460208 systemd[1]: Created slice kubepods-burstable-pod1b1440ff_1094_4cdf_a528_00a9e9bb43d0.slice - libcontainer container kubepods-burstable-pod1b1440ff_1094_4cdf_a528_00a9e9bb43d0.slice. Sep 16 04:28:03.467262 systemd[1]: Created slice kubepods-besteffort-pod4072b933_49b6_413d_a141_c3aa8a31265d.slice - libcontainer container kubepods-besteffort-pod4072b933_49b6_413d_a141_c3aa8a31265d.slice. 
Sep 16 04:28:03.519319 kubelet[3260]: I0916 04:28:03.519237 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-bpf-maps\") pod \"cilium-rbqcl\" (UID: \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\") " pod="kube-system/cilium-rbqcl" Sep 16 04:28:03.519319 kubelet[3260]: I0916 04:28:03.519286 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-cilium-config-path\") pod \"cilium-rbqcl\" (UID: \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\") " pod="kube-system/cilium-rbqcl" Sep 16 04:28:03.519319 kubelet[3260]: I0916 04:28:03.519301 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdr6z\" (UniqueName: \"kubernetes.io/projected/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-kube-api-access-wdr6z\") pod \"cilium-rbqcl\" (UID: \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\") " pod="kube-system/cilium-rbqcl" Sep 16 04:28:03.519319 kubelet[3260]: I0916 04:28:03.519321 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5hgq\" (UniqueName: \"kubernetes.io/projected/4072b933-49b6-413d-a141-c3aa8a31265d-kube-api-access-x5hgq\") pod \"kube-proxy-r446x\" (UID: \"4072b933-49b6-413d-a141-c3aa8a31265d\") " pod="kube-system/kube-proxy-r446x" Sep 16 04:28:03.519814 kubelet[3260]: I0916 04:28:03.519341 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-hostproc\") pod \"cilium-rbqcl\" (UID: \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\") " pod="kube-system/cilium-rbqcl" Sep 16 04:28:03.519814 kubelet[3260]: I0916 04:28:03.519353 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-etc-cni-netd\") pod \"cilium-rbqcl\" (UID: \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\") " pod="kube-system/cilium-rbqcl" Sep 16 04:28:03.519814 kubelet[3260]: I0916 04:28:03.519365 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-cilium-run\") pod \"cilium-rbqcl\" (UID: \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\") " pod="kube-system/cilium-rbqcl" Sep 16 04:28:03.519814 kubelet[3260]: I0916 04:28:03.519376 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-xtables-lock\") pod \"cilium-rbqcl\" (UID: \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\") " pod="kube-system/cilium-rbqcl" Sep 16 04:28:03.519814 kubelet[3260]: I0916 04:28:03.519385 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-host-proc-sys-kernel\") pod \"cilium-rbqcl\" (UID: \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\") " pod="kube-system/cilium-rbqcl" Sep 16 04:28:03.519814 kubelet[3260]: I0916 04:28:03.519398 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4072b933-49b6-413d-a141-c3aa8a31265d-lib-modules\") pod \"kube-proxy-r446x\" (UID: \"4072b933-49b6-413d-a141-c3aa8a31265d\") " pod="kube-system/kube-proxy-r446x" Sep 16 04:28:03.520131 kubelet[3260]: I0916 04:28:03.519416 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-clustermesh-secrets\") pod \"cilium-rbqcl\" (UID: \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\") " pod="kube-system/cilium-rbqcl" Sep 16 04:28:03.520131 kubelet[3260]: I0916 04:28:03.519434 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-host-proc-sys-net\") pod \"cilium-rbqcl\" (UID: \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\") " pod="kube-system/cilium-rbqcl" Sep 16 04:28:03.520131 kubelet[3260]: I0916 04:28:03.519445 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4072b933-49b6-413d-a141-c3aa8a31265d-xtables-lock\") pod \"kube-proxy-r446x\" (UID: \"4072b933-49b6-413d-a141-c3aa8a31265d\") " pod="kube-system/kube-proxy-r446x" Sep 16 04:28:03.520131 kubelet[3260]: I0916 04:28:03.519456 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-cilium-cgroup\") pod \"cilium-rbqcl\" (UID: \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\") " pod="kube-system/cilium-rbqcl" Sep 16 04:28:03.520131 kubelet[3260]: I0916 04:28:03.519478 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-lib-modules\") pod \"cilium-rbqcl\" (UID: \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\") " pod="kube-system/cilium-rbqcl" Sep 16 04:28:03.520131 kubelet[3260]: I0916 04:28:03.519494 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-cni-path\") pod \"cilium-rbqcl\" (UID: \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\") " pod="kube-system/cilium-rbqcl" Sep 16 04:28:03.520601 kubelet[3260]: I0916 04:28:03.519505 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-hubble-tls\") pod \"cilium-rbqcl\" (UID: \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\") " pod="kube-system/cilium-rbqcl" Sep 16 04:28:03.520601 kubelet[3260]: I0916 04:28:03.519519 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4072b933-49b6-413d-a141-c3aa8a31265d-kube-proxy\") pod \"kube-proxy-r446x\" (UID: \"4072b933-49b6-413d-a141-c3aa8a31265d\") " pod="kube-system/kube-proxy-r446x" Sep 16 04:28:03.568373 systemd[1]: Created slice kubepods-besteffort-pod0fd8d47c_c9af_4f97_a721_236ffcef2728.slice - libcontainer container kubepods-besteffort-pod0fd8d47c_c9af_4f97_a721_236ffcef2728.slice. 
Sep 16 04:28:03.620207 kubelet[3260]: I0916 04:28:03.620135 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwzsn\" (UniqueName: \"kubernetes.io/projected/0fd8d47c-c9af-4f97-a721-236ffcef2728-kube-api-access-dwzsn\") pod \"cilium-operator-5d85765b45-hg2j2\" (UID: \"0fd8d47c-c9af-4f97-a721-236ffcef2728\") " pod="kube-system/cilium-operator-5d85765b45-hg2j2" Sep 16 04:28:03.620605 kubelet[3260]: I0916 04:28:03.620234 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0fd8d47c-c9af-4f97-a721-236ffcef2728-cilium-config-path\") pod \"cilium-operator-5d85765b45-hg2j2\" (UID: \"0fd8d47c-c9af-4f97-a721-236ffcef2728\") " pod="kube-system/cilium-operator-5d85765b45-hg2j2" Sep 16 04:28:03.767027 containerd[1831]: time="2025-09-16T04:28:03.766903321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rbqcl,Uid:1b1440ff-1094-4cdf-a528-00a9e9bb43d0,Namespace:kube-system,Attempt:0,}" Sep 16 04:28:03.776661 containerd[1831]: time="2025-09-16T04:28:03.776516217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r446x,Uid:4072b933-49b6-413d-a141-c3aa8a31265d,Namespace:kube-system,Attempt:0,}" Sep 16 04:28:03.804818 containerd[1831]: time="2025-09-16T04:28:03.804734868Z" level=info msg="connecting to shim 5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a" address="unix:///run/containerd/s/fca684b09017507c3d08684a243ded66e32795e6aff88307908c3920c08ec21b" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:28:03.817832 containerd[1831]: time="2025-09-16T04:28:03.817517512Z" level=info msg="connecting to shim 24e55f6c524d18286fb9127db07036c11b9d84a1f07a2a9a3ebca7ffa7bf2b41" address="unix:///run/containerd/s/4658ab83154e5a73dfd55dc7b69b4d0fafb06dfca5feee6b547c68820550437c" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:28:03.826780 systemd[1]: Started cri-containerd-5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a.scope - libcontainer container 5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a. Sep 16 04:28:03.844637 systemd[1]: Started cri-containerd-24e55f6c524d18286fb9127db07036c11b9d84a1f07a2a9a3ebca7ffa7bf2b41.scope - libcontainer container 24e55f6c524d18286fb9127db07036c11b9d84a1f07a2a9a3ebca7ffa7bf2b41. 
Sep 16 04:28:03.869126 containerd[1831]: time="2025-09-16T04:28:03.868838572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rbqcl,Uid:1b1440ff-1094-4cdf-a528-00a9e9bb43d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a\"" Sep 16 04:28:03.871057 containerd[1831]: time="2025-09-16T04:28:03.870597739Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 16 04:28:03.872450 containerd[1831]: time="2025-09-16T04:28:03.872404789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-hg2j2,Uid:0fd8d47c-c9af-4f97-a721-236ffcef2728,Namespace:kube-system,Attempt:0,}" Sep 16 04:28:03.888741 containerd[1831]: time="2025-09-16T04:28:03.888698255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r446x,Uid:4072b933-49b6-413d-a141-c3aa8a31265d,Namespace:kube-system,Attempt:0,} returns sandbox id \"24e55f6c524d18286fb9127db07036c11b9d84a1f07a2a9a3ebca7ffa7bf2b41\"" Sep 16 04:28:03.891637 containerd[1831]: time="2025-09-16T04:28:03.891601139Z" level=info msg="CreateContainer within sandbox \"24e55f6c524d18286fb9127db07036c11b9d84a1f07a2a9a3ebca7ffa7bf2b41\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 16 04:28:03.916856 containerd[1831]: time="2025-09-16T04:28:03.916801231Z" level=info msg="Container 6a1ad0ebe24f369c6cc0631beaeb45d611761dfcc1385a2e1fb6a6def629ecc1: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:28:03.934209 containerd[1831]: time="2025-09-16T04:28:03.934168707Z" level=info msg="connecting to shim 55ccb8da4a6de2cec88cb10f77af4b4978489fc9e38ec7578dd10643fb3f9a91" address="unix:///run/containerd/s/fc3cff557618942ee51fddb6afabd261423d3cce126dd35feb9c804accb3449a" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:28:03.934514 containerd[1831]: time="2025-09-16T04:28:03.934451388Z" level=info msg="CreateContainer within sandbox \"24e55f6c524d18286fb9127db07036c11b9d84a1f07a2a9a3ebca7ffa7bf2b41\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6a1ad0ebe24f369c6cc0631beaeb45d611761dfcc1385a2e1fb6a6def629ecc1\"" Sep 16 04:28:03.935891 containerd[1831]: time="2025-09-16T04:28:03.935858712Z" level=info msg="StartContainer for \"6a1ad0ebe24f369c6cc0631beaeb45d611761dfcc1385a2e1fb6a6def629ecc1\"" Sep 16 04:28:03.937474 containerd[1831]: time="2025-09-16T04:28:03.937412137Z" level=info msg="connecting to shim 6a1ad0ebe24f369c6cc0631beaeb45d611761dfcc1385a2e1fb6a6def629ecc1" address="unix:///run/containerd/s/4658ab83154e5a73dfd55dc7b69b4d0fafb06dfca5feee6b547c68820550437c" protocol=ttrpc version=3 Sep 16 04:28:03.956578 systemd[1]: Started cri-containerd-55ccb8da4a6de2cec88cb10f77af4b4978489fc9e38ec7578dd10643fb3f9a91.scope - libcontainer container 55ccb8da4a6de2cec88cb10f77af4b4978489fc9e38ec7578dd10643fb3f9a91. Sep 16 04:28:03.959037 systemd[1]: Started cri-containerd-6a1ad0ebe24f369c6cc0631beaeb45d611761dfcc1385a2e1fb6a6def629ecc1.scope - libcontainer container 6a1ad0ebe24f369c6cc0631beaeb45d611761dfcc1385a2e1fb6a6def629ecc1. 
Sep 16 04:28:04.004018 containerd[1831]: time="2025-09-16T04:28:04.003896389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-hg2j2,Uid:0fd8d47c-c9af-4f97-a721-236ffcef2728,Namespace:kube-system,Attempt:0,} returns sandbox id \"55ccb8da4a6de2cec88cb10f77af4b4978489fc9e38ec7578dd10643fb3f9a91\"" Sep 16 04:28:04.007444 containerd[1831]: time="2025-09-16T04:28:04.006795672Z" level=info msg="StartContainer for \"6a1ad0ebe24f369c6cc0631beaeb45d611761dfcc1385a2e1fb6a6def629ecc1\" returns successfully" Sep 16 04:28:04.545984 kubelet[3260]: I0916 04:28:04.545925 3260 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r446x" podStartSLOduration=1.545910311 podStartE2EDuration="1.545910311s" podCreationTimestamp="2025-09-16 04:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:28:04.545067468 +0000 UTC m=+8.123734039" watchObservedRunningTime="2025-09-16 04:28:04.545910311 +0000 UTC m=+8.124576874" Sep 16 04:28:12.890940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount477176728.mount: Deactivated successfully. Sep 16 04:28:14.675456 containerd[1831]: time="2025-09-16T04:28:14.675022335Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:28:14.677231 containerd[1831]: time="2025-09-16T04:28:14.677201798Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 16 04:28:14.680060 containerd[1831]: time="2025-09-16T04:28:14.679995793Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:28:14.681512 containerd[1831]: time="2025-09-16T04:28:14.681486194Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.81085199s" Sep 16 04:28:14.681681 containerd[1831]: time="2025-09-16T04:28:14.681597845Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 16 04:28:14.683449 containerd[1831]: time="2025-09-16T04:28:14.683101478Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 16 04:28:14.684660 containerd[1831]: time="2025-09-16T04:28:14.684635336Z" level=info msg="CreateContainer within sandbox \"5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 16 04:28:14.738627 containerd[1831]: time="2025-09-16T04:28:14.738570758Z" level=info msg="Container 1a2aa765453d9bca699e40e790bab835edf18de500940e907cf47513c55aa890: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:28:15.383624 containerd[1831]: time="2025-09-16T04:28:15.383546884Z" 
level=info msg="CreateContainer within sandbox \"5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1a2aa765453d9bca699e40e790bab835edf18de500940e907cf47513c55aa890\"" Sep 16 04:28:15.384363 containerd[1831]: time="2025-09-16T04:28:15.384213562Z" level=info msg="StartContainer for \"1a2aa765453d9bca699e40e790bab835edf18de500940e907cf47513c55aa890\"" Sep 16 04:28:15.385263 containerd[1831]: time="2025-09-16T04:28:15.385204914Z" level=info msg="connecting to shim 1a2aa765453d9bca699e40e790bab835edf18de500940e907cf47513c55aa890" address="unix:///run/containerd/s/fca684b09017507c3d08684a243ded66e32795e6aff88307908c3920c08ec21b" protocol=ttrpc version=3 Sep 16 04:28:15.402548 systemd[1]: Started cri-containerd-1a2aa765453d9bca699e40e790bab835edf18de500940e907cf47513c55aa890.scope - libcontainer container 1a2aa765453d9bca699e40e790bab835edf18de500940e907cf47513c55aa890. Sep 16 04:28:15.428646 containerd[1831]: time="2025-09-16T04:28:15.428599992Z" level=info msg="StartContainer for \"1a2aa765453d9bca699e40e790bab835edf18de500940e907cf47513c55aa890\" returns successfully" Sep 16 04:28:15.435920 systemd[1]: cri-containerd-1a2aa765453d9bca699e40e790bab835edf18de500940e907cf47513c55aa890.scope: Deactivated successfully. Sep 16 04:28:15.438927 containerd[1831]: time="2025-09-16T04:28:15.438847750Z" level=info msg="received exit event container_id:\"1a2aa765453d9bca699e40e790bab835edf18de500940e907cf47513c55aa890\" id:\"1a2aa765453d9bca699e40e790bab835edf18de500940e907cf47513c55aa890\" pid:3730 exited_at:{seconds:1757996895 nanos:438025371}" Sep 16 04:28:15.439069 containerd[1831]: time="2025-09-16T04:28:15.439046772Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1a2aa765453d9bca699e40e790bab835edf18de500940e907cf47513c55aa890\" id:\"1a2aa765453d9bca699e40e790bab835edf18de500940e907cf47513c55aa890\" pid:3730 exited_at:{seconds:1757996895 nanos:438025371}" Sep 16 04:28:15.455487 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a2aa765453d9bca699e40e790bab835edf18de500940e907cf47513c55aa890-rootfs.mount: Deactivated successfully. 
Sep 16 04:28:16.562833 containerd[1831]: time="2025-09-16T04:28:16.562788606Z" level=info msg="CreateContainer within sandbox \"5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 16 04:28:16.585359 containerd[1831]: time="2025-09-16T04:28:16.582889721Z" level=info msg="Container 037166dfae319034912e574d147b2af8db2e7de6d8f92d7ef1bcc5dddb3b3db1: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:28:16.597935 containerd[1831]: time="2025-09-16T04:28:16.597894650Z" level=info msg="CreateContainer within sandbox \"5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"037166dfae319034912e574d147b2af8db2e7de6d8f92d7ef1bcc5dddb3b3db1\"" Sep 16 04:28:16.599957 containerd[1831]: time="2025-09-16T04:28:16.599823100Z" level=info msg="StartContainer for \"037166dfae319034912e574d147b2af8db2e7de6d8f92d7ef1bcc5dddb3b3db1\"" Sep 16 04:28:16.601575 containerd[1831]: time="2025-09-16T04:28:16.601539056Z" level=info msg="connecting to shim 037166dfae319034912e574d147b2af8db2e7de6d8f92d7ef1bcc5dddb3b3db1" address="unix:///run/containerd/s/fca684b09017507c3d08684a243ded66e32795e6aff88307908c3920c08ec21b" protocol=ttrpc version=3 Sep 16 04:28:16.619542 systemd[1]: Started cri-containerd-037166dfae319034912e574d147b2af8db2e7de6d8f92d7ef1bcc5dddb3b3db1.scope - libcontainer container 037166dfae319034912e574d147b2af8db2e7de6d8f92d7ef1bcc5dddb3b3db1. Sep 16 04:28:16.644485 containerd[1831]: time="2025-09-16T04:28:16.644350075Z" level=info msg="StartContainer for \"037166dfae319034912e574d147b2af8db2e7de6d8f92d7ef1bcc5dddb3b3db1\" returns successfully" Sep 16 04:28:16.650860 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 16 04:28:16.651390 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:28:16.652362 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 16 04:28:16.654362 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 16 04:28:16.656332 containerd[1831]: time="2025-09-16T04:28:16.654483962Z" level=info msg="received exit event container_id:\"037166dfae319034912e574d147b2af8db2e7de6d8f92d7ef1bcc5dddb3b3db1\" id:\"037166dfae319034912e574d147b2af8db2e7de6d8f92d7ef1bcc5dddb3b3db1\" pid:3776 exited_at:{seconds:1757996896 nanos:653818951}" Sep 16 04:28:16.655790 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 16 04:28:16.656066 systemd[1]: cri-containerd-037166dfae319034912e574d147b2af8db2e7de6d8f92d7ef1bcc5dddb3b3db1.scope: Deactivated successfully. Sep 16 04:28:16.657517 containerd[1831]: time="2025-09-16T04:28:16.657044567Z" level=info msg="TaskExit event in podsandbox handler container_id:\"037166dfae319034912e574d147b2af8db2e7de6d8f92d7ef1bcc5dddb3b3db1\" id:\"037166dfae319034912e574d147b2af8db2e7de6d8f92d7ef1bcc5dddb3b3db1\" pid:3776 exited_at:{seconds:1757996896 nanos:653818951}" Sep 16 04:28:16.673487 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 16 04:28:17.570601 containerd[1831]: time="2025-09-16T04:28:17.570562289Z" level=info msg="CreateContainer within sandbox \"5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 16 04:28:17.580829 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-037166dfae319034912e574d147b2af8db2e7de6d8f92d7ef1bcc5dddb3b3db1-rootfs.mount: Deactivated successfully. Sep 16 04:28:17.603029 containerd[1831]: time="2025-09-16T04:28:17.602698924Z" level=info msg="Container 0f3585fbd06d2a2b14e694c73aef5bb147cdce251c121f5fc7384817f9bdbec0: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:28:17.618970 containerd[1831]: time="2025-09-16T04:28:17.618846816Z" level=info msg="CreateContainer within sandbox \"5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0f3585fbd06d2a2b14e694c73aef5bb147cdce251c121f5fc7384817f9bdbec0\"" Sep 16 04:28:17.620139 containerd[1831]: time="2025-09-16T04:28:17.620070365Z" level=info msg="StartContainer for \"0f3585fbd06d2a2b14e694c73aef5bb147cdce251c121f5fc7384817f9bdbec0\"" Sep 16 04:28:17.622704 containerd[1831]: time="2025-09-16T04:28:17.622676579Z" level=info msg="connecting to shim 0f3585fbd06d2a2b14e694c73aef5bb147cdce251c121f5fc7384817f9bdbec0" address="unix:///run/containerd/s/fca684b09017507c3d08684a243ded66e32795e6aff88307908c3920c08ec21b" protocol=ttrpc version=3 Sep 16 04:28:17.644624 systemd[1]: Started cri-containerd-0f3585fbd06d2a2b14e694c73aef5bb147cdce251c121f5fc7384817f9bdbec0.scope - libcontainer container 0f3585fbd06d2a2b14e694c73aef5bb147cdce251c121f5fc7384817f9bdbec0. Sep 16 04:28:17.681387 systemd[1]: cri-containerd-0f3585fbd06d2a2b14e694c73aef5bb147cdce251c121f5fc7384817f9bdbec0.scope: Deactivated successfully. Sep 16 04:28:17.683095 containerd[1831]: time="2025-09-16T04:28:17.682939977Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f3585fbd06d2a2b14e694c73aef5bb147cdce251c121f5fc7384817f9bdbec0\" id:\"0f3585fbd06d2a2b14e694c73aef5bb147cdce251c121f5fc7384817f9bdbec0\" pid:3837 exited_at:{seconds:1757996897 nanos:681742645}" Sep 16 04:28:17.683736 containerd[1831]: time="2025-09-16T04:28:17.683694008Z" level=info msg="received exit event container_id:\"0f3585fbd06d2a2b14e694c73aef5bb147cdce251c121f5fc7384817f9bdbec0\" id:\"0f3585fbd06d2a2b14e694c73aef5bb147cdce251c121f5fc7384817f9bdbec0\" pid:3837 exited_at:{seconds:1757996897 nanos:681742645}" Sep 16 04:28:17.685268 containerd[1831]: time="2025-09-16T04:28:17.685190548Z" level=info msg="StartContainer for \"0f3585fbd06d2a2b14e694c73aef5bb147cdce251c121f5fc7384817f9bdbec0\" returns successfully" Sep 16 04:28:17.712389 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f3585fbd06d2a2b14e694c73aef5bb147cdce251c121f5fc7384817f9bdbec0-rootfs.mount: Deactivated successfully. 
Sep 16 04:28:18.092944 containerd[1831]: time="2025-09-16T04:28:18.092445754Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:28:18.095330 containerd[1831]: time="2025-09-16T04:28:18.095291791Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 16 04:28:18.098185 containerd[1831]: time="2025-09-16T04:28:18.098156837Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:28:18.099366 containerd[1831]: time="2025-09-16T04:28:18.099339272Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.416209209s" Sep 16 04:28:18.099398 containerd[1831]: time="2025-09-16T04:28:18.099371385Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 16 04:28:18.101694 containerd[1831]: time="2025-09-16T04:28:18.101665222Z" level=info msg="CreateContainer within sandbox \"55ccb8da4a6de2cec88cb10f77af4b4978489fc9e38ec7578dd10643fb3f9a91\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 16 04:28:18.115904 containerd[1831]: time="2025-09-16T04:28:18.115870744Z" level=info msg="Container e047381fc8dee175a6a17808509566a39b3248daba05834892281da70cd4114c: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:28:18.135117 containerd[1831]: time="2025-09-16T04:28:18.135080895Z" level=info msg="CreateContainer within sandbox \"55ccb8da4a6de2cec88cb10f77af4b4978489fc9e38ec7578dd10643fb3f9a91\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e047381fc8dee175a6a17808509566a39b3248daba05834892281da70cd4114c\"" Sep 16 04:28:18.135719 containerd[1831]: time="2025-09-16T04:28:18.135693218Z" level=info msg="StartContainer for \"e047381fc8dee175a6a17808509566a39b3248daba05834892281da70cd4114c\"" Sep 16 04:28:18.136699 containerd[1831]: time="2025-09-16T04:28:18.136674295Z" level=info msg="connecting to shim e047381fc8dee175a6a17808509566a39b3248daba05834892281da70cd4114c" address="unix:///run/containerd/s/fc3cff557618942ee51fddb6afabd261423d3cce126dd35feb9c804accb3449a" protocol=ttrpc version=3 Sep 16 04:28:18.150695 systemd[1]: Started cri-containerd-e047381fc8dee175a6a17808509566a39b3248daba05834892281da70cd4114c.scope - libcontainer container e047381fc8dee175a6a17808509566a39b3248daba05834892281da70cd4114c. 
Sep 16 04:28:18.175283 containerd[1831]: time="2025-09-16T04:28:18.175210946Z" level=info msg="StartContainer for \"e047381fc8dee175a6a17808509566a39b3248daba05834892281da70cd4114c\" returns successfully" Sep 16 04:28:18.579736 containerd[1831]: time="2025-09-16T04:28:18.579698572Z" level=info msg="CreateContainer within sandbox \"5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 16 04:28:18.588284 kubelet[3260]: I0916 04:28:18.588103 3260 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-hg2j2" podStartSLOduration=1.4936359559999999 podStartE2EDuration="15.588084296s" podCreationTimestamp="2025-09-16 04:28:03 +0000 UTC" firstStartedPulling="2025-09-16 04:28:04.005506767 +0000 UTC m=+7.584173330" lastFinishedPulling="2025-09-16 04:28:18.099955107 +0000 UTC m=+21.678621670" observedRunningTime="2025-09-16 04:28:18.586835018 +0000 UTC m=+22.165501621" watchObservedRunningTime="2025-09-16 04:28:18.588084296 +0000 UTC m=+22.166750891" Sep 16 04:28:18.605273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3245011873.mount: Deactivated successfully. Sep 16 04:28:18.608969 containerd[1831]: time="2025-09-16T04:28:18.608510932Z" level=info msg="Container bc5a6091c90affced7f81b9fef75583c63f677ca46378186ca32d4bf2fb7ed3b: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:28:18.622669 containerd[1831]: time="2025-09-16T04:28:18.622632123Z" level=info msg="CreateContainer within sandbox \"5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bc5a6091c90affced7f81b9fef75583c63f677ca46378186ca32d4bf2fb7ed3b\"" Sep 16 04:28:18.624619 containerd[1831]: time="2025-09-16T04:28:18.624584893Z" level=info msg="StartContainer for \"bc5a6091c90affced7f81b9fef75583c63f677ca46378186ca32d4bf2fb7ed3b\"" Sep 16 04:28:18.625307 containerd[1831]: time="2025-09-16T04:28:18.625280978Z" level=info msg="connecting to shim bc5a6091c90affced7f81b9fef75583c63f677ca46378186ca32d4bf2fb7ed3b" address="unix:///run/containerd/s/fca684b09017507c3d08684a243ded66e32795e6aff88307908c3920c08ec21b" protocol=ttrpc version=3 Sep 16 04:28:18.645567 systemd[1]: Started cri-containerd-bc5a6091c90affced7f81b9fef75583c63f677ca46378186ca32d4bf2fb7ed3b.scope - libcontainer container bc5a6091c90affced7f81b9fef75583c63f677ca46378186ca32d4bf2fb7ed3b. Sep 16 04:28:18.668474 systemd[1]: cri-containerd-bc5a6091c90affced7f81b9fef75583c63f677ca46378186ca32d4bf2fb7ed3b.scope: Deactivated successfully. 
Sep 16 04:28:18.673506 containerd[1831]: time="2025-09-16T04:28:18.673469583Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc5a6091c90affced7f81b9fef75583c63f677ca46378186ca32d4bf2fb7ed3b\" id:\"bc5a6091c90affced7f81b9fef75583c63f677ca46378186ca32d4bf2fb7ed3b\" pid:3913 exited_at:{seconds:1757996898 nanos:671049566}" Sep 16 04:28:18.678407 containerd[1831]: time="2025-09-16T04:28:18.678343425Z" level=info msg="received exit event container_id:\"bc5a6091c90affced7f81b9fef75583c63f677ca46378186ca32d4bf2fb7ed3b\" id:\"bc5a6091c90affced7f81b9fef75583c63f677ca46378186ca32d4bf2fb7ed3b\" pid:3913 exited_at:{seconds:1757996898 nanos:671049566}" Sep 16 04:28:18.678831 containerd[1831]: time="2025-09-16T04:28:18.678813687Z" level=info msg="StartContainer for \"bc5a6091c90affced7f81b9fef75583c63f677ca46378186ca32d4bf2fb7ed3b\" returns successfully" Sep 16 04:28:19.581405 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc5a6091c90affced7f81b9fef75583c63f677ca46378186ca32d4bf2fb7ed3b-rootfs.mount: Deactivated successfully. Sep 16 04:28:19.585682 containerd[1831]: time="2025-09-16T04:28:19.585633992Z" level=info msg="CreateContainer within sandbox \"5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 16 04:28:19.613279 containerd[1831]: time="2025-09-16T04:28:19.613245908Z" level=info msg="Container eaf98f77f7b9b6a9e99aa88dbff85428b578c0f42988d99a33545ffc10409eb2: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:28:19.626039 containerd[1831]: time="2025-09-16T04:28:19.625932112Z" level=info msg="CreateContainer within sandbox \"5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"eaf98f77f7b9b6a9e99aa88dbff85428b578c0f42988d99a33545ffc10409eb2\"" Sep 16 04:28:19.627682 containerd[1831]: time="2025-09-16T04:28:19.627657820Z" level=info msg="StartContainer for \"eaf98f77f7b9b6a9e99aa88dbff85428b578c0f42988d99a33545ffc10409eb2\"" Sep 16 04:28:19.628319 containerd[1831]: time="2025-09-16T04:28:19.628292759Z" level=info msg="connecting to shim eaf98f77f7b9b6a9e99aa88dbff85428b578c0f42988d99a33545ffc10409eb2" address="unix:///run/containerd/s/fca684b09017507c3d08684a243ded66e32795e6aff88307908c3920c08ec21b" protocol=ttrpc version=3 Sep 16 04:28:19.650554 systemd[1]: Started cri-containerd-eaf98f77f7b9b6a9e99aa88dbff85428b578c0f42988d99a33545ffc10409eb2.scope - libcontainer container eaf98f77f7b9b6a9e99aa88dbff85428b578c0f42988d99a33545ffc10409eb2. Sep 16 04:28:19.677913 containerd[1831]: time="2025-09-16T04:28:19.677876541Z" level=info msg="StartContainer for \"eaf98f77f7b9b6a9e99aa88dbff85428b578c0f42988d99a33545ffc10409eb2\" returns successfully" Sep 16 04:28:19.735144 containerd[1831]: time="2025-09-16T04:28:19.735094503Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eaf98f77f7b9b6a9e99aa88dbff85428b578c0f42988d99a33545ffc10409eb2\" id:\"1e3976a95c3f779b9c5ed8bcafa612ce18f072cb7c4df3fdf1bb10e759c02e0a\" pid:3980 exited_at:{seconds:1757996899 nanos:734809967}" Sep 16 04:28:19.798987 kubelet[3260]: I0916 04:28:19.798915 3260 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 16 04:28:19.847224 systemd[1]: Created slice kubepods-burstable-pod19ca2fbf_2156_46c3_a2b3_d2bdc4aeb370.slice - libcontainer container kubepods-burstable-pod19ca2fbf_2156_46c3_a2b3_d2bdc4aeb370.slice. 
Sep 16 04:28:19.854980 systemd[1]: Created slice kubepods-burstable-podb9546688_3f4e_4885_836c_6501e997c6fb.slice - libcontainer container kubepods-burstable-podb9546688_3f4e_4885_836c_6501e997c6fb.slice. Sep 16 04:28:19.907894 kubelet[3260]: I0916 04:28:19.907682 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz2zw\" (UniqueName: \"kubernetes.io/projected/19ca2fbf-2156-46c3-a2b3-d2bdc4aeb370-kube-api-access-fz2zw\") pod \"coredns-7c65d6cfc9-ggtld\" (UID: \"19ca2fbf-2156-46c3-a2b3-d2bdc4aeb370\") " pod="kube-system/coredns-7c65d6cfc9-ggtld" Sep 16 04:28:19.907894 kubelet[3260]: I0916 04:28:19.907751 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgqbm\" (UniqueName: \"kubernetes.io/projected/b9546688-3f4e-4885-836c-6501e997c6fb-kube-api-access-rgqbm\") pod \"coredns-7c65d6cfc9-w8f7w\" (UID: \"b9546688-3f4e-4885-836c-6501e997c6fb\") " pod="kube-system/coredns-7c65d6cfc9-w8f7w" Sep 16 04:28:19.907894 kubelet[3260]: I0916 04:28:19.907767 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19ca2fbf-2156-46c3-a2b3-d2bdc4aeb370-config-volume\") pod \"coredns-7c65d6cfc9-ggtld\" (UID: \"19ca2fbf-2156-46c3-a2b3-d2bdc4aeb370\") " pod="kube-system/coredns-7c65d6cfc9-ggtld" Sep 16 04:28:19.907894 kubelet[3260]: I0916 04:28:19.907778 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9546688-3f4e-4885-836c-6501e997c6fb-config-volume\") pod \"coredns-7c65d6cfc9-w8f7w\" (UID: \"b9546688-3f4e-4885-836c-6501e997c6fb\") " pod="kube-system/coredns-7c65d6cfc9-w8f7w" Sep 16 04:28:20.153041 containerd[1831]: time="2025-09-16T04:28:20.152684634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ggtld,Uid:19ca2fbf-2156-46c3-a2b3-d2bdc4aeb370,Namespace:kube-system,Attempt:0,}" Sep 16 04:28:20.158762 containerd[1831]: time="2025-09-16T04:28:20.158725935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w8f7w,Uid:b9546688-3f4e-4885-836c-6501e997c6fb,Namespace:kube-system,Attempt:0,}" Sep 16 04:28:20.616975 kubelet[3260]: I0916 04:28:20.616914 3260 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rbqcl" podStartSLOduration=6.804559197 podStartE2EDuration="17.616896898s" podCreationTimestamp="2025-09-16 04:28:03 +0000 UTC" firstStartedPulling="2025-09-16 04:28:03.870141421 +0000 UTC m=+7.448807984" lastFinishedPulling="2025-09-16 04:28:14.682479122 +0000 UTC m=+18.261145685" observedRunningTime="2025-09-16 04:28:20.615670117 +0000 UTC m=+24.194336680" watchObservedRunningTime="2025-09-16 04:28:20.616896898 +0000 UTC m=+24.195563469" Sep 16 04:28:21.918984 systemd-networkd[1639]: cilium_host: Link UP Sep 16 04:28:21.919697 systemd-networkd[1639]: cilium_net: Link UP Sep 16 04:28:21.920236 systemd-networkd[1639]: cilium_net: Gained carrier Sep 16 04:28:21.920353 systemd-networkd[1639]: cilium_host: Gained carrier Sep 16 04:28:22.084627 systemd-networkd[1639]: cilium_vxlan: Link UP Sep 16 04:28:22.084632 systemd-networkd[1639]: cilium_vxlan: Gained carrier Sep 16 04:28:22.335599 systemd-networkd[1639]: cilium_net: Gained IPv6LL Sep 16 04:28:22.447634 systemd-networkd[1639]: cilium_host: Gained IPv6LL Sep 16 04:28:22.497451 kernel: NET: Registered PF_ALG protocol family Sep 
16 04:28:23.102147 systemd-networkd[1639]: lxc_health: Link UP Sep 16 04:28:23.114505 systemd-networkd[1639]: lxc_health: Gained carrier Sep 16 04:28:23.196851 systemd-networkd[1639]: lxc57dc7e916749: Link UP Sep 16 04:28:23.206467 kernel: eth0: renamed from tmp62ecd Sep 16 04:28:23.206505 systemd-networkd[1639]: lxc57dc7e916749: Gained carrier Sep 16 04:28:23.684671 systemd-networkd[1639]: lxc561d7384ebdb: Link UP Sep 16 04:28:23.687444 kernel: eth0: renamed from tmp52678 Sep 16 04:28:23.687415 systemd-networkd[1639]: lxc561d7384ebdb: Gained carrier Sep 16 04:28:23.728522 systemd-networkd[1639]: cilium_vxlan: Gained IPv6LL Sep 16 04:28:24.495606 systemd-networkd[1639]: lxc57dc7e916749: Gained IPv6LL Sep 16 04:28:24.815609 systemd-networkd[1639]: lxc_health: Gained IPv6LL Sep 16 04:28:25.456576 systemd-networkd[1639]: lxc561d7384ebdb: Gained IPv6LL Sep 16 04:28:25.789045 containerd[1831]: time="2025-09-16T04:28:25.788948366Z" level=info msg="connecting to shim 62ecde8a54242828aeacccc14d3039df2869dea3e8ab2c9633d802bcf7a43a70" address="unix:///run/containerd/s/864874e39214b087d6f127c7397c08bd0f5f06cc3826886b6448116fc3bdd7a0" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:28:25.790644 containerd[1831]: time="2025-09-16T04:28:25.789531113Z" level=info msg="connecting to shim 52678e93953f01f015699fbe04881049484b60a29e2594f0e0ee6ac5d28b397c" address="unix:///run/containerd/s/edb3a55e7ddef8777e5f14f2d087f3768ed05b8e52d540089e7a6549bdb9545a" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:28:25.824553 systemd[1]: Started cri-containerd-52678e93953f01f015699fbe04881049484b60a29e2594f0e0ee6ac5d28b397c.scope - libcontainer container 52678e93953f01f015699fbe04881049484b60a29e2594f0e0ee6ac5d28b397c. Sep 16 04:28:25.828384 systemd[1]: Started cri-containerd-62ecde8a54242828aeacccc14d3039df2869dea3e8ab2c9633d802bcf7a43a70.scope - libcontainer container 62ecde8a54242828aeacccc14d3039df2869dea3e8ab2c9633d802bcf7a43a70. 
Sep 16 04:28:25.860873 containerd[1831]: time="2025-09-16T04:28:25.860836552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ggtld,Uid:19ca2fbf-2156-46c3-a2b3-d2bdc4aeb370,Namespace:kube-system,Attempt:0,} returns sandbox id \"52678e93953f01f015699fbe04881049484b60a29e2594f0e0ee6ac5d28b397c\"" Sep 16 04:28:25.863580 containerd[1831]: time="2025-09-16T04:28:25.863534464Z" level=info msg="CreateContainer within sandbox \"52678e93953f01f015699fbe04881049484b60a29e2594f0e0ee6ac5d28b397c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 04:28:25.864298 containerd[1831]: time="2025-09-16T04:28:25.864268063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w8f7w,Uid:b9546688-3f4e-4885-836c-6501e997c6fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"62ecde8a54242828aeacccc14d3039df2869dea3e8ab2c9633d802bcf7a43a70\"" Sep 16 04:28:25.866806 containerd[1831]: time="2025-09-16T04:28:25.866784080Z" level=info msg="CreateContainer within sandbox \"62ecde8a54242828aeacccc14d3039df2869dea3e8ab2c9633d802bcf7a43a70\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 04:28:25.890233 containerd[1831]: time="2025-09-16T04:28:25.890197629Z" level=info msg="Container cfd95be01b7f91ad8bb9bbe825deda132f0383a96ecf40f19ead5e46d53811b0: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:28:25.896176 containerd[1831]: time="2025-09-16T04:28:25.896147485Z" level=info msg="Container 1fa15bd67b84d22a295b90a66e92f0299bff7893cd5f269a77dae45aaa134d4e: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:28:25.905771 containerd[1831]: time="2025-09-16T04:28:25.905736171Z" level=info msg="CreateContainer within sandbox \"52678e93953f01f015699fbe04881049484b60a29e2594f0e0ee6ac5d28b397c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cfd95be01b7f91ad8bb9bbe825deda132f0383a96ecf40f19ead5e46d53811b0\"" Sep 16 04:28:25.906368 containerd[1831]: time="2025-09-16T04:28:25.906220970Z" level=info msg="StartContainer for \"cfd95be01b7f91ad8bb9bbe825deda132f0383a96ecf40f19ead5e46d53811b0\"" Sep 16 04:28:25.907143 containerd[1831]: time="2025-09-16T04:28:25.907107551Z" level=info msg="connecting to shim cfd95be01b7f91ad8bb9bbe825deda132f0383a96ecf40f19ead5e46d53811b0" address="unix:///run/containerd/s/edb3a55e7ddef8777e5f14f2d087f3768ed05b8e52d540089e7a6549bdb9545a" protocol=ttrpc version=3 Sep 16 04:28:25.918276 containerd[1831]: time="2025-09-16T04:28:25.918243151Z" level=info msg="CreateContainer within sandbox \"62ecde8a54242828aeacccc14d3039df2869dea3e8ab2c9633d802bcf7a43a70\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1fa15bd67b84d22a295b90a66e92f0299bff7893cd5f269a77dae45aaa134d4e\"" Sep 16 04:28:25.919461 containerd[1831]: time="2025-09-16T04:28:25.918710126Z" level=info msg="StartContainer for \"1fa15bd67b84d22a295b90a66e92f0299bff7893cd5f269a77dae45aaa134d4e\"" Sep 16 04:28:25.919461 containerd[1831]: time="2025-09-16T04:28:25.919298441Z" level=info msg="connecting to shim 1fa15bd67b84d22a295b90a66e92f0299bff7893cd5f269a77dae45aaa134d4e" address="unix:///run/containerd/s/864874e39214b087d6f127c7397c08bd0f5f06cc3826886b6448116fc3bdd7a0" protocol=ttrpc version=3 Sep 16 04:28:25.927589 systemd[1]: Started cri-containerd-cfd95be01b7f91ad8bb9bbe825deda132f0383a96ecf40f19ead5e46d53811b0.scope - libcontainer container cfd95be01b7f91ad8bb9bbe825deda132f0383a96ecf40f19ead5e46d53811b0. 
Sep 16 04:28:25.945536 systemd[1]: Started cri-containerd-1fa15bd67b84d22a295b90a66e92f0299bff7893cd5f269a77dae45aaa134d4e.scope - libcontainer container 1fa15bd67b84d22a295b90a66e92f0299bff7893cd5f269a77dae45aaa134d4e. Sep 16 04:28:25.973749 containerd[1831]: time="2025-09-16T04:28:25.973622868Z" level=info msg="StartContainer for \"cfd95be01b7f91ad8bb9bbe825deda132f0383a96ecf40f19ead5e46d53811b0\" returns successfully" Sep 16 04:28:25.980665 containerd[1831]: time="2025-09-16T04:28:25.980374678Z" level=info msg="StartContainer for \"1fa15bd67b84d22a295b90a66e92f0299bff7893cd5f269a77dae45aaa134d4e\" returns successfully" Sep 16 04:28:26.625733 kubelet[3260]: I0916 04:28:26.625086 3260 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-w8f7w" podStartSLOduration=23.625073682 podStartE2EDuration="23.625073682s" podCreationTimestamp="2025-09-16 04:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:28:26.62420059 +0000 UTC m=+30.202867161" watchObservedRunningTime="2025-09-16 04:28:26.625073682 +0000 UTC m=+30.203740245" Sep 16 04:28:26.645208 kubelet[3260]: I0916 04:28:26.644781 3260 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-ggtld" podStartSLOduration=23.644767479 podStartE2EDuration="23.644767479s" podCreationTimestamp="2025-09-16 04:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:28:26.644699996 +0000 UTC m=+30.223366559" watchObservedRunningTime="2025-09-16 04:28:26.644767479 +0000 UTC m=+30.223434042" Sep 16 04:30:09.277202 systemd[1]: Started sshd@7-10.200.20.14:22-10.200.16.10:59154.service - OpenSSH per-connection server daemon (10.200.16.10:59154). Sep 16 04:30:09.697121 sshd[4644]: Accepted publickey for core from 10.200.16.10 port 59154 ssh2: RSA SHA256:I71fjGTKGCyypT9ALVqAOHTk+maJkjWBdnFioZ0bBCo Sep 16 04:30:09.698201 sshd-session[4644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:30:09.701758 systemd-logind[1808]: New session 10 of user core. Sep 16 04:30:09.709701 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 16 04:30:10.092470 sshd[4647]: Connection closed by 10.200.16.10 port 59154 Sep 16 04:30:10.092999 sshd-session[4644]: pam_unix(sshd:session): session closed for user core Sep 16 04:30:10.096063 systemd[1]: sshd@7-10.200.20.14:22-10.200.16.10:59154.service: Deactivated successfully. Sep 16 04:30:10.097993 systemd[1]: session-10.scope: Deactivated successfully. Sep 16 04:30:10.098965 systemd-logind[1808]: Session 10 logged out. Waiting for processes to exit. Sep 16 04:30:10.100413 systemd-logind[1808]: Removed session 10. Sep 16 04:30:15.168679 systemd[1]: Started sshd@8-10.200.20.14:22-10.200.16.10:35718.service - OpenSSH per-connection server daemon (10.200.16.10:35718). Sep 16 04:30:15.579814 sshd[4660]: Accepted publickey for core from 10.200.16.10 port 35718 ssh2: RSA SHA256:I71fjGTKGCyypT9ALVqAOHTk+maJkjWBdnFioZ0bBCo Sep 16 04:30:15.580862 sshd-session[4660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:30:15.584387 systemd-logind[1808]: New session 11 of user core. Sep 16 04:30:15.590537 systemd[1]: Started session-11.scope - Session 11 of User core. 
Sep 16 04:30:15.933686 sshd[4663]: Connection closed by 10.200.16.10 port 35718 Sep 16 04:30:15.934187 sshd-session[4660]: pam_unix(sshd:session): session closed for user core Sep 16 04:30:15.937678 systemd[1]: sshd@8-10.200.20.14:22-10.200.16.10:35718.service: Deactivated successfully. Sep 16 04:30:15.939227 systemd[1]: session-11.scope: Deactivated successfully. Sep 16 04:30:15.939939 systemd-logind[1808]: Session 11 logged out. Waiting for processes to exit. Sep 16 04:30:15.941064 systemd-logind[1808]: Removed session 11. Sep 16 04:30:21.011763 systemd[1]: Started sshd@9-10.200.20.14:22-10.200.16.10:57074.service - OpenSSH per-connection server daemon (10.200.16.10:57074). Sep 16 04:30:21.421462 sshd[4676]: Accepted publickey for core from 10.200.16.10 port 57074 ssh2: RSA SHA256:I71fjGTKGCyypT9ALVqAOHTk+maJkjWBdnFioZ0bBCo Sep 16 04:30:21.422466 sshd-session[4676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:30:21.425963 systemd-logind[1808]: New session 12 of user core. Sep 16 04:30:21.436539 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 16 04:30:21.774300 sshd[4679]: Connection closed by 10.200.16.10 port 57074 Sep 16 04:30:21.774833 sshd-session[4676]: pam_unix(sshd:session): session closed for user core Sep 16 04:30:21.777952 systemd[1]: sshd@9-10.200.20.14:22-10.200.16.10:57074.service: Deactivated successfully. Sep 16 04:30:21.780638 systemd[1]: session-12.scope: Deactivated successfully. Sep 16 04:30:21.781152 systemd-logind[1808]: Session 12 logged out. Waiting for processes to exit. Sep 16 04:30:21.782355 systemd-logind[1808]: Removed session 12. Sep 16 04:30:26.857203 systemd[1]: Started sshd@10-10.200.20.14:22-10.200.16.10:57078.service - OpenSSH per-connection server daemon (10.200.16.10:57078). Sep 16 04:30:27.312737 sshd[4692]: Accepted publickey for core from 10.200.16.10 port 57078 ssh2: RSA SHA256:I71fjGTKGCyypT9ALVqAOHTk+maJkjWBdnFioZ0bBCo Sep 16 04:30:27.313907 sshd-session[4692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:30:27.317466 systemd-logind[1808]: New session 13 of user core. Sep 16 04:30:27.325549 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 16 04:30:27.683544 sshd[4695]: Connection closed by 10.200.16.10 port 57078 Sep 16 04:30:27.684118 sshd-session[4692]: pam_unix(sshd:session): session closed for user core Sep 16 04:30:27.687155 systemd[1]: sshd@10-10.200.20.14:22-10.200.16.10:57078.service: Deactivated successfully. Sep 16 04:30:27.688909 systemd[1]: session-13.scope: Deactivated successfully. Sep 16 04:30:27.689604 systemd-logind[1808]: Session 13 logged out. Waiting for processes to exit. Sep 16 04:30:27.690925 systemd-logind[1808]: Removed session 13. Sep 16 04:30:32.760610 systemd[1]: Started sshd@11-10.200.20.14:22-10.200.16.10:43870.service - OpenSSH per-connection server daemon (10.200.16.10:43870). Sep 16 04:30:33.171854 sshd[4708]: Accepted publickey for core from 10.200.16.10 port 43870 ssh2: RSA SHA256:I71fjGTKGCyypT9ALVqAOHTk+maJkjWBdnFioZ0bBCo Sep 16 04:30:33.172900 sshd-session[4708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:30:33.176411 systemd-logind[1808]: New session 14 of user core. Sep 16 04:30:33.182539 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 16 04:30:33.536857 sshd[4711]: Connection closed by 10.200.16.10 port 43870 Sep 16 04:30:33.537491 sshd-session[4708]: pam_unix(sshd:session): session closed for user core Sep 16 04:30:33.540552 systemd[1]: sshd@11-10.200.20.14:22-10.200.16.10:43870.service: Deactivated successfully. Sep 16 04:30:33.541915 systemd[1]: session-14.scope: Deactivated successfully. Sep 16 04:30:33.542733 systemd-logind[1808]: Session 14 logged out. Waiting for processes to exit. Sep 16 04:30:33.543703 systemd-logind[1808]: Removed session 14. Sep 16 04:30:38.615983 systemd[1]: Started sshd@12-10.200.20.14:22-10.200.16.10:43876.service - OpenSSH per-connection server daemon (10.200.16.10:43876). Sep 16 04:30:39.029354 sshd[4725]: Accepted publickey for core from 10.200.16.10 port 43876 ssh2: RSA SHA256:I71fjGTKGCyypT9ALVqAOHTk+maJkjWBdnFioZ0bBCo Sep 16 04:30:39.030344 sshd-session[4725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:30:39.034363 systemd-logind[1808]: New session 15 of user core. Sep 16 04:30:39.041542 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 16 04:30:39.366403 sshd[4728]: Connection closed by 10.200.16.10 port 43876 Sep 16 04:30:39.366316 sshd-session[4725]: pam_unix(sshd:session): session closed for user core Sep 16 04:30:39.369469 systemd[1]: sshd@12-10.200.20.14:22-10.200.16.10:43876.service: Deactivated successfully. Sep 16 04:30:39.371192 systemd[1]: session-15.scope: Deactivated successfully. Sep 16 04:30:39.371847 systemd-logind[1808]: Session 15 logged out. Waiting for processes to exit. Sep 16 04:30:39.372808 systemd-logind[1808]: Removed session 15. Sep 16 04:30:44.440861 systemd[1]: Started sshd@13-10.200.20.14:22-10.200.16.10:60078.service - OpenSSH per-connection server daemon (10.200.16.10:60078). Sep 16 04:30:44.855175 sshd[4740]: Accepted publickey for core from 10.200.16.10 port 60078 ssh2: RSA SHA256:I71fjGTKGCyypT9ALVqAOHTk+maJkjWBdnFioZ0bBCo Sep 16 04:30:44.856204 sshd-session[4740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:30:44.859611 systemd-logind[1808]: New session 16 of user core. Sep 16 04:30:44.868533 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 16 04:30:45.197794 sshd[4743]: Connection closed by 10.200.16.10 port 60078 Sep 16 04:30:45.197630 sshd-session[4740]: pam_unix(sshd:session): session closed for user core Sep 16 04:30:45.200787 systemd[1]: sshd@13-10.200.20.14:22-10.200.16.10:60078.service: Deactivated successfully. Sep 16 04:30:45.200925 systemd-logind[1808]: Session 16 logged out. Waiting for processes to exit. Sep 16 04:30:45.202662 systemd[1]: session-16.scope: Deactivated successfully. Sep 16 04:30:45.204723 systemd-logind[1808]: Removed session 16. Sep 16 04:30:50.290328 systemd[1]: Started sshd@14-10.200.20.14:22-10.200.16.10:45834.service - OpenSSH per-connection server daemon (10.200.16.10:45834). Sep 16 04:30:50.708013 sshd[4755]: Accepted publickey for core from 10.200.16.10 port 45834 ssh2: RSA SHA256:I71fjGTKGCyypT9ALVqAOHTk+maJkjWBdnFioZ0bBCo Sep 16 04:30:50.709279 sshd-session[4755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:30:50.713602 systemd-logind[1808]: New session 17 of user core. Sep 16 04:30:50.721652 systemd[1]: Started session-17.scope - Session 17 of User core. 
Sep 16 04:30:51.054366 sshd[4758]: Connection closed by 10.200.16.10 port 45834 Sep 16 04:30:51.054892 sshd-session[4755]: pam_unix(sshd:session): session closed for user core Sep 16 04:30:51.058463 systemd-logind[1808]: Session 17 logged out. Waiting for processes to exit. Sep 16 04:30:51.058622 systemd[1]: sshd@14-10.200.20.14:22-10.200.16.10:45834.service: Deactivated successfully. Sep 16 04:30:51.060914 systemd[1]: session-17.scope: Deactivated successfully. Sep 16 04:30:51.062937 systemd-logind[1808]: Removed session 17. Sep 16 04:30:51.130206 systemd[1]: Started sshd@15-10.200.20.14:22-10.200.16.10:45840.service - OpenSSH per-connection server daemon (10.200.16.10:45840). Sep 16 04:30:51.544645 sshd[4771]: Accepted publickey for core from 10.200.16.10 port 45840 ssh2: RSA SHA256:I71fjGTKGCyypT9ALVqAOHTk+maJkjWBdnFioZ0bBCo Sep 16 04:30:51.545709 sshd-session[4771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:30:51.549264 systemd-logind[1808]: New session 18 of user core. Sep 16 04:30:51.557645 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 16 04:30:51.931820 sshd[4774]: Connection closed by 10.200.16.10 port 45840 Sep 16 04:30:51.932356 sshd-session[4771]: pam_unix(sshd:session): session closed for user core Sep 16 04:30:51.936041 systemd-logind[1808]: Session 18 logged out. Waiting for processes to exit. Sep 16 04:30:51.936459 systemd[1]: sshd@15-10.200.20.14:22-10.200.16.10:45840.service: Deactivated successfully. Sep 16 04:30:51.938332 systemd[1]: session-18.scope: Deactivated successfully. Sep 16 04:30:51.939793 systemd-logind[1808]: Removed session 18. Sep 16 04:30:52.012616 systemd[1]: Started sshd@16-10.200.20.14:22-10.200.16.10:45844.service - OpenSSH per-connection server daemon (10.200.16.10:45844). Sep 16 04:30:52.467518 sshd[4784]: Accepted publickey for core from 10.200.16.10 port 45844 ssh2: RSA SHA256:I71fjGTKGCyypT9ALVqAOHTk+maJkjWBdnFioZ0bBCo Sep 16 04:30:52.468602 sshd-session[4784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:30:52.472498 systemd-logind[1808]: New session 19 of user core. Sep 16 04:30:52.481549 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 16 04:30:52.843973 sshd[4787]: Connection closed by 10.200.16.10 port 45844 Sep 16 04:30:52.844428 sshd-session[4784]: pam_unix(sshd:session): session closed for user core Sep 16 04:30:52.847161 systemd[1]: sshd@16-10.200.20.14:22-10.200.16.10:45844.service: Deactivated successfully. Sep 16 04:30:52.849007 systemd[1]: session-19.scope: Deactivated successfully. Sep 16 04:30:52.850331 systemd-logind[1808]: Session 19 logged out. Waiting for processes to exit. Sep 16 04:30:52.851814 systemd-logind[1808]: Removed session 19. Sep 16 04:30:57.923171 systemd[1]: Started sshd@17-10.200.20.14:22-10.200.16.10:45856.service - OpenSSH per-connection server daemon (10.200.16.10:45856). Sep 16 04:30:58.339455 sshd[4800]: Accepted publickey for core from 10.200.16.10 port 45856 ssh2: RSA SHA256:I71fjGTKGCyypT9ALVqAOHTk+maJkjWBdnFioZ0bBCo Sep 16 04:30:58.340835 sshd-session[4800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:30:58.344497 systemd-logind[1808]: New session 20 of user core. Sep 16 04:30:58.349557 systemd[1]: Started session-20.scope - Session 20 of User core. 
Sep 16 04:30:58.695540 sshd[4803]: Connection closed by 10.200.16.10 port 45856 Sep 16 04:30:58.696182 sshd-session[4800]: pam_unix(sshd:session): session closed for user core Sep 16 04:30:58.699623 systemd[1]: sshd@17-10.200.20.14:22-10.200.16.10:45856.service: Deactivated successfully. Sep 16 04:30:58.701303 systemd[1]: session-20.scope: Deactivated successfully. Sep 16 04:30:58.702036 systemd-logind[1808]: Session 20 logged out. Waiting for processes to exit. Sep 16 04:30:58.703868 systemd-logind[1808]: Removed session 20. Sep 16 04:31:03.771311 systemd[1]: Started sshd@18-10.200.20.14:22-10.200.16.10:51880.service - OpenSSH per-connection server daemon (10.200.16.10:51880). Sep 16 04:31:04.185541 sshd[4814]: Accepted publickey for core from 10.200.16.10 port 51880 ssh2: RSA SHA256:I71fjGTKGCyypT9ALVqAOHTk+maJkjWBdnFioZ0bBCo Sep 16 04:31:04.187092 sshd-session[4814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:31:04.190593 systemd-logind[1808]: New session 21 of user core. Sep 16 04:31:04.196558 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 16 04:31:04.540274 sshd[4820]: Connection closed by 10.200.16.10 port 51880 Sep 16 04:31:04.540990 sshd-session[4814]: pam_unix(sshd:session): session closed for user core Sep 16 04:31:04.544178 systemd[1]: sshd@18-10.200.20.14:22-10.200.16.10:51880.service: Deactivated successfully. Sep 16 04:31:04.545599 systemd[1]: session-21.scope: Deactivated successfully. Sep 16 04:31:04.546252 systemd-logind[1808]: Session 21 logged out. Waiting for processes to exit. Sep 16 04:31:04.547365 systemd-logind[1808]: Removed session 21. Sep 16 04:31:04.619302 systemd[1]: Started sshd@19-10.200.20.14:22-10.200.16.10:51896.service - OpenSSH per-connection server daemon (10.200.16.10:51896). Sep 16 04:31:05.032317 sshd[4831]: Accepted publickey for core from 10.200.16.10 port 51896 ssh2: RSA SHA256:I71fjGTKGCyypT9ALVqAOHTk+maJkjWBdnFioZ0bBCo Sep 16 04:31:05.033345 sshd-session[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:31:05.036662 systemd-logind[1808]: New session 22 of user core. Sep 16 04:31:05.042711 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 16 04:31:05.531608 sshd[4834]: Connection closed by 10.200.16.10 port 51896 Sep 16 04:31:05.531520 sshd-session[4831]: pam_unix(sshd:session): session closed for user core Sep 16 04:31:05.535956 systemd[1]: sshd@19-10.200.20.14:22-10.200.16.10:51896.service: Deactivated successfully. Sep 16 04:31:05.537825 systemd[1]: session-22.scope: Deactivated successfully. Sep 16 04:31:05.538768 systemd-logind[1808]: Session 22 logged out. Waiting for processes to exit. Sep 16 04:31:05.540835 systemd-logind[1808]: Removed session 22. Sep 16 04:31:05.610623 systemd[1]: Started sshd@20-10.200.20.14:22-10.200.16.10:51902.service - OpenSSH per-connection server daemon (10.200.16.10:51902). Sep 16 04:31:06.024018 sshd[4844]: Accepted publickey for core from 10.200.16.10 port 51902 ssh2: RSA SHA256:I71fjGTKGCyypT9ALVqAOHTk+maJkjWBdnFioZ0bBCo Sep 16 04:31:06.025229 sshd-session[4844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:31:06.029197 systemd-logind[1808]: New session 23 of user core. Sep 16 04:31:06.032549 systemd[1]: Started session-23.scope - Session 23 of User core. 
Sep 16 04:31:07.158885 sshd[4847]: Connection closed by 10.200.16.10 port 51902 Sep 16 04:31:07.159659 sshd-session[4844]: pam_unix(sshd:session): session closed for user core Sep 16 04:31:07.162179 systemd[1]: sshd@20-10.200.20.14:22-10.200.16.10:51902.service: Deactivated successfully. Sep 16 04:31:07.164693 systemd[1]: session-23.scope: Deactivated successfully. Sep 16 04:31:07.166210 systemd-logind[1808]: Session 23 logged out. Waiting for processes to exit. Sep 16 04:31:07.167927 systemd-logind[1808]: Removed session 23. Sep 16 04:31:07.238668 systemd[1]: Started sshd@21-10.200.20.14:22-10.200.16.10:51914.service - OpenSSH per-connection server daemon (10.200.16.10:51914). Sep 16 04:31:07.655297 sshd[4864]: Accepted publickey for core from 10.200.16.10 port 51914 ssh2: RSA SHA256:I71fjGTKGCyypT9ALVqAOHTk+maJkjWBdnFioZ0bBCo Sep 16 04:31:07.656388 sshd-session[4864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:31:07.659726 systemd-logind[1808]: New session 24 of user core. Sep 16 04:31:07.670791 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 16 04:31:08.110460 sshd[4867]: Connection closed by 10.200.16.10 port 51914 Sep 16 04:31:08.110542 sshd-session[4864]: pam_unix(sshd:session): session closed for user core Sep 16 04:31:08.114210 systemd[1]: sshd@21-10.200.20.14:22-10.200.16.10:51914.service: Deactivated successfully. Sep 16 04:31:08.115670 systemd[1]: session-24.scope: Deactivated successfully. Sep 16 04:31:08.116319 systemd-logind[1808]: Session 24 logged out. Waiting for processes to exit. Sep 16 04:31:08.117375 systemd-logind[1808]: Removed session 24. Sep 16 04:31:08.191638 systemd[1]: Started sshd@22-10.200.20.14:22-10.200.16.10:51918.service - OpenSSH per-connection server daemon (10.200.16.10:51918). Sep 16 04:31:08.603787 sshd[4876]: Accepted publickey for core from 10.200.16.10 port 51918 ssh2: RSA SHA256:I71fjGTKGCyypT9ALVqAOHTk+maJkjWBdnFioZ0bBCo Sep 16 04:31:08.604868 sshd-session[4876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:31:08.608330 systemd-logind[1808]: New session 25 of user core. Sep 16 04:31:08.613534 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 16 04:31:08.958537 sshd[4879]: Connection closed by 10.200.16.10 port 51918 Sep 16 04:31:08.958153 sshd-session[4876]: pam_unix(sshd:session): session closed for user core Sep 16 04:31:08.961678 systemd-logind[1808]: Session 25 logged out. Waiting for processes to exit. Sep 16 04:31:08.961854 systemd[1]: sshd@22-10.200.20.14:22-10.200.16.10:51918.service: Deactivated successfully. Sep 16 04:31:08.963574 systemd[1]: session-25.scope: Deactivated successfully. Sep 16 04:31:08.964880 systemd-logind[1808]: Removed session 25. Sep 16 04:31:14.034636 systemd[1]: Started sshd@23-10.200.20.14:22-10.200.16.10:38040.service - OpenSSH per-connection server daemon (10.200.16.10:38040). Sep 16 04:31:14.446744 sshd[4891]: Accepted publickey for core from 10.200.16.10 port 38040 ssh2: RSA SHA256:I71fjGTKGCyypT9ALVqAOHTk+maJkjWBdnFioZ0bBCo Sep 16 04:31:14.447858 sshd-session[4891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:31:14.452723 systemd-logind[1808]: New session 26 of user core. Sep 16 04:31:14.459574 systemd[1]: Started session-26.scope - Session 26 of User core. 
Sep 16 04:31:14.789004 sshd[4898]: Connection closed by 10.200.16.10 port 38040 Sep 16 04:31:14.789675 sshd-session[4891]: pam_unix(sshd:session): session closed for user core Sep 16 04:31:14.792861 systemd[1]: sshd@23-10.200.20.14:22-10.200.16.10:38040.service: Deactivated successfully. Sep 16 04:31:14.794723 systemd[1]: session-26.scope: Deactivated successfully. Sep 16 04:31:14.796366 systemd-logind[1808]: Session 26 logged out. Waiting for processes to exit. Sep 16 04:31:14.797769 systemd-logind[1808]: Removed session 26. Sep 16 04:31:19.865860 systemd[1]: Started sshd@24-10.200.20.14:22-10.200.16.10:59720.service - OpenSSH per-connection server daemon (10.200.16.10:59720). Sep 16 04:31:20.287337 sshd[4910]: Accepted publickey for core from 10.200.16.10 port 59720 ssh2: RSA SHA256:I71fjGTKGCyypT9ALVqAOHTk+maJkjWBdnFioZ0bBCo Sep 16 04:31:20.288479 sshd-session[4910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:31:20.292484 systemd-logind[1808]: New session 27 of user core. Sep 16 04:31:20.297568 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 16 04:31:20.640756 sshd[4913]: Connection closed by 10.200.16.10 port 59720 Sep 16 04:31:20.641325 sshd-session[4910]: pam_unix(sshd:session): session closed for user core Sep 16 04:31:20.644682 systemd-logind[1808]: Session 27 logged out. Waiting for processes to exit. Sep 16 04:31:20.645112 systemd[1]: sshd@24-10.200.20.14:22-10.200.16.10:59720.service: Deactivated successfully. Sep 16 04:31:20.649069 systemd[1]: session-27.scope: Deactivated successfully. Sep 16 04:31:20.651311 systemd-logind[1808]: Removed session 27. Sep 16 04:31:25.716713 systemd[1]: Started sshd@25-10.200.20.14:22-10.200.16.10:59730.service - OpenSSH per-connection server daemon (10.200.16.10:59730). Sep 16 04:31:26.128880 sshd[4925]: Accepted publickey for core from 10.200.16.10 port 59730 ssh2: RSA SHA256:I71fjGTKGCyypT9ALVqAOHTk+maJkjWBdnFioZ0bBCo Sep 16 04:31:26.130069 sshd-session[4925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:31:26.133630 systemd-logind[1808]: New session 28 of user core. Sep 16 04:31:26.143524 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 16 04:31:26.480900 sshd[4929]: Connection closed by 10.200.16.10 port 59730 Sep 16 04:31:26.480737 sshd-session[4925]: pam_unix(sshd:session): session closed for user core Sep 16 04:31:26.485236 systemd[1]: sshd@25-10.200.20.14:22-10.200.16.10:59730.service: Deactivated successfully. Sep 16 04:31:26.485556 systemd-logind[1808]: Session 28 logged out. Waiting for processes to exit. Sep 16 04:31:26.487975 systemd[1]: session-28.scope: Deactivated successfully. Sep 16 04:31:26.490058 systemd-logind[1808]: Removed session 28. Sep 16 04:31:31.561156 systemd[1]: Started sshd@26-10.200.20.14:22-10.200.16.10:41784.service - OpenSSH per-connection server daemon (10.200.16.10:41784). Sep 16 04:31:31.975957 sshd[4940]: Accepted publickey for core from 10.200.16.10 port 41784 ssh2: RSA SHA256:I71fjGTKGCyypT9ALVqAOHTk+maJkjWBdnFioZ0bBCo Sep 16 04:31:31.977027 sshd-session[4940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:31:31.980678 systemd-logind[1808]: New session 29 of user core. Sep 16 04:31:31.988990 systemd[1]: Started session-29.scope - Session 29 of User core. 
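The sshd entries above (and the further sessions below) all follow the same per-connection pattern: the socket-activated sshd@N service starts, sshd logs "Accepted publickey", systemd-logind opens "New session N of user core", and shortly afterwards the connection closes and the session scope is deactivated. The following is a minimal, illustrative Python sketch for pairing those systemd-logind lines and measuring session length; it is not part of the log, and it assumes one journal entry per line in this dump's timestamp layout, with the year 2025 taken from the containerd timestamps later in the log.

import re
from datetime import datetime

NEW = re.compile(r"^(\w{3} +\d+ [\d:.]+) systemd-logind\[\d+\]: New session (\d+) of user (\S+)\.")
OUT = re.compile(r"^(\w{3} +\d+ [\d:.]+) systemd-logind\[\d+\]: Session (\d+) logged out\.")

def parse_ts(stamp):
    # Journal short timestamps carry no year; 2025 is assumed here.
    return datetime.strptime("2025 " + stamp, "%Y %b %d %H:%M:%S.%f")

def session_durations(lines):
    # Pairs "New session N" with "Session N logged out" and yields
    # (session id, user, duration in seconds).
    opened = {}
    for line in lines:
        if m := NEW.match(line):
            opened[m.group(2)] = (m.group(3), parse_ts(m.group(1)))
        elif m := OUT.match(line):
            user, start = opened.pop(m.group(2), (None, None))
            if start is not None:
                yield m.group(2), user, (parse_ts(m.group(1)) - start).total_seconds()

# Example from the log above: session 12 opened at 04:30:21.425963 and logged
# out at 04:30:21.781152, so the sketch reports roughly 0.36 s.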
Sep 16 04:31:32.327414 sshd[4943]: Connection closed by 10.200.16.10 port 41784 Sep 16 04:31:32.327965 sshd-session[4940]: pam_unix(sshd:session): session closed for user core Sep 16 04:31:32.331247 systemd[1]: sshd@26-10.200.20.14:22-10.200.16.10:41784.service: Deactivated successfully. Sep 16 04:31:32.332760 systemd[1]: session-29.scope: Deactivated successfully. Sep 16 04:31:32.333374 systemd-logind[1808]: Session 29 logged out. Waiting for processes to exit. Sep 16 04:31:32.334544 systemd-logind[1808]: Removed session 29. Sep 16 04:31:32.403161 systemd[1]: Started sshd@27-10.200.20.14:22-10.200.16.10:41788.service - OpenSSH per-connection server daemon (10.200.16.10:41788). Sep 16 04:31:32.822063 sshd[4954]: Accepted publickey for core from 10.200.16.10 port 41788 ssh2: RSA SHA256:I71fjGTKGCyypT9ALVqAOHTk+maJkjWBdnFioZ0bBCo Sep 16 04:31:32.823622 sshd-session[4954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:31:32.827717 systemd-logind[1808]: New session 30 of user core. Sep 16 04:31:32.830556 systemd[1]: Started session-30.scope - Session 30 of User core. Sep 16 04:31:34.360435 containerd[1831]: time="2025-09-16T04:31:34.360353926Z" level=info msg="StopContainer for \"e047381fc8dee175a6a17808509566a39b3248daba05834892281da70cd4114c\" with timeout 30 (s)" Sep 16 04:31:34.362702 containerd[1831]: time="2025-09-16T04:31:34.362668229Z" level=info msg="Stop container \"e047381fc8dee175a6a17808509566a39b3248daba05834892281da70cd4114c\" with signal terminated" Sep 16 04:31:34.378219 containerd[1831]: time="2025-09-16T04:31:34.378175160Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 16 04:31:34.392723 containerd[1831]: time="2025-09-16T04:31:34.392680981Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eaf98f77f7b9b6a9e99aa88dbff85428b578c0f42988d99a33545ffc10409eb2\" id:\"43e10b9031a1d6c4d39039c1a00a4743d5b3e99875850ff9c2f138a13d0b5660\" pid:4978 exited_at:{seconds:1757997094 nanos:391830515}" Sep 16 04:31:34.394751 containerd[1831]: time="2025-09-16T04:31:34.394707947Z" level=info msg="StopContainer for \"eaf98f77f7b9b6a9e99aa88dbff85428b578c0f42988d99a33545ffc10409eb2\" with timeout 2 (s)" Sep 16 04:31:34.395001 containerd[1831]: time="2025-09-16T04:31:34.394980332Z" level=info msg="Stop container \"eaf98f77f7b9b6a9e99aa88dbff85428b578c0f42988d99a33545ffc10409eb2\" with signal terminated" Sep 16 04:31:34.404881 systemd-networkd[1639]: lxc_health: Link DOWN Sep 16 04:31:34.404886 systemd-networkd[1639]: lxc_health: Lost carrier Sep 16 04:31:34.415765 systemd[1]: cri-containerd-e047381fc8dee175a6a17808509566a39b3248daba05834892281da70cd4114c.scope: Deactivated successfully. 
Sep 16 04:31:34.417813 containerd[1831]: time="2025-09-16T04:31:34.417748206Z" level=info msg="received exit event container_id:\"e047381fc8dee175a6a17808509566a39b3248daba05834892281da70cd4114c\" id:\"e047381fc8dee175a6a17808509566a39b3248daba05834892281da70cd4114c\" pid:3881 exited_at:{seconds:1757997094 nanos:417489198}" Sep 16 04:31:34.418017 containerd[1831]: time="2025-09-16T04:31:34.417994205Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e047381fc8dee175a6a17808509566a39b3248daba05834892281da70cd4114c\" id:\"e047381fc8dee175a6a17808509566a39b3248daba05834892281da70cd4114c\" pid:3881 exited_at:{seconds:1757997094 nanos:417489198}" Sep 16 04:31:34.419618 systemd[1]: cri-containerd-eaf98f77f7b9b6a9e99aa88dbff85428b578c0f42988d99a33545ffc10409eb2.scope: Deactivated successfully. Sep 16 04:31:34.420033 systemd[1]: cri-containerd-eaf98f77f7b9b6a9e99aa88dbff85428b578c0f42988d99a33545ffc10409eb2.scope: Consumed 4.424s CPU time, 126.7M memory peak, 120K read from disk, 12.9M written to disk. Sep 16 04:31:34.422737 containerd[1831]: time="2025-09-16T04:31:34.422644516Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eaf98f77f7b9b6a9e99aa88dbff85428b578c0f42988d99a33545ffc10409eb2\" id:\"eaf98f77f7b9b6a9e99aa88dbff85428b578c0f42988d99a33545ffc10409eb2\" pid:3951 exited_at:{seconds:1757997094 nanos:422393140}" Sep 16 04:31:34.422737 containerd[1831]: time="2025-09-16T04:31:34.422704374Z" level=info msg="received exit event container_id:\"eaf98f77f7b9b6a9e99aa88dbff85428b578c0f42988d99a33545ffc10409eb2\" id:\"eaf98f77f7b9b6a9e99aa88dbff85428b578c0f42988d99a33545ffc10409eb2\" pid:3951 exited_at:{seconds:1757997094 nanos:422393140}" Sep 16 04:31:34.444483 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e047381fc8dee175a6a17808509566a39b3248daba05834892281da70cd4114c-rootfs.mount: Deactivated successfully. Sep 16 04:31:34.447571 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eaf98f77f7b9b6a9e99aa88dbff85428b578c0f42988d99a33545ffc10409eb2-rootfs.mount: Deactivated successfully. 
Sep 16 04:31:34.491976 containerd[1831]: time="2025-09-16T04:31:34.491848622Z" level=info msg="StopContainer for \"eaf98f77f7b9b6a9e99aa88dbff85428b578c0f42988d99a33545ffc10409eb2\" returns successfully" Sep 16 04:31:34.492535 containerd[1831]: time="2025-09-16T04:31:34.492278451Z" level=info msg="StopPodSandbox for \"5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a\"" Sep 16 04:31:34.492535 containerd[1831]: time="2025-09-16T04:31:34.492338341Z" level=info msg="Container to stop \"1a2aa765453d9bca699e40e790bab835edf18de500940e907cf47513c55aa890\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:31:34.492535 containerd[1831]: time="2025-09-16T04:31:34.492346101Z" level=info msg="Container to stop \"037166dfae319034912e574d147b2af8db2e7de6d8f92d7ef1bcc5dddb3b3db1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:31:34.492535 containerd[1831]: time="2025-09-16T04:31:34.492351125Z" level=info msg="Container to stop \"0f3585fbd06d2a2b14e694c73aef5bb147cdce251c121f5fc7384817f9bdbec0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:31:34.492535 containerd[1831]: time="2025-09-16T04:31:34.492359573Z" level=info msg="Container to stop \"bc5a6091c90affced7f81b9fef75583c63f677ca46378186ca32d4bf2fb7ed3b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:31:34.492535 containerd[1831]: time="2025-09-16T04:31:34.492365350Z" level=info msg="Container to stop \"eaf98f77f7b9b6a9e99aa88dbff85428b578c0f42988d99a33545ffc10409eb2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:31:34.493226 containerd[1831]: time="2025-09-16T04:31:34.493179350Z" level=info msg="StopContainer for \"e047381fc8dee175a6a17808509566a39b3248daba05834892281da70cd4114c\" returns successfully" Sep 16 04:31:34.493969 containerd[1831]: time="2025-09-16T04:31:34.493755168Z" level=info msg="StopPodSandbox for \"55ccb8da4a6de2cec88cb10f77af4b4978489fc9e38ec7578dd10643fb3f9a91\"" Sep 16 04:31:34.493969 containerd[1831]: time="2025-09-16T04:31:34.493833915Z" level=info msg="Container to stop \"e047381fc8dee175a6a17808509566a39b3248daba05834892281da70cd4114c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:31:34.498060 systemd[1]: cri-containerd-5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a.scope: Deactivated successfully. Sep 16 04:31:34.500002 containerd[1831]: time="2025-09-16T04:31:34.499945182Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a\" id:\"5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a\" pid:3453 exit_status:137 exited_at:{seconds:1757997094 nanos:499038946}" Sep 16 04:31:34.501699 systemd[1]: cri-containerd-55ccb8da4a6de2cec88cb10f77af4b4978489fc9e38ec7578dd10643fb3f9a91.scope: Deactivated successfully. Sep 16 04:31:34.525820 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55ccb8da4a6de2cec88cb10f77af4b4978489fc9e38ec7578dd10643fb3f9a91-rootfs.mount: Deactivated successfully. Sep 16 04:31:34.525937 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a-rootfs.mount: Deactivated successfully. 
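In the teardown above, the shell in session 30 stops the long-running cilium-agent container (eaf98f77...) and the cilium-operator container (e047381f...) and then their pod sandboxes; containerd records each stop as a TaskExit or "received exit event" message, with exit_status:137 (128 + SIGKILL) on the sandbox tasks. Below is a rough, illustrative Python extractor for the ids, shim pids and exit statuses in those messages; it is not part of the log and relies only on the field names and escaped quoting visible in the entries above (in the proto text output an omitted exit_status reads as 0).

import re

EXIT = re.compile(
    r'id:\\"(?P<id>[0-9a-f]{64})\\" pid:(?P<pid>\d+)(?: exit_status:(?P<status>\d+))?'
)

def exit_events(journal_text):
    # Yields (container or sandbox id, shim pid, exit status) for each exit
    # message that prints an id and pid; status defaults to 0 when absent.
    for m in EXIT.finditer(journal_text):
        yield m.group("id"), int(m.group("pid")), int(m.group("status") or 0)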
Sep 16 04:31:34.542455 containerd[1831]: time="2025-09-16T04:31:34.541546529Z" level=info msg="shim disconnected" id=55ccb8da4a6de2cec88cb10f77af4b4978489fc9e38ec7578dd10643fb3f9a91 namespace=k8s.io Sep 16 04:31:34.542455 containerd[1831]: time="2025-09-16T04:31:34.541579882Z" level=warning msg="cleaning up after shim disconnected" id=55ccb8da4a6de2cec88cb10f77af4b4978489fc9e38ec7578dd10643fb3f9a91 namespace=k8s.io Sep 16 04:31:34.542455 containerd[1831]: time="2025-09-16T04:31:34.541604395Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 16 04:31:34.546809 containerd[1831]: time="2025-09-16T04:31:34.546641670Z" level=info msg="shim disconnected" id=5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a namespace=k8s.io Sep 16 04:31:34.546809 containerd[1831]: time="2025-09-16T04:31:34.546678087Z" level=warning msg="cleaning up after shim disconnected" id=5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a namespace=k8s.io Sep 16 04:31:34.546809 containerd[1831]: time="2025-09-16T04:31:34.546701064Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 16 04:31:34.553613 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-55ccb8da4a6de2cec88cb10f77af4b4978489fc9e38ec7578dd10643fb3f9a91-shm.mount: Deactivated successfully. Sep 16 04:31:34.554761 containerd[1831]: time="2025-09-16T04:31:34.554270368Z" level=info msg="received exit event sandbox_id:\"55ccb8da4a6de2cec88cb10f77af4b4978489fc9e38ec7578dd10643fb3f9a91\" exit_status:137 exited_at:{seconds:1757997094 nanos:507612161}" Sep 16 04:31:34.554994 containerd[1831]: time="2025-09-16T04:31:34.554967525Z" level=info msg="TearDown network for sandbox \"55ccb8da4a6de2cec88cb10f77af4b4978489fc9e38ec7578dd10643fb3f9a91\" successfully" Sep 16 04:31:34.554994 containerd[1831]: time="2025-09-16T04:31:34.554991174Z" level=info msg="StopPodSandbox for \"55ccb8da4a6de2cec88cb10f77af4b4978489fc9e38ec7578dd10643fb3f9a91\" returns successfully" Sep 16 04:31:34.561348 containerd[1831]: time="2025-09-16T04:31:34.561314960Z" level=info msg="TaskExit event in podsandbox handler container_id:\"55ccb8da4a6de2cec88cb10f77af4b4978489fc9e38ec7578dd10643fb3f9a91\" id:\"55ccb8da4a6de2cec88cb10f77af4b4978489fc9e38ec7578dd10643fb3f9a91\" pid:3528 exit_status:137 exited_at:{seconds:1757997094 nanos:507612161}" Sep 16 04:31:34.561721 containerd[1831]: time="2025-09-16T04:31:34.561699459Z" level=info msg="received exit event sandbox_id:\"5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a\" exit_status:137 exited_at:{seconds:1757997094 nanos:499038946}" Sep 16 04:31:34.562261 containerd[1831]: time="2025-09-16T04:31:34.562238020Z" level=info msg="Events for \"5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a\" is in backoff, enqueue event container_id:\"5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a\" id:\"5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a\" pid:3453 exit_status:137 exited_at:{seconds:1757997094 nanos:558756089}" Sep 16 04:31:34.562697 containerd[1831]: time="2025-09-16T04:31:34.562522229Z" level=info msg="TearDown network for sandbox \"5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a\" successfully" Sep 16 04:31:34.562697 containerd[1831]: time="2025-09-16T04:31:34.562632464Z" level=info msg="StopPodSandbox for \"5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a\" returns successfully" Sep 16 04:31:34.611699 kubelet[3260]: I0916 04:31:34.610560 3260 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-bpf-maps\") pod \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\" (UID: \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\") " Sep 16 04:31:34.611699 kubelet[3260]: I0916 04:31:34.610666 3260 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-cilium-config-path\") pod \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\" (UID: \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\") " Sep 16 04:31:34.611699 kubelet[3260]: I0916 04:31:34.610653 3260 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1b1440ff-1094-4cdf-a528-00a9e9bb43d0" (UID: "1b1440ff-1094-4cdf-a528-00a9e9bb43d0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 16 04:31:34.611699 kubelet[3260]: I0916 04:31:34.610688 3260 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-cni-path\") pod \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\" (UID: \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\") " Sep 16 04:31:34.611699 kubelet[3260]: I0916 04:31:34.610701 3260 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-xtables-lock\") pod \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\" (UID: \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\") " Sep 16 04:31:34.611699 kubelet[3260]: I0916 04:31:34.610715 3260 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-clustermesh-secrets\") pod \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\" (UID: \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\") " Sep 16 04:31:34.612101 kubelet[3260]: I0916 04:31:34.610725 3260 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-lib-modules\") pod \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\" (UID: \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\") " Sep 16 04:31:34.612101 kubelet[3260]: I0916 04:31:34.610738 3260 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-hubble-tls\") pod \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\" (UID: \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\") " Sep 16 04:31:34.612101 kubelet[3260]: I0916 04:31:34.610749 3260 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-cilium-run\") pod \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\" (UID: \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\") " Sep 16 04:31:34.612101 kubelet[3260]: I0916 04:31:34.610758 3260 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-host-proc-sys-kernel\") pod \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\" (UID: \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\") " Sep 16 04:31:34.612101 kubelet[3260]: I0916 04:31:34.610767 3260 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-etc-cni-netd\") pod \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\" (UID: \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\") " Sep 16 04:31:34.612101 kubelet[3260]: I0916 04:31:34.610776 3260 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-hostproc\") pod \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\" (UID: \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\") " Sep 16 04:31:34.612202 kubelet[3260]: I0916 04:31:34.610784 3260 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-host-proc-sys-net\") pod \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\" (UID: \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\") " Sep 16 04:31:34.612202 kubelet[3260]: I0916 04:31:34.610794 3260 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-cilium-cgroup\") pod \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\" (UID: \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\") " Sep 16 04:31:34.612202 kubelet[3260]: I0916 04:31:34.610806 3260 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0fd8d47c-c9af-4f97-a721-236ffcef2728-cilium-config-path\") pod \"0fd8d47c-c9af-4f97-a721-236ffcef2728\" (UID: \"0fd8d47c-c9af-4f97-a721-236ffcef2728\") " Sep 16 04:31:34.612202 kubelet[3260]: I0916 04:31:34.610817 3260 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdr6z\" (UniqueName: \"kubernetes.io/projected/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-kube-api-access-wdr6z\") pod \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\" (UID: \"1b1440ff-1094-4cdf-a528-00a9e9bb43d0\") " Sep 16 04:31:34.612202 kubelet[3260]: I0916 04:31:34.610830 3260 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwzsn\" (UniqueName: \"kubernetes.io/projected/0fd8d47c-c9af-4f97-a721-236ffcef2728-kube-api-access-dwzsn\") pod \"0fd8d47c-c9af-4f97-a721-236ffcef2728\" (UID: \"0fd8d47c-c9af-4f97-a721-236ffcef2728\") " Sep 16 04:31:34.612202 kubelet[3260]: I0916 04:31:34.610855 3260 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-bpf-maps\") on node \"ci-4459.0.0-n-c6becb1dff\" DevicePath \"\"" Sep 16 04:31:34.612405 kubelet[3260]: I0916 04:31:34.612295 3260 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1b1440ff-1094-4cdf-a528-00a9e9bb43d0" (UID: "1b1440ff-1094-4cdf-a528-00a9e9bb43d0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 16 04:31:34.612405 kubelet[3260]: I0916 04:31:34.612354 3260 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1b1440ff-1094-4cdf-a528-00a9e9bb43d0" (UID: "1b1440ff-1094-4cdf-a528-00a9e9bb43d0"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 16 04:31:34.612405 kubelet[3260]: I0916 04:31:34.612367 3260 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-cni-path" (OuterVolumeSpecName: "cni-path") pod "1b1440ff-1094-4cdf-a528-00a9e9bb43d0" (UID: "1b1440ff-1094-4cdf-a528-00a9e9bb43d0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 16 04:31:34.612405 kubelet[3260]: I0916 04:31:34.612376 3260 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1b1440ff-1094-4cdf-a528-00a9e9bb43d0" (UID: "1b1440ff-1094-4cdf-a528-00a9e9bb43d0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 16 04:31:34.613439 kubelet[3260]: I0916 04:31:34.612920 3260 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1b1440ff-1094-4cdf-a528-00a9e9bb43d0" (UID: "1b1440ff-1094-4cdf-a528-00a9e9bb43d0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 16 04:31:34.613439 kubelet[3260]: I0916 04:31:34.612951 3260 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-hostproc" (OuterVolumeSpecName: "hostproc") pod "1b1440ff-1094-4cdf-a528-00a9e9bb43d0" (UID: "1b1440ff-1094-4cdf-a528-00a9e9bb43d0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 16 04:31:34.613439 kubelet[3260]: I0916 04:31:34.612964 3260 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1b1440ff-1094-4cdf-a528-00a9e9bb43d0" (UID: "1b1440ff-1094-4cdf-a528-00a9e9bb43d0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 16 04:31:34.613439 kubelet[3260]: I0916 04:31:34.612973 3260 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1b1440ff-1094-4cdf-a528-00a9e9bb43d0" (UID: "1b1440ff-1094-4cdf-a528-00a9e9bb43d0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 16 04:31:34.616586 kubelet[3260]: I0916 04:31:34.616557 3260 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1b1440ff-1094-4cdf-a528-00a9e9bb43d0" (UID: "1b1440ff-1094-4cdf-a528-00a9e9bb43d0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 16 04:31:34.617407 kubelet[3260]: I0916 04:31:34.617300 3260 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fd8d47c-c9af-4f97-a721-236ffcef2728-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0fd8d47c-c9af-4f97-a721-236ffcef2728" (UID: "0fd8d47c-c9af-4f97-a721-236ffcef2728"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 16 04:31:34.618113 kubelet[3260]: I0916 04:31:34.617653 3260 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1b1440ff-1094-4cdf-a528-00a9e9bb43d0" (UID: "1b1440ff-1094-4cdf-a528-00a9e9bb43d0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 16 04:31:34.618205 kubelet[3260]: I0916 04:31:34.617715 3260 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fd8d47c-c9af-4f97-a721-236ffcef2728-kube-api-access-dwzsn" (OuterVolumeSpecName: "kube-api-access-dwzsn") pod "0fd8d47c-c9af-4f97-a721-236ffcef2728" (UID: "0fd8d47c-c9af-4f97-a721-236ffcef2728"). InnerVolumeSpecName "kube-api-access-dwzsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 16 04:31:34.618268 kubelet[3260]: I0916 04:31:34.617734 3260 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1b1440ff-1094-4cdf-a528-00a9e9bb43d0" (UID: "1b1440ff-1094-4cdf-a528-00a9e9bb43d0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 16 04:31:34.618715 kubelet[3260]: I0916 04:31:34.618691 3260 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-kube-api-access-wdr6z" (OuterVolumeSpecName: "kube-api-access-wdr6z") pod "1b1440ff-1094-4cdf-a528-00a9e9bb43d0" (UID: "1b1440ff-1094-4cdf-a528-00a9e9bb43d0"). InnerVolumeSpecName "kube-api-access-wdr6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 16 04:31:34.619970 kubelet[3260]: I0916 04:31:34.619942 3260 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1b1440ff-1094-4cdf-a528-00a9e9bb43d0" (UID: "1b1440ff-1094-4cdf-a528-00a9e9bb43d0"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 16 04:31:34.711430 kubelet[3260]: I0916 04:31:34.711383 3260 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwzsn\" (UniqueName: \"kubernetes.io/projected/0fd8d47c-c9af-4f97-a721-236ffcef2728-kube-api-access-dwzsn\") on node \"ci-4459.0.0-n-c6becb1dff\" DevicePath \"\"" Sep 16 04:31:34.711430 kubelet[3260]: I0916 04:31:34.711416 3260 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-cilium-config-path\") on node \"ci-4459.0.0-n-c6becb1dff\" DevicePath \"\"" Sep 16 04:31:34.711430 kubelet[3260]: I0916 04:31:34.711445 3260 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-cni-path\") on node \"ci-4459.0.0-n-c6becb1dff\" DevicePath \"\"" Sep 16 04:31:34.711627 kubelet[3260]: I0916 04:31:34.711453 3260 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-hubble-tls\") on node \"ci-4459.0.0-n-c6becb1dff\" DevicePath \"\"" Sep 16 04:31:34.711627 kubelet[3260]: I0916 04:31:34.711460 3260 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-xtables-lock\") on node \"ci-4459.0.0-n-c6becb1dff\" DevicePath \"\"" Sep 16 04:31:34.711627 kubelet[3260]: I0916 04:31:34.711466 3260 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-clustermesh-secrets\") on node \"ci-4459.0.0-n-c6becb1dff\" DevicePath \"\"" Sep 16 04:31:34.711627 kubelet[3260]: I0916 04:31:34.711472 3260 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-lib-modules\") on node \"ci-4459.0.0-n-c6becb1dff\" DevicePath \"\"" Sep 16 04:31:34.711627 kubelet[3260]: I0916 04:31:34.711478 3260 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-etc-cni-netd\") on node \"ci-4459.0.0-n-c6becb1dff\" DevicePath \"\"" Sep 16 04:31:34.711627 kubelet[3260]: I0916 04:31:34.711484 3260 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-cilium-run\") on node \"ci-4459.0.0-n-c6becb1dff\" DevicePath \"\"" Sep 16 04:31:34.711627 kubelet[3260]: I0916 04:31:34.711489 3260 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-host-proc-sys-kernel\") on node \"ci-4459.0.0-n-c6becb1dff\" DevicePath \"\"" Sep 16 04:31:34.711627 kubelet[3260]: I0916 04:31:34.711500 3260 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-hostproc\") on node \"ci-4459.0.0-n-c6becb1dff\" DevicePath \"\"" Sep 16 04:31:34.711753 kubelet[3260]: I0916 04:31:34.711511 3260 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-host-proc-sys-net\") on node \"ci-4459.0.0-n-c6becb1dff\" DevicePath \"\"" Sep 16 04:31:34.711753 kubelet[3260]: I0916 04:31:34.711518 3260 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdr6z\" (UniqueName: \"kubernetes.io/projected/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-kube-api-access-wdr6z\") on node \"ci-4459.0.0-n-c6becb1dff\" DevicePath \"\"" Sep 16 04:31:34.711753 kubelet[3260]: I0916 04:31:34.711525 3260 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1b1440ff-1094-4cdf-a528-00a9e9bb43d0-cilium-cgroup\") on node \"ci-4459.0.0-n-c6becb1dff\" DevicePath \"\"" Sep 16 04:31:34.711753 kubelet[3260]: I0916 04:31:34.711532 3260 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0fd8d47c-c9af-4f97-a721-236ffcef2728-cilium-config-path\") on node \"ci-4459.0.0-n-c6becb1dff\" DevicePath \"\"" Sep 16 04:31:34.938148 kubelet[3260]: I0916 04:31:34.937981 3260 scope.go:117] "RemoveContainer" containerID="eaf98f77f7b9b6a9e99aa88dbff85428b578c0f42988d99a33545ffc10409eb2" Sep 16 04:31:34.942562 containerd[1831]: time="2025-09-16T04:31:34.942280952Z" level=info msg="RemoveContainer for \"eaf98f77f7b9b6a9e99aa88dbff85428b578c0f42988d99a33545ffc10409eb2\"" Sep 16 04:31:34.946914 systemd[1]: Removed slice kubepods-burstable-pod1b1440ff_1094_4cdf_a528_00a9e9bb43d0.slice - libcontainer container kubepods-burstable-pod1b1440ff_1094_4cdf_a528_00a9e9bb43d0.slice. Sep 16 04:31:34.947017 systemd[1]: kubepods-burstable-pod1b1440ff_1094_4cdf_a528_00a9e9bb43d0.slice: Consumed 4.484s CPU time, 127.1M memory peak, 120K read from disk, 12.9M written to disk. Sep 16 04:31:34.948682 systemd[1]: Removed slice kubepods-besteffort-pod0fd8d47c_c9af_4f97_a721_236ffcef2728.slice - libcontainer container kubepods-besteffort-pod0fd8d47c_c9af_4f97_a721_236ffcef2728.slice. Sep 16 04:31:34.959245 containerd[1831]: time="2025-09-16T04:31:34.959205287Z" level=info msg="RemoveContainer for \"eaf98f77f7b9b6a9e99aa88dbff85428b578c0f42988d99a33545ffc10409eb2\" returns successfully" Sep 16 04:31:34.959742 kubelet[3260]: I0916 04:31:34.959694 3260 scope.go:117] "RemoveContainer" containerID="bc5a6091c90affced7f81b9fef75583c63f677ca46378186ca32d4bf2fb7ed3b" Sep 16 04:31:34.963466 containerd[1831]: time="2025-09-16T04:31:34.963323854Z" level=info msg="RemoveContainer for \"bc5a6091c90affced7f81b9fef75583c63f677ca46378186ca32d4bf2fb7ed3b\"" Sep 16 04:31:34.971945 containerd[1831]: time="2025-09-16T04:31:34.971903933Z" level=info msg="RemoveContainer for \"bc5a6091c90affced7f81b9fef75583c63f677ca46378186ca32d4bf2fb7ed3b\" returns successfully" Sep 16 04:31:34.972148 kubelet[3260]: I0916 04:31:34.972122 3260 scope.go:117] "RemoveContainer" containerID="0f3585fbd06d2a2b14e694c73aef5bb147cdce251c121f5fc7384817f9bdbec0" Sep 16 04:31:34.975026 containerd[1831]: time="2025-09-16T04:31:34.975000204Z" level=info msg="RemoveContainer for \"0f3585fbd06d2a2b14e694c73aef5bb147cdce251c121f5fc7384817f9bdbec0\"" Sep 16 04:31:34.984188 containerd[1831]: time="2025-09-16T04:31:34.984146076Z" level=info msg="RemoveContainer for \"0f3585fbd06d2a2b14e694c73aef5bb147cdce251c121f5fc7384817f9bdbec0\" returns successfully" Sep 16 04:31:34.984645 kubelet[3260]: I0916 04:31:34.984545 3260 scope.go:117] "RemoveContainer" containerID="037166dfae319034912e574d147b2af8db2e7de6d8f92d7ef1bcc5dddb3b3db1" Sep 16 04:31:34.985744 containerd[1831]: time="2025-09-16T04:31:34.985707420Z" level=info msg="RemoveContainer for \"037166dfae319034912e574d147b2af8db2e7de6d8f92d7ef1bcc5dddb3b3db1\"" Sep 16 04:31:34.998948 containerd[1831]: 
time="2025-09-16T04:31:34.998869751Z" level=info msg="RemoveContainer for \"037166dfae319034912e574d147b2af8db2e7de6d8f92d7ef1bcc5dddb3b3db1\" returns successfully" Sep 16 04:31:35.001925 kubelet[3260]: I0916 04:31:35.001896 3260 scope.go:117] "RemoveContainer" containerID="1a2aa765453d9bca699e40e790bab835edf18de500940e907cf47513c55aa890" Sep 16 04:31:35.003746 containerd[1831]: time="2025-09-16T04:31:35.003674115Z" level=info msg="RemoveContainer for \"1a2aa765453d9bca699e40e790bab835edf18de500940e907cf47513c55aa890\"" Sep 16 04:31:35.010767 containerd[1831]: time="2025-09-16T04:31:35.010741348Z" level=info msg="RemoveContainer for \"1a2aa765453d9bca699e40e790bab835edf18de500940e907cf47513c55aa890\" returns successfully" Sep 16 04:31:35.010966 kubelet[3260]: I0916 04:31:35.010932 3260 scope.go:117] "RemoveContainer" containerID="eaf98f77f7b9b6a9e99aa88dbff85428b578c0f42988d99a33545ffc10409eb2" Sep 16 04:31:35.011275 containerd[1831]: time="2025-09-16T04:31:35.011243819Z" level=error msg="ContainerStatus for \"eaf98f77f7b9b6a9e99aa88dbff85428b578c0f42988d99a33545ffc10409eb2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eaf98f77f7b9b6a9e99aa88dbff85428b578c0f42988d99a33545ffc10409eb2\": not found" Sep 16 04:31:35.011438 kubelet[3260]: E0916 04:31:35.011377 3260 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eaf98f77f7b9b6a9e99aa88dbff85428b578c0f42988d99a33545ffc10409eb2\": not found" containerID="eaf98f77f7b9b6a9e99aa88dbff85428b578c0f42988d99a33545ffc10409eb2" Sep 16 04:31:35.011513 kubelet[3260]: I0916 04:31:35.011448 3260 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eaf98f77f7b9b6a9e99aa88dbff85428b578c0f42988d99a33545ffc10409eb2"} err="failed to get container status \"eaf98f77f7b9b6a9e99aa88dbff85428b578c0f42988d99a33545ffc10409eb2\": rpc error: code = NotFound desc = an error occurred when try to find container \"eaf98f77f7b9b6a9e99aa88dbff85428b578c0f42988d99a33545ffc10409eb2\": not found" Sep 16 04:31:35.011533 kubelet[3260]: I0916 04:31:35.011516 3260 scope.go:117] "RemoveContainer" containerID="bc5a6091c90affced7f81b9fef75583c63f677ca46378186ca32d4bf2fb7ed3b" Sep 16 04:31:35.011675 containerd[1831]: time="2025-09-16T04:31:35.011648519Z" level=error msg="ContainerStatus for \"bc5a6091c90affced7f81b9fef75583c63f677ca46378186ca32d4bf2fb7ed3b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bc5a6091c90affced7f81b9fef75583c63f677ca46378186ca32d4bf2fb7ed3b\": not found" Sep 16 04:31:35.011765 kubelet[3260]: E0916 04:31:35.011747 3260 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bc5a6091c90affced7f81b9fef75583c63f677ca46378186ca32d4bf2fb7ed3b\": not found" containerID="bc5a6091c90affced7f81b9fef75583c63f677ca46378186ca32d4bf2fb7ed3b" Sep 16 04:31:35.011794 kubelet[3260]: I0916 04:31:35.011767 3260 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bc5a6091c90affced7f81b9fef75583c63f677ca46378186ca32d4bf2fb7ed3b"} err="failed to get container status \"bc5a6091c90affced7f81b9fef75583c63f677ca46378186ca32d4bf2fb7ed3b\": rpc error: code = NotFound desc = an error occurred when try to find container \"bc5a6091c90affced7f81b9fef75583c63f677ca46378186ca32d4bf2fb7ed3b\": not found" Sep 16 
04:31:35.011794 kubelet[3260]: I0916 04:31:35.011780 3260 scope.go:117] "RemoveContainer" containerID="0f3585fbd06d2a2b14e694c73aef5bb147cdce251c121f5fc7384817f9bdbec0" Sep 16 04:31:35.012003 containerd[1831]: time="2025-09-16T04:31:35.011976569Z" level=error msg="ContainerStatus for \"0f3585fbd06d2a2b14e694c73aef5bb147cdce251c121f5fc7384817f9bdbec0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0f3585fbd06d2a2b14e694c73aef5bb147cdce251c121f5fc7384817f9bdbec0\": not found" Sep 16 04:31:35.012100 kubelet[3260]: E0916 04:31:35.012082 3260 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0f3585fbd06d2a2b14e694c73aef5bb147cdce251c121f5fc7384817f9bdbec0\": not found" containerID="0f3585fbd06d2a2b14e694c73aef5bb147cdce251c121f5fc7384817f9bdbec0" Sep 16 04:31:35.012162 kubelet[3260]: I0916 04:31:35.012102 3260 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0f3585fbd06d2a2b14e694c73aef5bb147cdce251c121f5fc7384817f9bdbec0"} err="failed to get container status \"0f3585fbd06d2a2b14e694c73aef5bb147cdce251c121f5fc7384817f9bdbec0\": rpc error: code = NotFound desc = an error occurred when try to find container \"0f3585fbd06d2a2b14e694c73aef5bb147cdce251c121f5fc7384817f9bdbec0\": not found" Sep 16 04:31:35.012162 kubelet[3260]: I0916 04:31:35.012159 3260 scope.go:117] "RemoveContainer" containerID="037166dfae319034912e574d147b2af8db2e7de6d8f92d7ef1bcc5dddb3b3db1" Sep 16 04:31:35.012389 containerd[1831]: time="2025-09-16T04:31:35.012347789Z" level=error msg="ContainerStatus for \"037166dfae319034912e574d147b2af8db2e7de6d8f92d7ef1bcc5dddb3b3db1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"037166dfae319034912e574d147b2af8db2e7de6d8f92d7ef1bcc5dddb3b3db1\": not found" Sep 16 04:31:35.012536 kubelet[3260]: E0916 04:31:35.012500 3260 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"037166dfae319034912e574d147b2af8db2e7de6d8f92d7ef1bcc5dddb3b3db1\": not found" containerID="037166dfae319034912e574d147b2af8db2e7de6d8f92d7ef1bcc5dddb3b3db1" Sep 16 04:31:35.012572 kubelet[3260]: I0916 04:31:35.012538 3260 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"037166dfae319034912e574d147b2af8db2e7de6d8f92d7ef1bcc5dddb3b3db1"} err="failed to get container status \"037166dfae319034912e574d147b2af8db2e7de6d8f92d7ef1bcc5dddb3b3db1\": rpc error: code = NotFound desc = an error occurred when try to find container \"037166dfae319034912e574d147b2af8db2e7de6d8f92d7ef1bcc5dddb3b3db1\": not found" Sep 16 04:31:35.012572 kubelet[3260]: I0916 04:31:35.012551 3260 scope.go:117] "RemoveContainer" containerID="1a2aa765453d9bca699e40e790bab835edf18de500940e907cf47513c55aa890" Sep 16 04:31:35.012799 containerd[1831]: time="2025-09-16T04:31:35.012772946Z" level=error msg="ContainerStatus for \"1a2aa765453d9bca699e40e790bab835edf18de500940e907cf47513c55aa890\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1a2aa765453d9bca699e40e790bab835edf18de500940e907cf47513c55aa890\": not found" Sep 16 04:31:35.012913 kubelet[3260]: E0916 04:31:35.012880 3260 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"1a2aa765453d9bca699e40e790bab835edf18de500940e907cf47513c55aa890\": not found" containerID="1a2aa765453d9bca699e40e790bab835edf18de500940e907cf47513c55aa890" Sep 16 04:31:35.012913 kubelet[3260]: I0916 04:31:35.012902 3260 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1a2aa765453d9bca699e40e790bab835edf18de500940e907cf47513c55aa890"} err="failed to get container status \"1a2aa765453d9bca699e40e790bab835edf18de500940e907cf47513c55aa890\": rpc error: code = NotFound desc = an error occurred when try to find container \"1a2aa765453d9bca699e40e790bab835edf18de500940e907cf47513c55aa890\": not found" Sep 16 04:31:35.012913 kubelet[3260]: I0916 04:31:35.012914 3260 scope.go:117] "RemoveContainer" containerID="e047381fc8dee175a6a17808509566a39b3248daba05834892281da70cd4114c" Sep 16 04:31:35.014313 containerd[1831]: time="2025-09-16T04:31:35.014281496Z" level=info msg="RemoveContainer for \"e047381fc8dee175a6a17808509566a39b3248daba05834892281da70cd4114c\"" Sep 16 04:31:35.023108 containerd[1831]: time="2025-09-16T04:31:35.023078270Z" level=info msg="RemoveContainer for \"e047381fc8dee175a6a17808509566a39b3248daba05834892281da70cd4114c\" returns successfully" Sep 16 04:31:35.023372 kubelet[3260]: I0916 04:31:35.023348 3260 scope.go:117] "RemoveContainer" containerID="e047381fc8dee175a6a17808509566a39b3248daba05834892281da70cd4114c" Sep 16 04:31:35.023598 containerd[1831]: time="2025-09-16T04:31:35.023567461Z" level=error msg="ContainerStatus for \"e047381fc8dee175a6a17808509566a39b3248daba05834892281da70cd4114c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e047381fc8dee175a6a17808509566a39b3248daba05834892281da70cd4114c\": not found" Sep 16 04:31:35.023835 kubelet[3260]: E0916 04:31:35.023809 3260 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e047381fc8dee175a6a17808509566a39b3248daba05834892281da70cd4114c\": not found" containerID="e047381fc8dee175a6a17808509566a39b3248daba05834892281da70cd4114c" Sep 16 04:31:35.023883 kubelet[3260]: I0916 04:31:35.023836 3260 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e047381fc8dee175a6a17808509566a39b3248daba05834892281da70cd4114c"} err="failed to get container status \"e047381fc8dee175a6a17808509566a39b3248daba05834892281da70cd4114c\": rpc error: code = NotFound desc = an error occurred when try to find container \"e047381fc8dee175a6a17808509566a39b3248daba05834892281da70cd4114c\": not found" Sep 16 04:31:35.443785 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a-shm.mount: Deactivated successfully. Sep 16 04:31:35.443878 systemd[1]: var-lib-kubelet-pods-0fd8d47c\x2dc9af\x2d4f97\x2da721\x2d236ffcef2728-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddwzsn.mount: Deactivated successfully. Sep 16 04:31:35.443926 systemd[1]: var-lib-kubelet-pods-1b1440ff\x2d1094\x2d4cdf\x2da528\x2d00a9e9bb43d0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwdr6z.mount: Deactivated successfully. Sep 16 04:31:35.443965 systemd[1]: var-lib-kubelet-pods-1b1440ff\x2d1094\x2d4cdf\x2da528\x2d00a9e9bb43d0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 16 04:31:35.444003 systemd[1]: var-lib-kubelet-pods-1b1440ff\x2d1094\x2d4cdf\x2da528\x2d00a9e9bb43d0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 16 04:31:36.327485 containerd[1831]: time="2025-09-16T04:31:36.327409030Z" level=info msg="TaskExit event in podsandbox handler exit_status:137 exited_at:{seconds:1757997094 nanos:499038946}" Sep 16 04:31:36.327854 containerd[1831]: time="2025-09-16T04:31:36.327512489Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a\" id:\"5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a\" pid:3453 exit_status:137 exited_at:{seconds:1757997094 nanos:558756089}" Sep 16 04:31:36.371463 sshd[4957]: Connection closed by 10.200.16.10 port 41788 Sep 16 04:31:36.372010 sshd-session[4954]: pam_unix(sshd:session): session closed for user core Sep 16 04:31:36.374690 systemd-logind[1808]: Session 30 logged out. Waiting for processes to exit. Sep 16 04:31:36.375109 systemd[1]: sshd@27-10.200.20.14:22-10.200.16.10:41788.service: Deactivated successfully. Sep 16 04:31:36.376927 systemd[1]: session-30.scope: Deactivated successfully. Sep 16 04:31:36.379275 systemd-logind[1808]: Removed session 30. Sep 16 04:31:36.445296 systemd[1]: Started sshd@28-10.200.20.14:22-10.200.16.10:41792.service - OpenSSH per-connection server daemon (10.200.16.10:41792). Sep 16 04:31:36.489987 kubelet[3260]: I0916 04:31:36.489940 3260 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fd8d47c-c9af-4f97-a721-236ffcef2728" path="/var/lib/kubelet/pods/0fd8d47c-c9af-4f97-a721-236ffcef2728/volumes" Sep 16 04:31:36.490317 kubelet[3260]: I0916 04:31:36.490241 3260 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b1440ff-1094-4cdf-a528-00a9e9bb43d0" path="/var/lib/kubelet/pods/1b1440ff-1094-4cdf-a528-00a9e9bb43d0/volumes" Sep 16 04:31:36.577268 kubelet[3260]: E0916 04:31:36.577226 3260 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 16 04:31:36.859250 sshd[5110]: Accepted publickey for core from 10.200.16.10 port 41792 ssh2: RSA SHA256:I71fjGTKGCyypT9ALVqAOHTk+maJkjWBdnFioZ0bBCo Sep 16 04:31:36.859710 sshd-session[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:31:36.863546 systemd-logind[1808]: New session 31 of user core. Sep 16 04:31:36.872575 systemd[1]: Started session-31.scope - Session 31 of User core. 
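The mount units that systemd reports as deactivated above encode kubelet volume paths with systemd's path escaping: '/' becomes '-', and other reserved bytes become \xNN sequences (\x2d for '-', \x7e for '~'). The snippet below is a small decoding sketch, not part of the log, assuming exactly that convention (roughly what systemd-escape --unescape --path does).

import re

def unescape_unit_path(unit_name):
    # Best-effort inverse of systemd path escaping for the .mount names above:
    # strip the ".mount" suffix, treat '-' as a path separator, and turn each
    # \xNN sequence back into its character (\x2d -> '-', \x7e -> '~').
    name = unit_name[:-len(".mount")] if unit_name.endswith(".mount") else unit_name
    decode = lambda m: chr(int(m.group(1), 16))
    parts = (re.sub(r"\\x([0-9a-fA-F]{2})", decode, p) for p in name.split("-"))
    return "/" + "/".join(parts)

print(unescape_unit_path(
    r"var-lib-kubelet-pods-1b1440ff\x2d1094\x2d4cdf\x2da528\x2d00a9e9bb43d0-"
    r"volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount"
))
# /var/lib/kubelet/pods/1b1440ff-1094-4cdf-a528-00a9e9bb43d0/volumes/kubernetes.io~secret/clustermesh-secrets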
Sep 16 04:31:37.467261 kubelet[3260]: E0916 04:31:37.467184 3260 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1b1440ff-1094-4cdf-a528-00a9e9bb43d0" containerName="mount-cgroup" Sep 16 04:31:37.467261 kubelet[3260]: E0916 04:31:37.467217 3260 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1b1440ff-1094-4cdf-a528-00a9e9bb43d0" containerName="mount-bpf-fs" Sep 16 04:31:37.467261 kubelet[3260]: E0916 04:31:37.467223 3260 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1b1440ff-1094-4cdf-a528-00a9e9bb43d0" containerName="clean-cilium-state" Sep 16 04:31:37.467261 kubelet[3260]: E0916 04:31:37.467227 3260 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1b1440ff-1094-4cdf-a528-00a9e9bb43d0" containerName="cilium-agent" Sep 16 04:31:37.467261 kubelet[3260]: E0916 04:31:37.467231 3260 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1b1440ff-1094-4cdf-a528-00a9e9bb43d0" containerName="apply-sysctl-overwrites" Sep 16 04:31:37.467624 kubelet[3260]: E0916 04:31:37.467236 3260 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0fd8d47c-c9af-4f97-a721-236ffcef2728" containerName="cilium-operator" Sep 16 04:31:37.468141 kubelet[3260]: I0916 04:31:37.467775 3260 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b1440ff-1094-4cdf-a528-00a9e9bb43d0" containerName="cilium-agent" Sep 16 04:31:37.468141 kubelet[3260]: I0916 04:31:37.467792 3260 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd8d47c-c9af-4f97-a721-236ffcef2728" containerName="cilium-operator" Sep 16 04:31:37.476280 systemd[1]: Created slice kubepods-burstable-pod5eb89fd0_01a7_4119_8672_e9eb04cce76e.slice - libcontainer container kubepods-burstable-pod5eb89fd0_01a7_4119_8672_e9eb04cce76e.slice. 
Sep 16 04:31:37.477316 kubelet[3260]: W0916 04:31:37.477137 3260 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4459.0.0-n-c6becb1dff" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459.0.0-n-c6becb1dff' and this object Sep 16 04:31:37.478510 kubelet[3260]: W0916 04:31:37.477745 3260 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4459.0.0-n-c6becb1dff" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459.0.0-n-c6becb1dff' and this object Sep 16 04:31:37.478510 kubelet[3260]: E0916 04:31:37.478204 3260 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-4459.0.0-n-c6becb1dff\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459.0.0-n-c6becb1dff' and this object" logger="UnhandledError" Sep 16 04:31:37.478510 kubelet[3260]: W0916 04:31:37.477764 3260 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4459.0.0-n-c6becb1dff" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459.0.0-n-c6becb1dff' and this object Sep 16 04:31:37.478510 kubelet[3260]: E0916 04:31:37.478229 3260 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4459.0.0-n-c6becb1dff\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459.0.0-n-c6becb1dff' and this object" logger="UnhandledError" Sep 16 04:31:37.479297 kubelet[3260]: E0916 04:31:37.479272 3260 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4459.0.0-n-c6becb1dff\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459.0.0-n-c6becb1dff' and this object" logger="UnhandledError" Sep 16 04:31:37.520431 sshd[5113]: Connection closed by 10.200.16.10 port 41792 Sep 16 04:31:37.521631 sshd-session[5110]: pam_unix(sshd:session): session closed for user core Sep 16 04:31:37.524815 systemd-logind[1808]: Session 31 logged out. Waiting for processes to exit. Sep 16 04:31:37.524964 systemd[1]: sshd@28-10.200.20.14:22-10.200.16.10:41792.service: Deactivated successfully. Sep 16 04:31:37.526920 systemd[1]: session-31.scope: Deactivated successfully. Sep 16 04:31:37.528827 systemd-logind[1808]: Removed session 31. Sep 16 04:31:37.596170 systemd[1]: Started sshd@29-10.200.20.14:22-10.200.16.10:41802.service - OpenSSH per-connection server daemon (10.200.16.10:41802). 
Sep 16 04:31:37.620841 kubelet[3260]: I0916 04:31:37.620554 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5eb89fd0-01a7-4119-8672-e9eb04cce76e-cilium-config-path\") pod \"cilium-29svr\" (UID: \"5eb89fd0-01a7-4119-8672-e9eb04cce76e\") " pod="kube-system/cilium-29svr" Sep 16 04:31:37.620841 kubelet[3260]: I0916 04:31:37.620596 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5eb89fd0-01a7-4119-8672-e9eb04cce76e-hostproc\") pod \"cilium-29svr\" (UID: \"5eb89fd0-01a7-4119-8672-e9eb04cce76e\") " pod="kube-system/cilium-29svr" Sep 16 04:31:37.620841 kubelet[3260]: I0916 04:31:37.620607 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5eb89fd0-01a7-4119-8672-e9eb04cce76e-cilium-cgroup\") pod \"cilium-29svr\" (UID: \"5eb89fd0-01a7-4119-8672-e9eb04cce76e\") " pod="kube-system/cilium-29svr" Sep 16 04:31:37.620841 kubelet[3260]: I0916 04:31:37.620619 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5eb89fd0-01a7-4119-8672-e9eb04cce76e-host-proc-sys-kernel\") pod \"cilium-29svr\" (UID: \"5eb89fd0-01a7-4119-8672-e9eb04cce76e\") " pod="kube-system/cilium-29svr" Sep 16 04:31:37.620841 kubelet[3260]: I0916 04:31:37.620630 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5eb89fd0-01a7-4119-8672-e9eb04cce76e-xtables-lock\") pod \"cilium-29svr\" (UID: \"5eb89fd0-01a7-4119-8672-e9eb04cce76e\") " pod="kube-system/cilium-29svr" Sep 16 04:31:37.620841 kubelet[3260]: I0916 04:31:37.620642 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5eb89fd0-01a7-4119-8672-e9eb04cce76e-clustermesh-secrets\") pod \"cilium-29svr\" (UID: \"5eb89fd0-01a7-4119-8672-e9eb04cce76e\") " pod="kube-system/cilium-29svr" Sep 16 04:31:37.621194 kubelet[3260]: I0916 04:31:37.620656 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5eb89fd0-01a7-4119-8672-e9eb04cce76e-etc-cni-netd\") pod \"cilium-29svr\" (UID: \"5eb89fd0-01a7-4119-8672-e9eb04cce76e\") " pod="kube-system/cilium-29svr" Sep 16 04:31:37.621194 kubelet[3260]: I0916 04:31:37.620665 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5eb89fd0-01a7-4119-8672-e9eb04cce76e-lib-modules\") pod \"cilium-29svr\" (UID: \"5eb89fd0-01a7-4119-8672-e9eb04cce76e\") " pod="kube-system/cilium-29svr" Sep 16 04:31:37.621194 kubelet[3260]: I0916 04:31:37.620674 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5eb89fd0-01a7-4119-8672-e9eb04cce76e-host-proc-sys-net\") pod \"cilium-29svr\" (UID: \"5eb89fd0-01a7-4119-8672-e9eb04cce76e\") " pod="kube-system/cilium-29svr" Sep 16 04:31:37.621194 kubelet[3260]: I0916 04:31:37.620684 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/5eb89fd0-01a7-4119-8672-e9eb04cce76e-hubble-tls\") pod \"cilium-29svr\" (UID: \"5eb89fd0-01a7-4119-8672-e9eb04cce76e\") " pod="kube-system/cilium-29svr" Sep 16 04:31:37.621194 kubelet[3260]: I0916 04:31:37.620696 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5eb89fd0-01a7-4119-8672-e9eb04cce76e-cilium-run\") pod \"cilium-29svr\" (UID: \"5eb89fd0-01a7-4119-8672-e9eb04cce76e\") " pod="kube-system/cilium-29svr" Sep 16 04:31:37.621194 kubelet[3260]: I0916 04:31:37.620705 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5eb89fd0-01a7-4119-8672-e9eb04cce76e-cni-path\") pod \"cilium-29svr\" (UID: \"5eb89fd0-01a7-4119-8672-e9eb04cce76e\") " pod="kube-system/cilium-29svr" Sep 16 04:31:37.621288 kubelet[3260]: I0916 04:31:37.620714 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5eb89fd0-01a7-4119-8672-e9eb04cce76e-bpf-maps\") pod \"cilium-29svr\" (UID: \"5eb89fd0-01a7-4119-8672-e9eb04cce76e\") " pod="kube-system/cilium-29svr" Sep 16 04:31:37.621288 kubelet[3260]: I0916 04:31:37.620723 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpxdz\" (UniqueName: \"kubernetes.io/projected/5eb89fd0-01a7-4119-8672-e9eb04cce76e-kube-api-access-qpxdz\") pod \"cilium-29svr\" (UID: \"5eb89fd0-01a7-4119-8672-e9eb04cce76e\") " pod="kube-system/cilium-29svr" Sep 16 04:31:37.621288 kubelet[3260]: I0916 04:31:37.620735 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5eb89fd0-01a7-4119-8672-e9eb04cce76e-cilium-ipsec-secrets\") pod \"cilium-29svr\" (UID: \"5eb89fd0-01a7-4119-8672-e9eb04cce76e\") " pod="kube-system/cilium-29svr" Sep 16 04:31:38.008449 sshd[5124]: Accepted publickey for core from 10.200.16.10 port 41802 ssh2: RSA SHA256:I71fjGTKGCyypT9ALVqAOHTk+maJkjWBdnFioZ0bBCo Sep 16 04:31:38.009953 sshd-session[5124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:31:38.014283 systemd-logind[1808]: New session 32 of user core. Sep 16 04:31:38.019549 systemd[1]: Started session-32.scope - Session 32 of User core. Sep 16 04:31:38.341809 sshd[5128]: Connection closed by 10.200.16.10 port 41802 Sep 16 04:31:38.341632 sshd-session[5124]: pam_unix(sshd:session): session closed for user core Sep 16 04:31:38.345166 systemd[1]: sshd@29-10.200.20.14:22-10.200.16.10:41802.service: Deactivated successfully. Sep 16 04:31:38.346870 systemd[1]: session-32.scope: Deactivated successfully. Sep 16 04:31:38.348542 systemd-logind[1808]: Session 32 logged out. Waiting for processes to exit. Sep 16 04:31:38.349656 systemd-logind[1808]: Removed session 32. Sep 16 04:31:38.425628 systemd[1]: Started sshd@30-10.200.20.14:22-10.200.16.10:41806.service - OpenSSH per-connection server daemon (10.200.16.10:41806). 
Sep 16 04:31:38.722259 kubelet[3260]: E0916 04:31:38.722138 3260 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Sep 16 04:31:38.722703 kubelet[3260]: E0916 04:31:38.722681 3260 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5eb89fd0-01a7-4119-8672-e9eb04cce76e-clustermesh-secrets podName:5eb89fd0-01a7-4119-8672-e9eb04cce76e nodeName:}" failed. No retries permitted until 2025-09-16 04:31:39.222656754 +0000 UTC m=+222.801323325 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/5eb89fd0-01a7-4119-8672-e9eb04cce76e-clustermesh-secrets") pod "cilium-29svr" (UID: "5eb89fd0-01a7-4119-8672-e9eb04cce76e") : failed to sync secret cache: timed out waiting for the condition Sep 16 04:31:38.723169 kubelet[3260]: E0916 04:31:38.722142 3260 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Sep 16 04:31:38.723324 kubelet[3260]: E0916 04:31:38.723243 3260 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-29svr: failed to sync secret cache: timed out waiting for the condition Sep 16 04:31:38.723324 kubelet[3260]: E0916 04:31:38.723297 3260 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5eb89fd0-01a7-4119-8672-e9eb04cce76e-hubble-tls podName:5eb89fd0-01a7-4119-8672-e9eb04cce76e nodeName:}" failed. No retries permitted until 2025-09-16 04:31:39.223286733 +0000 UTC m=+222.801953296 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/5eb89fd0-01a7-4119-8672-e9eb04cce76e-hubble-tls") pod "cilium-29svr" (UID: "5eb89fd0-01a7-4119-8672-e9eb04cce76e") : failed to sync secret cache: timed out waiting for the condition Sep 16 04:31:38.837290 sshd[5136]: Accepted publickey for core from 10.200.16.10 port 41806 ssh2: RSA SHA256:I71fjGTKGCyypT9ALVqAOHTk+maJkjWBdnFioZ0bBCo Sep 16 04:31:38.838368 sshd-session[5136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:31:38.842324 systemd-logind[1808]: New session 33 of user core. Sep 16 04:31:38.846552 systemd[1]: Started session-33.scope - Session 33 of User core. Sep 16 04:31:39.280769 containerd[1831]: time="2025-09-16T04:31:39.280491276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-29svr,Uid:5eb89fd0-01a7-4119-8672-e9eb04cce76e,Namespace:kube-system,Attempt:0,}" Sep 16 04:31:39.314637 containerd[1831]: time="2025-09-16T04:31:39.314567728Z" level=info msg="connecting to shim 9a519aad959c593564a9df75a04fe6cccec2f79740d55a9a2fc05fdde581fdf3" address="unix:///run/containerd/s/c1418a635a2a58a73e6245bde4966e0c82ca460b8acbfb6770a9ec17389626fb" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:31:39.334553 systemd[1]: Started cri-containerd-9a519aad959c593564a9df75a04fe6cccec2f79740d55a9a2fc05fdde581fdf3.scope - libcontainer container 9a519aad959c593564a9df75a04fe6cccec2f79740d55a9a2fc05fdde581fdf3. 
Sep 16 04:31:39.356703 containerd[1831]: time="2025-09-16T04:31:39.356658760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-29svr,Uid:5eb89fd0-01a7-4119-8672-e9eb04cce76e,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a519aad959c593564a9df75a04fe6cccec2f79740d55a9a2fc05fdde581fdf3\"" Sep 16 04:31:39.360503 containerd[1831]: time="2025-09-16T04:31:39.360456452Z" level=info msg="CreateContainer within sandbox \"9a519aad959c593564a9df75a04fe6cccec2f79740d55a9a2fc05fdde581fdf3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 16 04:31:39.377607 containerd[1831]: time="2025-09-16T04:31:39.377568316Z" level=info msg="Container 65f628f38e7e1d523159c70cec02fc2283ed65d38ab8fc6744ab2b742e110ca7: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:31:39.390614 containerd[1831]: time="2025-09-16T04:31:39.390575360Z" level=info msg="CreateContainer within sandbox \"9a519aad959c593564a9df75a04fe6cccec2f79740d55a9a2fc05fdde581fdf3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"65f628f38e7e1d523159c70cec02fc2283ed65d38ab8fc6744ab2b742e110ca7\"" Sep 16 04:31:39.391377 containerd[1831]: time="2025-09-16T04:31:39.391349143Z" level=info msg="StartContainer for \"65f628f38e7e1d523159c70cec02fc2283ed65d38ab8fc6744ab2b742e110ca7\"" Sep 16 04:31:39.392179 containerd[1831]: time="2025-09-16T04:31:39.392155072Z" level=info msg="connecting to shim 65f628f38e7e1d523159c70cec02fc2283ed65d38ab8fc6744ab2b742e110ca7" address="unix:///run/containerd/s/c1418a635a2a58a73e6245bde4966e0c82ca460b8acbfb6770a9ec17389626fb" protocol=ttrpc version=3 Sep 16 04:31:39.406537 systemd[1]: Started cri-containerd-65f628f38e7e1d523159c70cec02fc2283ed65d38ab8fc6744ab2b742e110ca7.scope - libcontainer container 65f628f38e7e1d523159c70cec02fc2283ed65d38ab8fc6744ab2b742e110ca7. Sep 16 04:31:39.430367 containerd[1831]: time="2025-09-16T04:31:39.430331921Z" level=info msg="StartContainer for \"65f628f38e7e1d523159c70cec02fc2283ed65d38ab8fc6744ab2b742e110ca7\" returns successfully" Sep 16 04:31:39.434554 systemd[1]: cri-containerd-65f628f38e7e1d523159c70cec02fc2283ed65d38ab8fc6744ab2b742e110ca7.scope: Deactivated successfully. 
Sep 16 04:31:39.437303 containerd[1831]: time="2025-09-16T04:31:39.437252755Z" level=info msg="received exit event container_id:\"65f628f38e7e1d523159c70cec02fc2283ed65d38ab8fc6744ab2b742e110ca7\" id:\"65f628f38e7e1d523159c70cec02fc2283ed65d38ab8fc6744ab2b742e110ca7\" pid:5202 exited_at:{seconds:1757997099 nanos:437043853}" Sep 16 04:31:39.438901 containerd[1831]: time="2025-09-16T04:31:39.438820867Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65f628f38e7e1d523159c70cec02fc2283ed65d38ab8fc6744ab2b742e110ca7\" id:\"65f628f38e7e1d523159c70cec02fc2283ed65d38ab8fc6744ab2b742e110ca7\" pid:5202 exited_at:{seconds:1757997099 nanos:437043853}" Sep 16 04:31:39.959458 containerd[1831]: time="2025-09-16T04:31:39.958402898Z" level=info msg="CreateContainer within sandbox \"9a519aad959c593564a9df75a04fe6cccec2f79740d55a9a2fc05fdde581fdf3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 16 04:31:39.971431 containerd[1831]: time="2025-09-16T04:31:39.971376132Z" level=info msg="Container d93750101548de1059e1bb167f238eb7130057d6f7d8a47109f7e6d9a53cbbba: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:31:39.982786 containerd[1831]: time="2025-09-16T04:31:39.982742270Z" level=info msg="CreateContainer within sandbox \"9a519aad959c593564a9df75a04fe6cccec2f79740d55a9a2fc05fdde581fdf3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d93750101548de1059e1bb167f238eb7130057d6f7d8a47109f7e6d9a53cbbba\"" Sep 16 04:31:39.983515 containerd[1831]: time="2025-09-16T04:31:39.983456196Z" level=info msg="StartContainer for \"d93750101548de1059e1bb167f238eb7130057d6f7d8a47109f7e6d9a53cbbba\"" Sep 16 04:31:39.984381 containerd[1831]: time="2025-09-16T04:31:39.984348047Z" level=info msg="connecting to shim d93750101548de1059e1bb167f238eb7130057d6f7d8a47109f7e6d9a53cbbba" address="unix:///run/containerd/s/c1418a635a2a58a73e6245bde4966e0c82ca460b8acbfb6770a9ec17389626fb" protocol=ttrpc version=3 Sep 16 04:31:40.005562 systemd[1]: Started cri-containerd-d93750101548de1059e1bb167f238eb7130057d6f7d8a47109f7e6d9a53cbbba.scope - libcontainer container d93750101548de1059e1bb167f238eb7130057d6f7d8a47109f7e6d9a53cbbba. Sep 16 04:31:40.030148 containerd[1831]: time="2025-09-16T04:31:40.030093734Z" level=info msg="StartContainer for \"d93750101548de1059e1bb167f238eb7130057d6f7d8a47109f7e6d9a53cbbba\" returns successfully" Sep 16 04:31:40.033555 systemd[1]: cri-containerd-d93750101548de1059e1bb167f238eb7130057d6f7d8a47109f7e6d9a53cbbba.scope: Deactivated successfully. Sep 16 04:31:40.035457 containerd[1831]: time="2025-09-16T04:31:40.034562246Z" level=info msg="received exit event container_id:\"d93750101548de1059e1bb167f238eb7130057d6f7d8a47109f7e6d9a53cbbba\" id:\"d93750101548de1059e1bb167f238eb7130057d6f7d8a47109f7e6d9a53cbbba\" pid:5247 exited_at:{seconds:1757997100 nanos:34332455}" Sep 16 04:31:40.035457 containerd[1831]: time="2025-09-16T04:31:40.034752411Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d93750101548de1059e1bb167f238eb7130057d6f7d8a47109f7e6d9a53cbbba\" id:\"d93750101548de1059e1bb167f238eb7130057d6f7d8a47109f7e6d9a53cbbba\" pid:5247 exited_at:{seconds:1757997100 nanos:34332455}" Sep 16 04:31:40.234881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2268299235.mount: Deactivated successfully. 
Sep 16 04:31:40.480087 kubelet[3260]: I0916 04:31:40.480041 3260 setters.go:600] "Node became not ready" node="ci-4459.0.0-n-c6becb1dff" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-16T04:31:40Z","lastTransitionTime":"2025-09-16T04:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 16 04:31:40.961190 containerd[1831]: time="2025-09-16T04:31:40.961130917Z" level=info msg="CreateContainer within sandbox \"9a519aad959c593564a9df75a04fe6cccec2f79740d55a9a2fc05fdde581fdf3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 16 04:31:40.984456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3301444637.mount: Deactivated successfully. Sep 16 04:31:40.985675 containerd[1831]: time="2025-09-16T04:31:40.985633262Z" level=info msg="Container 9e208591f445330cdc47fe9bd1231a020eafec21064bf5277a6d3fba4a9066ce: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:31:41.004734 containerd[1831]: time="2025-09-16T04:31:41.004695561Z" level=info msg="CreateContainer within sandbox \"9a519aad959c593564a9df75a04fe6cccec2f79740d55a9a2fc05fdde581fdf3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9e208591f445330cdc47fe9bd1231a020eafec21064bf5277a6d3fba4a9066ce\"" Sep 16 04:31:41.005383 containerd[1831]: time="2025-09-16T04:31:41.005334221Z" level=info msg="StartContainer for \"9e208591f445330cdc47fe9bd1231a020eafec21064bf5277a6d3fba4a9066ce\"" Sep 16 04:31:41.007323 containerd[1831]: time="2025-09-16T04:31:41.006692910Z" level=info msg="connecting to shim 9e208591f445330cdc47fe9bd1231a020eafec21064bf5277a6d3fba4a9066ce" address="unix:///run/containerd/s/c1418a635a2a58a73e6245bde4966e0c82ca460b8acbfb6770a9ec17389626fb" protocol=ttrpc version=3 Sep 16 04:31:41.022548 systemd[1]: Started cri-containerd-9e208591f445330cdc47fe9bd1231a020eafec21064bf5277a6d3fba4a9066ce.scope - libcontainer container 9e208591f445330cdc47fe9bd1231a020eafec21064bf5277a6d3fba4a9066ce. Sep 16 04:31:41.046282 systemd[1]: cri-containerd-9e208591f445330cdc47fe9bd1231a020eafec21064bf5277a6d3fba4a9066ce.scope: Deactivated successfully. Sep 16 04:31:41.048656 containerd[1831]: time="2025-09-16T04:31:41.048609609Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e208591f445330cdc47fe9bd1231a020eafec21064bf5277a6d3fba4a9066ce\" id:\"9e208591f445330cdc47fe9bd1231a020eafec21064bf5277a6d3fba4a9066ce\" pid:5290 exited_at:{seconds:1757997101 nanos:47857706}" Sep 16 04:31:41.048831 containerd[1831]: time="2025-09-16T04:31:41.048720580Z" level=info msg="received exit event container_id:\"9e208591f445330cdc47fe9bd1231a020eafec21064bf5277a6d3fba4a9066ce\" id:\"9e208591f445330cdc47fe9bd1231a020eafec21064bf5277a6d3fba4a9066ce\" pid:5290 exited_at:{seconds:1757997101 nanos:47857706}" Sep 16 04:31:41.055758 containerd[1831]: time="2025-09-16T04:31:41.055699896Z" level=info msg="StartContainer for \"9e208591f445330cdc47fe9bd1231a020eafec21064bf5277a6d3fba4a9066ce\" returns successfully" Sep 16 04:31:41.235011 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e208591f445330cdc47fe9bd1231a020eafec21064bf5277a6d3fba4a9066ce-rootfs.mount: Deactivated successfully. 
Sep 16 04:31:41.578311 kubelet[3260]: E0916 04:31:41.578253 3260 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 16 04:31:41.964142 containerd[1831]: time="2025-09-16T04:31:41.964019164Z" level=info msg="CreateContainer within sandbox \"9a519aad959c593564a9df75a04fe6cccec2f79740d55a9a2fc05fdde581fdf3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 16 04:31:41.984674 containerd[1831]: time="2025-09-16T04:31:41.984210506Z" level=info msg="Container ea29eba404fb425910a2d6f2954ac72d433925e56e294d2e28141ac14d46e567: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:31:42.001202 containerd[1831]: time="2025-09-16T04:31:42.001105844Z" level=info msg="CreateContainer within sandbox \"9a519aad959c593564a9df75a04fe6cccec2f79740d55a9a2fc05fdde581fdf3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ea29eba404fb425910a2d6f2954ac72d433925e56e294d2e28141ac14d46e567\"" Sep 16 04:31:42.001927 containerd[1831]: time="2025-09-16T04:31:42.001869395Z" level=info msg="StartContainer for \"ea29eba404fb425910a2d6f2954ac72d433925e56e294d2e28141ac14d46e567\"" Sep 16 04:31:42.003037 containerd[1831]: time="2025-09-16T04:31:42.003016742Z" level=info msg="connecting to shim ea29eba404fb425910a2d6f2954ac72d433925e56e294d2e28141ac14d46e567" address="unix:///run/containerd/s/c1418a635a2a58a73e6245bde4966e0c82ca460b8acbfb6770a9ec17389626fb" protocol=ttrpc version=3 Sep 16 04:31:42.016656 systemd[1]: Started cri-containerd-ea29eba404fb425910a2d6f2954ac72d433925e56e294d2e28141ac14d46e567.scope - libcontainer container ea29eba404fb425910a2d6f2954ac72d433925e56e294d2e28141ac14d46e567. Sep 16 04:31:42.035323 systemd[1]: cri-containerd-ea29eba404fb425910a2d6f2954ac72d433925e56e294d2e28141ac14d46e567.scope: Deactivated successfully. Sep 16 04:31:42.037646 containerd[1831]: time="2025-09-16T04:31:42.036505008Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ea29eba404fb425910a2d6f2954ac72d433925e56e294d2e28141ac14d46e567\" id:\"ea29eba404fb425910a2d6f2954ac72d433925e56e294d2e28141ac14d46e567\" pid:5328 exited_at:{seconds:1757997102 nanos:36017521}" Sep 16 04:31:42.041341 containerd[1831]: time="2025-09-16T04:31:42.041232560Z" level=info msg="received exit event container_id:\"ea29eba404fb425910a2d6f2954ac72d433925e56e294d2e28141ac14d46e567\" id:\"ea29eba404fb425910a2d6f2954ac72d433925e56e294d2e28141ac14d46e567\" pid:5328 exited_at:{seconds:1757997102 nanos:36017521}" Sep 16 04:31:42.046331 containerd[1831]: time="2025-09-16T04:31:42.046307586Z" level=info msg="StartContainer for \"ea29eba404fb425910a2d6f2954ac72d433925e56e294d2e28141ac14d46e567\" returns successfully" Sep 16 04:31:42.235347 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea29eba404fb425910a2d6f2954ac72d433925e56e294d2e28141ac14d46e567-rootfs.mount: Deactivated successfully. 
Sep 16 04:31:42.969652 containerd[1831]: time="2025-09-16T04:31:42.969612158Z" level=info msg="CreateContainer within sandbox \"9a519aad959c593564a9df75a04fe6cccec2f79740d55a9a2fc05fdde581fdf3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 16 04:31:42.993430 containerd[1831]: time="2025-09-16T04:31:42.993121081Z" level=info msg="Container e35c3a025b4822ef5939a4217b7ef9eab3223619f4e1ff987a056464424cc063: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:31:43.061144 containerd[1831]: time="2025-09-16T04:31:43.061099996Z" level=info msg="CreateContainer within sandbox \"9a519aad959c593564a9df75a04fe6cccec2f79740d55a9a2fc05fdde581fdf3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e35c3a025b4822ef5939a4217b7ef9eab3223619f4e1ff987a056464424cc063\"" Sep 16 04:31:43.063027 containerd[1831]: time="2025-09-16T04:31:43.062984389Z" level=info msg="StartContainer for \"e35c3a025b4822ef5939a4217b7ef9eab3223619f4e1ff987a056464424cc063\"" Sep 16 04:31:43.063813 containerd[1831]: time="2025-09-16T04:31:43.063775381Z" level=info msg="connecting to shim e35c3a025b4822ef5939a4217b7ef9eab3223619f4e1ff987a056464424cc063" address="unix:///run/containerd/s/c1418a635a2a58a73e6245bde4966e0c82ca460b8acbfb6770a9ec17389626fb" protocol=ttrpc version=3 Sep 16 04:31:43.085563 systemd[1]: Started cri-containerd-e35c3a025b4822ef5939a4217b7ef9eab3223619f4e1ff987a056464424cc063.scope - libcontainer container e35c3a025b4822ef5939a4217b7ef9eab3223619f4e1ff987a056464424cc063. Sep 16 04:31:43.114023 containerd[1831]: time="2025-09-16T04:31:43.113984292Z" level=info msg="StartContainer for \"e35c3a025b4822ef5939a4217b7ef9eab3223619f4e1ff987a056464424cc063\" returns successfully" Sep 16 04:31:43.176202 containerd[1831]: time="2025-09-16T04:31:43.176145238Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e35c3a025b4822ef5939a4217b7ef9eab3223619f4e1ff987a056464424cc063\" id:\"3ea5bf94fbe7c98661764c1df30320ad423ac1d5de427a930e829021ad8350b6\" pid:5392 exited_at:{seconds:1757997103 nanos:175833437}" Sep 16 04:31:43.609446 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 16 04:31:43.993056 kubelet[3260]: I0916 04:31:43.992725 3260 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-29svr" podStartSLOduration=6.992708316 podStartE2EDuration="6.992708316s" podCreationTimestamp="2025-09-16 04:31:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:31:43.990257618 +0000 UTC m=+227.568924181" watchObservedRunningTime="2025-09-16 04:31:43.992708316 +0000 UTC m=+227.571374887" Sep 16 04:31:45.245285 containerd[1831]: time="2025-09-16T04:31:45.245246730Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e35c3a025b4822ef5939a4217b7ef9eab3223619f4e1ff987a056464424cc063\" id:\"3993d41b973f5116b625ad944a45ed937b317a6c0aa45b48444231b86e557b5f\" pid:5552 exit_status:1 exited_at:{seconds:1757997105 nanos:244916961}" Sep 16 04:31:46.009594 systemd-networkd[1639]: lxc_health: Link UP Sep 16 04:31:46.015102 systemd-networkd[1639]: lxc_health: Gained carrier Sep 16 04:31:47.356722 containerd[1831]: time="2025-09-16T04:31:47.356681116Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e35c3a025b4822ef5939a4217b7ef9eab3223619f4e1ff987a056464424cc063\" id:\"0824b7e922f1e36328cf71d66acb8d163354a36b7429d1a7ab3af70e6160c7d4\" pid:5912 exited_at:{seconds:1757997107 
nanos:355587532}" Sep 16 04:31:47.887734 systemd-networkd[1639]: lxc_health: Gained IPv6LL Sep 16 04:31:49.434400 containerd[1831]: time="2025-09-16T04:31:49.434355782Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e35c3a025b4822ef5939a4217b7ef9eab3223619f4e1ff987a056464424cc063\" id:\"af75c6bf884bf80c3b4aca7cc7ddfa79db1ffa03fb660a0e6fc4bc7e63cdae64\" pid:5956 exited_at:{seconds:1757997109 nanos:433713723}" Sep 16 04:31:51.514220 containerd[1831]: time="2025-09-16T04:31:51.514117493Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e35c3a025b4822ef5939a4217b7ef9eab3223619f4e1ff987a056464424cc063\" id:\"5d2fa31af051ac91023274197fd26d05fa68558fa05bdc3aeb0bd1cd2f4528a5\" pid:5978 exited_at:{seconds:1757997111 nanos:513695561}" Sep 16 04:31:53.593355 containerd[1831]: time="2025-09-16T04:31:53.593315984Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e35c3a025b4822ef5939a4217b7ef9eab3223619f4e1ff987a056464424cc063\" id:\"3743902e3d1344fecdd1b84efd6d2d5d7433e0316f959428d46811e4d65116c5\" pid:6000 exited_at:{seconds:1757997113 nanos:592999783}" Sep 16 04:31:55.686676 containerd[1831]: time="2025-09-16T04:31:55.686632766Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e35c3a025b4822ef5939a4217b7ef9eab3223619f4e1ff987a056464424cc063\" id:\"2ea8e1855ae31256e2dfaaa09271c07b9353de0526cc11f5be1180bdc68c6b50\" pid:6022 exited_at:{seconds:1757997115 nanos:686124407}" Sep 16 04:31:56.500440 containerd[1831]: time="2025-09-16T04:31:56.500302045Z" level=info msg="StopPodSandbox for \"5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a\"" Sep 16 04:31:56.500440 containerd[1831]: time="2025-09-16T04:31:56.500416016Z" level=info msg="TearDown network for sandbox \"5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a\" successfully" Sep 16 04:31:56.500440 containerd[1831]: time="2025-09-16T04:31:56.500438113Z" level=info msg="StopPodSandbox for \"5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a\" returns successfully" Sep 16 04:31:56.500864 containerd[1831]: time="2025-09-16T04:31:56.500834805Z" level=info msg="RemovePodSandbox for \"5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a\"" Sep 16 04:31:56.500900 containerd[1831]: time="2025-09-16T04:31:56.500867894Z" level=info msg="Forcibly stopping sandbox \"5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a\"" Sep 16 04:31:56.500939 containerd[1831]: time="2025-09-16T04:31:56.500922496Z" level=info msg="TearDown network for sandbox \"5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a\" successfully" Sep 16 04:31:56.501757 containerd[1831]: time="2025-09-16T04:31:56.501735865Z" level=info msg="Ensure that sandbox 5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a in task-service has been cleanup successfully" Sep 16 04:31:56.567621 containerd[1831]: time="2025-09-16T04:31:56.567566558Z" level=info msg="RemovePodSandbox \"5401d0996498343bd6fb163953499ff681740769c71aa95d2a6100d10589494a\" returns successfully" Sep 16 04:31:56.567894 containerd[1831]: time="2025-09-16T04:31:56.567871799Z" level=info msg="StopPodSandbox for \"55ccb8da4a6de2cec88cb10f77af4b4978489fc9e38ec7578dd10643fb3f9a91\"" Sep 16 04:31:56.568192 containerd[1831]: time="2025-09-16T04:31:56.568154600Z" level=info msg="TearDown network for sandbox \"55ccb8da4a6de2cec88cb10f77af4b4978489fc9e38ec7578dd10643fb3f9a91\" successfully" Sep 16 04:31:56.568192 containerd[1831]: time="2025-09-16T04:31:56.568179777Z" level=info 
msg="StopPodSandbox for \"55ccb8da4a6de2cec88cb10f77af4b4978489fc9e38ec7578dd10643fb3f9a91\" returns successfully" Sep 16 04:31:56.568700 containerd[1831]: time="2025-09-16T04:31:56.568667480Z" level=info msg="RemovePodSandbox for \"55ccb8da4a6de2cec88cb10f77af4b4978489fc9e38ec7578dd10643fb3f9a91\"" Sep 16 04:31:56.568700 containerd[1831]: time="2025-09-16T04:31:56.568697424Z" level=info msg="Forcibly stopping sandbox \"55ccb8da4a6de2cec88cb10f77af4b4978489fc9e38ec7578dd10643fb3f9a91\"" Sep 16 04:31:56.568863 containerd[1831]: time="2025-09-16T04:31:56.568771371Z" level=info msg="TearDown network for sandbox \"55ccb8da4a6de2cec88cb10f77af4b4978489fc9e38ec7578dd10643fb3f9a91\" successfully" Sep 16 04:31:56.570604 containerd[1831]: time="2025-09-16T04:31:56.570582754Z" level=info msg="Ensure that sandbox 55ccb8da4a6de2cec88cb10f77af4b4978489fc9e38ec7578dd10643fb3f9a91 in task-service has been cleanup successfully" Sep 16 04:31:56.727282 containerd[1831]: time="2025-09-16T04:31:56.727203231Z" level=info msg="RemovePodSandbox \"55ccb8da4a6de2cec88cb10f77af4b4978489fc9e38ec7578dd10643fb3f9a91\" returns successfully" Sep 16 04:31:57.763778 containerd[1831]: time="2025-09-16T04:31:57.763072945Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e35c3a025b4822ef5939a4217b7ef9eab3223619f4e1ff987a056464424cc063\" id:\"fbfc7223714f8a7bf0cf64fc8869ce19885aa20695ca05b475565112c9c137ec\" pid:6046 exited_at:{seconds:1757997117 nanos:762767303}" Sep 16 04:31:59.842743 containerd[1831]: time="2025-09-16T04:31:59.842698093Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e35c3a025b4822ef5939a4217b7ef9eab3223619f4e1ff987a056464424cc063\" id:\"041b265fce249e7558882f7d57eb8bf4d5e41ccdd56fbf679c47a47c07f01a8e\" pid:6069 exited_at:{seconds:1757997119 nanos:842271376}" Sep 16 04:31:59.919151 sshd[5139]: Connection closed by 10.200.16.10 port 41806 Sep 16 04:31:59.919800 sshd-session[5136]: pam_unix(sshd:session): session closed for user core Sep 16 04:31:59.922723 systemd-logind[1808]: Session 33 logged out. Waiting for processes to exit. Sep 16 04:31:59.922847 systemd[1]: sshd@30-10.200.20.14:22-10.200.16.10:41806.service: Deactivated successfully. Sep 16 04:31:59.925189 systemd[1]: session-33.scope: Deactivated successfully. Sep 16 04:31:59.929542 systemd-logind[1808]: Removed session 33.