Jul 9 23:44:46.147930 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490] Jul 9 23:44:46.147948 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed Jul 9 22:19:33 -00 2025 Jul 9 23:44:46.147955 kernel: KASLR enabled Jul 9 23:44:46.147959 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jul 9 23:44:46.147964 kernel: printk: legacy bootconsole [pl11] enabled Jul 9 23:44:46.147968 kernel: efi: EFI v2.7 by EDK II Jul 9 23:44:46.147973 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20f698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598 Jul 9 23:44:46.147977 kernel: random: crng init done Jul 9 23:44:46.147980 kernel: secureboot: Secure boot disabled Jul 9 23:44:46.147984 kernel: ACPI: Early table checksum verification disabled Jul 9 23:44:46.147988 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Jul 9 23:44:46.147992 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 9 23:44:46.147996 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 9 23:44:46.148000 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jul 9 23:44:46.148006 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 9 23:44:46.148010 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 9 23:44:46.148014 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 9 23:44:46.148019 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 9 23:44:46.148023 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 9 23:44:46.148027 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL 
MICROSFT 00000001 MSFT 00000001) Jul 9 23:44:46.148032 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jul 9 23:44:46.148036 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 9 23:44:46.148040 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jul 9 23:44:46.148044 kernel: ACPI: Use ACPI SPCR as default console: Yes Jul 9 23:44:46.148048 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jul 9 23:44:46.148052 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug Jul 9 23:44:46.148057 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug Jul 9 23:44:46.148061 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jul 9 23:44:46.148065 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jul 9 23:44:46.148070 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jul 9 23:44:46.148074 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jul 9 23:44:46.148078 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jul 9 23:44:46.148082 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jul 9 23:44:46.148086 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jul 9 23:44:46.148091 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jul 9 23:44:46.148095 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jul 9 23:44:46.148099 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff] Jul 9 23:44:46.148103 kernel: NODE_DATA(0) allocated [mem 0x1bf7fddc0-0x1bf804fff] Jul 9 23:44:46.148107 kernel: Zone ranges: Jul 9 23:44:46.148111 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jul 9 23:44:46.148118 kernel: DMA32 empty Jul 9 23:44:46.148122 kernel: Normal [mem 
0x0000000100000000-0x00000001bfffffff] Jul 9 23:44:46.148127 kernel: Device empty Jul 9 23:44:46.148131 kernel: Movable zone start for each node Jul 9 23:44:46.148135 kernel: Early memory node ranges Jul 9 23:44:46.148140 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jul 9 23:44:46.148145 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff] Jul 9 23:44:46.148149 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff] Jul 9 23:44:46.148153 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff] Jul 9 23:44:46.148158 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Jul 9 23:44:46.148162 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Jul 9 23:44:46.148166 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Jul 9 23:44:46.148171 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Jul 9 23:44:46.148175 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jul 9 23:44:46.148179 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jul 9 23:44:46.148184 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jul 9 23:44:46.148188 kernel: psci: probing for conduit method from ACPI. Jul 9 23:44:46.148193 kernel: psci: PSCIv1.1 detected in firmware. Jul 9 23:44:46.148198 kernel: psci: Using standard PSCI v0.2 function IDs Jul 9 23:44:46.148202 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Jul 9 23:44:46.148206 kernel: psci: SMC Calling Convention v1.4 Jul 9 23:44:46.148211 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jul 9 23:44:46.148215 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jul 9 23:44:46.148219 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jul 9 23:44:46.148224 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jul 9 23:44:46.148228 kernel: pcpu-alloc: [0] 0 [0] 1 Jul 9 23:44:46.148232 kernel: Detected PIPT I-cache on CPU0 Jul 9 23:44:46.148237 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) Jul 9 23:44:46.148242 kernel: CPU features: detected: GIC system register CPU interface Jul 9 23:44:46.148246 kernel: CPU features: detected: Spectre-v4 Jul 9 23:44:46.148250 kernel: CPU features: detected: Spectre-BHB Jul 9 23:44:46.148255 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 9 23:44:46.148259 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 9 23:44:46.148263 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 Jul 9 23:44:46.148268 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 9 23:44:46.148272 kernel: alternatives: applying boot alternatives Jul 9 23:44:46.148277 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=da23c3aa7de24c290e5e9aff0a0fccd6a322ecaa9bbfc71c29b2f39446459116 Jul 9 23:44:46.148282 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jul 9 23:44:46.148286 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 9 23:44:46.148292 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 9 23:44:46.148296 kernel: Fallback order for Node 0: 0 Jul 9 23:44:46.148300 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 Jul 9 23:44:46.148305 kernel: Policy zone: Normal Jul 9 23:44:46.148309 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 9 23:44:46.148313 kernel: software IO TLB: area num 2. Jul 9 23:44:46.148318 kernel: software IO TLB: mapped [mem 0x000000003a460000-0x000000003e460000] (64MB) Jul 9 23:44:46.148322 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 9 23:44:46.148326 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 9 23:44:46.148331 kernel: rcu: RCU event tracing is enabled. Jul 9 23:44:46.148336 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 9 23:44:46.148341 kernel: Trampoline variant of Tasks RCU enabled. Jul 9 23:44:46.148345 kernel: Tracing variant of Tasks RCU enabled. Jul 9 23:44:46.148350 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 9 23:44:46.148354 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 9 23:44:46.148358 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 9 23:44:46.148363 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jul 9 23:44:46.148367 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 9 23:44:46.148372 kernel: GICv3: 960 SPIs implemented Jul 9 23:44:46.148376 kernel: GICv3: 0 Extended SPIs implemented Jul 9 23:44:46.148380 kernel: Root IRQ handler: gic_handle_irq Jul 9 23:44:46.148385 kernel: GICv3: GICv3 features: 16 PPIs, RSS Jul 9 23:44:46.148389 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 Jul 9 23:44:46.148394 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jul 9 23:44:46.148398 kernel: ITS: No ITS available, not enabling LPIs Jul 9 23:44:46.148403 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 9 23:44:46.148407 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). Jul 9 23:44:46.148412 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 9 23:44:46.148416 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns Jul 9 23:44:46.148421 kernel: Console: colour dummy device 80x25 Jul 9 23:44:46.148425 kernel: printk: legacy console [tty1] enabled Jul 9 23:44:46.148430 kernel: ACPI: Core revision 20240827 Jul 9 23:44:46.148434 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) Jul 9 23:44:46.148440 kernel: pid_max: default: 32768 minimum: 301 Jul 9 23:44:46.148444 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 9 23:44:46.148449 kernel: landlock: Up and running. Jul 9 23:44:46.148453 kernel: SELinux: Initializing. 
Jul 9 23:44:46.148458 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 9 23:44:46.148462 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 9 23:44:46.148470 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x1a0000e, misc 0x31e1 Jul 9 23:44:46.148476 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0 Jul 9 23:44:46.148480 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jul 9 23:44:46.148485 kernel: rcu: Hierarchical SRCU implementation. Jul 9 23:44:46.148490 kernel: rcu: Max phase no-delay instances is 400. Jul 9 23:44:46.148495 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 9 23:44:46.148500 kernel: Remapping and enabling EFI services. Jul 9 23:44:46.148505 kernel: smp: Bringing up secondary CPUs ... Jul 9 23:44:46.148509 kernel: Detected PIPT I-cache on CPU1 Jul 9 23:44:46.148514 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jul 9 23:44:46.148519 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] Jul 9 23:44:46.148524 kernel: smp: Brought up 1 node, 2 CPUs Jul 9 23:44:46.148529 kernel: SMP: Total of 2 processors activated. 
Jul 9 23:44:46.148534 kernel: CPU: All CPU(s) started at EL1 Jul 9 23:44:46.148538 kernel: CPU features: detected: 32-bit EL0 Support Jul 9 23:44:46.148543 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jul 9 23:44:46.148548 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 9 23:44:46.148553 kernel: CPU features: detected: Common not Private translations Jul 9 23:44:46.148558 kernel: CPU features: detected: CRC32 instructions Jul 9 23:44:46.148562 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) Jul 9 23:44:46.148568 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 9 23:44:46.148573 kernel: CPU features: detected: LSE atomic instructions Jul 9 23:44:46.148577 kernel: CPU features: detected: Privileged Access Never Jul 9 23:44:46.148582 kernel: CPU features: detected: Speculation barrier (SB) Jul 9 23:44:46.148587 kernel: CPU features: detected: TLB range maintenance instructions Jul 9 23:44:46.148591 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 9 23:44:46.148596 kernel: CPU features: detected: Scalable Vector Extension Jul 9 23:44:46.148601 kernel: alternatives: applying system-wide alternatives Jul 9 23:44:46.148606 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Jul 9 23:44:46.148611 kernel: SVE: maximum available vector length 16 bytes per vector Jul 9 23:44:46.148616 kernel: SVE: default vector length 16 bytes per vector Jul 9 23:44:46.148621 kernel: Memory: 3975544K/4194160K available (11136K kernel code, 2428K rwdata, 9032K rodata, 39488K init, 1035K bss, 213816K reserved, 0K cma-reserved) Jul 9 23:44:46.148626 kernel: devtmpfs: initialized Jul 9 23:44:46.148631 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 9 23:44:46.148635 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 9 
23:44:46.148640 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 9 23:44:46.148645 kernel: 0 pages in range for non-PLT usage Jul 9 23:44:46.148649 kernel: 508448 pages in range for PLT usage Jul 9 23:44:46.148655 kernel: pinctrl core: initialized pinctrl subsystem Jul 9 23:44:46.148660 kernel: SMBIOS 3.1.0 present. Jul 9 23:44:46.148665 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jul 9 23:44:46.148669 kernel: DMI: Memory slots populated: 2/2 Jul 9 23:44:46.148674 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 9 23:44:46.148679 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 9 23:44:46.148683 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 9 23:44:46.148688 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 9 23:44:46.148693 kernel: audit: initializing netlink subsys (disabled) Jul 9 23:44:46.148699 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1 Jul 9 23:44:46.148703 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 9 23:44:46.148708 kernel: cpuidle: using governor menu Jul 9 23:44:46.148713 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jul 9 23:44:46.148717 kernel: ASID allocator initialised with 32768 entries Jul 9 23:44:46.148722 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 9 23:44:46.148727 kernel: Serial: AMBA PL011 UART driver Jul 9 23:44:46.148731 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 9 23:44:46.148736 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 9 23:44:46.148742 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 9 23:44:46.148747 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 9 23:44:46.148751 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 9 23:44:46.148756 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 9 23:44:46.148761 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 9 23:44:46.148766 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 9 23:44:46.148770 kernel: ACPI: Added _OSI(Module Device) Jul 9 23:44:46.148775 kernel: ACPI: Added _OSI(Processor Device) Jul 9 23:44:46.148779 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 9 23:44:46.148785 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 9 23:44:46.148790 kernel: ACPI: Interpreter enabled Jul 9 23:44:46.148794 kernel: ACPI: Using GIC for interrupt routing Jul 9 23:44:46.148799 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jul 9 23:44:46.148804 kernel: printk: legacy console [ttyAMA0] enabled Jul 9 23:44:46.148809 kernel: printk: legacy bootconsole [pl11] disabled Jul 9 23:44:46.148813 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jul 9 23:44:46.148818 kernel: ACPI: CPU0 has been hot-added Jul 9 23:44:46.148823 kernel: ACPI: CPU1 has been hot-added Jul 9 23:44:46.148828 kernel: iommu: Default domain type: Translated Jul 9 23:44:46.148833 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 9 
23:44:46.148837 kernel: efivars: Registered efivars operations Jul 9 23:44:46.148842 kernel: vgaarb: loaded Jul 9 23:44:46.148847 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 9 23:44:46.148851 kernel: VFS: Disk quotas dquot_6.6.0 Jul 9 23:44:46.148856 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 9 23:44:46.148861 kernel: pnp: PnP ACPI init Jul 9 23:44:46.148874 kernel: pnp: PnP ACPI: found 0 devices Jul 9 23:44:46.148880 kernel: NET: Registered PF_INET protocol family Jul 9 23:44:46.148885 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 9 23:44:46.148889 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 9 23:44:46.148894 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 9 23:44:46.148899 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 9 23:44:46.148904 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 9 23:44:46.148908 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 9 23:44:46.148913 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 9 23:44:46.148918 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 9 23:44:46.148923 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 9 23:44:46.148928 kernel: PCI: CLS 0 bytes, default 64 Jul 9 23:44:46.148932 kernel: kvm [1]: HYP mode not available Jul 9 23:44:46.148937 kernel: Initialise system trusted keyrings Jul 9 23:44:46.148942 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 9 23:44:46.148946 kernel: Key type asymmetric registered Jul 9 23:44:46.148951 kernel: Asymmetric key parser 'x509' registered Jul 9 23:44:46.148956 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 9 23:44:46.148960 kernel: io scheduler mq-deadline registered Jul 9 23:44:46.148966 kernel: io 
scheduler kyber registered Jul 9 23:44:46.148971 kernel: io scheduler bfq registered Jul 9 23:44:46.148975 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 9 23:44:46.148980 kernel: thunder_xcv, ver 1.0 Jul 9 23:44:46.148984 kernel: thunder_bgx, ver 1.0 Jul 9 23:44:46.148989 kernel: nicpf, ver 1.0 Jul 9 23:44:46.148994 kernel: nicvf, ver 1.0 Jul 9 23:44:46.149111 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 9 23:44:46.149162 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-09T23:44:45 UTC (1752104685) Jul 9 23:44:46.149169 kernel: efifb: probing for efifb Jul 9 23:44:46.149174 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jul 9 23:44:46.149178 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jul 9 23:44:46.149183 kernel: efifb: scrolling: redraw Jul 9 23:44:46.149188 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 9 23:44:46.149193 kernel: Console: switching to colour frame buffer device 128x48 Jul 9 23:44:46.149197 kernel: fb0: EFI VGA frame buffer device Jul 9 23:44:46.149202 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... 
Jul 9 23:44:46.149208 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 9 23:44:46.149212 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jul 9 23:44:46.149217 kernel: NET: Registered PF_INET6 protocol family Jul 9 23:44:46.149222 kernel: watchdog: NMI not fully supported Jul 9 23:44:46.149227 kernel: watchdog: Hard watchdog permanently disabled Jul 9 23:44:46.149231 kernel: Segment Routing with IPv6 Jul 9 23:44:46.149236 kernel: In-situ OAM (IOAM) with IPv6 Jul 9 23:44:46.149240 kernel: NET: Registered PF_PACKET protocol family Jul 9 23:44:46.149245 kernel: Key type dns_resolver registered Jul 9 23:44:46.149251 kernel: registered taskstats version 1 Jul 9 23:44:46.149255 kernel: Loading compiled-in X.509 certificates Jul 9 23:44:46.149260 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 11eff9deb028731c4f89f27f6fac8d1c08902e5a' Jul 9 23:44:46.149265 kernel: Demotion targets for Node 0: null Jul 9 23:44:46.149269 kernel: Key type .fscrypt registered Jul 9 23:44:46.149274 kernel: Key type fscrypt-provisioning registered Jul 9 23:44:46.149279 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 9 23:44:46.149283 kernel: ima: Allocated hash algorithm: sha1 Jul 9 23:44:46.149288 kernel: ima: No architecture policies found Jul 9 23:44:46.149293 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 9 23:44:46.149298 kernel: clk: Disabling unused clocks Jul 9 23:44:46.149303 kernel: PM: genpd: Disabling unused power domains Jul 9 23:44:46.149307 kernel: Warning: unable to open an initial console. 
Jul 9 23:44:46.149312 kernel: Freeing unused kernel memory: 39488K Jul 9 23:44:46.149317 kernel: Run /init as init process Jul 9 23:44:46.149322 kernel: with arguments: Jul 9 23:44:46.149326 kernel: /init Jul 9 23:44:46.149331 kernel: with environment: Jul 9 23:44:46.149336 kernel: HOME=/ Jul 9 23:44:46.149341 kernel: TERM=linux Jul 9 23:44:46.149345 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 9 23:44:46.149351 systemd[1]: Successfully made /usr/ read-only. Jul 9 23:44:46.149358 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 9 23:44:46.149363 systemd[1]: Detected virtualization microsoft. Jul 9 23:44:46.149368 systemd[1]: Detected architecture arm64. Jul 9 23:44:46.149374 systemd[1]: Running in initrd. Jul 9 23:44:46.149379 systemd[1]: No hostname configured, using default hostname. Jul 9 23:44:46.149384 systemd[1]: Hostname set to . Jul 9 23:44:46.149389 systemd[1]: Initializing machine ID from random generator. Jul 9 23:44:46.149394 systemd[1]: Queued start job for default target initrd.target. Jul 9 23:44:46.149400 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 9 23:44:46.149405 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 9 23:44:46.149410 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 9 23:44:46.149417 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 9 23:44:46.149422 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
Jul 9 23:44:46.149428 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 9 23:44:46.149433 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 9 23:44:46.149438 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 9 23:44:46.149444 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 9 23:44:46.149449 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 9 23:44:46.149455 systemd[1]: Reached target paths.target - Path Units. Jul 9 23:44:46.149460 systemd[1]: Reached target slices.target - Slice Units. Jul 9 23:44:46.149465 systemd[1]: Reached target swap.target - Swaps. Jul 9 23:44:46.149470 systemd[1]: Reached target timers.target - Timer Units. Jul 9 23:44:46.149475 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 9 23:44:46.149480 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 9 23:44:46.149485 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 9 23:44:46.149490 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 9 23:44:46.149495 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 9 23:44:46.149501 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 9 23:44:46.149507 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 9 23:44:46.149512 systemd[1]: Reached target sockets.target - Socket Units. Jul 9 23:44:46.149517 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 9 23:44:46.149522 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 9 23:44:46.149527 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Jul 9 23:44:46.149532 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 9 23:44:46.149537 systemd[1]: Starting systemd-fsck-usr.service... Jul 9 23:44:46.149544 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 9 23:44:46.149549 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 9 23:44:46.149565 systemd-journald[224]: Collecting audit messages is disabled. Jul 9 23:44:46.149579 systemd-journald[224]: Journal started Jul 9 23:44:46.149594 systemd-journald[224]: Runtime Journal (/run/log/journal/4c96a635c3d2446197b95ec6639d2f0c) is 8M, max 78.5M, 70.5M free. Jul 9 23:44:46.158078 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 23:44:46.169249 systemd-modules-load[226]: Inserted module 'overlay' Jul 9 23:44:46.183152 systemd[1]: Started systemd-journald.service - Journal Service. Jul 9 23:44:46.197173 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 9 23:44:46.211362 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 9 23:44:46.211390 kernel: Bridge firewalling registered Jul 9 23:44:46.202204 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 9 23:44:46.216774 systemd-modules-load[226]: Inserted module 'br_netfilter' Jul 9 23:44:46.223535 systemd[1]: Finished systemd-fsck-usr.service. Jul 9 23:44:46.230691 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 9 23:44:46.242158 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 23:44:46.254106 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 9 23:44:46.272614 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jul 9 23:44:46.283409 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 9 23:44:46.292435 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 9 23:44:46.313775 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 9 23:44:46.322625 systemd-tmpfiles[250]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 9 23:44:46.324813 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 9 23:44:46.336787 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 9 23:44:46.349517 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 9 23:44:46.360647 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 9 23:44:46.384779 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 9 23:44:46.392547 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 9 23:44:46.426369 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=da23c3aa7de24c290e5e9aff0a0fccd6a322ecaa9bbfc71c29b2f39446459116 Jul 9 23:44:46.457625 systemd-resolved[262]: Positive Trust Anchors: Jul 9 23:44:46.457635 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 9 23:44:46.495190 kernel: SCSI subsystem initialized Jul 9 23:44:46.495210 kernel: Loading iSCSI transport class v2.0-870. 
Jul 9 23:44:46.457653 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 9 23:44:46.459432 systemd-resolved[262]: Defaulting to hostname 'linux'. Jul 9 23:44:46.460263 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 9 23:44:46.466153 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 9 23:44:46.542338 kernel: iscsi: registered transport (tcp) Jul 9 23:44:46.476969 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 9 23:44:46.555420 kernel: iscsi: registered transport (qla4xxx) Jul 9 23:44:46.555435 kernel: QLogic iSCSI HBA Driver Jul 9 23:44:46.569729 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 9 23:44:46.589829 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 9 23:44:46.595744 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 9 23:44:46.645758 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 9 23:44:46.653004 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jul 9 23:44:46.723888 kernel: raid6: neonx8 gen() 18561 MB/s Jul 9 23:44:46.742872 kernel: raid6: neonx4 gen() 18559 MB/s Jul 9 23:44:46.761872 kernel: raid6: neonx2 gen() 17090 MB/s Jul 9 23:44:46.783883 kernel: raid6: neonx1 gen() 15134 MB/s Jul 9 23:44:46.798877 kernel: raid6: int64x8 gen() 10239 MB/s Jul 9 23:44:46.818872 kernel: raid6: int64x4 gen() 10608 MB/s Jul 9 23:44:46.838872 kernel: raid6: int64x2 gen() 8985 MB/s Jul 9 23:44:46.860624 kernel: raid6: int64x1 gen() 7045 MB/s Jul 9 23:44:46.860633 kernel: raid6: using algorithm neonx8 gen() 18561 MB/s Jul 9 23:44:46.882679 kernel: raid6: .... xor() 14903 MB/s, rmw enabled Jul 9 23:44:46.882736 kernel: raid6: using neon recovery algorithm Jul 9 23:44:46.892606 kernel: xor: measuring software checksum speed Jul 9 23:44:46.892616 kernel: 8regs : 28569 MB/sec Jul 9 23:44:46.895270 kernel: 32regs : 28785 MB/sec Jul 9 23:44:46.898005 kernel: arm64_neon : 37422 MB/sec Jul 9 23:44:46.901607 kernel: xor: using function: arm64_neon (37422 MB/sec) Jul 9 23:44:46.940890 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 9 23:44:46.946132 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 9 23:44:46.957024 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 23:44:46.982146 systemd-udevd[473]: Using default interface naming scheme 'v255'. Jul 9 23:44:46.986873 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 9 23:44:46.998013 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 9 23:44:47.035676 dracut-pre-trigger[485]: rd.md=0: removing MD RAID activation Jul 9 23:44:47.056398 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 9 23:44:47.063623 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 9 23:44:47.117604 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Jul 9 23:44:47.130844 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 9 23:44:47.200144 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 9 23:44:47.215848 kernel: hv_vmbus: Vmbus version:5.3
Jul 9 23:44:47.215886 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 9 23:44:47.200284 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 23:44:47.210879 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 23:44:47.228309 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 23:44:47.278974 kernel: hv_vmbus: registering driver hyperv_keyboard
Jul 9 23:44:47.278996 kernel: hv_vmbus: registering driver hid_hyperv
Jul 9 23:44:47.279003 kernel: hv_vmbus: registering driver hv_storvsc
Jul 9 23:44:47.279009 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 9 23:44:47.279016 kernel: hv_vmbus: registering driver hv_netvsc
Jul 9 23:44:47.279024 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jul 9 23:44:47.269464 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 9 23:44:47.300156 kernel: scsi host1: storvsc_host_t
Jul 9 23:44:47.300316 kernel: scsi host0: storvsc_host_t
Jul 9 23:44:47.300391 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jul 9 23:44:47.300398 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jul 9 23:44:47.269571 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 23:44:47.323569 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jul 9 23:44:47.323711 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jul 9 23:44:47.304732 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 9 23:44:47.305998 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 23:44:47.358578 kernel: PTP clock support registered
Jul 9 23:44:47.358629 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jul 9 23:44:47.359920 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 23:44:47.378322 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jul 9 23:44:47.378506 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 9 23:44:47.378583 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jul 9 23:44:47.378645 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jul 9 23:44:47.389600 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#304 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 9 23:44:47.400887 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#311 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 9 23:44:47.401102 kernel: hv_netvsc 00224876-fa22-0022-4876-fa2200224876 eth0: VF slot 1 added
Jul 9 23:44:47.410884 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 9 23:44:47.410939 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 9 23:44:47.419658 kernel: hv_utils: Registering HyperV Utility Driver
Jul 9 23:44:47.419703 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jul 9 23:44:47.419849 kernel: hv_vmbus: registering driver hv_utils
Jul 9 23:44:47.433163 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 9 23:44:47.433227 kernel: hv_utils: Shutdown IC version 3.2
Jul 9 23:44:47.433234 kernel: hv_utils: Heartbeat IC version 3.0
Jul 9 23:44:47.433248 kernel: hv_vmbus: registering driver hv_pci
Jul 9 23:44:47.433254 kernel: hv_utils: TimeSync IC version 4.0
Jul 9 23:44:47.013701 systemd-resolved[262]: Clock change detected. Flushing caches.
Jul 9 23:44:47.033837 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jul 9 23:44:47.033968 kernel: hv_pci efdde20e-e145-44bc-8bef-2f640626197b: PCI VMBus probing: Using version 0x10004
Jul 9 23:44:47.034048 systemd-journald[224]: Time jumped backwards, rotating.
Jul 9 23:44:47.034077 kernel: hv_pci efdde20e-e145-44bc-8bef-2f640626197b: PCI host bridge to bus e145:00
Jul 9 23:44:47.042965 kernel: pci_bus e145:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jul 9 23:44:47.043129 kernel: pci_bus e145:00: No busn resource found for root bus, will use [bus 00-ff]
Jul 9 23:44:47.054522 kernel: pci e145:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Jul 9 23:44:47.061450 kernel: pci e145:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 9 23:44:47.069529 kernel: pci e145:00:02.0: enabling Extended Tags
Jul 9 23:44:47.085492 kernel: pci e145:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at e145:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Jul 9 23:44:47.100898 kernel: pci_bus e145:00: busn_res: [bus 00-ff] end is updated to 00
Jul 9 23:44:47.101076 kernel: pci e145:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Jul 9 23:44:47.122460 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jul 9 23:44:47.144455 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jul 9 23:44:47.176105 kernel: mlx5_core e145:00:02.0: enabling device (0000 -> 0002)
Jul 9 23:44:47.187434 kernel: mlx5_core e145:00:02.0: PTM is not supported by PCIe
Jul 9 23:44:47.187562 kernel: mlx5_core e145:00:02.0: firmware version: 16.30.5006
Jul 9 23:44:47.364613 kernel: hv_netvsc 00224876-fa22-0022-4876-fa2200224876 eth0: VF registering: eth1
Jul 9 23:44:47.364820 kernel: mlx5_core e145:00:02.0 eth1: joined to eth0
Jul 9 23:44:47.370417 kernel: mlx5_core e145:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jul 9 23:44:47.381455 kernel: mlx5_core e145:00:02.0 enP57669s1: renamed from eth1
Jul 9 23:44:47.605743 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jul 9 23:44:47.700212 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 9 23:44:47.724761 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jul 9 23:44:47.754068 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jul 9 23:44:47.759378 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jul 9 23:44:47.772223 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 9 23:44:47.784096 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 9 23:44:47.794884 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 9 23:44:47.807053 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 9 23:44:47.821600 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 9 23:44:47.834921 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 9 23:44:47.855528 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#144 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 9 23:44:47.856005 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 9 23:44:47.879474 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 9 23:44:47.887446 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#276 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 9 23:44:47.900677 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 9 23:44:48.907535 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#279 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 9 23:44:48.920405 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 9 23:44:48.920468 disk-uuid[659]: The operation has completed successfully.
Jul 9 23:44:48.989943 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 9 23:44:48.991456 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 9 23:44:49.019121 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 9 23:44:49.045713 sh[824]: Success
Jul 9 23:44:49.082559 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 9 23:44:49.082626 kernel: device-mapper: uevent: version 1.0.3
Jul 9 23:44:49.088938 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 9 23:44:49.097505 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jul 9 23:44:49.296875 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 9 23:44:49.305057 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 9 23:44:49.321746 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 9 23:44:49.345750 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 9 23:44:49.345807 kernel: BTRFS: device fsid 0f8170d9-c2a5-4c49-82bc-4e538bfc9b9b devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (842)
Jul 9 23:44:49.351446 kernel: BTRFS info (device dm-0): first mount of filesystem 0f8170d9-c2a5-4c49-82bc-4e538bfc9b9b
Jul 9 23:44:49.356008 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 9 23:44:49.359168 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 9 23:44:49.685629 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 9 23:44:49.689902 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 9 23:44:49.697852 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 9 23:44:49.698652 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 9 23:44:49.721190 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 9 23:44:49.747731 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (877)
Jul 9 23:44:49.754456 kernel: BTRFS info (device sda6): first mount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce
Jul 9 23:44:49.754512 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 9 23:44:49.762964 kernel: BTRFS info (device sda6): using free-space-tree
Jul 9 23:44:49.789550 kernel: BTRFS info (device sda6): last unmount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce
Jul 9 23:44:49.790489 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 9 23:44:49.803766 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 9 23:44:49.848472 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 9 23:44:49.860614 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 9 23:44:49.903244 systemd-networkd[1012]: lo: Link UP
Jul 9 23:44:49.903253 systemd-networkd[1012]: lo: Gained carrier
Jul 9 23:44:49.904720 systemd-networkd[1012]: Enumeration completed
Jul 9 23:44:49.906251 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 9 23:44:49.909639 systemd-networkd[1012]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 23:44:49.909643 systemd-networkd[1012]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 9 23:44:49.914013 systemd[1]: Reached target network.target - Network.
Jul 9 23:44:49.997449 kernel: mlx5_core e145:00:02.0 enP57669s1: Link up
Jul 9 23:44:50.031497 kernel: hv_netvsc 00224876-fa22-0022-4876-fa2200224876 eth0: Data path switched to VF: enP57669s1
Jul 9 23:44:50.031555 systemd-networkd[1012]: enP57669s1: Link UP
Jul 9 23:44:50.031604 systemd-networkd[1012]: eth0: Link UP
Jul 9 23:44:50.031693 systemd-networkd[1012]: eth0: Gained carrier
Jul 9 23:44:50.031702 systemd-networkd[1012]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 23:44:50.042878 systemd-networkd[1012]: enP57669s1: Gained carrier
Jul 9 23:44:50.071497 systemd-networkd[1012]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 9 23:44:50.810619 ignition[958]: Ignition 2.21.0
Jul 9 23:44:50.813205 ignition[958]: Stage: fetch-offline
Jul 9 23:44:50.813322 ignition[958]: no configs at "/usr/lib/ignition/base.d"
Jul 9 23:44:50.817615 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 9 23:44:50.813328 ignition[958]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 9 23:44:50.826890 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 9 23:44:50.813451 ignition[958]: parsed url from cmdline: ""
Jul 9 23:44:50.813454 ignition[958]: no config URL provided
Jul 9 23:44:50.813457 ignition[958]: reading system config file "/usr/lib/ignition/user.ign"
Jul 9 23:44:50.813463 ignition[958]: no config at "/usr/lib/ignition/user.ign"
Jul 9 23:44:50.813467 ignition[958]: failed to fetch config: resource requires networking
Jul 9 23:44:50.813616 ignition[958]: Ignition finished successfully
Jul 9 23:44:50.856915 ignition[1023]: Ignition 2.21.0
Jul 9 23:44:50.856933 ignition[1023]: Stage: fetch
Jul 9 23:44:50.857089 ignition[1023]: no configs at "/usr/lib/ignition/base.d"
Jul 9 23:44:50.857096 ignition[1023]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 9 23:44:50.857159 ignition[1023]: parsed url from cmdline: ""
Jul 9 23:44:50.857162 ignition[1023]: no config URL provided
Jul 9 23:44:50.857165 ignition[1023]: reading system config file "/usr/lib/ignition/user.ign"
Jul 9 23:44:50.857170 ignition[1023]: no config at "/usr/lib/ignition/user.ign"
Jul 9 23:44:50.857203 ignition[1023]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jul 9 23:44:50.935584 ignition[1023]: GET result: OK
Jul 9 23:44:50.935666 ignition[1023]: config has been read from IMDS userdata
Jul 9 23:44:50.935690 ignition[1023]: parsing config with SHA512: c46d940ea3771df566eee109d06532fd74fb33b69d65a0054c48a35507774e8b56080c28a484e02bc7c22b9c536b916ca0824aee84e47a94b7966c2b434225fa
Jul 9 23:44:50.941886 unknown[1023]: fetched base config from "system"
Jul 9 23:44:50.941892 unknown[1023]: fetched base config from "system"
Jul 9 23:44:50.942146 ignition[1023]: fetch: fetch complete
Jul 9 23:44:50.941896 unknown[1023]: fetched user config from "azure"
Jul 9 23:44:50.942150 ignition[1023]: fetch: fetch passed
Jul 9 23:44:50.944130 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 9 23:44:50.942191 ignition[1023]: Ignition finished successfully
Jul 9 23:44:50.954570 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 9 23:44:50.989267 ignition[1030]: Ignition 2.21.0
Jul 9 23:44:50.989283 ignition[1030]: Stage: kargs
Jul 9 23:44:50.989478 ignition[1030]: no configs at "/usr/lib/ignition/base.d"
Jul 9 23:44:50.995131 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 9 23:44:50.989485 ignition[1030]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 9 23:44:51.006582 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 9 23:44:50.990791 ignition[1030]: kargs: kargs passed
Jul 9 23:44:50.990848 ignition[1030]: Ignition finished successfully
Jul 9 23:44:51.033686 ignition[1036]: Ignition 2.21.0
Jul 9 23:44:51.033698 ignition[1036]: Stage: disks
Jul 9 23:44:51.037879 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 9 23:44:51.034142 ignition[1036]: no configs at "/usr/lib/ignition/base.d"
Jul 9 23:44:51.045550 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 9 23:44:51.034152 ignition[1036]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 9 23:44:51.054746 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 9 23:44:51.035249 ignition[1036]: disks: disks passed
Jul 9 23:44:51.065389 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 9 23:44:51.035301 ignition[1036]: Ignition finished successfully
Jul 9 23:44:51.076461 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 9 23:44:51.086052 systemd[1]: Reached target basic.target - Basic System.
Jul 9 23:44:51.097396 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 9 23:44:51.166662 systemd-fsck[1044]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Jul 9 23:44:51.173730 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 9 23:44:51.184120 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 9 23:44:51.385443 kernel: EXT4-fs (sda9): mounted filesystem 961fd3ec-635c-4a87-8aef-ca8f12cd8be8 r/w with ordered data mode. Quota mode: none.
Jul 9 23:44:51.385954 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 9 23:44:51.390382 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 9 23:44:51.413940 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 9 23:44:51.432029 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 9 23:44:51.448461 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1058)
Jul 9 23:44:51.459944 kernel: BTRFS info (device sda6): first mount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce
Jul 9 23:44:51.459993 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 9 23:44:51.462700 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 9 23:44:51.470770 kernel: BTRFS info (device sda6): using free-space-tree
Jul 9 23:44:51.476120 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 9 23:44:51.476162 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 9 23:44:51.483117 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 9 23:44:51.499043 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 9 23:44:51.507703 systemd-networkd[1012]: enP57669s1: Gained IPv6LL
Jul 9 23:44:51.509789 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 9 23:44:51.758602 systemd-networkd[1012]: eth0: Gained IPv6LL
Jul 9 23:44:51.910994 coreos-metadata[1060]: Jul 09 23:44:51.910 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 9 23:44:51.922293 coreos-metadata[1060]: Jul 09 23:44:51.922 INFO Fetch successful
Jul 9 23:44:51.922293 coreos-metadata[1060]: Jul 09 23:44:51.922 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jul 9 23:44:51.936040 coreos-metadata[1060]: Jul 09 23:44:51.935 INFO Fetch successful
Jul 9 23:44:51.950842 coreos-metadata[1060]: Jul 09 23:44:51.950 INFO wrote hostname ci-4344.1.1-n-76bacae427 to /sysroot/etc/hostname
Jul 9 23:44:51.958404 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 9 23:44:52.178397 initrd-setup-root[1089]: cut: /sysroot/etc/passwd: No such file or directory
Jul 9 23:44:52.199680 initrd-setup-root[1096]: cut: /sysroot/etc/group: No such file or directory
Jul 9 23:44:52.206449 initrd-setup-root[1103]: cut: /sysroot/etc/shadow: No such file or directory
Jul 9 23:44:52.214451 initrd-setup-root[1110]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 9 23:44:52.963863 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 9 23:44:52.970374 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 9 23:44:52.974974 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 9 23:44:52.999856 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 9 23:44:53.011447 kernel: BTRFS info (device sda6): last unmount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce
Jul 9 23:44:53.023718 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 9 23:44:53.034974 ignition[1181]: INFO : Ignition 2.21.0
Jul 9 23:44:53.034974 ignition[1181]: INFO : Stage: mount
Jul 9 23:44:53.042035 ignition[1181]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 9 23:44:53.042035 ignition[1181]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 9 23:44:53.042035 ignition[1181]: INFO : mount: mount passed
Jul 9 23:44:53.042035 ignition[1181]: INFO : Ignition finished successfully
Jul 9 23:44:53.042614 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 9 23:44:53.053743 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 9 23:44:53.080555 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 9 23:44:53.103491 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1190)
Jul 9 23:44:53.113635 kernel: BTRFS info (device sda6): first mount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce
Jul 9 23:44:53.113671 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 9 23:44:53.116898 kernel: BTRFS info (device sda6): using free-space-tree
Jul 9 23:44:53.119372 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 9 23:44:53.143586 ignition[1208]: INFO : Ignition 2.21.0
Jul 9 23:44:53.147583 ignition[1208]: INFO : Stage: files
Jul 9 23:44:53.147583 ignition[1208]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 9 23:44:53.147583 ignition[1208]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 9 23:44:53.147583 ignition[1208]: DEBUG : files: compiled without relabeling support, skipping
Jul 9 23:44:53.166799 ignition[1208]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 9 23:44:53.166799 ignition[1208]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 9 23:44:53.219193 ignition[1208]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 9 23:44:53.225582 ignition[1208]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 9 23:44:53.225582 ignition[1208]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 9 23:44:53.219657 unknown[1208]: wrote ssh authorized keys file for user: core
Jul 9 23:44:53.241779 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 9 23:44:53.241779 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jul 9 23:44:53.269293 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 9 23:44:53.378029 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 9 23:44:53.378029 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 9 23:44:53.378029 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 9 23:44:53.816547 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 9 23:44:53.885819 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 9 23:44:53.885819 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 9 23:44:53.902822 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 9 23:44:53.902822 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 9 23:44:53.902822 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 9 23:44:53.902822 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 9 23:44:53.902822 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 9 23:44:53.902822 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 9 23:44:53.902822 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 9 23:44:53.964770 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 9 23:44:53.964770 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 9 23:44:53.964770 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 9 23:44:53.964770 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 9 23:44:53.964770 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 9 23:44:53.964770 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jul 9 23:44:54.536557 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 9 23:44:54.701659 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 9 23:44:54.701659 ignition[1208]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 9 23:44:54.718350 ignition[1208]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 9 23:44:54.727178 ignition[1208]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 9 23:44:54.727178 ignition[1208]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 9 23:44:54.742973 ignition[1208]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jul 9 23:44:54.742973 ignition[1208]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jul 9 23:44:54.742973 ignition[1208]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 9 23:44:54.742973 ignition[1208]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 9 23:44:54.742973 ignition[1208]: INFO : files: files passed
Jul 9 23:44:54.742973 ignition[1208]: INFO : Ignition finished successfully
Jul 9 23:44:54.737642 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 9 23:44:54.748684 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 9 23:44:54.781446 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 9 23:44:54.798916 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 9 23:44:54.799003 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 9 23:44:54.825591 initrd-setup-root-after-ignition[1237]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 9 23:44:54.825591 initrd-setup-root-after-ignition[1237]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 9 23:44:54.839663 initrd-setup-root-after-ignition[1241]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 9 23:44:54.840063 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 9 23:44:54.852721 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 9 23:44:54.857851 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 9 23:44:54.904968 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 9 23:44:54.905066 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 9 23:44:54.914792 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 9 23:44:54.925029 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 9 23:44:54.933399 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 9 23:44:54.934154 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 9 23:44:54.969837 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 9 23:44:54.976613 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 9 23:44:55.000940 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 9 23:44:55.006239 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 9 23:44:55.015461 systemd[1]: Stopped target timers.target - Timer Units.
Jul 9 23:44:55.023926 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 9 23:44:55.024036 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 9 23:44:55.038403 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 9 23:44:55.042758 systemd[1]: Stopped target basic.target - Basic System.
Jul 9 23:44:55.052281 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 9 23:44:55.061848 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 9 23:44:55.072048 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 9 23:44:55.082125 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 9 23:44:55.091662 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 9 23:44:55.102661 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 9 23:44:55.113187 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 9 23:44:55.123025 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 9 23:44:55.134858 systemd[1]: Stopped target swap.target - Swaps.
Jul 9 23:44:55.143029 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 9 23:44:55.143141 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 9 23:44:55.155330 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 9 23:44:55.161957 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 9 23:44:55.171625 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 9 23:44:55.176694 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 9 23:44:55.182615 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 9 23:44:55.182716 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 9 23:44:55.197206 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 9 23:44:55.197293 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 9 23:44:55.203127 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 9 23:44:55.203202 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 9 23:44:55.212045 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 9 23:44:55.277002 ignition[1261]: INFO : Ignition 2.21.0
Jul 9 23:44:55.277002 ignition[1261]: INFO : Stage: umount
Jul 9 23:44:55.277002 ignition[1261]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 9 23:44:55.277002 ignition[1261]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 9 23:44:55.277002 ignition[1261]: INFO : umount: umount passed
Jul 9 23:44:55.277002 ignition[1261]: INFO : Ignition finished successfully
Jul 9 23:44:55.212115 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 9 23:44:55.224798 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 9 23:44:55.252261 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 9 23:44:55.264813 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 9 23:44:55.264957 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 9 23:44:55.271922 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 9 23:44:55.272010 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 9 23:44:55.294230 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 9 23:44:55.295065 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 9 23:44:55.295163 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 9 23:44:55.306750 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 9 23:44:55.308467 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 9 23:44:55.321623 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 9 23:44:55.321736 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 9 23:44:55.331363 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 9 23:44:55.331526 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 9 23:44:55.341069 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 9 23:44:55.341119 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 9 23:44:55.350050 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 9 23:44:55.350101 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 9 23:44:55.358785 systemd[1]: Stopped target network.target - Network. Jul 9 23:44:55.368423 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 9 23:44:55.368504 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 9 23:44:55.378223 systemd[1]: Stopped target paths.target - Path Units. Jul 9 23:44:55.386387 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 9 23:44:55.390990 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 9 23:44:55.397110 systemd[1]: Stopped target slices.target - Slice Units. 
Jul 9 23:44:55.404724 systemd[1]: Stopped target sockets.target - Socket Units. Jul 9 23:44:55.409307 systemd[1]: iscsid.socket: Deactivated successfully. Jul 9 23:44:55.409361 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 9 23:44:55.417903 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 9 23:44:55.417940 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 9 23:44:55.426723 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 9 23:44:55.426776 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 9 23:44:55.437758 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 9 23:44:55.437791 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 9 23:44:55.448698 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 9 23:44:55.448730 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 9 23:44:55.458573 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 9 23:44:55.466753 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 9 23:44:55.488672 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 9 23:44:55.488821 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 9 23:44:55.503828 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 9 23:44:55.718180 kernel: hv_netvsc 00224876-fa22-0022-4876-fa2200224876 eth0: Data path switched from VF: enP57669s1 Jul 9 23:44:55.504055 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 9 23:44:55.507104 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 9 23:44:55.516853 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 9 23:44:55.517576 systemd[1]: Stopped target network-pre.target - Preparation for Network. 
Jul 9 23:44:55.526457 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 9 23:44:55.526503 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 9 23:44:55.537826 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 9 23:44:55.552641 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 9 23:44:55.552721 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 9 23:44:55.569425 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 9 23:44:55.569529 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 9 23:44:55.582446 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 9 23:44:55.582493 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 9 23:44:55.588062 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 9 23:44:55.588096 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 9 23:44:55.603605 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 23:44:55.613689 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 9 23:44:55.613750 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 9 23:44:55.651014 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 9 23:44:55.651179 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 9 23:44:55.661928 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 9 23:44:55.661960 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 9 23:44:55.672129 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 9 23:44:55.672157 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jul 9 23:44:55.688666 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 9 23:44:55.688726 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 9 23:44:55.705952 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 9 23:44:55.706010 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 9 23:44:55.718226 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 9 23:44:55.718283 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 9 23:44:55.736710 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 9 23:44:55.753614 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 9 23:44:55.753710 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 9 23:44:55.980792 systemd-journald[224]: Received SIGTERM from PID 1 (systemd). Jul 9 23:44:55.775368 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 9 23:44:55.775652 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 9 23:44:55.785404 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 9 23:44:55.785478 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 9 23:44:55.799392 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 9 23:44:55.799453 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 9 23:44:55.806040 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 9 23:44:55.806094 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 23:44:55.822626 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. 
Jul 9 23:44:55.822679 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jul 9 23:44:55.822701 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 9 23:44:55.822725 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 9 23:44:55.823099 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 9 23:44:55.823211 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 9 23:44:55.834651 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 9 23:44:55.834740 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 9 23:44:55.846607 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 9 23:44:55.857937 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 9 23:44:55.887866 systemd[1]: Switching root. Jul 9 23:44:56.082669 systemd-journald[224]: Journal stopped Jul 9 23:45:00.041238 kernel: SELinux: policy capability network_peer_controls=1 Jul 9 23:45:00.041260 kernel: SELinux: policy capability open_perms=1 Jul 9 23:45:00.041267 kernel: SELinux: policy capability extended_socket_class=1 Jul 9 23:45:00.041273 kernel: SELinux: policy capability always_check_network=0 Jul 9 23:45:00.041279 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 9 23:45:00.041284 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 9 23:45:00.041291 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 9 23:45:00.041296 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 9 23:45:00.041301 kernel: SELinux: policy capability userspace_initial_context=0 Jul 9 23:45:00.041307 systemd[1]: Successfully loaded SELinux policy in 145.475ms. 
Jul 9 23:45:00.041315 kernel: audit: type=1403 audit(1752104697.003:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 9 23:45:00.041321 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.801ms. Jul 9 23:45:00.041327 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 9 23:45:00.041335 systemd[1]: Detected virtualization microsoft. Jul 9 23:45:00.041341 systemd[1]: Detected architecture arm64. Jul 9 23:45:00.041348 systemd[1]: Detected first boot. Jul 9 23:45:00.041354 systemd[1]: Hostname set to . Jul 9 23:45:00.041360 systemd[1]: Initializing machine ID from random generator. Jul 9 23:45:00.041366 zram_generator::config[1304]: No configuration found. Jul 9 23:45:00.041372 kernel: NET: Registered PF_VSOCK protocol family Jul 9 23:45:00.041378 systemd[1]: Populated /etc with preset unit settings. Jul 9 23:45:00.041384 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 9 23:45:00.041391 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 9 23:45:00.041397 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 9 23:45:00.041403 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 9 23:45:00.041409 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 9 23:45:00.041415 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 9 23:45:00.041421 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 9 23:45:00.041445 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Jul 9 23:45:00.041453 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 9 23:45:00.041459 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 9 23:45:00.041465 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 9 23:45:00.041471 systemd[1]: Created slice user.slice - User and Session Slice. Jul 9 23:45:00.041477 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 9 23:45:00.041484 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 9 23:45:00.041490 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 9 23:45:00.041496 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 9 23:45:00.041502 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 9 23:45:00.041510 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 9 23:45:00.041518 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 9 23:45:00.041526 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 9 23:45:00.041532 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 9 23:45:00.041538 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 9 23:45:00.041544 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 9 23:45:00.041550 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 9 23:45:00.041557 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 9 23:45:00.041564 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jul 9 23:45:00.041570 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 9 23:45:00.041576 systemd[1]: Reached target slices.target - Slice Units. Jul 9 23:45:00.041582 systemd[1]: Reached target swap.target - Swaps. Jul 9 23:45:00.041588 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 9 23:45:00.041594 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 9 23:45:00.041601 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 9 23:45:00.041607 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 9 23:45:00.041613 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 9 23:45:00.041620 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 9 23:45:00.041626 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 9 23:45:00.041632 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 9 23:45:00.041639 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 9 23:45:00.041646 systemd[1]: Mounting media.mount - External Media Directory... Jul 9 23:45:00.041652 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 9 23:45:00.041658 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 9 23:45:00.041664 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 9 23:45:00.041670 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 9 23:45:00.041677 systemd[1]: Reached target machines.target - Containers. Jul 9 23:45:00.041683 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jul 9 23:45:00.041690 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 9 23:45:00.041696 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 9 23:45:00.041702 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 9 23:45:00.041709 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 9 23:45:00.041715 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 9 23:45:00.041721 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 9 23:45:00.041727 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 9 23:45:00.041733 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 9 23:45:00.041740 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 9 23:45:00.041746 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 9 23:45:00.041753 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 9 23:45:00.041760 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 9 23:45:00.041766 systemd[1]: Stopped systemd-fsck-usr.service. Jul 9 23:45:00.041771 kernel: fuse: init (API version 7.41) Jul 9 23:45:00.041778 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 9 23:45:00.041784 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 9 23:45:00.041790 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 9 23:45:00.041797 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jul 9 23:45:00.041803 kernel: loop: module loaded Jul 9 23:45:00.041809 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 9 23:45:00.041814 kernel: ACPI: bus type drm_connector registered Jul 9 23:45:00.041821 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 9 23:45:00.041842 systemd-journald[1408]: Collecting audit messages is disabled. Jul 9 23:45:00.041857 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 9 23:45:00.041865 systemd-journald[1408]: Journal started Jul 9 23:45:00.041879 systemd-journald[1408]: Runtime Journal (/run/log/journal/740179392d474f96b586da8d9cd4aa94) is 8M, max 78.5M, 70.5M free. Jul 9 23:44:59.273881 systemd[1]: Queued start job for default target multi-user.target. Jul 9 23:44:59.279888 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 9 23:44:59.280274 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 9 23:44:59.280571 systemd[1]: systemd-journald.service: Consumed 2.829s CPU time. Jul 9 23:45:00.060952 systemd[1]: verity-setup.service: Deactivated successfully. Jul 9 23:45:00.061019 systemd[1]: Stopped verity-setup.service. Jul 9 23:45:00.076072 systemd[1]: Started systemd-journald.service - Journal Service. Jul 9 23:45:00.077275 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 9 23:45:00.083702 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 9 23:45:00.090852 systemd[1]: Mounted media.mount - External Media Directory. Jul 9 23:45:00.095772 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 9 23:45:00.100781 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 9 23:45:00.105887 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 9 23:45:00.110386 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Jul 9 23:45:00.115879 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 9 23:45:00.122150 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 9 23:45:00.122362 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 9 23:45:00.128011 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 9 23:45:00.128214 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 9 23:45:00.133870 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 9 23:45:00.134046 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 9 23:45:00.139840 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 9 23:45:00.140042 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 9 23:45:00.146129 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 9 23:45:00.146318 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 9 23:45:00.153033 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 9 23:45:00.153231 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 9 23:45:00.158980 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 9 23:45:00.164522 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 9 23:45:00.170595 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 9 23:45:00.176672 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 9 23:45:00.182895 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 9 23:45:00.197586 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 9 23:45:00.204321 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Jul 9 23:45:00.214513 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 9 23:45:00.219825 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 9 23:45:00.219856 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 9 23:45:00.225285 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 9 23:45:00.232474 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 9 23:45:00.238952 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 9 23:45:00.255481 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 9 23:45:00.268592 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 9 23:45:00.273741 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 9 23:45:00.276570 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 9 23:45:00.282130 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 9 23:45:00.288195 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 9 23:45:00.294319 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 9 23:45:00.301700 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 9 23:45:00.312213 systemd-journald[1408]: Time spent on flushing to /var/log/journal/740179392d474f96b586da8d9cd4aa94 is 47.105ms for 943 entries. Jul 9 23:45:00.312213 systemd-journald[1408]: System Journal (/var/log/journal/740179392d474f96b586da8d9cd4aa94) is 11.8M, max 2.6G, 2.6G free. 
Jul 9 23:45:00.486969 systemd-journald[1408]: Received client request to flush runtime journal. Jul 9 23:45:00.487011 systemd-journald[1408]: /var/log/journal/740179392d474f96b586da8d9cd4aa94/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Jul 9 23:45:00.487028 kernel: loop0: detected capacity change from 0 to 28936 Jul 9 23:45:00.487043 systemd-journald[1408]: Rotating system journal. Jul 9 23:45:00.312871 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 9 23:45:00.324633 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 9 23:45:00.336823 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 9 23:45:00.343222 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 9 23:45:00.358399 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 9 23:45:00.411209 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 9 23:45:00.473020 systemd-tmpfiles[1445]: ACLs are not supported, ignoring. Jul 9 23:45:00.473028 systemd-tmpfiles[1445]: ACLs are not supported, ignoring. Jul 9 23:45:00.478502 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 9 23:45:00.487237 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 9 23:45:00.492926 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 9 23:45:00.506287 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 9 23:45:00.507465 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 9 23:45:00.823466 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 9 23:45:00.857208 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Jul 9 23:45:00.863990 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 9 23:45:00.884856 systemd-tmpfiles[1464]: ACLs are not supported, ignoring. Jul 9 23:45:00.884872 systemd-tmpfiles[1464]: ACLs are not supported, ignoring. Jul 9 23:45:00.889477 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 9 23:45:00.969458 kernel: loop1: detected capacity change from 0 to 138376 Jul 9 23:45:01.401484 kernel: loop2: detected capacity change from 0 to 107312 Jul 9 23:45:01.494075 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 9 23:45:01.500834 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 23:45:01.524988 systemd-udevd[1470]: Using default interface naming scheme 'v255'. Jul 9 23:45:01.799474 kernel: loop3: detected capacity change from 0 to 211168 Jul 9 23:45:01.801468 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 9 23:45:01.812515 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 9 23:45:01.827465 kernel: loop4: detected capacity change from 0 to 28936 Jul 9 23:45:01.843463 kernel: loop5: detected capacity change from 0 to 138376 Jul 9 23:45:01.855466 kernel: loop6: detected capacity change from 0 to 107312 Jul 9 23:45:01.864448 kernel: loop7: detected capacity change from 0 to 211168 Jul 9 23:45:01.868405 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 9 23:45:01.876135 (sd-merge)[1499]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jul 9 23:45:01.880632 (sd-merge)[1499]: Merged extensions into '/usr'. Jul 9 23:45:01.942847 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 9 23:45:01.946616 systemd[1]: Reload requested from client PID 1443 ('systemd-sysext') (unit systemd-sysext.service)... 
Jul 9 23:45:01.946633 systemd[1]: Reloading... Jul 9 23:45:02.046588 zram_generator::config[1543]: No configuration found. Jul 9 23:45:02.083097 kernel: mousedev: PS/2 mouse device common for all mice Jul 9 23:45:02.094916 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#173 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 9 23:45:02.156521 kernel: hv_vmbus: registering driver hv_balloon Jul 9 23:45:02.168650 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jul 9 23:45:02.168751 kernel: hv_vmbus: registering driver hyperv_fb Jul 9 23:45:02.168776 kernel: hv_balloon: Memory hot add disabled on ARM64 Jul 9 23:45:02.168837 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jul 9 23:45:02.178723 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jul 9 23:45:02.185155 kernel: Console: switching to colour dummy device 80x25 Jul 9 23:45:02.196456 kernel: Console: switching to colour frame buffer device 128x48 Jul 9 23:45:02.228699 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 23:45:02.315760 kernel: MACsec IEEE 802.1AE Jul 9 23:45:02.318212 systemd[1]: Reloading finished in 371 ms. Jul 9 23:45:02.333323 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 9 23:45:02.338561 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 9 23:45:02.347047 systemd-networkd[1483]: lo: Link UP Jul 9 23:45:02.347057 systemd-networkd[1483]: lo: Gained carrier Jul 9 23:45:02.350386 systemd-networkd[1483]: Enumeration completed Jul 9 23:45:02.351391 systemd-networkd[1483]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 23:45:02.351420 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jul 9 23:45:02.352411 systemd-networkd[1483]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 9 23:45:02.389164 systemd[1]: Starting ensure-sysext.service... Jul 9 23:45:02.418736 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 9 23:45:02.428961 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 9 23:45:02.439123 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 9 23:45:02.449450 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 23:45:02.479672 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 9 23:45:02.510640 systemd[1]: Reload requested from client PID 1658 ('systemctl') (unit ensure-sysext.service)... Jul 9 23:45:02.510662 systemd[1]: Reloading... Jul 9 23:45:02.530735 systemd-tmpfiles[1676]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 9 23:45:02.530757 systemd-tmpfiles[1676]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 9 23:45:02.531470 kernel: mlx5_core e145:00:02.0 enP57669s1: Link up Jul 9 23:45:02.531408 systemd-tmpfiles[1676]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 9 23:45:02.531916 systemd-tmpfiles[1676]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 9 23:45:02.532503 systemd-tmpfiles[1676]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 9 23:45:02.532667 systemd-tmpfiles[1676]: ACLs are not supported, ignoring. Jul 9 23:45:02.532696 systemd-tmpfiles[1676]: ACLs are not supported, ignoring. Jul 9 23:45:02.565665 systemd-tmpfiles[1676]: Detected autofs mount point /boot during canonicalization of boot. 
Jul 9 23:45:02.566386 systemd-tmpfiles[1676]: Skipping /boot Jul 9 23:45:02.569092 kernel: hv_netvsc 00224876-fa22-0022-4876-fa2200224876 eth0: Data path switched to VF: enP57669s1 Jul 9 23:45:02.568578 systemd-networkd[1483]: enP57669s1: Link UP Jul 9 23:45:02.568686 systemd-networkd[1483]: eth0: Link UP Jul 9 23:45:02.568689 systemd-networkd[1483]: eth0: Gained carrier Jul 9 23:45:02.569193 systemd-networkd[1483]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 23:45:02.575894 systemd-networkd[1483]: enP57669s1: Gained carrier Jul 9 23:45:02.586467 zram_generator::config[1720]: No configuration found. Jul 9 23:45:02.587947 systemd-tmpfiles[1676]: Detected autofs mount point /boot during canonicalization of boot. Jul 9 23:45:02.587958 systemd-tmpfiles[1676]: Skipping /boot Jul 9 23:45:02.590880 systemd-networkd[1483]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 9 23:45:02.681513 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 23:45:02.764532 systemd[1]: Reloading finished in 253 ms. Jul 9 23:45:02.786858 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 9 23:45:02.794575 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 9 23:45:02.819710 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 9 23:45:02.833672 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 9 23:45:02.838721 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 9 23:45:02.847838 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jul 9 23:45:02.856789 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 9 23:45:02.866725 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 9 23:45:02.873415 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 9 23:45:02.874950 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 9 23:45:02.880696 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 9 23:45:02.882372 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 9 23:45:02.895026 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 9 23:45:02.906739 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 9 23:45:02.918606 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 9 23:45:02.919149 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 9 23:45:02.928323 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 9 23:45:02.928542 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 9 23:45:02.935974 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 9 23:45:02.936468 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 9 23:45:02.943002 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 9 23:45:02.943446 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 23:45:02.954464 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Jul 9 23:45:02.960803 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 9 23:45:02.979083 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 9 23:45:02.987103 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 9 23:45:02.989714 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 9 23:45:03.002897 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 9 23:45:03.013587 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 9 23:45:03.019693 augenrules[1819]: No rules Jul 9 23:45:03.025675 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 9 23:45:03.031231 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 9 23:45:03.031348 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 9 23:45:03.031577 systemd[1]: Reached target time-set.target - System Time Set. Jul 9 23:45:03.037414 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 23:45:03.045374 systemd[1]: audit-rules.service: Deactivated successfully. Jul 9 23:45:03.045581 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 9 23:45:03.050691 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 9 23:45:03.050832 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 9 23:45:03.058544 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 9 23:45:03.058699 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jul 9 23:45:03.065280 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 9 23:45:03.065420 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 9 23:45:03.070917 systemd-resolved[1794]: Positive Trust Anchors: Jul 9 23:45:03.071213 systemd-resolved[1794]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 9 23:45:03.071292 systemd-resolved[1794]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 9 23:45:03.074117 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 9 23:45:03.075480 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 9 23:45:03.081173 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 9 23:45:03.089459 systemd-resolved[1794]: Using system hostname 'ci-4344.1.1-n-76bacae427'. Jul 9 23:45:03.092986 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 9 23:45:03.098735 systemd[1]: Finished ensure-sysext.service. Jul 9 23:45:03.105900 systemd[1]: Reached target network.target - Network. Jul 9 23:45:03.109935 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 9 23:45:03.115250 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 9 23:45:03.115320 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 9 23:45:03.176867 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 23:45:03.589223 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 9 23:45:03.595951 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 9 23:45:04.174550 systemd-networkd[1483]: enP57669s1: Gained IPv6LL Jul 9 23:45:04.558672 systemd-networkd[1483]: eth0: Gained IPv6LL Jul 9 23:45:04.560855 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 9 23:45:04.567035 systemd[1]: Reached target network-online.target - Network is Online. Jul 9 23:45:06.872407 ldconfig[1438]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 9 23:45:06.883973 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 9 23:45:06.891393 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 9 23:45:06.907779 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 9 23:45:06.913320 systemd[1]: Reached target sysinit.target - System Initialization. Jul 9 23:45:06.918640 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 9 23:45:06.924748 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 9 23:45:06.930588 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 9 23:45:06.935850 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Jul 9 23:45:06.941950 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 9 23:45:06.948045 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 9 23:45:06.948070 systemd[1]: Reached target paths.target - Path Units. Jul 9 23:45:06.952808 systemd[1]: Reached target timers.target - Timer Units. Jul 9 23:45:06.958079 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 9 23:45:06.965767 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 9 23:45:06.972319 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 9 23:45:06.978066 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 9 23:45:06.984522 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 9 23:45:07.000202 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 9 23:45:07.006246 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 9 23:45:07.012350 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 9 23:45:07.017763 systemd[1]: Reached target sockets.target - Socket Units. Jul 9 23:45:07.022646 systemd[1]: Reached target basic.target - Basic System. Jul 9 23:45:07.026946 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 9 23:45:07.026966 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 9 23:45:07.029185 systemd[1]: Starting chronyd.service - NTP client/server... Jul 9 23:45:07.044549 systemd[1]: Starting containerd.service - containerd container runtime... Jul 9 23:45:07.052573 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
Jul 9 23:45:07.065223 (chronyd)[1846]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jul 9 23:45:07.065232 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 9 23:45:07.070337 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 9 23:45:07.085862 chronyd[1855]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jul 9 23:45:07.088307 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 9 23:45:07.102310 jq[1856]: false Jul 9 23:45:07.097835 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 9 23:45:07.102736 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 9 23:45:07.103682 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jul 9 23:45:07.108786 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jul 9 23:45:07.110562 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:45:07.119611 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 9 23:45:07.124507 KVP[1858]: KVP starting; pid is:1858 Jul 9 23:45:07.129739 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 9 23:45:07.130390 extend-filesystems[1857]: Found /dev/sda6 Jul 9 23:45:07.141346 kernel: hv_utils: KVP IC version 4.0 Jul 9 23:45:07.135476 KVP[1858]: KVP LIC Version: 3.1 Jul 9 23:45:07.135711 chronyd[1855]: Timezone right/UTC failed leap second check, ignoring Jul 9 23:45:07.135915 chronyd[1855]: Loaded seccomp filter (level 2) Jul 9 23:45:07.141845 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Jul 9 23:45:07.152328 extend-filesystems[1857]: Found /dev/sda9 Jul 9 23:45:07.155810 extend-filesystems[1857]: Checking size of /dev/sda9 Jul 9 23:45:07.163236 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 9 23:45:07.173517 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 9 23:45:07.181796 extend-filesystems[1857]: Old size kept for /dev/sda9 Jul 9 23:45:07.193379 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 9 23:45:07.201122 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 9 23:45:07.204678 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 9 23:45:07.205511 systemd[1]: Starting update-engine.service - Update Engine... Jul 9 23:45:07.213669 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 9 23:45:07.221925 systemd[1]: Started chronyd.service - NTP client/server. Jul 9 23:45:07.228803 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 9 23:45:07.237140 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 9 23:45:07.237317 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 9 23:45:07.237615 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 9 23:45:07.238507 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 9 23:45:07.245816 jq[1889]: true Jul 9 23:45:07.246967 systemd[1]: motdgen.service: Deactivated successfully. Jul 9 23:45:07.247147 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 9 23:45:07.256806 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jul 9 23:45:07.266721 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 9 23:45:07.266904 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 9 23:45:07.292106 update_engine[1888]: I20250709 23:45:07.291793 1888 main.cc:92] Flatcar Update Engine starting Jul 9 23:45:07.298867 jq[1910]: true Jul 9 23:45:07.306805 (ntainerd)[1912]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 9 23:45:07.312519 systemd-logind[1882]: New seat seat0. Jul 9 23:45:07.315511 systemd-logind[1882]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 9 23:45:07.315737 systemd[1]: Started systemd-logind.service - User Login Management. Jul 9 23:45:07.359732 tar[1908]: linux-arm64/LICENSE Jul 9 23:45:07.360161 tar[1908]: linux-arm64/helm Jul 9 23:45:07.459661 dbus-daemon[1852]: [system] SELinux support is enabled Jul 9 23:45:07.459871 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 9 23:45:07.466797 update_engine[1888]: I20250709 23:45:07.466374 1888 update_check_scheduler.cc:74] Next update check in 6m17s Jul 9 23:45:07.470345 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 9 23:45:07.470886 dbus-daemon[1852]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 9 23:45:07.470372 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 9 23:45:07.480333 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 9 23:45:07.480355 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jul 9 23:45:07.489858 systemd[1]: Started update-engine.service - Update Engine. Jul 9 23:45:07.499953 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 9 23:45:07.521379 bash[1991]: Updated "/home/core/.ssh/authorized_keys" Jul 9 23:45:07.528145 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 9 23:45:07.540951 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 9 23:45:07.557165 coreos-metadata[1848]: Jul 09 23:45:07.556 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 9 23:45:07.563735 coreos-metadata[1848]: Jul 09 23:45:07.563 INFO Fetch successful Jul 9 23:45:07.563735 coreos-metadata[1848]: Jul 09 23:45:07.563 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jul 9 23:45:07.617695 coreos-metadata[1848]: Jul 09 23:45:07.617 INFO Fetch successful Jul 9 23:45:07.618559 coreos-metadata[1848]: Jul 09 23:45:07.617 INFO Fetching http://168.63.129.16/machine/eed29e0e-4313-4046-a1e7-cca7155d1df5/be405171%2Df999%2D49c0%2Db00b%2D1a3d1f2be099.%5Fci%2D4344.1.1%2Dn%2D76bacae427?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jul 9 23:45:07.620503 coreos-metadata[1848]: Jul 09 23:45:07.620 INFO Fetch successful Jul 9 23:45:07.620662 coreos-metadata[1848]: Jul 09 23:45:07.620 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jul 9 23:45:07.632293 coreos-metadata[1848]: Jul 09 23:45:07.632 INFO Fetch successful Jul 9 23:45:07.671088 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 9 23:45:07.680156 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jul 9 23:45:07.757814 locksmithd[1994]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 9 23:45:07.846935 containerd[1912]: time="2025-07-09T23:45:07Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 9 23:45:07.857977 containerd[1912]: time="2025-07-09T23:45:07.856843088Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 9 23:45:07.875085 containerd[1912]: time="2025-07-09T23:45:07.875043336Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.216µs" Jul 9 23:45:07.875085 containerd[1912]: time="2025-07-09T23:45:07.875078888Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 9 23:45:07.875210 containerd[1912]: time="2025-07-09T23:45:07.875099280Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 9 23:45:07.875250 containerd[1912]: time="2025-07-09T23:45:07.875233888Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 9 23:45:07.875273 containerd[1912]: time="2025-07-09T23:45:07.875249400Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 9 23:45:07.875273 containerd[1912]: time="2025-07-09T23:45:07.875270416Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 9 23:45:07.875325 containerd[1912]: time="2025-07-09T23:45:07.875313624Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 9 23:45:07.875325 containerd[1912]: time="2025-07-09T23:45:07.875322160Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 9 23:45:07.876659 sshd_keygen[1891]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 9 23:45:07.878816 containerd[1912]: time="2025-07-09T23:45:07.878780416Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 9 23:45:07.878816 containerd[1912]: time="2025-07-09T23:45:07.878811240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 9 23:45:07.878894 containerd[1912]: time="2025-07-09T23:45:07.878829184Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 9 23:45:07.878894 containerd[1912]: time="2025-07-09T23:45:07.878835008Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 9 23:45:07.879101 containerd[1912]: time="2025-07-09T23:45:07.878930184Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 9 23:45:07.879101 containerd[1912]: time="2025-07-09T23:45:07.879097200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 9 23:45:07.879146 containerd[1912]: time="2025-07-09T23:45:07.879120240Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 9 23:45:07.879146 containerd[1912]: time="2025-07-09T23:45:07.879127296Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 9 23:45:07.879187 
containerd[1912]: time="2025-07-09T23:45:07.879173496Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 9 23:45:07.879883 containerd[1912]: time="2025-07-09T23:45:07.879328272Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 9 23:45:07.879883 containerd[1912]: time="2025-07-09T23:45:07.879386904Z" level=info msg="metadata content store policy set" policy=shared Jul 9 23:45:07.895478 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 9 23:45:07.902989 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 9 23:45:07.908855 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jul 9 23:45:07.923185 containerd[1912]: time="2025-07-09T23:45:07.921003680Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 9 23:45:07.923185 containerd[1912]: time="2025-07-09T23:45:07.921077960Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 9 23:45:07.923185 containerd[1912]: time="2025-07-09T23:45:07.921091600Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 9 23:45:07.923185 containerd[1912]: time="2025-07-09T23:45:07.921100384Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 9 23:45:07.923185 containerd[1912]: time="2025-07-09T23:45:07.921110216Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 9 23:45:07.923185 containerd[1912]: time="2025-07-09T23:45:07.921116920Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 9 23:45:07.923185 containerd[1912]: time="2025-07-09T23:45:07.921125304Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 
Jul 9 23:45:07.923185 containerd[1912]: time="2025-07-09T23:45:07.921133200Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 9 23:45:07.923185 containerd[1912]: time="2025-07-09T23:45:07.921142944Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 9 23:45:07.923185 containerd[1912]: time="2025-07-09T23:45:07.921157840Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 9 23:45:07.923185 containerd[1912]: time="2025-07-09T23:45:07.921164720Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 9 23:45:07.923185 containerd[1912]: time="2025-07-09T23:45:07.921173800Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 9 23:45:07.923185 containerd[1912]: time="2025-07-09T23:45:07.921313736Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 9 23:45:07.923185 containerd[1912]: time="2025-07-09T23:45:07.921329384Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 9 23:45:07.923455 containerd[1912]: time="2025-07-09T23:45:07.921340008Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 9 23:45:07.923455 containerd[1912]: time="2025-07-09T23:45:07.921346968Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 9 23:45:07.923455 containerd[1912]: time="2025-07-09T23:45:07.921354400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 9 23:45:07.923455 containerd[1912]: time="2025-07-09T23:45:07.921362192Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 9 23:45:07.923455 containerd[1912]: 
time="2025-07-09T23:45:07.921373544Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 9 23:45:07.923455 containerd[1912]: time="2025-07-09T23:45:07.921380944Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 9 23:45:07.923455 containerd[1912]: time="2025-07-09T23:45:07.921389520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 9 23:45:07.923455 containerd[1912]: time="2025-07-09T23:45:07.921396016Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 9 23:45:07.923455 containerd[1912]: time="2025-07-09T23:45:07.921403160Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 9 23:45:07.923455 containerd[1912]: time="2025-07-09T23:45:07.921484184Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 9 23:45:07.923455 containerd[1912]: time="2025-07-09T23:45:07.921498072Z" level=info msg="Start snapshots syncer" Jul 9 23:45:07.923455 containerd[1912]: time="2025-07-09T23:45:07.921520296Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 9 23:45:07.923629 containerd[1912]: time="2025-07-09T23:45:07.921691304Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 9 23:45:07.923629 containerd[1912]: time="2025-07-09T23:45:07.921751360Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 9 23:45:07.923705 containerd[1912]: time="2025-07-09T23:45:07.921815112Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 9 23:45:07.923705 containerd[1912]: time="2025-07-09T23:45:07.921918432Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 9 23:45:07.923705 containerd[1912]: time="2025-07-09T23:45:07.921933520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 9 23:45:07.923705 containerd[1912]: time="2025-07-09T23:45:07.921940264Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 9 23:45:07.923705 containerd[1912]: time="2025-07-09T23:45:07.921947480Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 9 23:45:07.923705 containerd[1912]: time="2025-07-09T23:45:07.921955448Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 9 23:45:07.923705 containerd[1912]: time="2025-07-09T23:45:07.921961976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 9 23:45:07.923705 containerd[1912]: time="2025-07-09T23:45:07.921971176Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 9 23:45:07.923705 containerd[1912]: time="2025-07-09T23:45:07.921991072Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 9 23:45:07.923705 containerd[1912]: time="2025-07-09T23:45:07.921998088Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 9 23:45:07.923705 containerd[1912]: time="2025-07-09T23:45:07.922011504Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 9 23:45:07.923705 containerd[1912]: time="2025-07-09T23:45:07.922042992Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 9 23:45:07.923705 containerd[1912]: time="2025-07-09T23:45:07.922053800Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 9 23:45:07.923705 containerd[1912]: time="2025-07-09T23:45:07.922058872Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 9 23:45:07.923863 containerd[1912]: time="2025-07-09T23:45:07.922066504Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 9 23:45:07.923863 containerd[1912]: time="2025-07-09T23:45:07.922071080Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 9 23:45:07.923863 containerd[1912]: time="2025-07-09T23:45:07.922077920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 9 23:45:07.923863 containerd[1912]: time="2025-07-09T23:45:07.922084296Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 9 23:45:07.923863 containerd[1912]: time="2025-07-09T23:45:07.922096576Z" level=info msg="runtime interface created" Jul 9 23:45:07.923863 containerd[1912]: time="2025-07-09T23:45:07.922100224Z" level=info msg="created NRI interface" Jul 9 23:45:07.923863 containerd[1912]: time="2025-07-09T23:45:07.922105712Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 9 23:45:07.923863 containerd[1912]: time="2025-07-09T23:45:07.922113968Z" level=info msg="Connect containerd service" Jul 9 23:45:07.923863 containerd[1912]: time="2025-07-09T23:45:07.922133504Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 9 23:45:07.925702 containerd[1912]: 
time="2025-07-09T23:45:07.924314608Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 9 23:45:07.932224 systemd[1]: issuegen.service: Deactivated successfully. Jul 9 23:45:07.932853 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 9 23:45:07.942577 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jul 9 23:45:07.953798 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 9 23:45:07.991544 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 9 23:45:08.003180 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 9 23:45:08.012468 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 9 23:45:08.021524 systemd[1]: Reached target getty.target - Login Prompts. Jul 9 23:45:08.048057 tar[1908]: linux-arm64/README.md Jul 9 23:45:08.060862 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 9 23:45:08.142717 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:45:08.152057 (kubelet)[2050]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 23:45:08.416463 kubelet[2050]: E0709 23:45:08.416334 2050 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 23:45:08.419738 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 23:45:08.419850 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 9 23:45:08.420134 systemd[1]: kubelet.service: Consumed 559ms CPU time, 255.6M memory peak. Jul 9 23:45:08.463058 containerd[1912]: time="2025-07-09T23:45:08.463004424Z" level=info msg="Start subscribing containerd event" Jul 9 23:45:08.463509 containerd[1912]: time="2025-07-09T23:45:08.463151064Z" level=info msg="Start recovering state" Jul 9 23:45:08.463509 containerd[1912]: time="2025-07-09T23:45:08.463248992Z" level=info msg="Start event monitor" Jul 9 23:45:08.463509 containerd[1912]: time="2025-07-09T23:45:08.463262760Z" level=info msg="Start cni network conf syncer for default" Jul 9 23:45:08.463509 containerd[1912]: time="2025-07-09T23:45:08.463267736Z" level=info msg="Start streaming server" Jul 9 23:45:08.463509 containerd[1912]: time="2025-07-09T23:45:08.463273816Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 9 23:45:08.463509 containerd[1912]: time="2025-07-09T23:45:08.463278944Z" level=info msg="runtime interface starting up..." Jul 9 23:45:08.463509 containerd[1912]: time="2025-07-09T23:45:08.463283264Z" level=info msg="starting plugins..." Jul 9 23:45:08.463509 containerd[1912]: time="2025-07-09T23:45:08.463294712Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 9 23:45:08.463799 containerd[1912]: time="2025-07-09T23:45:08.463777824Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 9 23:45:08.464101 containerd[1912]: time="2025-07-09T23:45:08.464069048Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 9 23:45:08.464145 containerd[1912]: time="2025-07-09T23:45:08.464132416Z" level=info msg="containerd successfully booted in 0.617529s" Jul 9 23:45:08.464276 systemd[1]: Started containerd.service - containerd container runtime. Jul 9 23:45:08.470970 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 9 23:45:08.478521 systemd[1]: Startup finished in 1.741s (kernel) + 11.646s (initrd) + 11.619s (userspace) = 25.007s. 
Jul 9 23:45:08.712465 login[2036]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jul 9 23:45:08.713476 login[2037]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:45:08.731513 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 9 23:45:08.732399 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 9 23:45:08.736859 systemd-logind[1882]: New session 2 of user core. Jul 9 23:45:08.755343 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 9 23:45:08.757140 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 9 23:45:08.783066 (systemd)[2070]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 9 23:45:08.785137 systemd-logind[1882]: New session c1 of user core. Jul 9 23:45:08.936892 systemd[2070]: Queued start job for default target default.target. Jul 9 23:45:08.943217 systemd[2070]: Created slice app.slice - User Application Slice. Jul 9 23:45:08.943897 systemd[2070]: Reached target paths.target - Paths. Jul 9 23:45:08.943948 systemd[2070]: Reached target timers.target - Timers. Jul 9 23:45:08.945037 systemd[2070]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 9 23:45:08.952083 systemd[2070]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 9 23:45:08.952134 systemd[2070]: Reached target sockets.target - Sockets. Jul 9 23:45:08.952170 systemd[2070]: Reached target basic.target - Basic System. Jul 9 23:45:08.952191 systemd[2070]: Reached target default.target - Main User Target. Jul 9 23:45:08.952213 systemd[2070]: Startup finished in 161ms. Jul 9 23:45:08.952475 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 9 23:45:08.958574 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 9 23:45:09.469686 waagent[2032]: 2025-07-09T23:45:09.469607Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jul 9 23:45:09.474075 waagent[2032]: 2025-07-09T23:45:09.474022Z INFO Daemon Daemon OS: flatcar 4344.1.1 Jul 9 23:45:09.478236 waagent[2032]: 2025-07-09T23:45:09.478191Z INFO Daemon Daemon Python: 3.11.12 Jul 9 23:45:09.482111 waagent[2032]: 2025-07-09T23:45:09.482048Z INFO Daemon Daemon Run daemon Jul 9 23:45:09.485823 waagent[2032]: 2025-07-09T23:45:09.485603Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4344.1.1' Jul 9 23:45:09.494012 waagent[2032]: 2025-07-09T23:45:09.493960Z INFO Daemon Daemon Using waagent for provisioning Jul 9 23:45:09.498695 waagent[2032]: 2025-07-09T23:45:09.498653Z INFO Daemon Daemon Activate resource disk Jul 9 23:45:09.502461 waagent[2032]: 2025-07-09T23:45:09.502417Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 9 23:45:09.511603 waagent[2032]: 2025-07-09T23:45:09.511556Z INFO Daemon Daemon Found device: None Jul 9 23:45:09.515162 waagent[2032]: 2025-07-09T23:45:09.515127Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 9 23:45:09.522080 waagent[2032]: 2025-07-09T23:45:09.522045Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 9 23:45:09.531815 waagent[2032]: 2025-07-09T23:45:09.531776Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 9 23:45:09.536309 waagent[2032]: 2025-07-09T23:45:09.536281Z INFO Daemon Daemon Running default provisioning handler Jul 9 23:45:09.546560 waagent[2032]: 2025-07-09T23:45:09.546505Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Jul 9 23:45:09.557612 waagent[2032]: 2025-07-09T23:45:09.557561Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 9 23:45:09.565561 waagent[2032]: 2025-07-09T23:45:09.565522Z INFO Daemon Daemon cloud-init is enabled: False Jul 9 23:45:09.569882 waagent[2032]: 2025-07-09T23:45:09.569852Z INFO Daemon Daemon Copying ovf-env.xml Jul 9 23:45:09.647047 waagent[2032]: 2025-07-09T23:45:09.646974Z INFO Daemon Daemon Successfully mounted dvd Jul 9 23:45:09.675265 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 9 23:45:09.677162 waagent[2032]: 2025-07-09T23:45:09.677097Z INFO Daemon Daemon Detect protocol endpoint Jul 9 23:45:09.681497 waagent[2032]: 2025-07-09T23:45:09.681448Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 9 23:45:09.686223 waagent[2032]: 2025-07-09T23:45:09.686186Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jul 9 23:45:09.691578 waagent[2032]: 2025-07-09T23:45:09.691548Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 9 23:45:09.695706 waagent[2032]: 2025-07-09T23:45:09.695675Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 9 23:45:09.700406 waagent[2032]: 2025-07-09T23:45:09.700375Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 9 23:45:09.712850 login[2036]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:45:09.717210 systemd-logind[1882]: New session 1 of user core. Jul 9 23:45:09.726642 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jul 9 23:45:09.752992 waagent[2032]: 2025-07-09T23:45:09.747694Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 9 23:45:09.753339 waagent[2032]: 2025-07-09T23:45:09.753315Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 9 23:45:09.758158 waagent[2032]: 2025-07-09T23:45:09.757529Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 9 23:45:09.884258 waagent[2032]: 2025-07-09T23:45:09.884170Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 9 23:45:09.889597 waagent[2032]: 2025-07-09T23:45:09.889553Z INFO Daemon Daemon Forcing an update of the goal state. Jul 9 23:45:09.897578 waagent[2032]: 2025-07-09T23:45:09.897538Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 9 23:45:09.917693 waagent[2032]: 2025-07-09T23:45:09.917661Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jul 9 23:45:09.922397 waagent[2032]: 2025-07-09T23:45:09.922363Z INFO Daemon Jul 9 23:45:09.924779 waagent[2032]: 2025-07-09T23:45:09.924751Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 295a0854-fa8c-44d5-9ef2-16c045b6b65c eTag: 15846543624242754164 source: Fabric] Jul 9 23:45:09.934197 waagent[2032]: 2025-07-09T23:45:09.934165Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Jul 9 23:45:09.939618 waagent[2032]: 2025-07-09T23:45:09.939588Z INFO Daemon Jul 9 23:45:09.941842 waagent[2032]: 2025-07-09T23:45:09.941819Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 9 23:45:09.951401 waagent[2032]: 2025-07-09T23:45:09.951372Z INFO Daemon Daemon Downloading artifacts profile blob Jul 9 23:45:10.027130 waagent[2032]: 2025-07-09T23:45:10.027049Z INFO Daemon Downloaded certificate {'thumbprint': '6AFFFB81F6B37CC60E75D6A75509B33E0D0DAC20', 'hasPrivateKey': False} Jul 9 23:45:10.035031 waagent[2032]: 2025-07-09T23:45:10.034990Z INFO Daemon Downloaded certificate {'thumbprint': '336FE753A55BB79666ED488B467784D936C96413', 'hasPrivateKey': True} Jul 9 23:45:10.042479 waagent[2032]: 2025-07-09T23:45:10.042409Z INFO Daemon Fetch goal state completed Jul 9 23:45:10.089463 waagent[2032]: 2025-07-09T23:45:10.089411Z INFO Daemon Daemon Starting provisioning Jul 9 23:45:10.094450 waagent[2032]: 2025-07-09T23:45:10.094397Z INFO Daemon Daemon Handle ovf-env.xml. Jul 9 23:45:10.098626 waagent[2032]: 2025-07-09T23:45:10.098595Z INFO Daemon Daemon Set hostname [ci-4344.1.1-n-76bacae427] Jul 9 23:45:10.125468 waagent[2032]: 2025-07-09T23:45:10.125393Z INFO Daemon Daemon Publish hostname [ci-4344.1.1-n-76bacae427] Jul 9 23:45:10.130376 waagent[2032]: 2025-07-09T23:45:10.130329Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 9 23:45:10.135024 waagent[2032]: 2025-07-09T23:45:10.134990Z INFO Daemon Daemon Primary interface is [eth0] Jul 9 23:45:10.145854 systemd-networkd[1483]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 23:45:10.145863 systemd-networkd[1483]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 9 23:45:10.145921 systemd-networkd[1483]: eth0: DHCP lease lost Jul 9 23:45:10.146756 waagent[2032]: 2025-07-09T23:45:10.146692Z INFO Daemon Daemon Create user account if not exists Jul 9 23:45:10.150964 waagent[2032]: 2025-07-09T23:45:10.150909Z INFO Daemon Daemon User core already exists, skip useradd Jul 9 23:45:10.155539 waagent[2032]: 2025-07-09T23:45:10.155500Z INFO Daemon Daemon Configure sudoer Jul 9 23:45:10.164731 waagent[2032]: 2025-07-09T23:45:10.164669Z INFO Daemon Daemon Configure sshd Jul 9 23:45:10.172971 waagent[2032]: 2025-07-09T23:45:10.172908Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 9 23:45:10.182817 waagent[2032]: 2025-07-09T23:45:10.182768Z INFO Daemon Daemon Deploy ssh public key. Jul 9 23:45:10.183500 systemd-networkd[1483]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 9 23:45:11.263358 waagent[2032]: 2025-07-09T23:45:11.263311Z INFO Daemon Daemon Provisioning complete Jul 9 23:45:11.279560 waagent[2032]: 2025-07-09T23:45:11.279524Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 9 23:45:11.284325 waagent[2032]: 2025-07-09T23:45:11.284288Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jul 9 23:45:11.292142 waagent[2032]: 2025-07-09T23:45:11.292107Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jul 9 23:45:11.393031 waagent[2124]: 2025-07-09T23:45:11.392527Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jul 9 23:45:11.393031 waagent[2124]: 2025-07-09T23:45:11.392669Z INFO ExtHandler ExtHandler OS: flatcar 4344.1.1 Jul 9 23:45:11.393031 waagent[2124]: 2025-07-09T23:45:11.392707Z INFO ExtHandler ExtHandler Python: 3.11.12 Jul 9 23:45:11.393031 waagent[2124]: 2025-07-09T23:45:11.392744Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jul 9 23:45:11.412920 waagent[2124]: 2025-07-09T23:45:11.412851Z INFO ExtHandler ExtHandler Distro: flatcar-4344.1.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jul 9 23:45:11.413223 waagent[2124]: 2025-07-09T23:45:11.413193Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 9 23:45:11.413331 waagent[2124]: 2025-07-09T23:45:11.413311Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 9 23:45:11.419848 waagent[2124]: 2025-07-09T23:45:11.419792Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 9 23:45:11.426479 waagent[2124]: 2025-07-09T23:45:11.425689Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 9 23:45:11.426479 waagent[2124]: 2025-07-09T23:45:11.426122Z INFO ExtHandler Jul 9 23:45:11.426479 waagent[2124]: 2025-07-09T23:45:11.426177Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: a3f7b367-f452-4022-b239-94d4ae7902c2 eTag: 15846543624242754164 source: Fabric] Jul 9 23:45:11.426479 waagent[2124]: 2025-07-09T23:45:11.426379Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jul 9 23:45:11.426853 waagent[2124]: 2025-07-09T23:45:11.426819Z INFO ExtHandler Jul 9 23:45:11.426887 waagent[2124]: 2025-07-09T23:45:11.426871Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 9 23:45:11.430881 waagent[2124]: 2025-07-09T23:45:11.430852Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 9 23:45:11.492053 waagent[2124]: 2025-07-09T23:45:11.491972Z INFO ExtHandler Downloaded certificate {'thumbprint': '6AFFFB81F6B37CC60E75D6A75509B33E0D0DAC20', 'hasPrivateKey': False} Jul 9 23:45:11.492381 waagent[2124]: 2025-07-09T23:45:11.492348Z INFO ExtHandler Downloaded certificate {'thumbprint': '336FE753A55BB79666ED488B467784D936C96413', 'hasPrivateKey': True} Jul 9 23:45:11.492720 waagent[2124]: 2025-07-09T23:45:11.492691Z INFO ExtHandler Fetch goal state completed Jul 9 23:45:11.506929 waagent[2124]: 2025-07-09T23:45:11.506870Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Jul 9 23:45:11.510487 waagent[2124]: 2025-07-09T23:45:11.510414Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2124 Jul 9 23:45:11.510595 waagent[2124]: 2025-07-09T23:45:11.510573Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 9 23:45:11.510842 waagent[2124]: 2025-07-09T23:45:11.510816Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jul 9 23:45:11.511953 waagent[2124]: 2025-07-09T23:45:11.511917Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4344.1.1', '', 'Flatcar Container Linux by Kinvolk'] Jul 9 23:45:11.512277 waagent[2124]: 2025-07-09T23:45:11.512247Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4344.1.1', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jul 9 23:45:11.512394 waagent[2124]: 
2025-07-09T23:45:11.512373Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jul 9 23:45:11.512863 waagent[2124]: 2025-07-09T23:45:11.512833Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 9 23:45:11.583382 waagent[2124]: 2025-07-09T23:45:11.583289Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 9 23:45:11.583517 waagent[2124]: 2025-07-09T23:45:11.583490Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 9 23:45:11.587820 waagent[2124]: 2025-07-09T23:45:11.587790Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 9 23:45:11.592859 systemd[1]: Reload requested from client PID 2141 ('systemctl') (unit waagent.service)... Jul 9 23:45:11.593104 systemd[1]: Reloading... Jul 9 23:45:11.669479 zram_generator::config[2194]: No configuration found. Jul 9 23:45:11.727244 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 23:45:11.810476 systemd[1]: Reloading finished in 217 ms. Jul 9 23:45:11.835509 waagent[2124]: 2025-07-09T23:45:11.833776Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 9 23:45:11.835509 waagent[2124]: 2025-07-09T23:45:11.833919Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 9 23:45:12.017622 waagent[2124]: 2025-07-09T23:45:12.017556Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 9 23:45:12.018075 waagent[2124]: 2025-07-09T23:45:12.018038Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. 
configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jul 9 23:45:12.018878 waagent[2124]: 2025-07-09T23:45:12.018839Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 9 23:45:12.018975 waagent[2124]: 2025-07-09T23:45:12.018942Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 9 23:45:12.019079 waagent[2124]: 2025-07-09T23:45:12.019051Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 9 23:45:12.019250 waagent[2124]: 2025-07-09T23:45:12.019224Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jul 9 23:45:12.019625 waagent[2124]: 2025-07-09T23:45:12.019589Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 9 23:45:12.019682 waagent[2124]: 2025-07-09T23:45:12.019651Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 9 23:45:12.019682 waagent[2124]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 9 23:45:12.019682 waagent[2124]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jul 9 23:45:12.019682 waagent[2124]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 9 23:45:12.019682 waagent[2124]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 9 23:45:12.019682 waagent[2124]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 9 23:45:12.019682 waagent[2124]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 9 23:45:12.020096 waagent[2124]: 2025-07-09T23:45:12.020066Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 9 23:45:12.020150 waagent[2124]: 2025-07-09T23:45:12.020112Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 9 23:45:12.020168 waagent[2124]: 2025-07-09T23:45:12.020155Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 9 23:45:12.020268 waagent[2124]: 
2025-07-09T23:45:12.020248Z INFO EnvHandler ExtHandler Configure routes Jul 9 23:45:12.020304 waagent[2124]: 2025-07-09T23:45:12.020288Z INFO EnvHandler ExtHandler Gateway:None Jul 9 23:45:12.020321 waagent[2124]: 2025-07-09T23:45:12.020314Z INFO EnvHandler ExtHandler Routes:None Jul 9 23:45:12.020470 waagent[2124]: 2025-07-09T23:45:12.020402Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 9 23:45:12.020829 waagent[2124]: 2025-07-09T23:45:12.020795Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 9 23:45:12.020969 waagent[2124]: 2025-07-09T23:45:12.020931Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jul 9 23:45:12.021214 waagent[2124]: 2025-07-09T23:45:12.021103Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 9 23:45:12.027138 waagent[2124]: 2025-07-09T23:45:12.027091Z INFO ExtHandler ExtHandler Jul 9 23:45:12.027306 waagent[2124]: 2025-07-09T23:45:12.027279Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 2d79f48d-2de3-4474-8413-b5c6b0f0ae1e correlation bf0b480d-0c77-49b4-9083-ae4cdbbb978f created: 2025-07-09T23:43:58.836861Z] Jul 9 23:45:12.027751 waagent[2124]: 2025-07-09T23:45:12.027700Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jul 9 23:45:12.028280 waagent[2124]: 2025-07-09T23:45:12.028241Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jul 9 23:45:12.055469 waagent[2124]: 2025-07-09T23:45:12.055167Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jul 9 23:45:12.055469 waagent[2124]: Try `iptables -h' or 'iptables --help' for more information.) 
Jul 9 23:45:12.055782 waagent[2124]: 2025-07-09T23:45:12.055748Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: FE139CBA-7C52-430C-86EE-BF052492860A;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jul 9 23:45:12.070642 waagent[2124]: 2025-07-09T23:45:12.070583Z INFO MonitorHandler ExtHandler Network interfaces: Jul 9 23:45:12.070642 waagent[2124]: Executing ['ip', '-a', '-o', 'link']: Jul 9 23:45:12.070642 waagent[2124]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 9 23:45:12.070642 waagent[2124]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:76:fa:22 brd ff:ff:ff:ff:ff:ff Jul 9 23:45:12.070642 waagent[2124]: 3: enP57669s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:76:fa:22 brd ff:ff:ff:ff:ff:ff\ altname enP57669p0s2 Jul 9 23:45:12.070642 waagent[2124]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 9 23:45:12.070642 waagent[2124]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 9 23:45:12.070642 waagent[2124]: 2: eth0 inet 10.200.20.14/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 9 23:45:12.070642 waagent[2124]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 9 23:45:12.070642 waagent[2124]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jul 9 23:45:12.070642 waagent[2124]: 2: eth0 inet6 fe80::222:48ff:fe76:fa22/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 9 23:45:12.070642 waagent[2124]: 3: enP57669s1 inet6 fe80::222:48ff:fe76:fa22/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 9 23:45:12.097149 waagent[2124]: 2025-07-09T23:45:12.097031Z INFO EnvHandler 
ExtHandler Created firewall rules for the Azure Fabric: Jul 9 23:45:12.097149 waagent[2124]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 9 23:45:12.097149 waagent[2124]: pkts bytes target prot opt in out source destination Jul 9 23:45:12.097149 waagent[2124]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 9 23:45:12.097149 waagent[2124]: pkts bytes target prot opt in out source destination Jul 9 23:45:12.097149 waagent[2124]: Chain OUTPUT (policy ACCEPT 4 packets, 216 bytes) Jul 9 23:45:12.097149 waagent[2124]: pkts bytes target prot opt in out source destination Jul 9 23:45:12.097149 waagent[2124]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 9 23:45:12.097149 waagent[2124]: 6 888 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 9 23:45:12.097149 waagent[2124]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 9 23:45:12.100662 waagent[2124]: 2025-07-09T23:45:12.100606Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 9 23:45:12.100662 waagent[2124]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 9 23:45:12.100662 waagent[2124]: pkts bytes target prot opt in out source destination Jul 9 23:45:12.100662 waagent[2124]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 9 23:45:12.100662 waagent[2124]: pkts bytes target prot opt in out source destination Jul 9 23:45:12.100662 waagent[2124]: Chain OUTPUT (policy ACCEPT 4 packets, 216 bytes) Jul 9 23:45:12.100662 waagent[2124]: pkts bytes target prot opt in out source destination Jul 9 23:45:12.100662 waagent[2124]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 9 23:45:12.100662 waagent[2124]: 10 1304 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 9 23:45:12.100662 waagent[2124]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 9 23:45:12.100876 waagent[2124]: 2025-07-09T23:45:12.100851Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 9 23:45:18.503350 systemd[1]: 
kubelet.service: Scheduled restart job, restart counter is at 1. Jul 9 23:45:18.504718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:45:18.609767 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:45:18.614762 (kubelet)[2274]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 23:45:18.704505 kubelet[2274]: E0709 23:45:18.704425 2274 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 23:45:18.707619 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 23:45:18.707867 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 23:45:18.708464 systemd[1]: kubelet.service: Consumed 110ms CPU time, 104.7M memory peak. Jul 9 23:45:28.753424 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 9 23:45:28.755246 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:45:28.855309 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 9 23:45:28.858119 (kubelet)[2289]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 23:45:28.995541 kubelet[2289]: E0709 23:45:28.995488 2289 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 23:45:28.997797 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 23:45:28.998005 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 23:45:28.998596 systemd[1]: kubelet.service: Consumed 170ms CPU time, 107.3M memory peak. Jul 9 23:45:30.924533 chronyd[1855]: Selected source PHC0 Jul 9 23:45:39.003413 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 9 23:45:39.005247 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:45:39.369588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:45:39.372310 (kubelet)[2303]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 23:45:39.398564 kubelet[2303]: E0709 23:45:39.398481 2303 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 23:45:39.400917 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 23:45:39.401150 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 9 23:45:39.401671 systemd[1]: kubelet.service: Consumed 106ms CPU time, 104.9M memory peak. Jul 9 23:45:45.593913 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 9 23:45:45.595279 systemd[1]: Started sshd@0-10.200.20.14:22-10.200.16.10:40088.service - OpenSSH per-connection server daemon (10.200.16.10:40088). Jul 9 23:45:46.131548 sshd[2310]: Accepted publickey for core from 10.200.16.10 port 40088 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:45:46.132650 sshd-session[2310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:45:46.136708 systemd-logind[1882]: New session 3 of user core. Jul 9 23:45:46.143600 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 9 23:45:46.542106 systemd[1]: Started sshd@1-10.200.20.14:22-10.200.16.10:40092.service - OpenSSH per-connection server daemon (10.200.16.10:40092). Jul 9 23:45:47.002593 sshd[2315]: Accepted publickey for core from 10.200.16.10 port 40092 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:45:47.003893 sshd-session[2315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:45:47.007343 systemd-logind[1882]: New session 4 of user core. Jul 9 23:45:47.017555 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 9 23:45:47.330951 sshd[2317]: Connection closed by 10.200.16.10 port 40092 Jul 9 23:45:47.331553 sshd-session[2315]: pam_unix(sshd:session): session closed for user core Jul 9 23:45:47.334785 systemd-logind[1882]: Session 4 logged out. Waiting for processes to exit. Jul 9 23:45:47.335318 systemd[1]: sshd@1-10.200.20.14:22-10.200.16.10:40092.service: Deactivated successfully. Jul 9 23:45:47.337256 systemd[1]: session-4.scope: Deactivated successfully. Jul 9 23:45:47.339657 systemd-logind[1882]: Removed session 4. 
Jul 9 23:45:47.421662 systemd[1]: Started sshd@2-10.200.20.14:22-10.200.16.10:40096.service - OpenSSH per-connection server daemon (10.200.16.10:40096).
Jul 9 23:45:47.910184 sshd[2323]: Accepted publickey for core from 10.200.16.10 port 40096 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:45:47.911310 sshd-session[2323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:45:47.915228 systemd-logind[1882]: New session 5 of user core.
Jul 9 23:45:47.924551 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 9 23:45:48.249275 sshd[2325]: Connection closed by 10.200.16.10 port 40096
Jul 9 23:45:48.249895 sshd-session[2323]: pam_unix(sshd:session): session closed for user core
Jul 9 23:45:48.252860 systemd[1]: sshd@2-10.200.20.14:22-10.200.16.10:40096.service: Deactivated successfully.
Jul 9 23:45:48.254224 systemd[1]: session-5.scope: Deactivated successfully.
Jul 9 23:45:48.255889 systemd-logind[1882]: Session 5 logged out. Waiting for processes to exit.
Jul 9 23:45:48.256877 systemd-logind[1882]: Removed session 5.
Jul 9 23:45:48.330162 systemd[1]: Started sshd@3-10.200.20.14:22-10.200.16.10:40098.service - OpenSSH per-connection server daemon (10.200.16.10:40098).
Jul 9 23:45:48.807956 sshd[2331]: Accepted publickey for core from 10.200.16.10 port 40098 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:45:48.809100 sshd-session[2331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:45:48.812633 systemd-logind[1882]: New session 6 of user core.
Jul 9 23:45:48.820578 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 9 23:45:49.147077 sshd[2333]: Connection closed by 10.200.16.10 port 40098
Jul 9 23:45:49.146909 sshd-session[2331]: pam_unix(sshd:session): session closed for user core
Jul 9 23:45:49.149770 systemd-logind[1882]: Session 6 logged out. Waiting for processes to exit.
Jul 9 23:45:49.149898 systemd[1]: sshd@3-10.200.20.14:22-10.200.16.10:40098.service: Deactivated successfully.
Jul 9 23:45:49.151247 systemd[1]: session-6.scope: Deactivated successfully.
Jul 9 23:45:49.154028 systemd-logind[1882]: Removed session 6.
Jul 9 23:45:49.237050 systemd[1]: Started sshd@4-10.200.20.14:22-10.200.16.10:40102.service - OpenSSH per-connection server daemon (10.200.16.10:40102).
Jul 9 23:45:49.503280 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 9 23:45:49.505542 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 23:45:49.696714 sshd[2339]: Accepted publickey for core from 10.200.16.10 port 40102 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:45:49.697791 sshd-session[2339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:45:49.701603 systemd-logind[1882]: New session 7 of user core.
Jul 9 23:45:49.709608 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 9 23:45:49.865629 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 23:45:49.871755 (kubelet)[2350]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 9 23:45:49.899208 kubelet[2350]: E0709 23:45:49.899097 2350 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 9 23:45:49.901213 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 9 23:45:49.901326 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 9 23:45:49.901858 systemd[1]: kubelet.service: Consumed 107ms CPU time, 107.8M memory peak.
Jul 9 23:45:50.137231 sudo[2356]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 9 23:45:50.137464 sudo[2356]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 9 23:45:50.167128 sudo[2356]: pam_unix(sudo:session): session closed for user root
Jul 9 23:45:50.238850 sshd[2344]: Connection closed by 10.200.16.10 port 40102
Jul 9 23:45:50.239529 sshd-session[2339]: pam_unix(sshd:session): session closed for user core
Jul 9 23:45:50.242795 systemd[1]: sshd@4-10.200.20.14:22-10.200.16.10:40102.service: Deactivated successfully.
Jul 9 23:45:50.244320 systemd[1]: session-7.scope: Deactivated successfully.
Jul 9 23:45:50.245158 systemd-logind[1882]: Session 7 logged out. Waiting for processes to exit.
Jul 9 23:45:50.246835 systemd-logind[1882]: Removed session 7.
Jul 9 23:45:50.303861 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Jul 9 23:45:50.324644 systemd[1]: Started sshd@5-10.200.20.14:22-10.200.16.10:56844.service - OpenSSH per-connection server daemon (10.200.16.10:56844).
Jul 9 23:45:50.801456 sshd[2362]: Accepted publickey for core from 10.200.16.10 port 56844 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:45:50.802570 sshd-session[2362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:45:50.806495 systemd-logind[1882]: New session 8 of user core.
Jul 9 23:45:50.815564 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 9 23:45:51.066146 sudo[2366]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 9 23:45:51.066365 sudo[2366]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 9 23:45:51.073003 sudo[2366]: pam_unix(sudo:session): session closed for user root
Jul 9 23:45:51.076700 sudo[2365]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 9 23:45:51.076903 sudo[2365]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 9 23:45:51.084189 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 9 23:45:51.113301 augenrules[2388]: No rules
Jul 9 23:45:51.114525 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 9 23:45:51.114706 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 9 23:45:51.117113 sudo[2365]: pam_unix(sudo:session): session closed for user root
Jul 9 23:45:51.190975 sshd[2364]: Connection closed by 10.200.16.10 port 56844
Jul 9 23:45:51.191337 sshd-session[2362]: pam_unix(sshd:session): session closed for user core
Jul 9 23:45:51.195203 systemd[1]: sshd@5-10.200.20.14:22-10.200.16.10:56844.service: Deactivated successfully.
Jul 9 23:45:51.196510 systemd[1]: session-8.scope: Deactivated successfully.
Jul 9 23:45:51.197148 systemd-logind[1882]: Session 8 logged out. Waiting for processes to exit.
Jul 9 23:45:51.198097 systemd-logind[1882]: Removed session 8.
Jul 9 23:45:51.278047 systemd[1]: Started sshd@6-10.200.20.14:22-10.200.16.10:56854.service - OpenSSH per-connection server daemon (10.200.16.10:56854).
Jul 9 23:45:51.740869 sshd[2397]: Accepted publickey for core from 10.200.16.10 port 56854 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:45:51.741972 sshd-session[2397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:45:51.745688 systemd-logind[1882]: New session 9 of user core.
Jul 9 23:45:51.757676 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 9 23:45:51.998718 sudo[2400]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 9 23:45:51.999300 sudo[2400]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 9 23:45:52.472013 update_engine[1888]: I20250709 23:45:52.471930 1888 update_attempter.cc:509] Updating boot flags...
Jul 9 23:45:53.124813 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 9 23:45:53.140733 (dockerd)[2482]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 9 23:45:53.808443 dockerd[2482]: time="2025-07-09T23:45:53.806399873Z" level=info msg="Starting up"
Jul 9 23:45:53.809408 dockerd[2482]: time="2025-07-09T23:45:53.809370860Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 9 23:45:53.902884 dockerd[2482]: time="2025-07-09T23:45:53.902838962Z" level=info msg="Loading containers: start."
Jul 9 23:45:53.929450 kernel: Initializing XFRM netlink socket
Jul 9 23:45:54.203020 systemd-networkd[1483]: docker0: Link UP
Jul 9 23:45:54.228213 dockerd[2482]: time="2025-07-09T23:45:54.228168904Z" level=info msg="Loading containers: done."
Jul 9 23:45:54.237914 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck183931339-merged.mount: Deactivated successfully.
Jul 9 23:45:54.258997 dockerd[2482]: time="2025-07-09T23:45:54.258951012Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 9 23:45:54.259089 dockerd[2482]: time="2025-07-09T23:45:54.259052399Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 9 23:45:54.259191 dockerd[2482]: time="2025-07-09T23:45:54.259171067Z" level=info msg="Initializing buildkit"
Jul 9 23:45:54.311822 dockerd[2482]: time="2025-07-09T23:45:54.311772641Z" level=info msg="Completed buildkit initialization"
Jul 9 23:45:54.317013 dockerd[2482]: time="2025-07-09T23:45:54.316961968Z" level=info msg="Daemon has completed initialization"
Jul 9 23:45:54.317279 dockerd[2482]: time="2025-07-09T23:45:54.317159158Z" level=info msg="API listen on /run/docker.sock"
Jul 9 23:45:54.317399 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 9 23:45:54.858939 containerd[1912]: time="2025-07-09T23:45:54.858863691Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\""
Jul 9 23:45:55.711548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3479626860.mount: Deactivated successfully.
Jul 9 23:45:57.042354 containerd[1912]: time="2025-07-09T23:45:57.041714101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:45:57.049458 containerd[1912]: time="2025-07-09T23:45:57.049419568Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351716"
Jul 9 23:45:57.062974 containerd[1912]: time="2025-07-09T23:45:57.062946678Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:45:57.067474 containerd[1912]: time="2025-07-09T23:45:57.067440571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:45:57.068224 containerd[1912]: time="2025-07-09T23:45:57.068200572Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 2.209299783s"
Jul 9 23:45:57.068277 containerd[1912]: time="2025-07-09T23:45:57.068229533Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\""
Jul 9 23:45:57.069487 containerd[1912]: time="2025-07-09T23:45:57.069462408Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\""
Jul 9 23:45:58.383003 containerd[1912]: time="2025-07-09T23:45:58.382946496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:45:58.389956 containerd[1912]: time="2025-07-09T23:45:58.389913538Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537623"
Jul 9 23:45:58.399068 containerd[1912]: time="2025-07-09T23:45:58.399019849Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:45:58.409424 containerd[1912]: time="2025-07-09T23:45:58.409350506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:45:58.410233 containerd[1912]: time="2025-07-09T23:45:58.409916945Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.340426734s"
Jul 9 23:45:58.410233 containerd[1912]: time="2025-07-09T23:45:58.409946802Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\""
Jul 9 23:45:58.410733 containerd[1912]: time="2025-07-09T23:45:58.410714884Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\""
Jul 9 23:45:59.705758 containerd[1912]: time="2025-07-09T23:45:59.705709021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:45:59.708906 containerd[1912]: time="2025-07-09T23:45:59.708866910Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293515"
Jul 9 23:45:59.716390 containerd[1912]: time="2025-07-09T23:45:59.716342973Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:45:59.723121 containerd[1912]: time="2025-07-09T23:45:59.723085341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:45:59.724215 containerd[1912]: time="2025-07-09T23:45:59.724189767Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.313375294s"
Jul 9 23:45:59.724253 containerd[1912]: time="2025-07-09T23:45:59.724220928Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\""
Jul 9 23:45:59.724847 containerd[1912]: time="2025-07-09T23:45:59.724645869Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\""
Jul 9 23:46:00.003223 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jul 9 23:46:00.004520 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 23:46:00.105729 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 23:46:00.108359 (kubelet)[2748]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 9 23:46:00.136399 kubelet[2748]: E0709 23:46:00.136331 2748 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 9 23:46:00.138678 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 9 23:46:00.138915 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 9 23:46:00.139505 systemd[1]: kubelet.service: Consumed 107ms CPU time, 105.2M memory peak.
Jul 9 23:46:02.398820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount785656174.mount: Deactivated successfully.
Jul 9 23:46:02.747092 containerd[1912]: time="2025-07-09T23:46:02.746518922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:46:02.750078 containerd[1912]: time="2025-07-09T23:46:02.750043567Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199472"
Jul 9 23:46:02.755479 containerd[1912]: time="2025-07-09T23:46:02.755456990Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:46:02.761725 containerd[1912]: time="2025-07-09T23:46:02.761657085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:46:02.762274 containerd[1912]: time="2025-07-09T23:46:02.762044681Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 3.037369867s"
Jul 9 23:46:02.762274 containerd[1912]: time="2025-07-09T23:46:02.762072434Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\""
Jul 9 23:46:02.762619 containerd[1912]: time="2025-07-09T23:46:02.762599906Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jul 9 23:46:03.409283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2669034724.mount: Deactivated successfully.
Jul 9 23:46:04.720467 containerd[1912]: time="2025-07-09T23:46:04.719800900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:46:04.725829 containerd[1912]: time="2025-07-09T23:46:04.725661976Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Jul 9 23:46:04.729667 containerd[1912]: time="2025-07-09T23:46:04.729638885Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:46:04.734145 containerd[1912]: time="2025-07-09T23:46:04.734113875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:46:04.734783 containerd[1912]: time="2025-07-09T23:46:04.734757753Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.972043171s"
Jul 9 23:46:04.734874 containerd[1912]: time="2025-07-09T23:46:04.734860660Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Jul 9 23:46:04.735286 containerd[1912]: time="2025-07-09T23:46:04.735262746Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 9 23:46:05.370888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3626221546.mount: Deactivated successfully.
Jul 9 23:46:05.411153 containerd[1912]: time="2025-07-09T23:46:05.410665993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 9 23:46:05.413930 containerd[1912]: time="2025-07-09T23:46:05.413884549Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Jul 9 23:46:05.418994 containerd[1912]: time="2025-07-09T23:46:05.418972648Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 9 23:46:05.424747 containerd[1912]: time="2025-07-09T23:46:05.424719240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 9 23:46:05.425653 containerd[1912]: time="2025-07-09T23:46:05.425621454Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 690.331396ms"
Jul 9 23:46:05.425759 containerd[1912]: time="2025-07-09T23:46:05.425745650Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 9 23:46:05.426916 containerd[1912]: time="2025-07-09T23:46:05.426890561Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jul 9 23:46:06.368073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4033920977.mount: Deactivated successfully.
Jul 9 23:46:08.145814 containerd[1912]: time="2025-07-09T23:46:08.145753392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:46:08.198531 containerd[1912]: time="2025-07-09T23:46:08.198476061Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334599"
Jul 9 23:46:08.204173 containerd[1912]: time="2025-07-09T23:46:08.204091850Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:46:08.211497 containerd[1912]: time="2025-07-09T23:46:08.211447104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:46:08.212169 containerd[1912]: time="2025-07-09T23:46:08.212058972Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.785064032s"
Jul 9 23:46:08.212169 containerd[1912]: time="2025-07-09T23:46:08.212086061Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Jul 9 23:46:10.253278 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Jul 9 23:46:10.257529 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 23:46:10.474558 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 23:46:10.477347 (kubelet)[2902]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 9 23:46:10.507915 kubelet[2902]: E0709 23:46:10.507792 2902 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 9 23:46:10.510314 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 9 23:46:10.510440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 9 23:46:10.510715 systemd[1]: kubelet.service: Consumed 105ms CPU time, 106.7M memory peak.
Jul 9 23:46:11.161727 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 23:46:11.161834 systemd[1]: kubelet.service: Consumed 105ms CPU time, 106.7M memory peak.
Jul 9 23:46:11.163759 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 23:46:11.185317 systemd[1]: Reload requested from client PID 2916 ('systemctl') (unit session-9.scope)...
Jul 9 23:46:11.185338 systemd[1]: Reloading...
Jul 9 23:46:11.281455 zram_generator::config[2964]: No configuration found.
Jul 9 23:46:11.347186 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 9 23:46:11.431256 systemd[1]: Reloading finished in 245 ms.
Jul 9 23:46:11.479827 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 9 23:46:11.479890 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 9 23:46:11.480113 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 23:46:11.480153 systemd[1]: kubelet.service: Consumed 75ms CPU time, 94.9M memory peak.
Jul 9 23:46:11.481333 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 23:46:11.699782 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 23:46:11.706726 (kubelet)[3029]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 9 23:46:11.806975 kubelet[3029]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 9 23:46:11.806975 kubelet[3029]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 9 23:46:11.806975 kubelet[3029]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 9 23:46:11.807330 kubelet[3029]: I0709 23:46:11.806956 3029 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 9 23:46:12.037750 kubelet[3029]: I0709 23:46:12.037713 3029 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 9 23:46:12.037750 kubelet[3029]: I0709 23:46:12.037742 3029 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 9 23:46:12.037956 kubelet[3029]: I0709 23:46:12.037940 3029 server.go:956] "Client rotation is on, will bootstrap in background" Jul 9 23:46:12.050596 kubelet[3029]: E0709 23:46:12.050557 3029 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 9 23:46:12.053455 kubelet[3029]: I0709 23:46:12.052941 3029 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 9 23:46:12.060106 kubelet[3029]: I0709 23:46:12.060087 3029 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 9 23:46:12.062821 kubelet[3029]: I0709 23:46:12.062794 3029 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 9 23:46:12.063131 kubelet[3029]: I0709 23:46:12.063101 3029 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 9 23:46:12.063324 kubelet[3029]: I0709 23:46:12.063192 3029 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-n-76bacae427","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 9 23:46:12.063474 kubelet[3029]: I0709 23:46:12.063460 3029 topology_manager.go:138] "Creating topology manager with none policy" Jul 9 
23:46:12.063533 kubelet[3029]: I0709 23:46:12.063524 3029 container_manager_linux.go:303] "Creating device plugin manager" Jul 9 23:46:12.063712 kubelet[3029]: I0709 23:46:12.063697 3029 state_mem.go:36] "Initialized new in-memory state store" Jul 9 23:46:12.065498 kubelet[3029]: I0709 23:46:12.065478 3029 kubelet.go:480] "Attempting to sync node with API server" Jul 9 23:46:12.065595 kubelet[3029]: I0709 23:46:12.065584 3029 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 9 23:46:12.065677 kubelet[3029]: I0709 23:46:12.065654 3029 kubelet.go:386] "Adding apiserver pod source" Jul 9 23:46:12.066485 kubelet[3029]: I0709 23:46:12.066466 3029 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 9 23:46:12.069870 kubelet[3029]: E0709 23:46:12.069835 3029 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-n-76bacae427&limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 9 23:46:12.070286 kubelet[3029]: E0709 23:46:12.070244 3029 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 9 23:46:12.071221 kubelet[3029]: I0709 23:46:12.071183 3029 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 9 23:46:12.071675 kubelet[3029]: I0709 23:46:12.071641 3029 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 9 23:46:12.071746 
kubelet[3029]: W0709 23:46:12.071698 3029 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 9 23:46:12.074603 kubelet[3029]: I0709 23:46:12.074577 3029 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 9 23:46:12.074696 kubelet[3029]: I0709 23:46:12.074681 3029 server.go:1289] "Started kubelet" Jul 9 23:46:12.076504 kubelet[3029]: I0709 23:46:12.076480 3029 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 9 23:46:12.078251 kubelet[3029]: E0709 23:46:12.077305 3029 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.14:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.14:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.1.1-n-76bacae427.1850ba01d4c5ecea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.1-n-76bacae427,UID:ci-4344.1.1-n-76bacae427,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.1-n-76bacae427,},FirstTimestamp:2025-07-09 23:46:12.07459761 +0000 UTC m=+0.364989695,LastTimestamp:2025-07-09 23:46:12.07459761 +0000 UTC m=+0.364989695,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.1-n-76bacae427,}" Jul 9 23:46:12.078807 kubelet[3029]: I0709 23:46:12.078776 3029 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 9 23:46:12.079588 kubelet[3029]: I0709 23:46:12.079550 3029 server.go:317] "Adding debug handlers to kubelet server" Jul 9 23:46:12.082253 kubelet[3029]: I0709 23:46:12.082180 3029 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jul 9 23:46:12.083512 kubelet[3029]: I0709 23:46:12.082846 3029 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 9 23:46:12.083512 kubelet[3029]: I0709 23:46:12.083108 3029 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 9 23:46:12.083512 kubelet[3029]: I0709 23:46:12.083331 3029 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 9 23:46:12.084716 kubelet[3029]: I0709 23:46:12.084684 3029 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 9 23:46:12.085052 kubelet[3029]: E0709 23:46:12.085020 3029 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-76bacae427\" not found" Jul 9 23:46:12.090628 kubelet[3029]: I0709 23:46:12.090588 3029 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 9 23:46:12.090721 kubelet[3029]: I0709 23:46:12.090660 3029 reconciler.go:26] "Reconciler: start to sync state" Jul 9 23:46:12.091590 kubelet[3029]: E0709 23:46:12.091554 3029 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-n-76bacae427?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="200ms" Jul 9 23:46:12.092740 kubelet[3029]: I0709 23:46:12.092704 3029 factory.go:223] Registration of the systemd container factory successfully Jul 9 23:46:12.092815 kubelet[3029]: I0709 23:46:12.092806 3029 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 9 23:46:12.093558 kubelet[3029]: E0709 23:46:12.093506 3029 reflector.go:200] "Failed to watch" err="failed to list 
*v1.CSIDriver: Get \"https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 9 23:46:12.094930 kubelet[3029]: E0709 23:46:12.094149 3029 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 9 23:46:12.094930 kubelet[3029]: I0709 23:46:12.094654 3029 factory.go:223] Registration of the containerd container factory successfully Jul 9 23:46:12.099133 kubelet[3029]: I0709 23:46:12.099106 3029 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 9 23:46:12.099246 kubelet[3029]: I0709 23:46:12.099235 3029 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 9 23:46:12.099308 kubelet[3029]: I0709 23:46:12.099297 3029 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 9 23:46:12.099348 kubelet[3029]: I0709 23:46:12.099341 3029 kubelet.go:2436] "Starting kubelet main sync loop" Jul 9 23:46:12.099423 kubelet[3029]: E0709 23:46:12.099409 3029 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 9 23:46:12.103757 kubelet[3029]: E0709 23:46:12.103725 3029 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 9 23:46:12.124199 kubelet[3029]: I0709 23:46:12.124178 3029 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 9 23:46:12.124646 kubelet[3029]: I0709 23:46:12.124364 3029 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 9 23:46:12.124646 kubelet[3029]: I0709 23:46:12.124390 3029 state_mem.go:36] "Initialized new in-memory state store" Jul 9 23:46:12.185438 kubelet[3029]: E0709 23:46:12.185382 3029 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-76bacae427\" not found" Jul 9 23:46:12.199866 kubelet[3029]: E0709 23:46:12.199827 3029 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 9 23:46:12.251264 kubelet[3029]: I0709 23:46:12.251163 3029 policy_none.go:49] "None policy: Start" Jul 9 23:46:12.251264 kubelet[3029]: I0709 23:46:12.251211 3029 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 9 23:46:12.251264 kubelet[3029]: I0709 23:46:12.251224 3029 state_mem.go:35] "Initializing new in-memory state store" Jul 9 23:46:12.262572 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jul 9 23:46:12.273819 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 9 23:46:12.276288 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 9 23:46:12.286378 kubelet[3029]: E0709 23:46:12.285704 3029 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-76bacae427\" not found" Jul 9 23:46:12.286378 kubelet[3029]: E0709 23:46:12.286143 3029 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 9 23:46:12.286788 kubelet[3029]: I0709 23:46:12.286679 3029 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 23:46:12.286788 kubelet[3029]: I0709 23:46:12.286695 3029 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 23:46:12.287450 kubelet[3029]: I0709 23:46:12.287244 3029 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 9 23:46:12.288908 kubelet[3029]: E0709 23:46:12.288828 3029 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 9 23:46:12.288908 kubelet[3029]: E0709 23:46:12.288858 3029 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.1.1-n-76bacae427\" not found" Jul 9 23:46:12.292912 kubelet[3029]: E0709 23:46:12.292881 3029 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-n-76bacae427?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="400ms" Jul 9 23:46:12.389406 kubelet[3029]: I0709 23:46:12.389335 3029 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-n-76bacae427" Jul 9 23:46:12.389783 kubelet[3029]: E0709 23:46:12.389751 3029 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4344.1.1-n-76bacae427" Jul 9 23:46:12.413328 systemd[1]: Created slice kubepods-burstable-pode0eb40cebf0db7962cf510db71d9d64b.slice - libcontainer container kubepods-burstable-pode0eb40cebf0db7962cf510db71d9d64b.slice. Jul 9 23:46:12.418072 kubelet[3029]: E0709 23:46:12.418016 3029 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-76bacae427\" not found" node="ci-4344.1.1-n-76bacae427" Jul 9 23:46:12.423192 systemd[1]: Created slice kubepods-burstable-pod74a93de7e991d2989650c3c235887cd5.slice - libcontainer container kubepods-burstable-pod74a93de7e991d2989650c3c235887cd5.slice. 
Jul 9 23:46:12.424715 kubelet[3029]: E0709 23:46:12.424696 3029 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-76bacae427\" not found" node="ci-4344.1.1-n-76bacae427" Jul 9 23:46:12.438844 systemd[1]: Created slice kubepods-burstable-pod9d2882d2482527b834e0b8007fdb2700.slice - libcontainer container kubepods-burstable-pod9d2882d2482527b834e0b8007fdb2700.slice. Jul 9 23:46:12.440316 kubelet[3029]: E0709 23:46:12.440159 3029 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-76bacae427\" not found" node="ci-4344.1.1-n-76bacae427" Jul 9 23:46:12.493915 kubelet[3029]: I0709 23:46:12.493878 3029 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e0eb40cebf0db7962cf510db71d9d64b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-n-76bacae427\" (UID: \"e0eb40cebf0db7962cf510db71d9d64b\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-76bacae427" Jul 9 23:46:12.494245 kubelet[3029]: I0709 23:46:12.494125 3029 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/74a93de7e991d2989650c3c235887cd5-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-76bacae427\" (UID: \"74a93de7e991d2989650c3c235887cd5\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-76bacae427" Jul 9 23:46:12.494245 kubelet[3029]: I0709 23:46:12.494148 3029 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/74a93de7e991d2989650c3c235887cd5-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-n-76bacae427\" (UID: \"74a93de7e991d2989650c3c235887cd5\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-76bacae427" Jul 9 
23:46:12.494245 kubelet[3029]: I0709 23:46:12.494161 3029 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74a93de7e991d2989650c3c235887cd5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-n-76bacae427\" (UID: \"74a93de7e991d2989650c3c235887cd5\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-76bacae427" Jul 9 23:46:12.494245 kubelet[3029]: I0709 23:46:12.494172 3029 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e0eb40cebf0db7962cf510db71d9d64b-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-n-76bacae427\" (UID: \"e0eb40cebf0db7962cf510db71d9d64b\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-76bacae427" Jul 9 23:46:12.494245 kubelet[3029]: I0709 23:46:12.494187 3029 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/74a93de7e991d2989650c3c235887cd5-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-n-76bacae427\" (UID: \"74a93de7e991d2989650c3c235887cd5\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-76bacae427" Jul 9 23:46:12.494366 kubelet[3029]: I0709 23:46:12.494197 3029 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74a93de7e991d2989650c3c235887cd5-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-76bacae427\" (UID: \"74a93de7e991d2989650c3c235887cd5\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-76bacae427" Jul 9 23:46:12.494366 kubelet[3029]: I0709 23:46:12.494215 3029 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9d2882d2482527b834e0b8007fdb2700-kubeconfig\") pod 
\"kube-scheduler-ci-4344.1.1-n-76bacae427\" (UID: \"9d2882d2482527b834e0b8007fdb2700\") " pod="kube-system/kube-scheduler-ci-4344.1.1-n-76bacae427" Jul 9 23:46:12.494366 kubelet[3029]: I0709 23:46:12.494223 3029 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e0eb40cebf0db7962cf510db71d9d64b-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-n-76bacae427\" (UID: \"e0eb40cebf0db7962cf510db71d9d64b\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-76bacae427" Jul 9 23:46:12.592576 kubelet[3029]: I0709 23:46:12.592239 3029 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-n-76bacae427" Jul 9 23:46:12.593018 kubelet[3029]: E0709 23:46:12.592993 3029 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4344.1.1-n-76bacae427" Jul 9 23:46:12.694038 kubelet[3029]: E0709 23:46:12.693992 3029 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-n-76bacae427?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="800ms" Jul 9 23:46:12.721174 containerd[1912]: time="2025-07-09T23:46:12.721128271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-n-76bacae427,Uid:e0eb40cebf0db7962cf510db71d9d64b,Namespace:kube-system,Attempt:0,}" Jul 9 23:46:12.725689 containerd[1912]: time="2025-07-09T23:46:12.725553725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-n-76bacae427,Uid:74a93de7e991d2989650c3c235887cd5,Namespace:kube-system,Attempt:0,}" Jul 9 23:46:12.741484 containerd[1912]: time="2025-07-09T23:46:12.741361148Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-n-76bacae427,Uid:9d2882d2482527b834e0b8007fdb2700,Namespace:kube-system,Attempt:0,}" Jul 9 23:46:12.897732 containerd[1912]: time="2025-07-09T23:46:12.897593346Z" level=info msg="connecting to shim f256f63f4dc41c793596933293c0afb65b93dde8e198e79a2e381935acad9e9d" address="unix:///run/containerd/s/8dfee699ad0f366c6f7ad77b37de6e077fc3e3bfde0d8fb00c58f9d0bd08af11" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:46:12.908095 containerd[1912]: time="2025-07-09T23:46:12.907571823Z" level=info msg="connecting to shim b4328c1c24a86f3f1314b82578cda282db1a0c1b2377ed127ba58623914b3b96" address="unix:///run/containerd/s/1c26daa041e256318bda1516767ae07106f0a5f9078dfc24471f2bb3ba9d5659" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:46:12.916559 containerd[1912]: time="2025-07-09T23:46:12.915683130Z" level=info msg="connecting to shim df67d4411f232e96584149bf6e43c890725c3cce79913ee1da1eb9dd7595dfef" address="unix:///run/containerd/s/f78c55f1bb61c06a0a8f5a49703e54c7be021a9c30ba46f4e062fb736485b89a" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:46:12.928745 systemd[1]: Started cri-containerd-f256f63f4dc41c793596933293c0afb65b93dde8e198e79a2e381935acad9e9d.scope - libcontainer container f256f63f4dc41c793596933293c0afb65b93dde8e198e79a2e381935acad9e9d. Jul 9 23:46:12.934346 systemd[1]: Started cri-containerd-b4328c1c24a86f3f1314b82578cda282db1a0c1b2377ed127ba58623914b3b96.scope - libcontainer container b4328c1c24a86f3f1314b82578cda282db1a0c1b2377ed127ba58623914b3b96. Jul 9 23:46:12.959604 systemd[1]: Started cri-containerd-df67d4411f232e96584149bf6e43c890725c3cce79913ee1da1eb9dd7595dfef.scope - libcontainer container df67d4411f232e96584149bf6e43c890725c3cce79913ee1da1eb9dd7595dfef. 
Jul 9 23:46:12.989723 containerd[1912]: time="2025-07-09T23:46:12.989677469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-n-76bacae427,Uid:e0eb40cebf0db7962cf510db71d9d64b,Namespace:kube-system,Attempt:0,} returns sandbox id \"f256f63f4dc41c793596933293c0afb65b93dde8e198e79a2e381935acad9e9d\"" Jul 9 23:46:12.995373 kubelet[3029]: I0709 23:46:12.995344 3029 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-n-76bacae427" Jul 9 23:46:12.996010 kubelet[3029]: E0709 23:46:12.995648 3029 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4344.1.1-n-76bacae427" Jul 9 23:46:13.003253 containerd[1912]: time="2025-07-09T23:46:13.003049700Z" level=info msg="CreateContainer within sandbox \"f256f63f4dc41c793596933293c0afb65b93dde8e198e79a2e381935acad9e9d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 9 23:46:13.003884 containerd[1912]: time="2025-07-09T23:46:13.003856025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-n-76bacae427,Uid:9d2882d2482527b834e0b8007fdb2700,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4328c1c24a86f3f1314b82578cda282db1a0c1b2377ed127ba58623914b3b96\"" Jul 9 23:46:13.014099 containerd[1912]: time="2025-07-09T23:46:13.014060575Z" level=info msg="CreateContainer within sandbox \"b4328c1c24a86f3f1314b82578cda282db1a0c1b2377ed127ba58623914b3b96\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 9 23:46:13.020014 containerd[1912]: time="2025-07-09T23:46:13.019985547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-n-76bacae427,Uid:74a93de7e991d2989650c3c235887cd5,Namespace:kube-system,Attempt:0,} returns sandbox id \"df67d4411f232e96584149bf6e43c890725c3cce79913ee1da1eb9dd7595dfef\"" Jul 9 23:46:13.029683 
containerd[1912]: time="2025-07-09T23:46:13.029651797Z" level=info msg="CreateContainer within sandbox \"df67d4411f232e96584149bf6e43c890725c3cce79913ee1da1eb9dd7595dfef\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 9 23:46:13.033274 kubelet[3029]: E0709 23:46:13.033231 3029 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-n-76bacae427&limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 9 23:46:13.042038 containerd[1912]: time="2025-07-09T23:46:13.041979607Z" level=info msg="Container 34f3813445f0b07967344b4d11e3b2445eee6d513c62ca7c58320b0504d49672: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:46:13.083304 containerd[1912]: time="2025-07-09T23:46:13.083255094Z" level=info msg="Container d7032ef97daeae0e59085b247e26e1cd798aa26a91ee525b2cffdd5b7459d502: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:46:13.106482 containerd[1912]: time="2025-07-09T23:46:13.106051975Z" level=info msg="Container 57f2e6fe869b4bed5d10906dc76781f5477aee1f2856314a3946e76e13244542: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:46:13.109990 containerd[1912]: time="2025-07-09T23:46:13.109920137Z" level=info msg="CreateContainer within sandbox \"f256f63f4dc41c793596933293c0afb65b93dde8e198e79a2e381935acad9e9d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"34f3813445f0b07967344b4d11e3b2445eee6d513c62ca7c58320b0504d49672\"" Jul 9 23:46:13.110704 containerd[1912]: time="2025-07-09T23:46:13.110680925Z" level=info msg="StartContainer for \"34f3813445f0b07967344b4d11e3b2445eee6d513c62ca7c58320b0504d49672\"" Jul 9 23:46:13.112359 containerd[1912]: time="2025-07-09T23:46:13.112338192Z" level=info msg="connecting to shim 
34f3813445f0b07967344b4d11e3b2445eee6d513c62ca7c58320b0504d49672" address="unix:///run/containerd/s/8dfee699ad0f366c6f7ad77b37de6e077fc3e3bfde0d8fb00c58f9d0bd08af11" protocol=ttrpc version=3 Jul 9 23:46:13.128574 systemd[1]: Started cri-containerd-34f3813445f0b07967344b4d11e3b2445eee6d513c62ca7c58320b0504d49672.scope - libcontainer container 34f3813445f0b07967344b4d11e3b2445eee6d513c62ca7c58320b0504d49672. Jul 9 23:46:13.148437 containerd[1912]: time="2025-07-09T23:46:13.148321905Z" level=info msg="CreateContainer within sandbox \"df67d4411f232e96584149bf6e43c890725c3cce79913ee1da1eb9dd7595dfef\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"57f2e6fe869b4bed5d10906dc76781f5477aee1f2856314a3946e76e13244542\"" Jul 9 23:46:13.149448 containerd[1912]: time="2025-07-09T23:46:13.149273075Z" level=info msg="StartContainer for \"57f2e6fe869b4bed5d10906dc76781f5477aee1f2856314a3946e76e13244542\"" Jul 9 23:46:13.152534 containerd[1912]: time="2025-07-09T23:46:13.152507519Z" level=info msg="connecting to shim 57f2e6fe869b4bed5d10906dc76781f5477aee1f2856314a3946e76e13244542" address="unix:///run/containerd/s/f78c55f1bb61c06a0a8f5a49703e54c7be021a9c30ba46f4e062fb736485b89a" protocol=ttrpc version=3 Jul 9 23:46:13.155046 containerd[1912]: time="2025-07-09T23:46:13.154988688Z" level=info msg="CreateContainer within sandbox \"b4328c1c24a86f3f1314b82578cda282db1a0c1b2377ed127ba58623914b3b96\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d7032ef97daeae0e59085b247e26e1cd798aa26a91ee525b2cffdd5b7459d502\"" Jul 9 23:46:13.155774 containerd[1912]: time="2025-07-09T23:46:13.155748572Z" level=info msg="StartContainer for \"d7032ef97daeae0e59085b247e26e1cd798aa26a91ee525b2cffdd5b7459d502\"" Jul 9 23:46:13.156717 containerd[1912]: time="2025-07-09T23:46:13.156419635Z" level=info msg="connecting to shim d7032ef97daeae0e59085b247e26e1cd798aa26a91ee525b2cffdd5b7459d502" 
address="unix:///run/containerd/s/1c26daa041e256318bda1516767ae07106f0a5f9078dfc24471f2bb3ba9d5659" protocol=ttrpc version=3 Jul 9 23:46:13.176768 systemd[1]: Started cri-containerd-57f2e6fe869b4bed5d10906dc76781f5477aee1f2856314a3946e76e13244542.scope - libcontainer container 57f2e6fe869b4bed5d10906dc76781f5477aee1f2856314a3946e76e13244542. Jul 9 23:46:13.181178 containerd[1912]: time="2025-07-09T23:46:13.181095248Z" level=info msg="StartContainer for \"34f3813445f0b07967344b4d11e3b2445eee6d513c62ca7c58320b0504d49672\" returns successfully" Jul 9 23:46:13.193580 systemd[1]: Started cri-containerd-d7032ef97daeae0e59085b247e26e1cd798aa26a91ee525b2cffdd5b7459d502.scope - libcontainer container d7032ef97daeae0e59085b247e26e1cd798aa26a91ee525b2cffdd5b7459d502. Jul 9 23:46:13.237183 containerd[1912]: time="2025-07-09T23:46:13.237000379Z" level=info msg="StartContainer for \"57f2e6fe869b4bed5d10906dc76781f5477aee1f2856314a3946e76e13244542\" returns successfully" Jul 9 23:46:13.261094 containerd[1912]: time="2025-07-09T23:46:13.261034552Z" level=info msg="StartContainer for \"d7032ef97daeae0e59085b247e26e1cd798aa26a91ee525b2cffdd5b7459d502\" returns successfully" Jul 9 23:46:13.798593 kubelet[3029]: I0709 23:46:13.798561 3029 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-n-76bacae427" Jul 9 23:46:14.135695 kubelet[3029]: E0709 23:46:14.135274 3029 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-76bacae427\" not found" node="ci-4344.1.1-n-76bacae427" Jul 9 23:46:14.141049 kubelet[3029]: E0709 23:46:14.140893 3029 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-76bacae427\" not found" node="ci-4344.1.1-n-76bacae427" Jul 9 23:46:14.143888 kubelet[3029]: E0709 23:46:14.143864 3029 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4344.1.1-n-76bacae427\" not found" node="ci-4344.1.1-n-76bacae427" Jul 9 23:46:14.312030 kubelet[3029]: E0709 23:46:14.311976 3029 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344.1.1-n-76bacae427\" not found" node="ci-4344.1.1-n-76bacae427" Jul 9 23:46:14.454989 kubelet[3029]: I0709 23:46:14.454859 3029 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.1-n-76bacae427" Jul 9 23:46:14.454989 kubelet[3029]: E0709 23:46:14.454911 3029 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4344.1.1-n-76bacae427\": node \"ci-4344.1.1-n-76bacae427\" not found" Jul 9 23:46:14.475304 kubelet[3029]: E0709 23:46:14.475267 3029 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-76bacae427\" not found" Jul 9 23:46:14.576142 kubelet[3029]: E0709 23:46:14.576095 3029 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-76bacae427\" not found" Jul 9 23:46:14.676796 kubelet[3029]: E0709 23:46:14.676748 3029 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-76bacae427\" not found" Jul 9 23:46:14.777378 kubelet[3029]: E0709 23:46:14.777300 3029 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-76bacae427\" not found" Jul 9 23:46:14.878448 kubelet[3029]: E0709 23:46:14.878393 3029 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-76bacae427\" not found" Jul 9 23:46:14.978984 kubelet[3029]: E0709 23:46:14.978942 3029 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-76bacae427\" not found" Jul 9 23:46:15.080032 kubelet[3029]: E0709 23:46:15.079909 3029 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-76bacae427\" not found" 
Jul 9 23:46:15.145048 kubelet[3029]: E0709 23:46:15.144981 3029 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-76bacae427\" not found" node="ci-4344.1.1-n-76bacae427" Jul 9 23:46:15.145317 kubelet[3029]: E0709 23:46:15.145191 3029 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-76bacae427\" not found" node="ci-4344.1.1-n-76bacae427" Jul 9 23:46:15.145715 kubelet[3029]: E0709 23:46:15.145686 3029 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-76bacae427\" not found" node="ci-4344.1.1-n-76bacae427" Jul 9 23:46:15.180961 kubelet[3029]: E0709 23:46:15.180932 3029 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-76bacae427\" not found" Jul 9 23:46:15.286039 kubelet[3029]: I0709 23:46:15.286000 3029 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-n-76bacae427" Jul 9 23:46:15.298685 kubelet[3029]: I0709 23:46:15.298371 3029 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 9 23:46:15.298685 kubelet[3029]: I0709 23:46:15.298511 3029 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-76bacae427" Jul 9 23:46:15.309295 kubelet[3029]: I0709 23:46:15.309257 3029 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 9 23:46:15.309423 kubelet[3029]: I0709 23:46:15.309378 3029 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-n-76bacae427" Jul 9 23:46:15.315862 kubelet[3029]: I0709 23:46:15.315839 3029 
warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 9 23:46:16.072208 kubelet[3029]: I0709 23:46:16.072160 3029 apiserver.go:52] "Watching apiserver" Jul 9 23:46:16.090771 kubelet[3029]: I0709 23:46:16.090727 3029 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 9 23:46:16.145030 kubelet[3029]: I0709 23:46:16.144975 3029 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-n-76bacae427" Jul 9 23:46:16.145030 kubelet[3029]: I0709 23:46:16.144994 3029 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-n-76bacae427" Jul 9 23:46:16.153770 kubelet[3029]: I0709 23:46:16.153684 3029 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 9 23:46:16.153882 kubelet[3029]: E0709 23:46:16.153798 3029 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.1-n-76bacae427\" already exists" pod="kube-system/kube-scheduler-ci-4344.1.1-n-76bacae427" Jul 9 23:46:16.154478 kubelet[3029]: I0709 23:46:16.154311 3029 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 9 23:46:16.154478 kubelet[3029]: E0709 23:46:16.154348 3029 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.1-n-76bacae427\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.1-n-76bacae427" Jul 9 23:46:16.748904 systemd[1]: Reload requested from client PID 3311 ('systemctl') (unit session-9.scope)... Jul 9 23:46:16.748921 systemd[1]: Reloading... Jul 9 23:46:16.817463 zram_generator::config[3357]: No configuration found. 
Jul 9 23:46:16.888729 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 23:46:16.981214 systemd[1]: Reloading finished in 232 ms. Jul 9 23:46:17.005368 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:46:17.021410 systemd[1]: kubelet.service: Deactivated successfully. Jul 9 23:46:17.021647 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:46:17.021711 systemd[1]: kubelet.service: Consumed 569ms CPU time, 127.2M memory peak. Jul 9 23:46:17.023316 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:46:17.126588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:46:17.135715 (kubelet)[3421]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 9 23:46:17.215159 kubelet[3421]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 23:46:17.216086 kubelet[3421]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 9 23:46:17.216086 kubelet[3421]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 9 23:46:17.216086 kubelet[3421]: I0709 23:46:17.215654 3421 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 9 23:46:17.220881 kubelet[3421]: I0709 23:46:17.220854 3421 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 9 23:46:17.220881 kubelet[3421]: I0709 23:46:17.220877 3421 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 9 23:46:17.221051 kubelet[3421]: I0709 23:46:17.221036 3421 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 9 23:46:17.221998 kubelet[3421]: I0709 23:46:17.221980 3421 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jul 9 23:46:17.223831 kubelet[3421]: I0709 23:46:17.223592 3421 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 9 23:46:17.229621 kubelet[3421]: I0709 23:46:17.228198 3421 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 9 23:46:17.231855 kubelet[3421]: I0709 23:46:17.231839 3421 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 9 23:46:17.232199 kubelet[3421]: I0709 23:46:17.232167 3421 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 9 23:46:17.232405 kubelet[3421]: I0709 23:46:17.232277 3421 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-n-76bacae427","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 9 23:46:17.232572 kubelet[3421]: I0709 23:46:17.232560 3421 topology_manager.go:138] "Creating topology manager with none policy"
Jul 9 23:46:17.232621 kubelet[3421]: I0709 23:46:17.232614 3421 container_manager_linux.go:303] "Creating device plugin manager"
Jul 9 23:46:17.232696 kubelet[3421]: I0709 23:46:17.232689 3421 state_mem.go:36] "Initialized new in-memory state store"
Jul 9 23:46:17.232884 kubelet[3421]: I0709 23:46:17.232872 3421 kubelet.go:480] "Attempting to sync node with API server"
Jul 9 23:46:17.232948 kubelet[3421]: I0709 23:46:17.232939 3421 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 9 23:46:17.233009 kubelet[3421]: I0709 23:46:17.233002 3421 kubelet.go:386] "Adding apiserver pod source"
Jul 9 23:46:17.233057 kubelet[3421]: I0709 23:46:17.233050 3421 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 9 23:46:17.237404 kubelet[3421]: I0709 23:46:17.237379 3421 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 9 23:46:17.238069 kubelet[3421]: I0709 23:46:17.238054 3421 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 9 23:46:17.240580 kubelet[3421]: I0709 23:46:17.240566 3421 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 9 23:46:17.240764 kubelet[3421]: I0709 23:46:17.240745 3421 server.go:1289] "Started kubelet"
Jul 9 23:46:17.241775 kubelet[3421]: I0709 23:46:17.241727 3421 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 9 23:46:17.241996 kubelet[3421]: I0709 23:46:17.241970 3421 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 9 23:46:17.242027 kubelet[3421]: I0709 23:46:17.242010 3421 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 9 23:46:17.242731 kubelet[3421]: I0709 23:46:17.242715 3421 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 9 23:46:17.243657 kubelet[3421]: I0709 23:46:17.243630 3421 server.go:317] "Adding debug handlers to kubelet server"
Jul 9 23:46:17.249247 kubelet[3421]: I0709 23:46:17.249195 3421 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 9 23:46:17.252481 kubelet[3421]: I0709 23:46:17.252066 3421 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 9 23:46:17.253144 kubelet[3421]: I0709 23:46:17.253126 3421 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 9 23:46:17.256943 kubelet[3421]: I0709 23:46:17.256850 3421 factory.go:223] Registration of the systemd container factory successfully
Jul 9 23:46:17.257003 kubelet[3421]: I0709 23:46:17.256955 3421 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 9 23:46:17.260190 kubelet[3421]: I0709 23:46:17.260160 3421 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 9 23:46:17.260421 kubelet[3421]: E0709 23:46:17.260402 3421 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 9 23:46:17.260627 kubelet[3421]: I0709 23:46:17.260603 3421 reconciler.go:26] "Reconciler: start to sync state"
Jul 9 23:46:17.260691 kubelet[3421]: I0709 23:46:17.260642 3421 factory.go:223] Registration of the containerd container factory successfully
Jul 9 23:46:17.264939 kubelet[3421]: I0709 23:46:17.264133 3421 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 9 23:46:17.264939 kubelet[3421]: I0709 23:46:17.264155 3421 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 9 23:46:17.264939 kubelet[3421]: I0709 23:46:17.264175 3421 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 9 23:46:17.264939 kubelet[3421]: I0709 23:46:17.264179 3421 kubelet.go:2436] "Starting kubelet main sync loop"
Jul 9 23:46:17.264939 kubelet[3421]: E0709 23:46:17.264211 3421 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 9 23:46:17.299249 kubelet[3421]: I0709 23:46:17.299214 3421 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 9 23:46:17.299249 kubelet[3421]: I0709 23:46:17.299239 3421 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 9 23:46:17.299249 kubelet[3421]: I0709 23:46:17.299261 3421 state_mem.go:36] "Initialized new in-memory state store"
Jul 9 23:46:17.299443 kubelet[3421]: I0709 23:46:17.299414 3421 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 9 23:46:17.299511 kubelet[3421]: I0709 23:46:17.299423 3421 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 9 23:46:17.299511 kubelet[3421]: I0709 23:46:17.299509 3421 policy_none.go:49] "None policy: Start"
Jul 9 23:46:17.299567 kubelet[3421]: I0709 23:46:17.299518 3421 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 9 23:46:17.299567 kubelet[3421]: I0709 23:46:17.299527 3421 state_mem.go:35] "Initializing new in-memory state store"
Jul 9 23:46:17.299622 kubelet[3421]: I0709 23:46:17.299605 3421 state_mem.go:75] "Updated machine memory state"
Jul 9 23:46:17.305631 kubelet[3421]: E0709 23:46:17.304170 3421 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jul 9 23:46:17.305631 kubelet[3421]: I0709 23:46:17.305091 3421 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 9 23:46:17.305631 kubelet[3421]: I0709 23:46:17.305104 3421 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 9 23:46:17.305631 kubelet[3421]: I0709 23:46:17.305322 3421 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 9 23:46:17.307297 kubelet[3421]: E0709 23:46:17.307267 3421 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 9 23:46:17.365488 kubelet[3421]: I0709 23:46:17.365375 3421 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-n-76bacae427"
Jul 9 23:46:17.366024 kubelet[3421]: I0709 23:46:17.365379 3421 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-n-76bacae427"
Jul 9 23:46:17.366024 kubelet[3421]: I0709 23:46:17.365965 3421 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-76bacae427"
Jul 9 23:46:17.381139 kubelet[3421]: I0709 23:46:17.381097 3421 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jul 9 23:46:17.381612 kubelet[3421]: E0709 23:46:17.381416 3421 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.1-n-76bacae427\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.1-n-76bacae427"
Jul 9 23:46:17.382672 kubelet[3421]: I0709 23:46:17.382649 3421 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jul 9 23:46:17.382870 kubelet[3421]: I0709 23:46:17.382723 3421 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jul 9 23:46:17.383340 kubelet[3421]: E0709 23:46:17.383226 3421 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.1.1-n-76bacae427\" already exists" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-76bacae427"
Jul 9 23:46:17.383340 kubelet[3421]: E0709 23:46:17.382850 3421 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.1-n-76bacae427\" already exists" pod="kube-system/kube-scheduler-ci-4344.1.1-n-76bacae427"
Jul 9 23:46:17.414381 kubelet[3421]: I0709 23:46:17.414290 3421 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-n-76bacae427"
Jul 9 23:46:17.425115 kubelet[3421]: I0709 23:46:17.425059 3421 kubelet_node_status.go:124] "Node was previously registered" node="ci-4344.1.1-n-76bacae427"
Jul 9 23:46:17.425238 kubelet[3421]: I0709 23:46:17.425212 3421 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.1-n-76bacae427"
Jul 9 23:46:17.461283 kubelet[3421]: I0709 23:46:17.461233 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9d2882d2482527b834e0b8007fdb2700-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-n-76bacae427\" (UID: \"9d2882d2482527b834e0b8007fdb2700\") " pod="kube-system/kube-scheduler-ci-4344.1.1-n-76bacae427"
Jul 9 23:46:17.561621 kubelet[3421]: I0709 23:46:17.561458 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/74a93de7e991d2989650c3c235887cd5-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-n-76bacae427\" (UID: \"74a93de7e991d2989650c3c235887cd5\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-76bacae427"
Jul 9 23:46:17.561621 kubelet[3421]: I0709 23:46:17.561588 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74a93de7e991d2989650c3c235887cd5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-n-76bacae427\" (UID: \"74a93de7e991d2989650c3c235887cd5\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-76bacae427"
Jul 9 23:46:17.561621 kubelet[3421]: I0709 23:46:17.561612 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e0eb40cebf0db7962cf510db71d9d64b-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-n-76bacae427\" (UID: \"e0eb40cebf0db7962cf510db71d9d64b\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-76bacae427"
Jul 9 23:46:17.561621 kubelet[3421]: I0709 23:46:17.561623 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e0eb40cebf0db7962cf510db71d9d64b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-n-76bacae427\" (UID: \"e0eb40cebf0db7962cf510db71d9d64b\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-76bacae427"
Jul 9 23:46:17.562192 kubelet[3421]: I0709 23:46:17.561633 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/74a93de7e991d2989650c3c235887cd5-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-76bacae427\" (UID: \"74a93de7e991d2989650c3c235887cd5\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-76bacae427"
Jul 9 23:46:17.562192 kubelet[3421]: I0709 23:46:17.561643 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74a93de7e991d2989650c3c235887cd5-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-76bacae427\" (UID: \"74a93de7e991d2989650c3c235887cd5\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-76bacae427"
Jul 9 23:46:17.562192 kubelet[3421]: I0709 23:46:17.561654 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/74a93de7e991d2989650c3c235887cd5-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-n-76bacae427\" (UID: \"74a93de7e991d2989650c3c235887cd5\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-76bacae427"
Jul 9 23:46:17.562192 kubelet[3421]: I0709 23:46:17.561682 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e0eb40cebf0db7962cf510db71d9d64b-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-n-76bacae427\" (UID: \"e0eb40cebf0db7962cf510db71d9d64b\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-76bacae427"
Jul 9 23:46:17.762810 sudo[3457]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jul 9 23:46:17.763015 sudo[3457]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jul 9 23:46:18.125959 sudo[3457]: pam_unix(sudo:session): session closed for user root
Jul 9 23:46:18.238018 kubelet[3421]: I0709 23:46:18.237582 3421 apiserver.go:52] "Watching apiserver"
Jul 9 23:46:18.261272 kubelet[3421]: I0709 23:46:18.261217 3421 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 9 23:46:18.289530 kubelet[3421]: I0709 23:46:18.289467 3421 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-76bacae427"
Jul 9 23:46:18.290890 kubelet[3421]: I0709 23:46:18.290866 3421 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-n-76bacae427"
Jul 9 23:46:18.305050 kubelet[3421]: I0709 23:46:18.304498 3421 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jul 9 23:46:18.305050 kubelet[3421]: E0709 23:46:18.304548 3421 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.1.1-n-76bacae427\" already exists" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-76bacae427"
Jul 9 23:46:18.307310 kubelet[3421]: I0709 23:46:18.307209 3421 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jul 9 23:46:18.307501 kubelet[3421]: E0709 23:46:18.307408 3421 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.1-n-76bacae427\" already exists" pod="kube-system/kube-scheduler-ci-4344.1.1-n-76bacae427"
Jul 9 23:46:18.318619 kubelet[3421]: I0709 23:46:18.318422 3421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-76bacae427" podStartSLOduration=3.318412502 podStartE2EDuration="3.318412502s" podCreationTimestamp="2025-07-09 23:46:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:46:18.318275761 +0000 UTC m=+1.179392811" watchObservedRunningTime="2025-07-09 23:46:18.318412502 +0000 UTC m=+1.179529544"
Jul 9 23:46:18.339804 kubelet[3421]: I0709 23:46:18.339664 3421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.1.1-n-76bacae427" podStartSLOduration=3.339650007 podStartE2EDuration="3.339650007s" podCreationTimestamp="2025-07-09 23:46:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:46:18.329120846 +0000 UTC m=+1.190237896" watchObservedRunningTime="2025-07-09 23:46:18.339650007 +0000 UTC m=+1.200767049"
Jul 9 23:46:18.350932 kubelet[3421]: I0709 23:46:18.350877 3421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.1.1-n-76bacae427" podStartSLOduration=3.350864209 podStartE2EDuration="3.350864209s" podCreationTimestamp="2025-07-09 23:46:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:46:18.340313943 +0000 UTC m=+1.201430985" watchObservedRunningTime="2025-07-09 23:46:18.350864209 +0000 UTC m=+1.211981251"
Jul 9 23:46:19.086536 sudo[2400]: pam_unix(sudo:session): session closed for user root
Jul 9 23:46:19.158631 sshd[2399]: Connection closed by 10.200.16.10 port 56854
Jul 9 23:46:19.159039 sshd-session[2397]: pam_unix(sshd:session): session closed for user core
Jul 9 23:46:19.162667 systemd-logind[1882]: Session 9 logged out. Waiting for processes to exit.
Jul 9 23:46:19.163499 systemd[1]: sshd@6-10.200.20.14:22-10.200.16.10:56854.service: Deactivated successfully.
Jul 9 23:46:19.165625 systemd[1]: session-9.scope: Deactivated successfully.
Jul 9 23:46:19.165911 systemd[1]: session-9.scope: Consumed 3.853s CPU time, 273.7M memory peak.
Jul 9 23:46:19.168041 systemd-logind[1882]: Removed session 9.
Jul 9 23:46:22.074391 kubelet[3421]: I0709 23:46:22.074360 3421 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 9 23:46:22.075004 containerd[1912]: time="2025-07-09T23:46:22.074916558Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 9 23:46:22.075275 kubelet[3421]: I0709 23:46:22.075121 3421 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 9 23:46:22.696157 systemd[1]: Created slice kubepods-besteffort-podb8a292cf_a4ea_4ef9_b2b9_3c342b877ecf.slice - libcontainer container kubepods-besteffort-podb8a292cf_a4ea_4ef9_b2b9_3c342b877ecf.slice.
Jul 9 23:46:22.708280 systemd[1]: Created slice kubepods-burstable-pod46f90c65_3425_4ac9_ad87_764f78c1a0f3.slice - libcontainer container kubepods-burstable-pod46f90c65_3425_4ac9_ad87_764f78c1a0f3.slice.
Jul 9 23:46:22.789916 kubelet[3421]: I0709 23:46:22.789843 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8a292cf-a4ea-4ef9-b2b9-3c342b877ecf-lib-modules\") pod \"kube-proxy-pvrlg\" (UID: \"b8a292cf-a4ea-4ef9-b2b9-3c342b877ecf\") " pod="kube-system/kube-proxy-pvrlg"
Jul 9 23:46:22.789916 kubelet[3421]: I0709 23:46:22.789888 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-bpf-maps\") pod \"cilium-r9mlv\" (UID: \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\") " pod="kube-system/cilium-r9mlv"
Jul 9 23:46:22.789916 kubelet[3421]: I0709 23:46:22.789900 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-cni-path\") pod \"cilium-r9mlv\" (UID: \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\") " pod="kube-system/cilium-r9mlv"
Jul 9 23:46:22.789916 kubelet[3421]: I0709 23:46:22.789909 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-lib-modules\") pod \"cilium-r9mlv\" (UID: \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\") " pod="kube-system/cilium-r9mlv"
Jul 9 23:46:22.789916 kubelet[3421]: I0709 23:46:22.789918 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-xtables-lock\") pod \"cilium-r9mlv\" (UID: \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\") " pod="kube-system/cilium-r9mlv"
Jul 9 23:46:22.789916 kubelet[3421]: I0709 23:46:22.789929 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-host-proc-sys-net\") pod \"cilium-r9mlv\" (UID: \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\") " pod="kube-system/cilium-r9mlv"
Jul 9 23:46:22.790154 kubelet[3421]: I0709 23:46:22.789938 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-cilium-run\") pod \"cilium-r9mlv\" (UID: \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\") " pod="kube-system/cilium-r9mlv"
Jul 9 23:46:22.790154 kubelet[3421]: I0709 23:46:22.789948 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-host-proc-sys-kernel\") pod \"cilium-r9mlv\" (UID: \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\") " pod="kube-system/cilium-r9mlv"
Jul 9 23:46:22.790154 kubelet[3421]: I0709 23:46:22.789959 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hcjm\" (UniqueName: \"kubernetes.io/projected/46f90c65-3425-4ac9-ad87-764f78c1a0f3-kube-api-access-8hcjm\") pod \"cilium-r9mlv\" (UID: \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\") " pod="kube-system/cilium-r9mlv"
Jul 9 23:46:22.790154 kubelet[3421]: I0709 23:46:22.789970 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b8a292cf-a4ea-4ef9-b2b9-3c342b877ecf-kube-proxy\") pod \"kube-proxy-pvrlg\" (UID: \"b8a292cf-a4ea-4ef9-b2b9-3c342b877ecf\") " pod="kube-system/kube-proxy-pvrlg"
Jul 9 23:46:22.790154 kubelet[3421]: I0709 23:46:22.789980 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-hostproc\") pod \"cilium-r9mlv\" (UID: \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\") " pod="kube-system/cilium-r9mlv"
Jul 9 23:46:22.790154 kubelet[3421]: I0709 23:46:22.789996 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-cilium-cgroup\") pod \"cilium-r9mlv\" (UID: \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\") " pod="kube-system/cilium-r9mlv"
Jul 9 23:46:22.790238 kubelet[3421]: I0709 23:46:22.790004 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-etc-cni-netd\") pod \"cilium-r9mlv\" (UID: \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\") " pod="kube-system/cilium-r9mlv"
Jul 9 23:46:22.790238 kubelet[3421]: I0709 23:46:22.790012 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/46f90c65-3425-4ac9-ad87-764f78c1a0f3-hubble-tls\") pod \"cilium-r9mlv\" (UID: \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\") " pod="kube-system/cilium-r9mlv"
Jul 9 23:46:22.790238 kubelet[3421]: I0709 23:46:22.790024 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf7qz\" (UniqueName: \"kubernetes.io/projected/b8a292cf-a4ea-4ef9-b2b9-3c342b877ecf-kube-api-access-xf7qz\") pod \"kube-proxy-pvrlg\" (UID: \"b8a292cf-a4ea-4ef9-b2b9-3c342b877ecf\") " pod="kube-system/kube-proxy-pvrlg"
Jul 9 23:46:22.790238 kubelet[3421]: I0709 23:46:22.790034 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/46f90c65-3425-4ac9-ad87-764f78c1a0f3-clustermesh-secrets\") pod \"cilium-r9mlv\" (UID: \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\") " pod="kube-system/cilium-r9mlv"
Jul 9 23:46:22.790238 kubelet[3421]: I0709 23:46:22.790042 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/46f90c65-3425-4ac9-ad87-764f78c1a0f3-cilium-config-path\") pod \"cilium-r9mlv\" (UID: \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\") " pod="kube-system/cilium-r9mlv"
Jul 9 23:46:22.790309 kubelet[3421]: I0709 23:46:22.790051 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8a292cf-a4ea-4ef9-b2b9-3c342b877ecf-xtables-lock\") pod \"kube-proxy-pvrlg\" (UID: \"b8a292cf-a4ea-4ef9-b2b9-3c342b877ecf\") " pod="kube-system/kube-proxy-pvrlg"
Jul 9 23:46:22.902268 kubelet[3421]: E0709 23:46:22.902237 3421 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jul 9 23:46:22.902268 kubelet[3421]: E0709 23:46:22.902268 3421 projected.go:194] Error preparing data for projected volume kube-api-access-xf7qz for pod kube-system/kube-proxy-pvrlg: configmap "kube-root-ca.crt" not found
Jul 9 23:46:22.902418 kubelet[3421]: E0709 23:46:22.902333 3421 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b8a292cf-a4ea-4ef9-b2b9-3c342b877ecf-kube-api-access-xf7qz podName:b8a292cf-a4ea-4ef9-b2b9-3c342b877ecf nodeName:}" failed. No retries permitted until 2025-07-09 23:46:23.402312722 +0000 UTC m=+6.263429764 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xf7qz" (UniqueName: "kubernetes.io/projected/b8a292cf-a4ea-4ef9-b2b9-3c342b877ecf-kube-api-access-xf7qz") pod "kube-proxy-pvrlg" (UID: "b8a292cf-a4ea-4ef9-b2b9-3c342b877ecf") : configmap "kube-root-ca.crt" not found
Jul 9 23:46:22.903692 kubelet[3421]: E0709 23:46:22.903639 3421 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jul 9 23:46:22.903692 kubelet[3421]: E0709 23:46:22.903660 3421 projected.go:194] Error preparing data for projected volume kube-api-access-8hcjm for pod kube-system/cilium-r9mlv: configmap "kube-root-ca.crt" not found
Jul 9 23:46:22.903849 kubelet[3421]: E0709 23:46:22.903825 3421 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/46f90c65-3425-4ac9-ad87-764f78c1a0f3-kube-api-access-8hcjm podName:46f90c65-3425-4ac9-ad87-764f78c1a0f3 nodeName:}" failed. No retries permitted until 2025-07-09 23:46:23.403809921 +0000 UTC m=+6.264926979 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8hcjm" (UniqueName: "kubernetes.io/projected/46f90c65-3425-4ac9-ad87-764f78c1a0f3-kube-api-access-8hcjm") pod "cilium-r9mlv" (UID: "46f90c65-3425-4ac9-ad87-764f78c1a0f3") : configmap "kube-root-ca.crt" not found
Jul 9 23:46:23.284111 systemd[1]: Created slice kubepods-besteffort-podf39ab7ab_8c6d_49cc_9213_ba71667bdcf6.slice - libcontainer container kubepods-besteffort-podf39ab7ab_8c6d_49cc_9213_ba71667bdcf6.slice.
Jul 9 23:46:23.294188 kubelet[3421]: I0709 23:46:23.293663 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xncbz\" (UniqueName: \"kubernetes.io/projected/f39ab7ab-8c6d-49cc-9213-ba71667bdcf6-kube-api-access-xncbz\") pod \"cilium-operator-6c4d7847fc-ndz4d\" (UID: \"f39ab7ab-8c6d-49cc-9213-ba71667bdcf6\") " pod="kube-system/cilium-operator-6c4d7847fc-ndz4d"
Jul 9 23:46:23.295011 kubelet[3421]: I0709 23:46:23.294547 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f39ab7ab-8c6d-49cc-9213-ba71667bdcf6-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-ndz4d\" (UID: \"f39ab7ab-8c6d-49cc-9213-ba71667bdcf6\") " pod="kube-system/cilium-operator-6c4d7847fc-ndz4d"
Jul 9 23:46:23.588802 containerd[1912]: time="2025-07-09T23:46:23.588688610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-ndz4d,Uid:f39ab7ab-8c6d-49cc-9213-ba71667bdcf6,Namespace:kube-system,Attempt:0,}"
Jul 9 23:46:23.604680 containerd[1912]: time="2025-07-09T23:46:23.604523454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pvrlg,Uid:b8a292cf-a4ea-4ef9-b2b9-3c342b877ecf,Namespace:kube-system,Attempt:0,}"
Jul 9 23:46:23.612544 containerd[1912]: time="2025-07-09T23:46:23.612501554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r9mlv,Uid:46f90c65-3425-4ac9-ad87-764f78c1a0f3,Namespace:kube-system,Attempt:0,}"
Jul 9 23:46:23.720872 containerd[1912]: time="2025-07-09T23:46:23.720771151Z" level=info msg="connecting to shim b4d5389fa92681707e5fdd4e91f41e90ff1631f7d5c92dc687c360e2e864d555" address="unix:///run/containerd/s/1eacbe477eb2c8a51146dc1c0ce8f4005f9409bdcbee0bbe0cc46e99d4329696" namespace=k8s.io protocol=ttrpc version=3
Jul 9 23:46:23.740590 systemd[1]: Started cri-containerd-b4d5389fa92681707e5fdd4e91f41e90ff1631f7d5c92dc687c360e2e864d555.scope - libcontainer container b4d5389fa92681707e5fdd4e91f41e90ff1631f7d5c92dc687c360e2e864d555.
Jul 9 23:46:23.776204 containerd[1912]: time="2025-07-09T23:46:23.776162067Z" level=info msg="connecting to shim a8c9e973d12dfdae9ba1cddab3e14a07dde0dc08052de9ea4c27af9df56ef77d" address="unix:///run/containerd/s/3237f20094052e99b7b34112beccb8bc06570bcaf56ffd29c86d531219f38c5e" namespace=k8s.io protocol=ttrpc version=3
Jul 9 23:46:23.782085 containerd[1912]: time="2025-07-09T23:46:23.781838723Z" level=info msg="connecting to shim 6aafcf9d03c31f250007071bda7c67b8dec6f90354b12114420275b68ea9e7e9" address="unix:///run/containerd/s/2da6bde675e868b168877ece118578220c14e164a7a99b936cfbc8f0ff143cf3" namespace=k8s.io protocol=ttrpc version=3
Jul 9 23:46:23.783993 containerd[1912]: time="2025-07-09T23:46:23.783728288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-ndz4d,Uid:f39ab7ab-8c6d-49cc-9213-ba71667bdcf6,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4d5389fa92681707e5fdd4e91f41e90ff1631f7d5c92dc687c360e2e864d555\""
Jul 9 23:46:23.786519 containerd[1912]: time="2025-07-09T23:46:23.786441244Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 9 23:46:23.805590 systemd[1]: Started cri-containerd-6aafcf9d03c31f250007071bda7c67b8dec6f90354b12114420275b68ea9e7e9.scope - libcontainer container 6aafcf9d03c31f250007071bda7c67b8dec6f90354b12114420275b68ea9e7e9.
Jul 9 23:46:23.808745 systemd[1]: Started cri-containerd-a8c9e973d12dfdae9ba1cddab3e14a07dde0dc08052de9ea4c27af9df56ef77d.scope - libcontainer container a8c9e973d12dfdae9ba1cddab3e14a07dde0dc08052de9ea4c27af9df56ef77d.
Jul 9 23:46:23.841629 containerd[1912]: time="2025-07-09T23:46:23.841528117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r9mlv,Uid:46f90c65-3425-4ac9-ad87-764f78c1a0f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"6aafcf9d03c31f250007071bda7c67b8dec6f90354b12114420275b68ea9e7e9\""
Jul 9 23:46:23.849217 containerd[1912]: time="2025-07-09T23:46:23.849154828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pvrlg,Uid:b8a292cf-a4ea-4ef9-b2b9-3c342b877ecf,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8c9e973d12dfdae9ba1cddab3e14a07dde0dc08052de9ea4c27af9df56ef77d\""
Jul 9 23:46:23.857844 containerd[1912]: time="2025-07-09T23:46:23.857809665Z" level=info msg="CreateContainer within sandbox \"a8c9e973d12dfdae9ba1cddab3e14a07dde0dc08052de9ea4c27af9df56ef77d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 9 23:46:23.893252 containerd[1912]: time="2025-07-09T23:46:23.893133975Z" level=info msg="Container 32955f8e7c4f84db309afa62dbf57821cece17f08f47d319066fc65469b5fc03: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:46:23.922480 containerd[1912]: time="2025-07-09T23:46:23.922423631Z" level=info msg="CreateContainer within sandbox \"a8c9e973d12dfdae9ba1cddab3e14a07dde0dc08052de9ea4c27af9df56ef77d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"32955f8e7c4f84db309afa62dbf57821cece17f08f47d319066fc65469b5fc03\""
Jul 9 23:46:23.923890 containerd[1912]: time="2025-07-09T23:46:23.923819490Z" level=info msg="StartContainer for \"32955f8e7c4f84db309afa62dbf57821cece17f08f47d319066fc65469b5fc03\""
Jul 9 23:46:23.925399 containerd[1912]: time="2025-07-09T23:46:23.925325809Z" level=info msg="connecting to shim 32955f8e7c4f84db309afa62dbf57821cece17f08f47d319066fc65469b5fc03" address="unix:///run/containerd/s/3237f20094052e99b7b34112beccb8bc06570bcaf56ffd29c86d531219f38c5e" protocol=ttrpc version=3
Jul 9 23:46:23.942663 systemd[1]: Started cri-containerd-32955f8e7c4f84db309afa62dbf57821cece17f08f47d319066fc65469b5fc03.scope - libcontainer container 32955f8e7c4f84db309afa62dbf57821cece17f08f47d319066fc65469b5fc03.
Jul 9 23:46:23.981266 containerd[1912]: time="2025-07-09T23:46:23.981149078Z" level=info msg="StartContainer for \"32955f8e7c4f84db309afa62dbf57821cece17f08f47d319066fc65469b5fc03\" returns successfully"
Jul 9 23:46:24.314628 kubelet[3421]: I0709 23:46:24.314567 3421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pvrlg" podStartSLOduration=2.314552623 podStartE2EDuration="2.314552623s" podCreationTimestamp="2025-07-09 23:46:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:46:24.314112159 +0000 UTC m=+7.175229209" watchObservedRunningTime="2025-07-09 23:46:24.314552623 +0000 UTC m=+7.175669665"
Jul 9 23:46:25.614680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2721289565.mount: Deactivated successfully.
Jul 9 23:46:26.181949 containerd[1912]: time="2025-07-09T23:46:26.181426423Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:46:26.186331 containerd[1912]: time="2025-07-09T23:46:26.186296289Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Jul 9 23:46:26.191021 containerd[1912]: time="2025-07-09T23:46:26.190993869Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:46:26.191888 containerd[1912]: time="2025-07-09T23:46:26.191778146Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.405298765s"
Jul 9 23:46:26.191888 containerd[1912]: time="2025-07-09T23:46:26.191807667Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jul 9 23:46:26.193078 containerd[1912]: time="2025-07-09T23:46:26.192892859Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 9 23:46:26.200448 containerd[1912]: time="2025-07-09T23:46:26.200411278Z" level=info msg="CreateContainer within sandbox \"b4d5389fa92681707e5fdd4e91f41e90ff1631f7d5c92dc687c360e2e864d555\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 9 23:46:26.233568 containerd[1912]: time="2025-07-09T23:46:26.233523595Z" level=info msg="Container e0f3007449f4ac80765e555074bee59ef3c46063acae17bae9a13d56e5d538ed: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:46:26.251710 containerd[1912]: time="2025-07-09T23:46:26.251669411Z" level=info msg="CreateContainer within sandbox \"b4d5389fa92681707e5fdd4e91f41e90ff1631f7d5c92dc687c360e2e864d555\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e0f3007449f4ac80765e555074bee59ef3c46063acae17bae9a13d56e5d538ed\""
Jul 9 23:46:26.252350 containerd[1912]: time="2025-07-09T23:46:26.252233912Z" level=info msg="StartContainer for \"e0f3007449f4ac80765e555074bee59ef3c46063acae17bae9a13d56e5d538ed\""
Jul 9 23:46:26.253280 containerd[1912]: time="2025-07-09T23:46:26.253234941Z" level=info msg="connecting to shim e0f3007449f4ac80765e555074bee59ef3c46063acae17bae9a13d56e5d538ed" address="unix:///run/containerd/s/1eacbe477eb2c8a51146dc1c0ce8f4005f9409bdcbee0bbe0cc46e99d4329696" protocol=ttrpc version=3
Jul 9 23:46:26.269572 systemd[1]: Started cri-containerd-e0f3007449f4ac80765e555074bee59ef3c46063acae17bae9a13d56e5d538ed.scope - libcontainer container e0f3007449f4ac80765e555074bee59ef3c46063acae17bae9a13d56e5d538ed.
Jul 9 23:46:26.294812 containerd[1912]: time="2025-07-09T23:46:26.294774926Z" level=info msg="StartContainer for \"e0f3007449f4ac80765e555074bee59ef3c46063acae17bae9a13d56e5d538ed\" returns successfully"
Jul 9 23:46:26.320026 kubelet[3421]: I0709 23:46:26.319960 3421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-ndz4d" podStartSLOduration=0.913380356 podStartE2EDuration="3.319946056s" podCreationTimestamp="2025-07-09 23:46:23 +0000 UTC" firstStartedPulling="2025-07-09 23:46:23.786193962 +0000 UTC m=+6.647311012" lastFinishedPulling="2025-07-09 23:46:26.19275967 +0000 UTC m=+9.053876712" observedRunningTime="2025-07-09 23:46:26.31973492 +0000 UTC m=+9.180851962" watchObservedRunningTime="2025-07-09 23:46:26.319946056 +0000 UTC m=+9.181063106"
Jul 9 23:46:29.378520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount516654361.mount: Deactivated successfully.
Jul 9 23:46:31.542999 containerd[1912]: time="2025-07-09T23:46:31.542886252Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:46:31.550001 containerd[1912]: time="2025-07-09T23:46:31.549959836Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Jul 9 23:46:31.558092 containerd[1912]: time="2025-07-09T23:46:31.558021822Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:46:31.559100 containerd[1912]: time="2025-07-09T23:46:31.559077146Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.366159679s"
Jul 9 23:46:31.559197 containerd[1912]: time="2025-07-09T23:46:31.559184366Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jul 9 23:46:31.580113 containerd[1912]: time="2025-07-09T23:46:31.580070644Z" level=info msg="CreateContainer within sandbox \"6aafcf9d03c31f250007071bda7c67b8dec6f90354b12114420275b68ea9e7e9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 9 23:46:31.609453 containerd[1912]: time="2025-07-09T23:46:31.609020172Z" level=info msg="Container 414d57f2d5303836cba5b768e145269fd7c01d0c7f68f382766465dfd5c95ffd: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:46:31.610866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1467929150.mount: Deactivated successfully.
Jul 9 23:46:31.630483 containerd[1912]: time="2025-07-09T23:46:31.630415019Z" level=info msg="CreateContainer within sandbox \"6aafcf9d03c31f250007071bda7c67b8dec6f90354b12114420275b68ea9e7e9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"414d57f2d5303836cba5b768e145269fd7c01d0c7f68f382766465dfd5c95ffd\""
Jul 9 23:46:31.631131 containerd[1912]: time="2025-07-09T23:46:31.631048913Z" level=info msg="StartContainer for \"414d57f2d5303836cba5b768e145269fd7c01d0c7f68f382766465dfd5c95ffd\""
Jul 9 23:46:31.632030 containerd[1912]: time="2025-07-09T23:46:31.631989577Z" level=info msg="connecting to shim 414d57f2d5303836cba5b768e145269fd7c01d0c7f68f382766465dfd5c95ffd" address="unix:///run/containerd/s/2da6bde675e868b168877ece118578220c14e164a7a99b936cfbc8f0ff143cf3" protocol=ttrpc version=3
Jul 9 23:46:31.648570 systemd[1]: Started cri-containerd-414d57f2d5303836cba5b768e145269fd7c01d0c7f68f382766465dfd5c95ffd.scope - libcontainer container 414d57f2d5303836cba5b768e145269fd7c01d0c7f68f382766465dfd5c95ffd.
Jul 9 23:46:31.676516 systemd[1]: cri-containerd-414d57f2d5303836cba5b768e145269fd7c01d0c7f68f382766465dfd5c95ffd.scope: Deactivated successfully.
Jul 9 23:46:31.678096 containerd[1912]: time="2025-07-09T23:46:31.678051103Z" level=info msg="TaskExit event in podsandbox handler container_id:\"414d57f2d5303836cba5b768e145269fd7c01d0c7f68f382766465dfd5c95ffd\" id:\"414d57f2d5303836cba5b768e145269fd7c01d0c7f68f382766465dfd5c95ffd\" pid:3886 exited_at:{seconds:1752104791 nanos:677539261}"
Jul 9 23:46:31.678900 containerd[1912]: time="2025-07-09T23:46:31.678718957Z" level=info msg="received exit event container_id:\"414d57f2d5303836cba5b768e145269fd7c01d0c7f68f382766465dfd5c95ffd\" id:\"414d57f2d5303836cba5b768e145269fd7c01d0c7f68f382766465dfd5c95ffd\" pid:3886 exited_at:{seconds:1752104791 nanos:677539261}"
Jul 9 23:46:31.680780 containerd[1912]: time="2025-07-09T23:46:31.680752811Z" level=info msg="StartContainer for \"414d57f2d5303836cba5b768e145269fd7c01d0c7f68f382766465dfd5c95ffd\" returns successfully"
Jul 9 23:46:31.697622 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-414d57f2d5303836cba5b768e145269fd7c01d0c7f68f382766465dfd5c95ffd-rootfs.mount: Deactivated successfully.
Jul 9 23:46:34.337811 containerd[1912]: time="2025-07-09T23:46:34.337378120Z" level=info msg="CreateContainer within sandbox \"6aafcf9d03c31f250007071bda7c67b8dec6f90354b12114420275b68ea9e7e9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 9 23:46:34.371495 containerd[1912]: time="2025-07-09T23:46:34.371455166Z" level=info msg="Container 08bd64f27d0c9e0039b4750ce2b98835a7156656a07c0adf10e51757d7af507b: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:46:34.373367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3702681355.mount: Deactivated successfully.
Jul 9 23:46:34.397736 containerd[1912]: time="2025-07-09T23:46:34.397694778Z" level=info msg="CreateContainer within sandbox \"6aafcf9d03c31f250007071bda7c67b8dec6f90354b12114420275b68ea9e7e9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"08bd64f27d0c9e0039b4750ce2b98835a7156656a07c0adf10e51757d7af507b\""
Jul 9 23:46:34.399058 containerd[1912]: time="2025-07-09T23:46:34.399013831Z" level=info msg="StartContainer for \"08bd64f27d0c9e0039b4750ce2b98835a7156656a07c0adf10e51757d7af507b\""
Jul 9 23:46:34.399807 containerd[1912]: time="2025-07-09T23:46:34.399756032Z" level=info msg="connecting to shim 08bd64f27d0c9e0039b4750ce2b98835a7156656a07c0adf10e51757d7af507b" address="unix:///run/containerd/s/2da6bde675e868b168877ece118578220c14e164a7a99b936cfbc8f0ff143cf3" protocol=ttrpc version=3
Jul 9 23:46:34.416580 systemd[1]: Started cri-containerd-08bd64f27d0c9e0039b4750ce2b98835a7156656a07c0adf10e51757d7af507b.scope - libcontainer container 08bd64f27d0c9e0039b4750ce2b98835a7156656a07c0adf10e51757d7af507b.
Jul 9 23:46:34.445146 containerd[1912]: time="2025-07-09T23:46:34.445105894Z" level=info msg="StartContainer for \"08bd64f27d0c9e0039b4750ce2b98835a7156656a07c0adf10e51757d7af507b\" returns successfully"
Jul 9 23:46:34.455351 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 9 23:46:34.455736 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 9 23:46:34.456609 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jul 9 23:46:34.459952 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 9 23:46:34.461157 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 9 23:46:34.462817 systemd[1]: cri-containerd-08bd64f27d0c9e0039b4750ce2b98835a7156656a07c0adf10e51757d7af507b.scope: Deactivated successfully.
Jul 9 23:46:34.466600 containerd[1912]: time="2025-07-09T23:46:34.462950340Z" level=info msg="received exit event container_id:\"08bd64f27d0c9e0039b4750ce2b98835a7156656a07c0adf10e51757d7af507b\" id:\"08bd64f27d0c9e0039b4750ce2b98835a7156656a07c0adf10e51757d7af507b\" pid:3932 exited_at:{seconds:1752104794 nanos:462708476}"
Jul 9 23:46:34.466600 containerd[1912]: time="2025-07-09T23:46:34.463162076Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08bd64f27d0c9e0039b4750ce2b98835a7156656a07c0adf10e51757d7af507b\" id:\"08bd64f27d0c9e0039b4750ce2b98835a7156656a07c0adf10e51757d7af507b\" pid:3932 exited_at:{seconds:1752104794 nanos:462708476}"
Jul 9 23:46:34.481600 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 9 23:46:35.337733 containerd[1912]: time="2025-07-09T23:46:35.337603708Z" level=info msg="CreateContainer within sandbox \"6aafcf9d03c31f250007071bda7c67b8dec6f90354b12114420275b68ea9e7e9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 9 23:46:35.370589 containerd[1912]: time="2025-07-09T23:46:35.369367226Z" level=info msg="Container 1e09bcee8669ef627a4844c7e84f22b5f114268efd49d06924e8df77dba6d328: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:46:35.369593 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08bd64f27d0c9e0039b4750ce2b98835a7156656a07c0adf10e51757d7af507b-rootfs.mount: Deactivated successfully.
Jul 9 23:46:35.394684 containerd[1912]: time="2025-07-09T23:46:35.394629067Z" level=info msg="CreateContainer within sandbox \"6aafcf9d03c31f250007071bda7c67b8dec6f90354b12114420275b68ea9e7e9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1e09bcee8669ef627a4844c7e84f22b5f114268efd49d06924e8df77dba6d328\""
Jul 9 23:46:35.395388 containerd[1912]: time="2025-07-09T23:46:35.395243249Z" level=info msg="StartContainer for \"1e09bcee8669ef627a4844c7e84f22b5f114268efd49d06924e8df77dba6d328\""
Jul 9 23:46:35.397275 containerd[1912]: time="2025-07-09T23:46:35.397252923Z" level=info msg="connecting to shim 1e09bcee8669ef627a4844c7e84f22b5f114268efd49d06924e8df77dba6d328" address="unix:///run/containerd/s/2da6bde675e868b168877ece118578220c14e164a7a99b936cfbc8f0ff143cf3" protocol=ttrpc version=3
Jul 9 23:46:35.419563 systemd[1]: Started cri-containerd-1e09bcee8669ef627a4844c7e84f22b5f114268efd49d06924e8df77dba6d328.scope - libcontainer container 1e09bcee8669ef627a4844c7e84f22b5f114268efd49d06924e8df77dba6d328.
Jul 9 23:46:35.446302 systemd[1]: cri-containerd-1e09bcee8669ef627a4844c7e84f22b5f114268efd49d06924e8df77dba6d328.scope: Deactivated successfully.
Jul 9 23:46:35.447654 containerd[1912]: time="2025-07-09T23:46:35.447614134Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1e09bcee8669ef627a4844c7e84f22b5f114268efd49d06924e8df77dba6d328\" id:\"1e09bcee8669ef627a4844c7e84f22b5f114268efd49d06924e8df77dba6d328\" pid:3978 exited_at:{seconds:1752104795 nanos:446129864}"
Jul 9 23:46:35.448779 containerd[1912]: time="2025-07-09T23:46:35.448629675Z" level=info msg="received exit event container_id:\"1e09bcee8669ef627a4844c7e84f22b5f114268efd49d06924e8df77dba6d328\" id:\"1e09bcee8669ef627a4844c7e84f22b5f114268efd49d06924e8df77dba6d328\" pid:3978 exited_at:{seconds:1752104795 nanos:446129864}"
Jul 9 23:46:35.455509 containerd[1912]: time="2025-07-09T23:46:35.455482357Z" level=info msg="StartContainer for \"1e09bcee8669ef627a4844c7e84f22b5f114268efd49d06924e8df77dba6d328\" returns successfully"
Jul 9 23:46:35.466415 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e09bcee8669ef627a4844c7e84f22b5f114268efd49d06924e8df77dba6d328-rootfs.mount: Deactivated successfully.
Jul 9 23:46:36.343138 containerd[1912]: time="2025-07-09T23:46:36.343095971Z" level=info msg="CreateContainer within sandbox \"6aafcf9d03c31f250007071bda7c67b8dec6f90354b12114420275b68ea9e7e9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 9 23:46:36.377017 containerd[1912]: time="2025-07-09T23:46:36.376965982Z" level=info msg="Container ccc507efc29f68744ec3bd4a655c0d89795ea50e91cc1fdf27214f676042dda0: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:46:36.381058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2831986396.mount: Deactivated successfully.
Jul 9 23:46:36.403627 containerd[1912]: time="2025-07-09T23:46:36.403583032Z" level=info msg="CreateContainer within sandbox \"6aafcf9d03c31f250007071bda7c67b8dec6f90354b12114420275b68ea9e7e9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ccc507efc29f68744ec3bd4a655c0d89795ea50e91cc1fdf27214f676042dda0\""
Jul 9 23:46:36.404592 containerd[1912]: time="2025-07-09T23:46:36.404559979Z" level=info msg="StartContainer for \"ccc507efc29f68744ec3bd4a655c0d89795ea50e91cc1fdf27214f676042dda0\""
Jul 9 23:46:36.406613 containerd[1912]: time="2025-07-09T23:46:36.406582861Z" level=info msg="connecting to shim ccc507efc29f68744ec3bd4a655c0d89795ea50e91cc1fdf27214f676042dda0" address="unix:///run/containerd/s/2da6bde675e868b168877ece118578220c14e164a7a99b936cfbc8f0ff143cf3" protocol=ttrpc version=3
Jul 9 23:46:36.422616 systemd[1]: Started cri-containerd-ccc507efc29f68744ec3bd4a655c0d89795ea50e91cc1fdf27214f676042dda0.scope - libcontainer container ccc507efc29f68744ec3bd4a655c0d89795ea50e91cc1fdf27214f676042dda0.
Jul 9 23:46:36.442323 systemd[1]: cri-containerd-ccc507efc29f68744ec3bd4a655c0d89795ea50e91cc1fdf27214f676042dda0.scope: Deactivated successfully.
Jul 9 23:46:36.444345 containerd[1912]: time="2025-07-09T23:46:36.444313388Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ccc507efc29f68744ec3bd4a655c0d89795ea50e91cc1fdf27214f676042dda0\" id:\"ccc507efc29f68744ec3bd4a655c0d89795ea50e91cc1fdf27214f676042dda0\" pid:4019 exited_at:{seconds:1752104796 nanos:444024794}"
Jul 9 23:46:36.448746 containerd[1912]: time="2025-07-09T23:46:36.448621057Z" level=info msg="received exit event container_id:\"ccc507efc29f68744ec3bd4a655c0d89795ea50e91cc1fdf27214f676042dda0\" id:\"ccc507efc29f68744ec3bd4a655c0d89795ea50e91cc1fdf27214f676042dda0\" pid:4019 exited_at:{seconds:1752104796 nanos:444024794}"
Jul 9 23:46:36.454054 containerd[1912]: time="2025-07-09T23:46:36.454016966Z" level=info msg="StartContainer for \"ccc507efc29f68744ec3bd4a655c0d89795ea50e91cc1fdf27214f676042dda0\" returns successfully"
Jul 9 23:46:36.463943 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ccc507efc29f68744ec3bd4a655c0d89795ea50e91cc1fdf27214f676042dda0-rootfs.mount: Deactivated successfully.
Jul 9 23:46:37.348802 containerd[1912]: time="2025-07-09T23:46:37.348421540Z" level=info msg="CreateContainer within sandbox \"6aafcf9d03c31f250007071bda7c67b8dec6f90354b12114420275b68ea9e7e9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 9 23:46:37.381545 containerd[1912]: time="2025-07-09T23:46:37.381471448Z" level=info msg="Container fd8fdcf2cdbbbd74ae2bc3acca75e63574476fb7c6b85baa5595710006db0450: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:46:37.401740 containerd[1912]: time="2025-07-09T23:46:37.401629423Z" level=info msg="CreateContainer within sandbox \"6aafcf9d03c31f250007071bda7c67b8dec6f90354b12114420275b68ea9e7e9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fd8fdcf2cdbbbd74ae2bc3acca75e63574476fb7c6b85baa5595710006db0450\""
Jul 9 23:46:37.402459 containerd[1912]: time="2025-07-09T23:46:37.402320944Z" level=info msg="StartContainer for \"fd8fdcf2cdbbbd74ae2bc3acca75e63574476fb7c6b85baa5595710006db0450\""
Jul 9 23:46:37.403470 containerd[1912]: time="2025-07-09T23:46:37.403412464Z" level=info msg="connecting to shim fd8fdcf2cdbbbd74ae2bc3acca75e63574476fb7c6b85baa5595710006db0450" address="unix:///run/containerd/s/2da6bde675e868b168877ece118578220c14e164a7a99b936cfbc8f0ff143cf3" protocol=ttrpc version=3
Jul 9 23:46:37.421594 systemd[1]: Started cri-containerd-fd8fdcf2cdbbbd74ae2bc3acca75e63574476fb7c6b85baa5595710006db0450.scope - libcontainer container fd8fdcf2cdbbbd74ae2bc3acca75e63574476fb7c6b85baa5595710006db0450.
Jul 9 23:46:37.453455 containerd[1912]: time="2025-07-09T23:46:37.452284533Z" level=info msg="StartContainer for \"fd8fdcf2cdbbbd74ae2bc3acca75e63574476fb7c6b85baa5595710006db0450\" returns successfully"
Jul 9 23:46:37.515141 containerd[1912]: time="2025-07-09T23:46:37.515098710Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd8fdcf2cdbbbd74ae2bc3acca75e63574476fb7c6b85baa5595710006db0450\" id:\"df6965995de876e60b16d92a86313a18dd645fcd430c46cff62bb4cf0454f3b4\" pid:4089 exited_at:{seconds:1752104797 nanos:514701232}"
Jul 9 23:46:37.536993 kubelet[3421]: I0709 23:46:37.536964 3421 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 9 23:46:37.573338 kubelet[3421]: I0709 23:46:37.572981 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7m8p\" (UniqueName: \"kubernetes.io/projected/58f67327-d2d4-4076-aabd-1ab34578b09f-kube-api-access-v7m8p\") pod \"coredns-674b8bbfcf-mlcpr\" (UID: \"58f67327-d2d4-4076-aabd-1ab34578b09f\") " pod="kube-system/coredns-674b8bbfcf-mlcpr"
Jul 9 23:46:37.573916 kubelet[3421]: I0709 23:46:37.573552 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58f67327-d2d4-4076-aabd-1ab34578b09f-config-volume\") pod \"coredns-674b8bbfcf-mlcpr\" (UID: \"58f67327-d2d4-4076-aabd-1ab34578b09f\") " pod="kube-system/coredns-674b8bbfcf-mlcpr"
Jul 9 23:46:37.579162 systemd[1]: Created slice kubepods-burstable-pod58f67327_d2d4_4076_aabd_1ab34578b09f.slice - libcontainer container kubepods-burstable-pod58f67327_d2d4_4076_aabd_1ab34578b09f.slice.
Jul 9 23:46:37.584971 systemd[1]: Created slice kubepods-burstable-pod9cafa5d1_0c0e_4087_b05a_3e9bb08ef164.slice - libcontainer container kubepods-burstable-pod9cafa5d1_0c0e_4087_b05a_3e9bb08ef164.slice.
Jul 9 23:46:37.674787 kubelet[3421]: I0709 23:46:37.674682 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9cafa5d1-0c0e-4087-b05a-3e9bb08ef164-config-volume\") pod \"coredns-674b8bbfcf-425m8\" (UID: \"9cafa5d1-0c0e-4087-b05a-3e9bb08ef164\") " pod="kube-system/coredns-674b8bbfcf-425m8"
Jul 9 23:46:37.674950 kubelet[3421]: I0709 23:46:37.674935 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r462h\" (UniqueName: \"kubernetes.io/projected/9cafa5d1-0c0e-4087-b05a-3e9bb08ef164-kube-api-access-r462h\") pod \"coredns-674b8bbfcf-425m8\" (UID: \"9cafa5d1-0c0e-4087-b05a-3e9bb08ef164\") " pod="kube-system/coredns-674b8bbfcf-425m8"
Jul 9 23:46:37.882910 containerd[1912]: time="2025-07-09T23:46:37.882873530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mlcpr,Uid:58f67327-d2d4-4076-aabd-1ab34578b09f,Namespace:kube-system,Attempt:0,}"
Jul 9 23:46:37.889701 containerd[1912]: time="2025-07-09T23:46:37.889663994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-425m8,Uid:9cafa5d1-0c0e-4087-b05a-3e9bb08ef164,Namespace:kube-system,Attempt:0,}"
Jul 9 23:46:39.401941 systemd-networkd[1483]: cilium_host: Link UP
Jul 9 23:46:39.404987 systemd-networkd[1483]: cilium_net: Link UP
Jul 9 23:46:39.405864 systemd-networkd[1483]: cilium_host: Gained carrier
Jul 9 23:46:39.407138 systemd-networkd[1483]: cilium_net: Gained carrier
Jul 9 23:46:39.465640 systemd-networkd[1483]: cilium_net: Gained IPv6LL
Jul 9 23:46:39.553284 systemd-networkd[1483]: cilium_vxlan: Link UP
Jul 9 23:46:39.553289 systemd-networkd[1483]: cilium_vxlan: Gained carrier
Jul 9 23:46:39.753480 kernel: NET: Registered PF_ALG protocol family
Jul 9 23:46:40.209841 systemd-networkd[1483]: lxc_health: Link UP
Jul 9 23:46:40.219754 systemd-networkd[1483]: lxc_health: Gained carrier
Jul 9 23:46:40.413414 systemd-networkd[1483]: lxcfbdf7c6f1d92: Link UP
Jul 9 23:46:40.421967 kernel: eth0: renamed from tmpe871b
Jul 9 23:46:40.421503 systemd-networkd[1483]: lxcfbdf7c6f1d92: Gained carrier
Jul 9 23:46:40.430537 systemd-networkd[1483]: cilium_host: Gained IPv6LL
Jul 9 23:46:40.433948 systemd-networkd[1483]: lxc62597fe00b6d: Link UP
Jul 9 23:46:40.445462 kernel: eth0: renamed from tmpdf12d
Jul 9 23:46:40.449101 systemd-networkd[1483]: lxc62597fe00b6d: Gained carrier
Jul 9 23:46:40.622619 systemd-networkd[1483]: cilium_vxlan: Gained IPv6LL
Jul 9 23:46:41.632753 kubelet[3421]: I0709 23:46:41.632692 3421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-r9mlv" podStartSLOduration=11.91574229 podStartE2EDuration="19.632679415s" podCreationTimestamp="2025-07-09 23:46:22 +0000 UTC" firstStartedPulling="2025-07-09 23:46:23.84297057 +0000 UTC m=+6.704087612" lastFinishedPulling="2025-07-09 23:46:31.559907695 +0000 UTC m=+14.421024737" observedRunningTime="2025-07-09 23:46:38.360243842 +0000 UTC m=+21.221360884" watchObservedRunningTime="2025-07-09 23:46:41.632679415 +0000 UTC m=+24.493796457"
Jul 9 23:46:41.646563 systemd-networkd[1483]: lxc62597fe00b6d: Gained IPv6LL
Jul 9 23:46:41.903667 systemd-networkd[1483]: lxcfbdf7c6f1d92: Gained IPv6LL
Jul 9 23:46:41.966676 systemd-networkd[1483]: lxc_health: Gained IPv6LL
Jul 9 23:46:43.030061 containerd[1912]: time="2025-07-09T23:46:43.029997409Z" level=info msg="connecting to shim df12d615412ccbaf30a7257df7a00058f429b3b9d89055765ca515cb33bab839" address="unix:///run/containerd/s/7c80e466685e89cb722732f052415ca7d0106cfe7ab78e84603072f9ba30f62f" namespace=k8s.io protocol=ttrpc version=3
Jul 9 23:46:43.033666 containerd[1912]: time="2025-07-09T23:46:43.033626854Z" level=info msg="connecting to shim e871bb80735d0922004c4932c404e01b49c64b5c6ab952fa4861c0d886272041" address="unix:///run/containerd/s/c569f0f27aad8644328c65ee8cdcb77e4be8ded06c127f98a75739082e445b34" namespace=k8s.io protocol=ttrpc version=3
Jul 9 23:46:43.055564 systemd[1]: Started cri-containerd-df12d615412ccbaf30a7257df7a00058f429b3b9d89055765ca515cb33bab839.scope - libcontainer container df12d615412ccbaf30a7257df7a00058f429b3b9d89055765ca515cb33bab839.
Jul 9 23:46:43.058735 systemd[1]: Started cri-containerd-e871bb80735d0922004c4932c404e01b49c64b5c6ab952fa4861c0d886272041.scope - libcontainer container e871bb80735d0922004c4932c404e01b49c64b5c6ab952fa4861c0d886272041.
Jul 9 23:46:43.104258 containerd[1912]: time="2025-07-09T23:46:43.104205424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-425m8,Uid:9cafa5d1-0c0e-4087-b05a-3e9bb08ef164,Namespace:kube-system,Attempt:0,} returns sandbox id \"df12d615412ccbaf30a7257df7a00058f429b3b9d89055765ca515cb33bab839\""
Jul 9 23:46:43.121709 containerd[1912]: time="2025-07-09T23:46:43.121666998Z" level=info msg="CreateContainer within sandbox \"df12d615412ccbaf30a7257df7a00058f429b3b9d89055765ca515cb33bab839\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 9 23:46:43.122606 containerd[1912]: time="2025-07-09T23:46:43.122562637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mlcpr,Uid:58f67327-d2d4-4076-aabd-1ab34578b09f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e871bb80735d0922004c4932c404e01b49c64b5c6ab952fa4861c0d886272041\""
Jul 9 23:46:43.138105 containerd[1912]: time="2025-07-09T23:46:43.138035807Z" level=info msg="CreateContainer within sandbox \"e871bb80735d0922004c4932c404e01b49c64b5c6ab952fa4861c0d886272041\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 9 23:46:43.169129 containerd[1912]: time="2025-07-09T23:46:43.168963162Z" level=info msg="Container 9533bf328d83134277ed654778e9e9949845bb356759b58d01d0458f48e184dc: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:46:43.182269 containerd[1912]: time="2025-07-09T23:46:43.181703335Z" level=info msg="Container 00ef6c181788c700c9db6fcd675870b5820117b50b6754d09959d1ac4540bbde: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:46:43.209820 containerd[1912]: time="2025-07-09T23:46:43.209761936Z" level=info msg="CreateContainer within sandbox \"df12d615412ccbaf30a7257df7a00058f429b3b9d89055765ca515cb33bab839\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9533bf328d83134277ed654778e9e9949845bb356759b58d01d0458f48e184dc\""
Jul 9 23:46:43.210697 containerd[1912]: time="2025-07-09T23:46:43.210628782Z" level=info msg="StartContainer for \"9533bf328d83134277ed654778e9e9949845bb356759b58d01d0458f48e184dc\""
Jul 9 23:46:43.212071 containerd[1912]: time="2025-07-09T23:46:43.212024406Z" level=info msg="connecting to shim 9533bf328d83134277ed654778e9e9949845bb356759b58d01d0458f48e184dc" address="unix:///run/containerd/s/7c80e466685e89cb722732f052415ca7d0106cfe7ab78e84603072f9ba30f62f" protocol=ttrpc version=3
Jul 9 23:46:43.229783 containerd[1912]: time="2025-07-09T23:46:43.229719324Z" level=info msg="CreateContainer within sandbox \"e871bb80735d0922004c4932c404e01b49c64b5c6ab952fa4861c0d886272041\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"00ef6c181788c700c9db6fcd675870b5820117b50b6754d09959d1ac4540bbde\""
Jul 9 23:46:43.230583 containerd[1912]: time="2025-07-09T23:46:43.230553561Z" level=info msg="StartContainer for \"00ef6c181788c700c9db6fcd675870b5820117b50b6754d09959d1ac4540bbde\""
Jul 9 23:46:43.231646 systemd[1]: Started cri-containerd-9533bf328d83134277ed654778e9e9949845bb356759b58d01d0458f48e184dc.scope - libcontainer container 9533bf328d83134277ed654778e9e9949845bb356759b58d01d0458f48e184dc.
Jul 9 23:46:43.234984 containerd[1912]: time="2025-07-09T23:46:43.234937359Z" level=info msg="connecting to shim 00ef6c181788c700c9db6fcd675870b5820117b50b6754d09959d1ac4540bbde" address="unix:///run/containerd/s/c569f0f27aad8644328c65ee8cdcb77e4be8ded06c127f98a75739082e445b34" protocol=ttrpc version=3
Jul 9 23:46:43.257643 systemd[1]: Started cri-containerd-00ef6c181788c700c9db6fcd675870b5820117b50b6754d09959d1ac4540bbde.scope - libcontainer container 00ef6c181788c700c9db6fcd675870b5820117b50b6754d09959d1ac4540bbde.
Jul 9 23:46:43.292144 containerd[1912]: time="2025-07-09T23:46:43.291999298Z" level=info msg="StartContainer for \"9533bf328d83134277ed654778e9e9949845bb356759b58d01d0458f48e184dc\" returns successfully"
Jul 9 23:46:43.302819 containerd[1912]: time="2025-07-09T23:46:43.302521970Z" level=info msg="StartContainer for \"00ef6c181788c700c9db6fcd675870b5820117b50b6754d09959d1ac4540bbde\" returns successfully"
Jul 9 23:46:43.385808 kubelet[3421]: I0709 23:46:43.385754 3421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-425m8" podStartSLOduration=20.385738629 podStartE2EDuration="20.385738629s" podCreationTimestamp="2025-07-09 23:46:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:46:43.383597548 +0000 UTC m=+26.244714590" watchObservedRunningTime="2025-07-09 23:46:43.385738629 +0000 UTC m=+26.246855671"
Jul 9 23:46:43.386512 kubelet[3421]: I0709 23:46:43.385935 3421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mlcpr" podStartSLOduration=20.385930044 podStartE2EDuration="20.385930044s" podCreationTimestamp="2025-07-09 23:46:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:46:43.370563173 +0000 UTC m=+26.231680223" watchObservedRunningTime="2025-07-09 23:46:43.385930044 +0000 UTC m=+26.247047086"
Jul 9 23:46:52.459019 kubelet[3421]: I0709 23:46:52.458848 3421 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 9 23:47:53.216008 systemd[1]: Started sshd@7-10.200.20.14:22-10.200.16.10:38452.service - OpenSSH per-connection server daemon (10.200.16.10:38452).
Jul 9 23:47:53.711201 sshd[4741]: Accepted publickey for core from 10.200.16.10 port 38452 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:47:53.712559 sshd-session[4741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:47:53.716401 systemd-logind[1882]: New session 10 of user core.
Jul 9 23:47:53.723557 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 9 23:47:54.109412 sshd[4743]: Connection closed by 10.200.16.10 port 38452
Jul 9 23:47:54.109962 sshd-session[4741]: pam_unix(sshd:session): session closed for user core
Jul 9 23:47:54.112946 systemd[1]: sshd@7-10.200.20.14:22-10.200.16.10:38452.service: Deactivated successfully.
Jul 9 23:47:54.114585 systemd[1]: session-10.scope: Deactivated successfully.
Jul 9 23:47:54.116055 systemd-logind[1882]: Session 10 logged out. Waiting for processes to exit.
Jul 9 23:47:54.117569 systemd-logind[1882]: Removed session 10.
Jul 9 23:47:59.184896 systemd[1]: Started sshd@8-10.200.20.14:22-10.200.16.10:38458.service - OpenSSH per-connection server daemon (10.200.16.10:38458).
Jul 9 23:47:59.618520 sshd[4758]: Accepted publickey for core from 10.200.16.10 port 38458 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:47:59.619637 sshd-session[4758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:47:59.623466 systemd-logind[1882]: New session 11 of user core.
Jul 9 23:47:59.631561 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 9 23:47:59.990741 sshd[4760]: Connection closed by 10.200.16.10 port 38458
Jul 9 23:47:59.991228 sshd-session[4758]: pam_unix(sshd:session): session closed for user core
Jul 9 23:47:59.994525 systemd[1]: sshd@8-10.200.20.14:22-10.200.16.10:38458.service: Deactivated successfully.
Jul 9 23:47:59.996100 systemd[1]: session-11.scope: Deactivated successfully.
Jul 9 23:47:59.996695 systemd-logind[1882]: Session 11 logged out. Waiting for processes to exit.
Jul 9 23:47:59.997819 systemd-logind[1882]: Removed session 11.
Jul 9 23:48:05.087285 systemd[1]: Started sshd@9-10.200.20.14:22-10.200.16.10:52594.service - OpenSSH per-connection server daemon (10.200.16.10:52594).
Jul 9 23:48:05.581301 sshd[4773]: Accepted publickey for core from 10.200.16.10 port 52594 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:48:05.582400 sshd-session[4773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:48:05.586106 systemd-logind[1882]: New session 12 of user core.
Jul 9 23:48:05.593555 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 9 23:48:05.972728 sshd[4775]: Connection closed by 10.200.16.10 port 52594
Jul 9 23:48:05.971899 sshd-session[4773]: pam_unix(sshd:session): session closed for user core
Jul 9 23:48:05.975044 systemd[1]: sshd@9-10.200.20.14:22-10.200.16.10:52594.service: Deactivated successfully.
Jul 9 23:48:05.976613 systemd[1]: session-12.scope: Deactivated successfully.
Jul 9 23:48:05.977263 systemd-logind[1882]: Session 12 logged out. Waiting for processes to exit.
Jul 9 23:48:05.978473 systemd-logind[1882]: Removed session 12.
Jul 9 23:48:11.061480 systemd[1]: Started sshd@10-10.200.20.14:22-10.200.16.10:33776.service - OpenSSH per-connection server daemon (10.200.16.10:33776).
Jul 9 23:48:11.536061 sshd[4787]: Accepted publickey for core from 10.200.16.10 port 33776 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:48:11.537160 sshd-session[4787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:48:11.540687 systemd-logind[1882]: New session 13 of user core.
Jul 9 23:48:11.548561 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 9 23:48:11.924952 sshd[4789]: Connection closed by 10.200.16.10 port 33776
Jul 9 23:48:11.925725 sshd-session[4787]: pam_unix(sshd:session): session closed for user core
Jul 9 23:48:11.928840 systemd[1]: sshd@10-10.200.20.14:22-10.200.16.10:33776.service: Deactivated successfully.
Jul 9 23:48:11.930589 systemd[1]: session-13.scope: Deactivated successfully.
Jul 9 23:48:11.931419 systemd-logind[1882]: Session 13 logged out. Waiting for processes to exit.
Jul 9 23:48:11.932600 systemd-logind[1882]: Removed session 13.
Jul 9 23:48:17.008616 systemd[1]: Started sshd@11-10.200.20.14:22-10.200.16.10:33786.service - OpenSSH per-connection server daemon (10.200.16.10:33786).
Jul 9 23:48:17.484501 sshd[4805]: Accepted publickey for core from 10.200.16.10 port 33786 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:48:17.486105 sshd-session[4805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:48:17.494030 systemd-logind[1882]: New session 14 of user core.
Jul 9 23:48:17.498598 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 9 23:48:17.870465 sshd[4809]: Connection closed by 10.200.16.10 port 33786
Jul 9 23:48:17.871031 sshd-session[4805]: pam_unix(sshd:session): session closed for user core
Jul 9 23:48:17.874190 systemd-logind[1882]: Session 14 logged out. Waiting for processes to exit.
Jul 9 23:48:17.874723 systemd[1]: sshd@11-10.200.20.14:22-10.200.16.10:33786.service: Deactivated successfully.
Jul 9 23:48:17.876261 systemd[1]: session-14.scope: Deactivated successfully.
Jul 9 23:48:17.877984 systemd-logind[1882]: Removed session 14.
Jul 9 23:48:17.955817 systemd[1]: Started sshd@12-10.200.20.14:22-10.200.16.10:33798.service - OpenSSH per-connection server daemon (10.200.16.10:33798).
Jul 9 23:48:18.428822 sshd[4821]: Accepted publickey for core from 10.200.16.10 port 33798 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:48:18.429919 sshd-session[4821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:48:18.433327 systemd-logind[1882]: New session 15 of user core.
Jul 9 23:48:18.443614 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 9 23:48:18.836145 sshd[4823]: Connection closed by 10.200.16.10 port 33798
Jul 9 23:48:18.836525 sshd-session[4821]: pam_unix(sshd:session): session closed for user core
Jul 9 23:48:18.839802 systemd[1]: sshd@12-10.200.20.14:22-10.200.16.10:33798.service: Deactivated successfully.
Jul 9 23:48:18.842524 systemd[1]: session-15.scope: Deactivated successfully.
Jul 9 23:48:18.843958 systemd-logind[1882]: Session 15 logged out. Waiting for processes to exit.
Jul 9 23:48:18.846156 systemd-logind[1882]: Removed session 15.
Jul 9 23:48:18.926875 systemd[1]: Started sshd@13-10.200.20.14:22-10.200.16.10:33800.service - OpenSSH per-connection server daemon (10.200.16.10:33800).
Jul 9 23:48:19.406275 sshd[4833]: Accepted publickey for core from 10.200.16.10 port 33800 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:48:19.407426 sshd-session[4833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:48:19.411539 systemd-logind[1882]: New session 16 of user core.
Jul 9 23:48:19.415536 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 9 23:48:19.787130 sshd[4835]: Connection closed by 10.200.16.10 port 33800
Jul 9 23:48:19.787652 sshd-session[4833]: pam_unix(sshd:session): session closed for user core
Jul 9 23:48:19.790600 systemd-logind[1882]: Session 16 logged out. Waiting for processes to exit.
Jul 9 23:48:19.791176 systemd[1]: sshd@13-10.200.20.14:22-10.200.16.10:33800.service: Deactivated successfully.
Jul 9 23:48:19.793536 systemd[1]: session-16.scope: Deactivated successfully.
Jul 9 23:48:19.795774 systemd-logind[1882]: Removed session 16.
Jul 9 23:48:24.868188 systemd[1]: Started sshd@14-10.200.20.14:22-10.200.16.10:58334.service - OpenSSH per-connection server daemon (10.200.16.10:58334).
Jul 9 23:48:25.334244 sshd[4849]: Accepted publickey for core from 10.200.16.10 port 58334 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:48:25.335325 sshd-session[4849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:48:25.339706 systemd-logind[1882]: New session 17 of user core.
Jul 9 23:48:25.345544 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 9 23:48:25.701540 sshd[4851]: Connection closed by 10.200.16.10 port 58334
Jul 9 23:48:25.702254 sshd-session[4849]: pam_unix(sshd:session): session closed for user core
Jul 9 23:48:25.705144 systemd[1]: sshd@14-10.200.20.14:22-10.200.16.10:58334.service: Deactivated successfully.
Jul 9 23:48:25.706770 systemd[1]: session-17.scope: Deactivated successfully.
Jul 9 23:48:25.707501 systemd-logind[1882]: Session 17 logged out. Waiting for processes to exit.
Jul 9 23:48:25.708803 systemd-logind[1882]: Removed session 17.
Jul 9 23:48:25.793117 systemd[1]: Started sshd@15-10.200.20.14:22-10.200.16.10:58344.service - OpenSSH per-connection server daemon (10.200.16.10:58344).
Jul 9 23:48:26.297527 sshd[4862]: Accepted publickey for core from 10.200.16.10 port 58344 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:48:26.298647 sshd-session[4862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:48:26.302835 systemd-logind[1882]: New session 18 of user core.
Jul 9 23:48:26.308535 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 9 23:48:26.729340 sshd[4864]: Connection closed by 10.200.16.10 port 58344
Jul 9 23:48:26.729985 sshd-session[4862]: pam_unix(sshd:session): session closed for user core
Jul 9 23:48:26.733255 systemd-logind[1882]: Session 18 logged out. Waiting for processes to exit.
Jul 9 23:48:26.733753 systemd[1]: sshd@15-10.200.20.14:22-10.200.16.10:58344.service: Deactivated successfully.
Jul 9 23:48:26.735073 systemd[1]: session-18.scope: Deactivated successfully.
Jul 9 23:48:26.737129 systemd-logind[1882]: Removed session 18.
Jul 9 23:48:26.811560 systemd[1]: Started sshd@16-10.200.20.14:22-10.200.16.10:58356.service - OpenSSH per-connection server daemon (10.200.16.10:58356).
Jul 9 23:48:27.287074 sshd[4873]: Accepted publickey for core from 10.200.16.10 port 58356 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:48:27.288180 sshd-session[4873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:48:27.291921 systemd-logind[1882]: New session 19 of user core.
Jul 9 23:48:27.302567 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 9 23:48:28.203166 sshd[4875]: Connection closed by 10.200.16.10 port 58356
Jul 9 23:48:28.203572 sshd-session[4873]: pam_unix(sshd:session): session closed for user core
Jul 9 23:48:28.206863 systemd[1]: sshd@16-10.200.20.14:22-10.200.16.10:58356.service: Deactivated successfully.
Jul 9 23:48:28.208569 systemd[1]: session-19.scope: Deactivated successfully.
Jul 9 23:48:28.209201 systemd-logind[1882]: Session 19 logged out. Waiting for processes to exit.
Jul 9 23:48:28.210601 systemd-logind[1882]: Removed session 19.
Jul 9 23:48:28.297202 systemd[1]: Started sshd@17-10.200.20.14:22-10.200.16.10:58358.service - OpenSSH per-connection server daemon (10.200.16.10:58358).
Jul 9 23:48:28.795528 sshd[4892]: Accepted publickey for core from 10.200.16.10 port 58358 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:48:28.796671 sshd-session[4892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:48:28.800721 systemd-logind[1882]: New session 20 of user core.
Jul 9 23:48:28.806564 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 9 23:48:29.277756 sshd[4894]: Connection closed by 10.200.16.10 port 58358
Jul 9 23:48:29.276944 sshd-session[4892]: pam_unix(sshd:session): session closed for user core
Jul 9 23:48:29.280085 systemd-logind[1882]: Session 20 logged out. Waiting for processes to exit.
Jul 9 23:48:29.280244 systemd[1]: sshd@17-10.200.20.14:22-10.200.16.10:58358.service: Deactivated successfully.
Jul 9 23:48:29.282995 systemd[1]: session-20.scope: Deactivated successfully.
Jul 9 23:48:29.285349 systemd-logind[1882]: Removed session 20.
Jul 9 23:48:29.374639 systemd[1]: Started sshd@18-10.200.20.14:22-10.200.16.10:58374.service - OpenSSH per-connection server daemon (10.200.16.10:58374).
Jul 9 23:48:29.851420 sshd[4904]: Accepted publickey for core from 10.200.16.10 port 58374 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:48:29.852497 sshd-session[4904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:48:29.856511 systemd-logind[1882]: New session 21 of user core.
Jul 9 23:48:29.861553 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 9 23:48:30.235635 sshd[4906]: Connection closed by 10.200.16.10 port 58374
Jul 9 23:48:30.236240 sshd-session[4904]: pam_unix(sshd:session): session closed for user core
Jul 9 23:48:30.239376 systemd[1]: sshd@18-10.200.20.14:22-10.200.16.10:58374.service: Deactivated successfully.
Jul 9 23:48:30.241326 systemd[1]: session-21.scope: Deactivated successfully.
Jul 9 23:48:30.242185 systemd-logind[1882]: Session 21 logged out. Waiting for processes to exit.
Jul 9 23:48:30.243740 systemd-logind[1882]: Removed session 21.
Jul 9 23:48:35.319524 systemd[1]: Started sshd@19-10.200.20.14:22-10.200.16.10:44622.service - OpenSSH per-connection server daemon (10.200.16.10:44622).
Jul 9 23:48:35.749486 sshd[4920]: Accepted publickey for core from 10.200.16.10 port 44622 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:48:35.750637 sshd-session[4920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:48:35.754505 systemd-logind[1882]: New session 22 of user core.
Jul 9 23:48:35.759570 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 9 23:48:36.117634 sshd[4922]: Connection closed by 10.200.16.10 port 44622
Jul 9 23:48:36.118193 sshd-session[4920]: pam_unix(sshd:session): session closed for user core
Jul 9 23:48:36.121235 systemd[1]: sshd@19-10.200.20.14:22-10.200.16.10:44622.service: Deactivated successfully.
Jul 9 23:48:36.122604 systemd[1]: session-22.scope: Deactivated successfully.
Jul 9 23:48:36.123159 systemd-logind[1882]: Session 22 logged out. Waiting for processes to exit.
Jul 9 23:48:36.124267 systemd-logind[1882]: Removed session 22.
Jul 9 23:48:41.209189 systemd[1]: Started sshd@20-10.200.20.14:22-10.200.16.10:54682.service - OpenSSH per-connection server daemon (10.200.16.10:54682).
Jul 9 23:48:41.705898 sshd[4933]: Accepted publickey for core from 10.200.16.10 port 54682 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:48:41.706987 sshd-session[4933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:48:41.710484 systemd-logind[1882]: New session 23 of user core.
Jul 9 23:48:41.714538 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 9 23:48:42.104477 sshd[4935]: Connection closed by 10.200.16.10 port 54682
Jul 9 23:48:42.105029 sshd-session[4933]: pam_unix(sshd:session): session closed for user core
Jul 9 23:48:42.108133 systemd-logind[1882]: Session 23 logged out. Waiting for processes to exit.
Jul 9 23:48:42.108265 systemd[1]: sshd@20-10.200.20.14:22-10.200.16.10:54682.service: Deactivated successfully.
Jul 9 23:48:42.110672 systemd[1]: session-23.scope: Deactivated successfully.
Jul 9 23:48:42.111641 systemd-logind[1882]: Removed session 23.
Jul 9 23:48:42.184974 systemd[1]: Started sshd@21-10.200.20.14:22-10.200.16.10:54690.service - OpenSSH per-connection server daemon (10.200.16.10:54690).
Jul 9 23:48:42.661450 sshd[4946]: Accepted publickey for core from 10.200.16.10 port 54690 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:48:42.662640 sshd-session[4946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:48:42.666456 systemd-logind[1882]: New session 24 of user core.
Jul 9 23:48:42.670549 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 9 23:48:44.206847 containerd[1912]: time="2025-07-09T23:48:44.206691442Z" level=info msg="StopContainer for \"e0f3007449f4ac80765e555074bee59ef3c46063acae17bae9a13d56e5d538ed\" with timeout 30 (s)"
Jul 9 23:48:44.207520 containerd[1912]: time="2025-07-09T23:48:44.207496310Z" level=info msg="Stop container \"e0f3007449f4ac80765e555074bee59ef3c46063acae17bae9a13d56e5d538ed\" with signal terminated"
Jul 9 23:48:44.221113 systemd[1]: cri-containerd-e0f3007449f4ac80765e555074bee59ef3c46063acae17bae9a13d56e5d538ed.scope: Deactivated successfully.
Jul 9 23:48:44.222699 containerd[1912]: time="2025-07-09T23:48:44.221846410Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 9 23:48:44.224419 containerd[1912]: time="2025-07-09T23:48:44.224392433Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0f3007449f4ac80765e555074bee59ef3c46063acae17bae9a13d56e5d538ed\" id:\"e0f3007449f4ac80765e555074bee59ef3c46063acae17bae9a13d56e5d538ed\" pid:3824 exited_at:{seconds:1752104924 nanos:223683569}"
Jul 9 23:48:44.225139 containerd[1912]: time="2025-07-09T23:48:44.224911411Z" level=info msg="received exit event container_id:\"e0f3007449f4ac80765e555074bee59ef3c46063acae17bae9a13d56e5d538ed\" id:\"e0f3007449f4ac80765e555074bee59ef3c46063acae17bae9a13d56e5d538ed\" pid:3824 exited_at:{seconds:1752104924 nanos:223683569}"
Jul 9 23:48:44.230224 containerd[1912]: time="2025-07-09T23:48:44.230187504Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd8fdcf2cdbbbd74ae2bc3acca75e63574476fb7c6b85baa5595710006db0450\" id:\"7bf851d0fe2b78e7da29da94c1ccbbe92580302a9ba8386983f196518536a819\" pid:4969 exited_at:{seconds:1752104924 nanos:228996215}"
Jul 9 23:48:44.232281 containerd[1912]: time="2025-07-09T23:48:44.232213781Z" level=info msg="StopContainer for \"fd8fdcf2cdbbbd74ae2bc3acca75e63574476fb7c6b85baa5595710006db0450\" with timeout 2 (s)"
Jul 9 23:48:44.232548 containerd[1912]: time="2025-07-09T23:48:44.232530656Z" level=info msg="Stop container \"fd8fdcf2cdbbbd74ae2bc3acca75e63574476fb7c6b85baa5595710006db0450\" with signal terminated"
Jul 9 23:48:44.239163 systemd-networkd[1483]: lxc_health: Link DOWN
Jul 9 23:48:44.239170 systemd-networkd[1483]: lxc_health: Lost carrier
Jul 9 23:48:44.251200 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0f3007449f4ac80765e555074bee59ef3c46063acae17bae9a13d56e5d538ed-rootfs.mount: Deactivated successfully.
Jul 9 23:48:44.252853 systemd[1]: cri-containerd-fd8fdcf2cdbbbd74ae2bc3acca75e63574476fb7c6b85baa5595710006db0450.scope: Deactivated successfully.
Jul 9 23:48:44.252968 containerd[1912]: time="2025-07-09T23:48:44.252848649Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd8fdcf2cdbbbd74ae2bc3acca75e63574476fb7c6b85baa5595710006db0450\" id:\"fd8fdcf2cdbbbd74ae2bc3acca75e63574476fb7c6b85baa5595710006db0450\" pid:4058 exited_at:{seconds:1752104924 nanos:252608665}"
Jul 9 23:48:44.253241 containerd[1912]: time="2025-07-09T23:48:44.253221510Z" level=info msg="received exit event container_id:\"fd8fdcf2cdbbbd74ae2bc3acca75e63574476fb7c6b85baa5595710006db0450\" id:\"fd8fdcf2cdbbbd74ae2bc3acca75e63574476fb7c6b85baa5595710006db0450\" pid:4058 exited_at:{seconds:1752104924 nanos:252608665}"
Jul 9 23:48:44.253514 systemd[1]: cri-containerd-fd8fdcf2cdbbbd74ae2bc3acca75e63574476fb7c6b85baa5595710006db0450.scope: Consumed 4.402s CPU time, 124.8M memory peak, 128K read from disk, 12.9M written to disk.
Jul 9 23:48:44.272638 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd8fdcf2cdbbbd74ae2bc3acca75e63574476fb7c6b85baa5595710006db0450-rootfs.mount: Deactivated successfully.
Jul 9 23:48:44.308201 containerd[1912]: time="2025-07-09T23:48:44.308163962Z" level=info msg="StopContainer for \"fd8fdcf2cdbbbd74ae2bc3acca75e63574476fb7c6b85baa5595710006db0450\" returns successfully"
Jul 9 23:48:44.308798 containerd[1912]: time="2025-07-09T23:48:44.308773262Z" level=info msg="StopPodSandbox for \"6aafcf9d03c31f250007071bda7c67b8dec6f90354b12114420275b68ea9e7e9\""
Jul 9 23:48:44.308876 containerd[1912]: time="2025-07-09T23:48:44.308822496Z" level=info msg="Container to stop \"1e09bcee8669ef627a4844c7e84f22b5f114268efd49d06924e8df77dba6d328\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 9 23:48:44.308876 containerd[1912]: time="2025-07-09T23:48:44.308830560Z" level=info msg="Container to stop \"ccc507efc29f68744ec3bd4a655c0d89795ea50e91cc1fdf27214f676042dda0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 9 23:48:44.308876 containerd[1912]: time="2025-07-09T23:48:44.308838033Z" level=info msg="Container to stop \"414d57f2d5303836cba5b768e145269fd7c01d0c7f68f382766465dfd5c95ffd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 9 23:48:44.308876 containerd[1912]: time="2025-07-09T23:48:44.308844577Z" level=info msg="Container to stop \"fd8fdcf2cdbbbd74ae2bc3acca75e63574476fb7c6b85baa5595710006db0450\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 9 23:48:44.308876 containerd[1912]: time="2025-07-09T23:48:44.308850161Z" level=info msg="Container to stop \"08bd64f27d0c9e0039b4750ce2b98835a7156656a07c0adf10e51757d7af507b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 9 23:48:44.315221 systemd[1]: cri-containerd-6aafcf9d03c31f250007071bda7c67b8dec6f90354b12114420275b68ea9e7e9.scope: Deactivated successfully.
Jul 9 23:48:44.315720 containerd[1912]: time="2025-07-09T23:48:44.315646322Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6aafcf9d03c31f250007071bda7c67b8dec6f90354b12114420275b68ea9e7e9\" id:\"6aafcf9d03c31f250007071bda7c67b8dec6f90354b12114420275b68ea9e7e9\" pid:3607 exit_status:137 exited_at:{seconds:1752104924 nanos:315058006}"
Jul 9 23:48:44.317688 containerd[1912]: time="2025-07-09T23:48:44.317660431Z" level=info msg="StopContainer for \"e0f3007449f4ac80765e555074bee59ef3c46063acae17bae9a13d56e5d538ed\" returns successfully"
Jul 9 23:48:44.318224 containerd[1912]: time="2025-07-09T23:48:44.318202130Z" level=info msg="StopPodSandbox for \"b4d5389fa92681707e5fdd4e91f41e90ff1631f7d5c92dc687c360e2e864d555\""
Jul 9 23:48:44.318740 containerd[1912]: time="2025-07-09T23:48:44.318620328Z" level=info msg="Container to stop \"e0f3007449f4ac80765e555074bee59ef3c46063acae17bae9a13d56e5d538ed\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 9 23:48:44.325839 systemd[1]: cri-containerd-b4d5389fa92681707e5fdd4e91f41e90ff1631f7d5c92dc687c360e2e864d555.scope: Deactivated successfully.
Jul 9 23:48:44.341166 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6aafcf9d03c31f250007071bda7c67b8dec6f90354b12114420275b68ea9e7e9-rootfs.mount: Deactivated successfully.
Jul 9 23:48:44.347028 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4d5389fa92681707e5fdd4e91f41e90ff1631f7d5c92dc687c360e2e864d555-rootfs.mount: Deactivated successfully.
Jul 9 23:48:44.364573 containerd[1912]: time="2025-07-09T23:48:44.364517102Z" level=info msg="shim disconnected" id=6aafcf9d03c31f250007071bda7c67b8dec6f90354b12114420275b68ea9e7e9 namespace=k8s.io
Jul 9 23:48:44.364746 containerd[1912]: time="2025-07-09T23:48:44.364581584Z" level=warning msg="cleaning up after shim disconnected" id=6aafcf9d03c31f250007071bda7c67b8dec6f90354b12114420275b68ea9e7e9 namespace=k8s.io
Jul 9 23:48:44.364746 containerd[1912]: time="2025-07-09T23:48:44.364605505Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 9 23:48:44.365446 containerd[1912]: time="2025-07-09T23:48:44.365377403Z" level=info msg="shim disconnected" id=b4d5389fa92681707e5fdd4e91f41e90ff1631f7d5c92dc687c360e2e864d555 namespace=k8s.io
Jul 9 23:48:44.365446 containerd[1912]: time="2025-07-09T23:48:44.365402884Z" level=warning msg="cleaning up after shim disconnected" id=b4d5389fa92681707e5fdd4e91f41e90ff1631f7d5c92dc687c360e2e864d555 namespace=k8s.io
Jul 9 23:48:44.365446 containerd[1912]: time="2025-07-09T23:48:44.365419293Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 9 23:48:44.375909 containerd[1912]: time="2025-07-09T23:48:44.375811313Z" level=info msg="received exit event sandbox_id:\"b4d5389fa92681707e5fdd4e91f41e90ff1631f7d5c92dc687c360e2e864d555\" exit_status:137 exited_at:{seconds:1752104924 nanos:329730541}"
Jul 9 23:48:44.377825 containerd[1912]: time="2025-07-09T23:48:44.377761788Z" level=info msg="received exit event sandbox_id:\"6aafcf9d03c31f250007071bda7c67b8dec6f90354b12114420275b68ea9e7e9\" exit_status:137 exited_at:{seconds:1752104924 nanos:315058006}"
Jul 9 23:48:44.377899 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b4d5389fa92681707e5fdd4e91f41e90ff1631f7d5c92dc687c360e2e864d555-shm.mount: Deactivated successfully.
Jul 9 23:48:44.378752 containerd[1912]: time="2025-07-09T23:48:44.376073114Z" level=info msg="TearDown network for sandbox \"b4d5389fa92681707e5fdd4e91f41e90ff1631f7d5c92dc687c360e2e864d555\" successfully"
Jul 9 23:48:44.378752 containerd[1912]: time="2025-07-09T23:48:44.378488901Z" level=info msg="StopPodSandbox for \"b4d5389fa92681707e5fdd4e91f41e90ff1631f7d5c92dc687c360e2e864d555\" returns successfully"
Jul 9 23:48:44.378752 containerd[1912]: time="2025-07-09T23:48:44.376945248Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4d5389fa92681707e5fdd4e91f41e90ff1631f7d5c92dc687c360e2e864d555\" id:\"b4d5389fa92681707e5fdd4e91f41e90ff1631f7d5c92dc687c360e2e864d555\" pid:3530 exit_status:137 exited_at:{seconds:1752104924 nanos:329730541}"
Jul 9 23:48:44.378752 containerd[1912]: time="2025-07-09T23:48:44.378706908Z" level=info msg="TearDown network for sandbox \"6aafcf9d03c31f250007071bda7c67b8dec6f90354b12114420275b68ea9e7e9\" successfully"
Jul 9 23:48:44.378752 containerd[1912]: time="2025-07-09T23:48:44.378718269Z" level=info msg="StopPodSandbox for \"6aafcf9d03c31f250007071bda7c67b8dec6f90354b12114420275b68ea9e7e9\" returns successfully"
Jul 9 23:48:44.473468 kubelet[3421]: I0709 23:48:44.472491 3421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hcjm\" (UniqueName: \"kubernetes.io/projected/46f90c65-3425-4ac9-ad87-764f78c1a0f3-kube-api-access-8hcjm\") pod \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\" (UID: \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\") "
Jul 9 23:48:44.473468 kubelet[3421]: I0709 23:48:44.472535 3421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-cilium-cgroup\") pod \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\" (UID: \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\") "
Jul 9 23:48:44.473468 kubelet[3421]: I0709 23:48:44.472553 3421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/46f90c65-3425-4ac9-ad87-764f78c1a0f3-cilium-config-path\") pod \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\" (UID: \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\") "
Jul 9 23:48:44.473468 kubelet[3421]: I0709 23:48:44.472564 3421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-xtables-lock\") pod \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\" (UID: \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\") "
Jul 9 23:48:44.473468 kubelet[3421]: I0709 23:48:44.472573 3421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-cilium-run\") pod \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\" (UID: \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\") "
Jul 9 23:48:44.473468 kubelet[3421]: I0709 23:48:44.472583 3421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/46f90c65-3425-4ac9-ad87-764f78c1a0f3-clustermesh-secrets\") pod \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\" (UID: \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\") "
Jul 9 23:48:44.473851 kubelet[3421]: I0709 23:48:44.472592 3421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-bpf-maps\") pod \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\" (UID: \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\") "
Jul 9 23:48:44.473851 kubelet[3421]: I0709 23:48:44.472601 3421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-etc-cni-netd\") pod \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\" (UID: \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\") "
Jul 9 23:48:44.473851 kubelet[3421]: I0709 23:48:44.472611 3421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f39ab7ab-8c6d-49cc-9213-ba71667bdcf6-cilium-config-path\") pod \"f39ab7ab-8c6d-49cc-9213-ba71667bdcf6\" (UID: \"f39ab7ab-8c6d-49cc-9213-ba71667bdcf6\") "
Jul 9 23:48:44.473851 kubelet[3421]: I0709 23:48:44.472642 3421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-host-proc-sys-kernel\") pod \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\" (UID: \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\") "
Jul 9 23:48:44.473851 kubelet[3421]: I0709 23:48:44.472652 3421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-hostproc\") pod \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\" (UID: \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\") "
Jul 9 23:48:44.473851 kubelet[3421]: I0709 23:48:44.472664 3421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/46f90c65-3425-4ac9-ad87-764f78c1a0f3-hubble-tls\") pod \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\" (UID: \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\") "
Jul 9 23:48:44.473938 kubelet[3421]: I0709 23:48:44.472676 3421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-cni-path\") pod \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\" (UID: \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\") "
Jul 9 23:48:44.473938 kubelet[3421]: I0709 23:48:44.472687 3421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-host-proc-sys-net\") pod \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\" (UID: \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\") "
Jul 9 23:48:44.473938 kubelet[3421]: I0709 23:48:44.472700 3421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xncbz\" (UniqueName: \"kubernetes.io/projected/f39ab7ab-8c6d-49cc-9213-ba71667bdcf6-kube-api-access-xncbz\") pod \"f39ab7ab-8c6d-49cc-9213-ba71667bdcf6\" (UID: \"f39ab7ab-8c6d-49cc-9213-ba71667bdcf6\") "
Jul 9 23:48:44.474786 kubelet[3421]: I0709 23:48:44.474186 3421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "46f90c65-3425-4ac9-ad87-764f78c1a0f3" (UID: "46f90c65-3425-4ac9-ad87-764f78c1a0f3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 23:48:44.474786 kubelet[3421]: I0709 23:48:44.474237 3421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "46f90c65-3425-4ac9-ad87-764f78c1a0f3" (UID: "46f90c65-3425-4ac9-ad87-764f78c1a0f3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 23:48:44.475454 kubelet[3421]: I0709 23:48:44.474983 3421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "46f90c65-3425-4ac9-ad87-764f78c1a0f3" (UID: "46f90c65-3425-4ac9-ad87-764f78c1a0f3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 23:48:44.475454 kubelet[3421]: I0709 23:48:44.475011 3421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "46f90c65-3425-4ac9-ad87-764f78c1a0f3" (UID: "46f90c65-3425-4ac9-ad87-764f78c1a0f3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 23:48:44.476221 kubelet[3421]: I0709 23:48:44.476193 3421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "46f90c65-3425-4ac9-ad87-764f78c1a0f3" (UID: "46f90c65-3425-4ac9-ad87-764f78c1a0f3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 23:48:44.476776 kubelet[3421]: I0709 23:48:44.476750 3421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46f90c65-3425-4ac9-ad87-764f78c1a0f3-kube-api-access-8hcjm" (OuterVolumeSpecName: "kube-api-access-8hcjm") pod "46f90c65-3425-4ac9-ad87-764f78c1a0f3" (UID: "46f90c65-3425-4ac9-ad87-764f78c1a0f3"). InnerVolumeSpecName "kube-api-access-8hcjm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 9 23:48:44.476926 kubelet[3421]: I0709 23:48:44.476905 3421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f39ab7ab-8c6d-49cc-9213-ba71667bdcf6-kube-api-access-xncbz" (OuterVolumeSpecName: "kube-api-access-xncbz") pod "f39ab7ab-8c6d-49cc-9213-ba71667bdcf6" (UID: "f39ab7ab-8c6d-49cc-9213-ba71667bdcf6"). InnerVolumeSpecName "kube-api-access-xncbz".
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 9 23:48:44.477004 kubelet[3421]: I0709 23:48:44.476991 3421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-cni-path" (OuterVolumeSpecName: "cni-path") pod "46f90c65-3425-4ac9-ad87-764f78c1a0f3" (UID: "46f90c65-3425-4ac9-ad87-764f78c1a0f3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 23:48:44.477061 kubelet[3421]: I0709 23:48:44.477052 3421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "46f90c65-3425-4ac9-ad87-764f78c1a0f3" (UID: "46f90c65-3425-4ac9-ad87-764f78c1a0f3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 23:48:44.477123 kubelet[3421]: I0709 23:48:44.477114 3421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "46f90c65-3425-4ac9-ad87-764f78c1a0f3" (UID: "46f90c65-3425-4ac9-ad87-764f78c1a0f3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 23:48:44.477173 kubelet[3421]: I0709 23:48:44.477165 3421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-hostproc" (OuterVolumeSpecName: "hostproc") pod "46f90c65-3425-4ac9-ad87-764f78c1a0f3" (UID: "46f90c65-3425-4ac9-ad87-764f78c1a0f3"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 23:48:44.478131 kubelet[3421]: I0709 23:48:44.478108 3421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46f90c65-3425-4ac9-ad87-764f78c1a0f3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "46f90c65-3425-4ac9-ad87-764f78c1a0f3" (UID: "46f90c65-3425-4ac9-ad87-764f78c1a0f3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 9 23:48:44.478582 kubelet[3421]: I0709 23:48:44.478542 3421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46f90c65-3425-4ac9-ad87-764f78c1a0f3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "46f90c65-3425-4ac9-ad87-764f78c1a0f3" (UID: "46f90c65-3425-4ac9-ad87-764f78c1a0f3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 9 23:48:44.478820 kubelet[3421]: I0709 23:48:44.478803 3421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f39ab7ab-8c6d-49cc-9213-ba71667bdcf6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f39ab7ab-8c6d-49cc-9213-ba71667bdcf6" (UID: "f39ab7ab-8c6d-49cc-9213-ba71667bdcf6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 9 23:48:44.479036 kubelet[3421]: I0709 23:48:44.479013 3421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46f90c65-3425-4ac9-ad87-764f78c1a0f3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "46f90c65-3425-4ac9-ad87-764f78c1a0f3" (UID: "46f90c65-3425-4ac9-ad87-764f78c1a0f3"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 9 23:48:44.560208 kubelet[3421]: I0709 23:48:44.559696 3421 scope.go:117] "RemoveContainer" containerID="e0f3007449f4ac80765e555074bee59ef3c46063acae17bae9a13d56e5d538ed" Jul 9 23:48:44.562454 containerd[1912]: time="2025-07-09T23:48:44.562409584Z" level=info msg="RemoveContainer for \"e0f3007449f4ac80765e555074bee59ef3c46063acae17bae9a13d56e5d538ed\"" Jul 9 23:48:44.565997 systemd[1]: Removed slice kubepods-besteffort-podf39ab7ab_8c6d_49cc_9213_ba71667bdcf6.slice - libcontainer container kubepods-besteffort-podf39ab7ab_8c6d_49cc_9213_ba71667bdcf6.slice. Jul 9 23:48:44.573883 kubelet[3421]: I0709 23:48:44.573849 3421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-lib-modules\") pod \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\" (UID: \"46f90c65-3425-4ac9-ad87-764f78c1a0f3\") " Jul 9 23:48:44.575492 kubelet[3421]: I0709 23:48:44.573965 3421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "46f90c65-3425-4ac9-ad87-764f78c1a0f3" (UID: "46f90c65-3425-4ac9-ad87-764f78c1a0f3"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 23:48:44.575492 kubelet[3421]: I0709 23:48:44.574024 3421 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/46f90c65-3425-4ac9-ad87-764f78c1a0f3-hubble-tls\") on node \"ci-4344.1.1-n-76bacae427\" DevicePath \"\"" Jul 9 23:48:44.575492 kubelet[3421]: I0709 23:48:44.574037 3421 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-cni-path\") on node \"ci-4344.1.1-n-76bacae427\" DevicePath \"\"" Jul 9 23:48:44.575492 kubelet[3421]: I0709 23:48:44.574044 3421 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-host-proc-sys-net\") on node \"ci-4344.1.1-n-76bacae427\" DevicePath \"\"" Jul 9 23:48:44.575492 kubelet[3421]: I0709 23:48:44.574064 3421 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xncbz\" (UniqueName: \"kubernetes.io/projected/f39ab7ab-8c6d-49cc-9213-ba71667bdcf6-kube-api-access-xncbz\") on node \"ci-4344.1.1-n-76bacae427\" DevicePath \"\"" Jul 9 23:48:44.575492 kubelet[3421]: I0709 23:48:44.574072 3421 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8hcjm\" (UniqueName: \"kubernetes.io/projected/46f90c65-3425-4ac9-ad87-764f78c1a0f3-kube-api-access-8hcjm\") on node \"ci-4344.1.1-n-76bacae427\" DevicePath \"\"" Jul 9 23:48:44.575492 kubelet[3421]: I0709 23:48:44.574078 3421 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-cilium-cgroup\") on node \"ci-4344.1.1-n-76bacae427\" DevicePath \"\"" Jul 9 23:48:44.575658 kubelet[3421]: I0709 23:48:44.574083 3421 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/46f90c65-3425-4ac9-ad87-764f78c1a0f3-cilium-config-path\") on node \"ci-4344.1.1-n-76bacae427\" DevicePath \"\"" Jul 9 23:48:44.575658 kubelet[3421]: I0709 23:48:44.574091 3421 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-xtables-lock\") on node \"ci-4344.1.1-n-76bacae427\" DevicePath \"\"" Jul 9 23:48:44.575658 kubelet[3421]: I0709 23:48:44.574097 3421 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-cilium-run\") on node \"ci-4344.1.1-n-76bacae427\" DevicePath \"\"" Jul 9 23:48:44.575658 kubelet[3421]: I0709 23:48:44.574103 3421 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/46f90c65-3425-4ac9-ad87-764f78c1a0f3-clustermesh-secrets\") on node \"ci-4344.1.1-n-76bacae427\" DevicePath \"\"" Jul 9 23:48:44.575658 kubelet[3421]: I0709 23:48:44.574109 3421 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-bpf-maps\") on node \"ci-4344.1.1-n-76bacae427\" DevicePath \"\"" Jul 9 23:48:44.575658 kubelet[3421]: I0709 23:48:44.574117 3421 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-etc-cni-netd\") on node \"ci-4344.1.1-n-76bacae427\" DevicePath \"\"" Jul 9 23:48:44.575658 kubelet[3421]: I0709 23:48:44.574124 3421 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f39ab7ab-8c6d-49cc-9213-ba71667bdcf6-cilium-config-path\") on node \"ci-4344.1.1-n-76bacae427\" DevicePath \"\"" Jul 9 23:48:44.575658 kubelet[3421]: I0709 23:48:44.574130 3421 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-host-proc-sys-kernel\") on node \"ci-4344.1.1-n-76bacae427\" DevicePath \"\"" Jul 9 23:48:44.575810 kubelet[3421]: I0709 23:48:44.574138 3421 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-hostproc\") on node \"ci-4344.1.1-n-76bacae427\" DevicePath \"\"" Jul 9 23:48:44.579684 containerd[1912]: time="2025-07-09T23:48:44.579529491Z" level=info msg="RemoveContainer for \"e0f3007449f4ac80765e555074bee59ef3c46063acae17bae9a13d56e5d538ed\" returns successfully" Jul 9 23:48:44.579886 kubelet[3421]: I0709 23:48:44.579862 3421 scope.go:117] "RemoveContainer" containerID="e0f3007449f4ac80765e555074bee59ef3c46063acae17bae9a13d56e5d538ed" Jul 9 23:48:44.580085 containerd[1912]: time="2025-07-09T23:48:44.580055589Z" level=error msg="ContainerStatus for \"e0f3007449f4ac80765e555074bee59ef3c46063acae17bae9a13d56e5d538ed\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e0f3007449f4ac80765e555074bee59ef3c46063acae17bae9a13d56e5d538ed\": not found" Jul 9 23:48:44.580669 kubelet[3421]: E0709 23:48:44.580527 3421 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e0f3007449f4ac80765e555074bee59ef3c46063acae17bae9a13d56e5d538ed\": not found" containerID="e0f3007449f4ac80765e555074bee59ef3c46063acae17bae9a13d56e5d538ed" Jul 9 23:48:44.580669 kubelet[3421]: I0709 23:48:44.580554 3421 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e0f3007449f4ac80765e555074bee59ef3c46063acae17bae9a13d56e5d538ed"} err="failed to get container status \"e0f3007449f4ac80765e555074bee59ef3c46063acae17bae9a13d56e5d538ed\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"e0f3007449f4ac80765e555074bee59ef3c46063acae17bae9a13d56e5d538ed\": not found" Jul 9 23:48:44.580669 kubelet[3421]: I0709 23:48:44.580582 3421 scope.go:117] "RemoveContainer" containerID="fd8fdcf2cdbbbd74ae2bc3acca75e63574476fb7c6b85baa5595710006db0450" Jul 9 23:48:44.583469 containerd[1912]: time="2025-07-09T23:48:44.583286363Z" level=info msg="RemoveContainer for \"fd8fdcf2cdbbbd74ae2bc3acca75e63574476fb7c6b85baa5595710006db0450\"" Jul 9 23:48:44.597995 containerd[1912]: time="2025-07-09T23:48:44.597952690Z" level=info msg="RemoveContainer for \"fd8fdcf2cdbbbd74ae2bc3acca75e63574476fb7c6b85baa5595710006db0450\" returns successfully" Jul 9 23:48:44.598173 kubelet[3421]: I0709 23:48:44.598153 3421 scope.go:117] "RemoveContainer" containerID="ccc507efc29f68744ec3bd4a655c0d89795ea50e91cc1fdf27214f676042dda0" Jul 9 23:48:44.599294 containerd[1912]: time="2025-07-09T23:48:44.599251655Z" level=info msg="RemoveContainer for \"ccc507efc29f68744ec3bd4a655c0d89795ea50e91cc1fdf27214f676042dda0\"" Jul 9 23:48:44.612374 containerd[1912]: time="2025-07-09T23:48:44.612333223Z" level=info msg="RemoveContainer for \"ccc507efc29f68744ec3bd4a655c0d89795ea50e91cc1fdf27214f676042dda0\" returns successfully" Jul 9 23:48:44.612670 kubelet[3421]: I0709 23:48:44.612643 3421 scope.go:117] "RemoveContainer" containerID="1e09bcee8669ef627a4844c7e84f22b5f114268efd49d06924e8df77dba6d328" Jul 9 23:48:44.614590 containerd[1912]: time="2025-07-09T23:48:44.614523330Z" level=info msg="RemoveContainer for \"1e09bcee8669ef627a4844c7e84f22b5f114268efd49d06924e8df77dba6d328\"" Jul 9 23:48:44.626999 containerd[1912]: time="2025-07-09T23:48:44.626964757Z" level=info msg="RemoveContainer for \"1e09bcee8669ef627a4844c7e84f22b5f114268efd49d06924e8df77dba6d328\" returns successfully" Jul 9 23:48:44.627321 kubelet[3421]: I0709 23:48:44.627281 3421 scope.go:117] "RemoveContainer" containerID="08bd64f27d0c9e0039b4750ce2b98835a7156656a07c0adf10e51757d7af507b" Jul 9 23:48:44.628907 containerd[1912]: 
time="2025-07-09T23:48:44.628826757Z" level=info msg="RemoveContainer for \"08bd64f27d0c9e0039b4750ce2b98835a7156656a07c0adf10e51757d7af507b\"" Jul 9 23:48:44.645290 containerd[1912]: time="2025-07-09T23:48:44.645197574Z" level=info msg="RemoveContainer for \"08bd64f27d0c9e0039b4750ce2b98835a7156656a07c0adf10e51757d7af507b\" returns successfully" Jul 9 23:48:44.645583 kubelet[3421]: I0709 23:48:44.645568 3421 scope.go:117] "RemoveContainer" containerID="414d57f2d5303836cba5b768e145269fd7c01d0c7f68f382766465dfd5c95ffd" Jul 9 23:48:44.647133 containerd[1912]: time="2025-07-09T23:48:44.647069246Z" level=info msg="RemoveContainer for \"414d57f2d5303836cba5b768e145269fd7c01d0c7f68f382766465dfd5c95ffd\"" Jul 9 23:48:44.674635 kubelet[3421]: I0709 23:48:44.674597 3421 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46f90c65-3425-4ac9-ad87-764f78c1a0f3-lib-modules\") on node \"ci-4344.1.1-n-76bacae427\" DevicePath \"\"" Jul 9 23:48:44.692858 containerd[1912]: time="2025-07-09T23:48:44.692795774Z" level=info msg="RemoveContainer for \"414d57f2d5303836cba5b768e145269fd7c01d0c7f68f382766465dfd5c95ffd\" returns successfully" Jul 9 23:48:44.693048 kubelet[3421]: I0709 23:48:44.692990 3421 scope.go:117] "RemoveContainer" containerID="fd8fdcf2cdbbbd74ae2bc3acca75e63574476fb7c6b85baa5595710006db0450" Jul 9 23:48:44.693319 containerd[1912]: time="2025-07-09T23:48:44.693244918Z" level=error msg="ContainerStatus for \"fd8fdcf2cdbbbd74ae2bc3acca75e63574476fb7c6b85baa5595710006db0450\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fd8fdcf2cdbbbd74ae2bc3acca75e63574476fb7c6b85baa5595710006db0450\": not found" Jul 9 23:48:44.693400 kubelet[3421]: E0709 23:48:44.693373 3421 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"fd8fdcf2cdbbbd74ae2bc3acca75e63574476fb7c6b85baa5595710006db0450\": not found" containerID="fd8fdcf2cdbbbd74ae2bc3acca75e63574476fb7c6b85baa5595710006db0450" Jul 9 23:48:44.693461 kubelet[3421]: I0709 23:48:44.693397 3421 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fd8fdcf2cdbbbd74ae2bc3acca75e63574476fb7c6b85baa5595710006db0450"} err="failed to get container status \"fd8fdcf2cdbbbd74ae2bc3acca75e63574476fb7c6b85baa5595710006db0450\": rpc error: code = NotFound desc = an error occurred when try to find container \"fd8fdcf2cdbbbd74ae2bc3acca75e63574476fb7c6b85baa5595710006db0450\": not found" Jul 9 23:48:44.693461 kubelet[3421]: I0709 23:48:44.693441 3421 scope.go:117] "RemoveContainer" containerID="ccc507efc29f68744ec3bd4a655c0d89795ea50e91cc1fdf27214f676042dda0" Jul 9 23:48:44.693693 containerd[1912]: time="2025-07-09T23:48:44.693659428Z" level=error msg="ContainerStatus for \"ccc507efc29f68744ec3bd4a655c0d89795ea50e91cc1fdf27214f676042dda0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ccc507efc29f68744ec3bd4a655c0d89795ea50e91cc1fdf27214f676042dda0\": not found" Jul 9 23:48:44.693788 kubelet[3421]: E0709 23:48:44.693772 3421 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ccc507efc29f68744ec3bd4a655c0d89795ea50e91cc1fdf27214f676042dda0\": not found" containerID="ccc507efc29f68744ec3bd4a655c0d89795ea50e91cc1fdf27214f676042dda0" Jul 9 23:48:44.693822 kubelet[3421]: I0709 23:48:44.693793 3421 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ccc507efc29f68744ec3bd4a655c0d89795ea50e91cc1fdf27214f676042dda0"} err="failed to get container status \"ccc507efc29f68744ec3bd4a655c0d89795ea50e91cc1fdf27214f676042dda0\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"ccc507efc29f68744ec3bd4a655c0d89795ea50e91cc1fdf27214f676042dda0\": not found" Jul 9 23:48:44.693822 kubelet[3421]: I0709 23:48:44.693806 3421 scope.go:117] "RemoveContainer" containerID="1e09bcee8669ef627a4844c7e84f22b5f114268efd49d06924e8df77dba6d328" Jul 9 23:48:44.694011 containerd[1912]: time="2025-07-09T23:48:44.693978903Z" level=error msg="ContainerStatus for \"1e09bcee8669ef627a4844c7e84f22b5f114268efd49d06924e8df77dba6d328\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1e09bcee8669ef627a4844c7e84f22b5f114268efd49d06924e8df77dba6d328\": not found" Jul 9 23:48:44.694230 kubelet[3421]: E0709 23:48:44.694211 3421 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1e09bcee8669ef627a4844c7e84f22b5f114268efd49d06924e8df77dba6d328\": not found" containerID="1e09bcee8669ef627a4844c7e84f22b5f114268efd49d06924e8df77dba6d328" Jul 9 23:48:44.694230 kubelet[3421]: I0709 23:48:44.694229 3421 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1e09bcee8669ef627a4844c7e84f22b5f114268efd49d06924e8df77dba6d328"} err="failed to get container status \"1e09bcee8669ef627a4844c7e84f22b5f114268efd49d06924e8df77dba6d328\": rpc error: code = NotFound desc = an error occurred when try to find container \"1e09bcee8669ef627a4844c7e84f22b5f114268efd49d06924e8df77dba6d328\": not found" Jul 9 23:48:44.694305 kubelet[3421]: I0709 23:48:44.694240 3421 scope.go:117] "RemoveContainer" containerID="08bd64f27d0c9e0039b4750ce2b98835a7156656a07c0adf10e51757d7af507b" Jul 9 23:48:44.694478 containerd[1912]: time="2025-07-09T23:48:44.694448215Z" level=error msg="ContainerStatus for \"08bd64f27d0c9e0039b4750ce2b98835a7156656a07c0adf10e51757d7af507b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"08bd64f27d0c9e0039b4750ce2b98835a7156656a07c0adf10e51757d7af507b\": not found" Jul 9 23:48:44.694589 kubelet[3421]: E0709 23:48:44.694570 3421 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"08bd64f27d0c9e0039b4750ce2b98835a7156656a07c0adf10e51757d7af507b\": not found" containerID="08bd64f27d0c9e0039b4750ce2b98835a7156656a07c0adf10e51757d7af507b" Jul 9 23:48:44.694652 kubelet[3421]: I0709 23:48:44.694612 3421 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"08bd64f27d0c9e0039b4750ce2b98835a7156656a07c0adf10e51757d7af507b"} err="failed to get container status \"08bd64f27d0c9e0039b4750ce2b98835a7156656a07c0adf10e51757d7af507b\": rpc error: code = NotFound desc = an error occurred when try to find container \"08bd64f27d0c9e0039b4750ce2b98835a7156656a07c0adf10e51757d7af507b\": not found" Jul 9 23:48:44.694652 kubelet[3421]: I0709 23:48:44.694630 3421 scope.go:117] "RemoveContainer" containerID="414d57f2d5303836cba5b768e145269fd7c01d0c7f68f382766465dfd5c95ffd" Jul 9 23:48:44.694885 kubelet[3421]: E0709 23:48:44.694856 3421 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"414d57f2d5303836cba5b768e145269fd7c01d0c7f68f382766465dfd5c95ffd\": not found" containerID="414d57f2d5303836cba5b768e145269fd7c01d0c7f68f382766465dfd5c95ffd" Jul 9 23:48:44.694885 kubelet[3421]: I0709 23:48:44.694868 3421 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"414d57f2d5303836cba5b768e145269fd7c01d0c7f68f382766465dfd5c95ffd"} err="failed to get container status \"414d57f2d5303836cba5b768e145269fd7c01d0c7f68f382766465dfd5c95ffd\": rpc error: code = NotFound desc = an error occurred when try to find container \"414d57f2d5303836cba5b768e145269fd7c01d0c7f68f382766465dfd5c95ffd\": not found" Jul 9 
23:48:44.694925 containerd[1912]: time="2025-07-09T23:48:44.694770722Z" level=error msg="ContainerStatus for \"414d57f2d5303836cba5b768e145269fd7c01d0c7f68f382766465dfd5c95ffd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"414d57f2d5303836cba5b768e145269fd7c01d0c7f68f382766465dfd5c95ffd\": not found" Jul 9 23:48:44.873864 systemd[1]: Removed slice kubepods-burstable-pod46f90c65_3425_4ac9_ad87_764f78c1a0f3.slice - libcontainer container kubepods-burstable-pod46f90c65_3425_4ac9_ad87_764f78c1a0f3.slice. Jul 9 23:48:44.874131 systemd[1]: kubepods-burstable-pod46f90c65_3425_4ac9_ad87_764f78c1a0f3.slice: Consumed 4.462s CPU time, 125.3M memory peak, 128K read from disk, 12.9M written to disk. Jul 9 23:48:45.251098 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6aafcf9d03c31f250007071bda7c67b8dec6f90354b12114420275b68ea9e7e9-shm.mount: Deactivated successfully. Jul 9 23:48:45.251549 systemd[1]: var-lib-kubelet-pods-46f90c65\x2d3425\x2d4ac9\x2dad87\x2d764f78c1a0f3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8hcjm.mount: Deactivated successfully. Jul 9 23:48:45.251694 systemd[1]: var-lib-kubelet-pods-f39ab7ab\x2d8c6d\x2d49cc\x2d9213\x2dba71667bdcf6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxncbz.mount: Deactivated successfully. Jul 9 23:48:45.251801 systemd[1]: var-lib-kubelet-pods-46f90c65\x2d3425\x2d4ac9\x2dad87\x2d764f78c1a0f3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 9 23:48:45.251907 systemd[1]: var-lib-kubelet-pods-46f90c65\x2d3425\x2d4ac9\x2dad87\x2d764f78c1a0f3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 9 23:48:45.266907 kubelet[3421]: I0709 23:48:45.266853 3421 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46f90c65-3425-4ac9-ad87-764f78c1a0f3" path="/var/lib/kubelet/pods/46f90c65-3425-4ac9-ad87-764f78c1a0f3/volumes"
Jul 9 23:48:45.267463 kubelet[3421]: I0709 23:48:45.267415 3421 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f39ab7ab-8c6d-49cc-9213-ba71667bdcf6" path="/var/lib/kubelet/pods/f39ab7ab-8c6d-49cc-9213-ba71667bdcf6/volumes"
Jul 9 23:48:46.239093 sshd[4948]: Connection closed by 10.200.16.10 port 54690
Jul 9 23:48:46.239726 sshd-session[4946]: pam_unix(sshd:session): session closed for user core
Jul 9 23:48:46.243088 systemd[1]: sshd@21-10.200.20.14:22-10.200.16.10:54690.service: Deactivated successfully.
Jul 9 23:48:46.244558 systemd[1]: session-24.scope: Deactivated successfully.
Jul 9 23:48:46.245127 systemd-logind[1882]: Session 24 logged out. Waiting for processes to exit.
Jul 9 23:48:46.246264 systemd-logind[1882]: Removed session 24.
Jul 9 23:48:46.323294 systemd[1]: Started sshd@22-10.200.20.14:22-10.200.16.10:54696.service - OpenSSH per-connection server daemon (10.200.16.10:54696).
Jul 9 23:48:46.799748 sshd[5100]: Accepted publickey for core from 10.200.16.10 port 54696 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:48:46.800881 sshd-session[5100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:48:46.804689 systemd-logind[1882]: New session 25 of user core.
Jul 9 23:48:46.810561 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 9 23:48:47.342547 kubelet[3421]: E0709 23:48:47.342385 3421 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 9 23:48:47.546126 systemd[1]: Created slice kubepods-burstable-pod0bbf0dcf_91f8_4740_9580_6052c2d4cf96.slice - libcontainer container kubepods-burstable-pod0bbf0dcf_91f8_4740_9580_6052c2d4cf96.slice.
Jul 9 23:48:47.553807 sshd[5102]: Connection closed by 10.200.16.10 port 54696
Jul 9 23:48:47.554857 sshd-session[5100]: pam_unix(sshd:session): session closed for user core
Jul 9 23:48:47.558447 systemd[1]: sshd@22-10.200.20.14:22-10.200.16.10:54696.service: Deactivated successfully.
Jul 9 23:48:47.564762 systemd[1]: session-25.scope: Deactivated successfully.
Jul 9 23:48:47.566523 systemd-logind[1882]: Session 25 logged out. Waiting for processes to exit.
Jul 9 23:48:47.569257 systemd-logind[1882]: Removed session 25.
Jul 9 23:48:47.638677 systemd[1]: Started sshd@23-10.200.20.14:22-10.200.16.10:54708.service - OpenSSH per-connection server daemon (10.200.16.10:54708).
Jul 9 23:48:47.689420 kubelet[3421]: I0709 23:48:47.689220 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0bbf0dcf-91f8-4740-9580-6052c2d4cf96-bpf-maps\") pod \"cilium-jpr7d\" (UID: \"0bbf0dcf-91f8-4740-9580-6052c2d4cf96\") " pod="kube-system/cilium-jpr7d" Jul 9 23:48:47.689420 kubelet[3421]: I0709 23:48:47.689266 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0bbf0dcf-91f8-4740-9580-6052c2d4cf96-cilium-config-path\") pod \"cilium-jpr7d\" (UID: \"0bbf0dcf-91f8-4740-9580-6052c2d4cf96\") " pod="kube-system/cilium-jpr7d" Jul 9 23:48:47.689420 kubelet[3421]: I0709 23:48:47.689286 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0bbf0dcf-91f8-4740-9580-6052c2d4cf96-lib-modules\") pod \"cilium-jpr7d\" (UID: \"0bbf0dcf-91f8-4740-9580-6052c2d4cf96\") " pod="kube-system/cilium-jpr7d" Jul 9 23:48:47.689420 kubelet[3421]: I0709 23:48:47.689297 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0bbf0dcf-91f8-4740-9580-6052c2d4cf96-xtables-lock\") pod \"cilium-jpr7d\" (UID: \"0bbf0dcf-91f8-4740-9580-6052c2d4cf96\") " pod="kube-system/cilium-jpr7d" Jul 9 23:48:47.689420 kubelet[3421]: I0709 23:48:47.689331 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0bbf0dcf-91f8-4740-9580-6052c2d4cf96-cilium-cgroup\") pod \"cilium-jpr7d\" (UID: \"0bbf0dcf-91f8-4740-9580-6052c2d4cf96\") " pod="kube-system/cilium-jpr7d" Jul 9 23:48:47.689420 kubelet[3421]: I0709 23:48:47.689359 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0bbf0dcf-91f8-4740-9580-6052c2d4cf96-cilium-run\") pod \"cilium-jpr7d\" (UID: \"0bbf0dcf-91f8-4740-9580-6052c2d4cf96\") " pod="kube-system/cilium-jpr7d"
Jul 9 23:48:47.689658 kubelet[3421]: I0709 23:48:47.689371 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0bbf0dcf-91f8-4740-9580-6052c2d4cf96-hostproc\") pod \"cilium-jpr7d\" (UID: \"0bbf0dcf-91f8-4740-9580-6052c2d4cf96\") " pod="kube-system/cilium-jpr7d"
Jul 9 23:48:47.689658 kubelet[3421]: I0709 23:48:47.689381 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0bbf0dcf-91f8-4740-9580-6052c2d4cf96-cni-path\") pod \"cilium-jpr7d\" (UID: \"0bbf0dcf-91f8-4740-9580-6052c2d4cf96\") " pod="kube-system/cilium-jpr7d"
Jul 9 23:48:47.689658 kubelet[3421]: I0709 23:48:47.689390 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0bbf0dcf-91f8-4740-9580-6052c2d4cf96-host-proc-sys-net\") pod \"cilium-jpr7d\" (UID: \"0bbf0dcf-91f8-4740-9580-6052c2d4cf96\") " pod="kube-system/cilium-jpr7d"
Jul 9 23:48:47.689658 kubelet[3421]: I0709 23:48:47.689402 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0bbf0dcf-91f8-4740-9580-6052c2d4cf96-host-proc-sys-kernel\") pod \"cilium-jpr7d\" (UID: \"0bbf0dcf-91f8-4740-9580-6052c2d4cf96\") " pod="kube-system/cilium-jpr7d"
Jul 9 23:48:47.689658 kubelet[3421]: I0709 23:48:47.689442 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0bbf0dcf-91f8-4740-9580-6052c2d4cf96-clustermesh-secrets\") pod \"cilium-jpr7d\" (UID: \"0bbf0dcf-91f8-4740-9580-6052c2d4cf96\") " pod="kube-system/cilium-jpr7d"
Jul 9 23:48:47.689658 kubelet[3421]: I0709 23:48:47.689458 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0bbf0dcf-91f8-4740-9580-6052c2d4cf96-hubble-tls\") pod \"cilium-jpr7d\" (UID: \"0bbf0dcf-91f8-4740-9580-6052c2d4cf96\") " pod="kube-system/cilium-jpr7d"
Jul 9 23:48:47.689743 kubelet[3421]: I0709 23:48:47.689483 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhhx4\" (UniqueName: \"kubernetes.io/projected/0bbf0dcf-91f8-4740-9580-6052c2d4cf96-kube-api-access-zhhx4\") pod \"cilium-jpr7d\" (UID: \"0bbf0dcf-91f8-4740-9580-6052c2d4cf96\") " pod="kube-system/cilium-jpr7d"
Jul 9 23:48:47.689743 kubelet[3421]: I0709 23:48:47.689502 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0bbf0dcf-91f8-4740-9580-6052c2d4cf96-etc-cni-netd\") pod \"cilium-jpr7d\" (UID: \"0bbf0dcf-91f8-4740-9580-6052c2d4cf96\") " pod="kube-system/cilium-jpr7d"
Jul 9 23:48:47.689743 kubelet[3421]: I0709 23:48:47.689514 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0bbf0dcf-91f8-4740-9580-6052c2d4cf96-cilium-ipsec-secrets\") pod \"cilium-jpr7d\" (UID: \"0bbf0dcf-91f8-4740-9580-6052c2d4cf96\") " pod="kube-system/cilium-jpr7d"
Jul 9 23:48:47.849188 containerd[1912]: time="2025-07-09T23:48:47.849151425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jpr7d,Uid:0bbf0dcf-91f8-4740-9580-6052c2d4cf96,Namespace:kube-system,Attempt:0,}"
Jul 9 23:48:47.909746 containerd[1912]: time="2025-07-09T23:48:47.909341945Z" level=info msg="connecting to shim 10e92edecd61d6fc5b07b03c5c4916bd1f448d716d88517ebdd0a9fddc4f577e" address="unix:///run/containerd/s/e279d30486651f2f250adde6b02e2b3203d72e0202a8fe488659ea1493153e5e" namespace=k8s.io protocol=ttrpc version=3
Jul 9 23:48:47.924563 systemd[1]: Started cri-containerd-10e92edecd61d6fc5b07b03c5c4916bd1f448d716d88517ebdd0a9fddc4f577e.scope - libcontainer container 10e92edecd61d6fc5b07b03c5c4916bd1f448d716d88517ebdd0a9fddc4f577e.
Jul 9 23:48:47.947210 containerd[1912]: time="2025-07-09T23:48:47.947164970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jpr7d,Uid:0bbf0dcf-91f8-4740-9580-6052c2d4cf96,Namespace:kube-system,Attempt:0,} returns sandbox id \"10e92edecd61d6fc5b07b03c5c4916bd1f448d716d88517ebdd0a9fddc4f577e\""
Jul 9 23:48:47.957531 containerd[1912]: time="2025-07-09T23:48:47.957413482Z" level=info msg="CreateContainer within sandbox \"10e92edecd61d6fc5b07b03c5c4916bd1f448d716d88517ebdd0a9fddc4f577e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 9 23:48:47.989118 containerd[1912]: time="2025-07-09T23:48:47.989073183Z" level=info msg="Container c623695dd8eadcddd52ab2af6f1a4037bd2736471606a13e20f233b5065d0314: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:48:48.008346 containerd[1912]: time="2025-07-09T23:48:48.008292322Z" level=info msg="CreateContainer within sandbox \"10e92edecd61d6fc5b07b03c5c4916bd1f448d716d88517ebdd0a9fddc4f577e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c623695dd8eadcddd52ab2af6f1a4037bd2736471606a13e20f233b5065d0314\""
Jul 9 23:48:48.008926 containerd[1912]: time="2025-07-09T23:48:48.008906783Z" level=info msg="StartContainer for \"c623695dd8eadcddd52ab2af6f1a4037bd2736471606a13e20f233b5065d0314\""
Jul 9 23:48:48.009778 containerd[1912]: time="2025-07-09T23:48:48.009718987Z" level=info msg="connecting to shim c623695dd8eadcddd52ab2af6f1a4037bd2736471606a13e20f233b5065d0314" address="unix:///run/containerd/s/e279d30486651f2f250adde6b02e2b3203d72e0202a8fe488659ea1493153e5e" protocol=ttrpc version=3
Jul 9 23:48:48.027558 systemd[1]: Started cri-containerd-c623695dd8eadcddd52ab2af6f1a4037bd2736471606a13e20f233b5065d0314.scope - libcontainer container c623695dd8eadcddd52ab2af6f1a4037bd2736471606a13e20f233b5065d0314.
Jul 9 23:48:48.053175 containerd[1912]: time="2025-07-09T23:48:48.053054857Z" level=info msg="StartContainer for \"c623695dd8eadcddd52ab2af6f1a4037bd2736471606a13e20f233b5065d0314\" returns successfully"
Jul 9 23:48:48.053089 systemd[1]: cri-containerd-c623695dd8eadcddd52ab2af6f1a4037bd2736471606a13e20f233b5065d0314.scope: Deactivated successfully.
Jul 9 23:48:48.057790 containerd[1912]: time="2025-07-09T23:48:48.055940212Z" level=info msg="received exit event container_id:\"c623695dd8eadcddd52ab2af6f1a4037bd2736471606a13e20f233b5065d0314\" id:\"c623695dd8eadcddd52ab2af6f1a4037bd2736471606a13e20f233b5065d0314\" pid:5177 exited_at:{seconds:1752104928 nanos:55544463}"
Jul 9 23:48:48.057790 containerd[1912]: time="2025-07-09T23:48:48.056368379Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c623695dd8eadcddd52ab2af6f1a4037bd2736471606a13e20f233b5065d0314\" id:\"c623695dd8eadcddd52ab2af6f1a4037bd2736471606a13e20f233b5065d0314\" pid:5177 exited_at:{seconds:1752104928 nanos:55544463}"
Jul 9 23:48:48.109962 sshd[5112]: Accepted publickey for core from 10.200.16.10 port 54708 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:48:48.111526 sshd-session[5112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:48:48.115528 systemd-logind[1882]: New session 26 of user core.
Jul 9 23:48:48.121558 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 9 23:48:48.438715 sshd[5208]: Connection closed by 10.200.16.10 port 54708
Jul 9 23:48:48.438216 sshd-session[5112]: pam_unix(sshd:session): session closed for user core
Jul 9 23:48:48.441422 systemd[1]: sshd@23-10.200.20.14:22-10.200.16.10:54708.service: Deactivated successfully.
Jul 9 23:48:48.443380 systemd[1]: session-26.scope: Deactivated successfully.
Jul 9 23:48:48.444272 systemd-logind[1882]: Session 26 logged out. Waiting for processes to exit.
Jul 9 23:48:48.445588 systemd-logind[1882]: Removed session 26.
Jul 9 23:48:48.524996 systemd[1]: Started sshd@24-10.200.20.14:22-10.200.16.10:54710.service - OpenSSH per-connection server daemon (10.200.16.10:54710).
Jul 9 23:48:48.587360 containerd[1912]: time="2025-07-09T23:48:48.587285057Z" level=info msg="CreateContainer within sandbox \"10e92edecd61d6fc5b07b03c5c4916bd1f448d716d88517ebdd0a9fddc4f577e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 9 23:48:48.612324 containerd[1912]: time="2025-07-09T23:48:48.612274833Z" level=info msg="Container 62999cabdc521c899cb6e6978c8daa0523c26e3b81ed8fa4febcb084f3d7a42a: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:48:48.631096 containerd[1912]: time="2025-07-09T23:48:48.631057325Z" level=info msg="CreateContainer within sandbox \"10e92edecd61d6fc5b07b03c5c4916bd1f448d716d88517ebdd0a9fddc4f577e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"62999cabdc521c899cb6e6978c8daa0523c26e3b81ed8fa4febcb084f3d7a42a\""
Jul 9 23:48:48.631808 containerd[1912]: time="2025-07-09T23:48:48.631780126Z" level=info msg="StartContainer for \"62999cabdc521c899cb6e6978c8daa0523c26e3b81ed8fa4febcb084f3d7a42a\""
Jul 9 23:48:48.632606 containerd[1912]: time="2025-07-09T23:48:48.632581650Z" level=info msg="connecting to shim 62999cabdc521c899cb6e6978c8daa0523c26e3b81ed8fa4febcb084f3d7a42a" address="unix:///run/containerd/s/e279d30486651f2f250adde6b02e2b3203d72e0202a8fe488659ea1493153e5e" protocol=ttrpc version=3
Jul 9 23:48:48.648546 systemd[1]: Started cri-containerd-62999cabdc521c899cb6e6978c8daa0523c26e3b81ed8fa4febcb084f3d7a42a.scope - libcontainer container 62999cabdc521c899cb6e6978c8daa0523c26e3b81ed8fa4febcb084f3d7a42a.
Jul 9 23:48:48.673567 containerd[1912]: time="2025-07-09T23:48:48.673528734Z" level=info msg="StartContainer for \"62999cabdc521c899cb6e6978c8daa0523c26e3b81ed8fa4febcb084f3d7a42a\" returns successfully"
Jul 9 23:48:48.675945 systemd[1]: cri-containerd-62999cabdc521c899cb6e6978c8daa0523c26e3b81ed8fa4febcb084f3d7a42a.scope: Deactivated successfully.
Jul 9 23:48:48.677530 containerd[1912]: time="2025-07-09T23:48:48.676824399Z" level=info msg="received exit event container_id:\"62999cabdc521c899cb6e6978c8daa0523c26e3b81ed8fa4febcb084f3d7a42a\" id:\"62999cabdc521c899cb6e6978c8daa0523c26e3b81ed8fa4febcb084f3d7a42a\" pid:5229 exited_at:{seconds:1752104928 nanos:675784915}"
Jul 9 23:48:48.677530 containerd[1912]: time="2025-07-09T23:48:48.677176243Z" level=info msg="TaskExit event in podsandbox handler container_id:\"62999cabdc521c899cb6e6978c8daa0523c26e3b81ed8fa4febcb084f3d7a42a\" id:\"62999cabdc521c899cb6e6978c8daa0523c26e3b81ed8fa4febcb084f3d7a42a\" pid:5229 exited_at:{seconds:1752104928 nanos:675784915}"
Jul 9 23:48:48.984493 sshd[5215]: Accepted publickey for core from 10.200.16.10 port 54710 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:48:48.985598 sshd-session[5215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:48:48.989113 systemd-logind[1882]: New session 27 of user core.
Jul 9 23:48:48.999545 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 9 23:48:49.594735 containerd[1912]: time="2025-07-09T23:48:49.594180736Z" level=info msg="CreateContainer within sandbox \"10e92edecd61d6fc5b07b03c5c4916bd1f448d716d88517ebdd0a9fddc4f577e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 9 23:48:49.622635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3724329860.mount: Deactivated successfully.
Jul 9 23:48:49.625914 containerd[1912]: time="2025-07-09T23:48:49.625877487Z" level=info msg="Container ac13bcab4a7aca5e623d9f3cbfccf9a5db858abc0aae81ce012c9abc7b3d2b08: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:48:49.628113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3370295892.mount: Deactivated successfully.
Jul 9 23:48:49.645826 containerd[1912]: time="2025-07-09T23:48:49.645786106Z" level=info msg="CreateContainer within sandbox \"10e92edecd61d6fc5b07b03c5c4916bd1f448d716d88517ebdd0a9fddc4f577e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ac13bcab4a7aca5e623d9f3cbfccf9a5db858abc0aae81ce012c9abc7b3d2b08\""
Jul 9 23:48:49.647105 containerd[1912]: time="2025-07-09T23:48:49.647039092Z" level=info msg="StartContainer for \"ac13bcab4a7aca5e623d9f3cbfccf9a5db858abc0aae81ce012c9abc7b3d2b08\""
Jul 9 23:48:49.648319 containerd[1912]: time="2025-07-09T23:48:49.648298352Z" level=info msg="connecting to shim ac13bcab4a7aca5e623d9f3cbfccf9a5db858abc0aae81ce012c9abc7b3d2b08" address="unix:///run/containerd/s/e279d30486651f2f250adde6b02e2b3203d72e0202a8fe488659ea1493153e5e" protocol=ttrpc version=3
Jul 9 23:48:49.664566 systemd[1]: Started cri-containerd-ac13bcab4a7aca5e623d9f3cbfccf9a5db858abc0aae81ce012c9abc7b3d2b08.scope - libcontainer container ac13bcab4a7aca5e623d9f3cbfccf9a5db858abc0aae81ce012c9abc7b3d2b08.
Jul 9 23:48:49.690320 containerd[1912]: time="2025-07-09T23:48:49.690282671Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ac13bcab4a7aca5e623d9f3cbfccf9a5db858abc0aae81ce012c9abc7b3d2b08\" id:\"ac13bcab4a7aca5e623d9f3cbfccf9a5db858abc0aae81ce012c9abc7b3d2b08\" pid:5283 exited_at:{seconds:1752104929 nanos:689962364}"
Jul 9 23:48:49.690655 systemd[1]: cri-containerd-ac13bcab4a7aca5e623d9f3cbfccf9a5db858abc0aae81ce012c9abc7b3d2b08.scope: Deactivated successfully.
Jul 9 23:48:49.694035 containerd[1912]: time="2025-07-09T23:48:49.693992711Z" level=info msg="received exit event container_id:\"ac13bcab4a7aca5e623d9f3cbfccf9a5db858abc0aae81ce012c9abc7b3d2b08\" id:\"ac13bcab4a7aca5e623d9f3cbfccf9a5db858abc0aae81ce012c9abc7b3d2b08\" pid:5283 exited_at:{seconds:1752104929 nanos:689962364}"
Jul 9 23:48:49.699693 containerd[1912]: time="2025-07-09T23:48:49.699669489Z" level=info msg="StartContainer for \"ac13bcab4a7aca5e623d9f3cbfccf9a5db858abc0aae81ce012c9abc7b3d2b08\" returns successfully"
Jul 9 23:48:49.947137 kubelet[3421]: I0709 23:48:49.947006 3421 setters.go:618] "Node became not ready" node="ci-4344.1.1-n-76bacae427" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-09T23:48:49Z","lastTransitionTime":"2025-07-09T23:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 9 23:48:50.595852 containerd[1912]: time="2025-07-09T23:48:50.595815515Z" level=info msg="CreateContainer within sandbox \"10e92edecd61d6fc5b07b03c5c4916bd1f448d716d88517ebdd0a9fddc4f577e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 9 23:48:50.633150 containerd[1912]: time="2025-07-09T23:48:50.633096578Z" level=info msg="Container 7cb50712c138e7a9117b4fb4868ceef2d1fa6663b69bc402f60fc2ffb77687da: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:48:50.652098 containerd[1912]: time="2025-07-09T23:48:50.652059996Z" level=info msg="CreateContainer within sandbox \"10e92edecd61d6fc5b07b03c5c4916bd1f448d716d88517ebdd0a9fddc4f577e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7cb50712c138e7a9117b4fb4868ceef2d1fa6663b69bc402f60fc2ffb77687da\""
Jul 9 23:48:50.652666 containerd[1912]: time="2025-07-09T23:48:50.652643440Z" level=info msg="StartContainer for \"7cb50712c138e7a9117b4fb4868ceef2d1fa6663b69bc402f60fc2ffb77687da\""
Jul 9 23:48:50.653436 containerd[1912]: time="2025-07-09T23:48:50.653366617Z" level=info msg="connecting to shim 7cb50712c138e7a9117b4fb4868ceef2d1fa6663b69bc402f60fc2ffb77687da" address="unix:///run/containerd/s/e279d30486651f2f250adde6b02e2b3203d72e0202a8fe488659ea1493153e5e" protocol=ttrpc version=3
Jul 9 23:48:50.673546 systemd[1]: Started cri-containerd-7cb50712c138e7a9117b4fb4868ceef2d1fa6663b69bc402f60fc2ffb77687da.scope - libcontainer container 7cb50712c138e7a9117b4fb4868ceef2d1fa6663b69bc402f60fc2ffb77687da.
Jul 9 23:48:50.693056 systemd[1]: cri-containerd-7cb50712c138e7a9117b4fb4868ceef2d1fa6663b69bc402f60fc2ffb77687da.scope: Deactivated successfully.
Jul 9 23:48:50.695943 containerd[1912]: time="2025-07-09T23:48:50.695905756Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7cb50712c138e7a9117b4fb4868ceef2d1fa6663b69bc402f60fc2ffb77687da\" id:\"7cb50712c138e7a9117b4fb4868ceef2d1fa6663b69bc402f60fc2ffb77687da\" pid:5322 exited_at:{seconds:1752104930 nanos:694169384}"
Jul 9 23:48:50.696730 containerd[1912]: time="2025-07-09T23:48:50.696681334Z" level=info msg="received exit event container_id:\"7cb50712c138e7a9117b4fb4868ceef2d1fa6663b69bc402f60fc2ffb77687da\" id:\"7cb50712c138e7a9117b4fb4868ceef2d1fa6663b69bc402f60fc2ffb77687da\" pid:5322 exited_at:{seconds:1752104930 nanos:694169384}"
Jul 9 23:48:50.704561 containerd[1912]: time="2025-07-09T23:48:50.704543076Z" level=info msg="StartContainer for \"7cb50712c138e7a9117b4fb4868ceef2d1fa6663b69bc402f60fc2ffb77687da\" returns successfully"
Jul 9 23:48:50.714613 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7cb50712c138e7a9117b4fb4868ceef2d1fa6663b69bc402f60fc2ffb77687da-rootfs.mount: Deactivated successfully.
Jul 9 23:48:51.601964 containerd[1912]: time="2025-07-09T23:48:51.601568841Z" level=info msg="CreateContainer within sandbox \"10e92edecd61d6fc5b07b03c5c4916bd1f448d716d88517ebdd0a9fddc4f577e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 9 23:48:51.630314 containerd[1912]: time="2025-07-09T23:48:51.630270760Z" level=info msg="Container ee0399dea27a0cda658bc6f8c29be2f04568d9632da9d3328bb51fdc252ef16f: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:48:51.650798 containerd[1912]: time="2025-07-09T23:48:51.650692411Z" level=info msg="CreateContainer within sandbox \"10e92edecd61d6fc5b07b03c5c4916bd1f448d716d88517ebdd0a9fddc4f577e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ee0399dea27a0cda658bc6f8c29be2f04568d9632da9d3328bb51fdc252ef16f\""
Jul 9 23:48:51.652925 containerd[1912]: time="2025-07-09T23:48:51.651868283Z" level=info msg="StartContainer for \"ee0399dea27a0cda658bc6f8c29be2f04568d9632da9d3328bb51fdc252ef16f\""
Jul 9 23:48:51.653405 containerd[1912]: time="2025-07-09T23:48:51.653372783Z" level=info msg="connecting to shim ee0399dea27a0cda658bc6f8c29be2f04568d9632da9d3328bb51fdc252ef16f" address="unix:///run/containerd/s/e279d30486651f2f250adde6b02e2b3203d72e0202a8fe488659ea1493153e5e" protocol=ttrpc version=3
Jul 9 23:48:51.670561 systemd[1]: Started cri-containerd-ee0399dea27a0cda658bc6f8c29be2f04568d9632da9d3328bb51fdc252ef16f.scope - libcontainer container ee0399dea27a0cda658bc6f8c29be2f04568d9632da9d3328bb51fdc252ef16f.
Jul 9 23:48:51.707275 containerd[1912]: time="2025-07-09T23:48:51.707241804Z" level=info msg="StartContainer for \"ee0399dea27a0cda658bc6f8c29be2f04568d9632da9d3328bb51fdc252ef16f\" returns successfully"
Jul 9 23:48:51.759134 containerd[1912]: time="2025-07-09T23:48:51.759051162Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee0399dea27a0cda658bc6f8c29be2f04568d9632da9d3328bb51fdc252ef16f\" id:\"eea5e7df959d60ea84ce8feda6e1b608b27a9a0163025cf50239e8a07d38d933\" pid:5394 exited_at:{seconds:1752104931 nanos:758793713}"
Jul 9 23:48:52.013654 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 9 23:48:52.615443 kubelet[3421]: I0709 23:48:52.614851 3421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jpr7d" podStartSLOduration=5.614838036 podStartE2EDuration="5.614838036s" podCreationTimestamp="2025-07-09 23:48:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:48:52.613930901 +0000 UTC m=+155.475047975" watchObservedRunningTime="2025-07-09 23:48:52.614838036 +0000 UTC m=+155.475955078"
Jul 9 23:48:53.405031 containerd[1912]: time="2025-07-09T23:48:53.404883827Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee0399dea27a0cda658bc6f8c29be2f04568d9632da9d3328bb51fdc252ef16f\" id:\"e9cbb350c63ac4765689ac4f71c4542c62f3b65eb69f9b63da2fa6b24021f5aa\" pid:5486 exit_status:1 exited_at:{seconds:1752104933 nanos:404274567}"
Jul 9 23:48:54.443326 systemd-networkd[1483]: lxc_health: Link UP
Jul 9 23:48:54.460086 systemd-networkd[1483]: lxc_health: Gained carrier
Jul 9 23:48:55.519631 containerd[1912]: time="2025-07-09T23:48:55.519590822Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee0399dea27a0cda658bc6f8c29be2f04568d9632da9d3328bb51fdc252ef16f\" id:\"f3b2223ef054f860c6d1f9a3b5a57e83f0d43f3ef2a513b33a83791f0744fa7c\" pid:5929 exited_at:{seconds:1752104935 nanos:519026498}"
Jul 9 23:48:55.521408 kubelet[3421]: E0709 23:48:55.521381 3421 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:41452->127.0.0.1:43603: write tcp 127.0.0.1:41452->127.0.0.1:43603: write: broken pipe
Jul 9 23:48:56.046624 systemd-networkd[1483]: lxc_health: Gained IPv6LL
Jul 9 23:48:57.604721 containerd[1912]: time="2025-07-09T23:48:57.604682866Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee0399dea27a0cda658bc6f8c29be2f04568d9632da9d3328bb51fdc252ef16f\" id:\"4b41126617a7b04ed6d4119582658b74a42c1c41ecfcf67ee020a8b0679b917d\" pid:5959 exited_at:{seconds:1752104937 nanos:604232186}"
Jul 9 23:48:59.683503 containerd[1912]: time="2025-07-09T23:48:59.683459595Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee0399dea27a0cda658bc6f8c29be2f04568d9632da9d3328bb51fdc252ef16f\" id:\"719bec20b0e7de9322a6d3d42f7e7167fdef858c0da38e6082469dd32ad7f6ee\" pid:5981 exited_at:{seconds:1752104939 nanos:683071342}"
Jul 9 23:48:59.759825 sshd[5262]: Connection closed by 10.200.16.10 port 54710
Jul 9 23:48:59.759312 sshd-session[5215]: pam_unix(sshd:session): session closed for user core
Jul 9 23:48:59.762001 systemd-logind[1882]: Session 27 logged out. Waiting for processes to exit.
Jul 9 23:48:59.762129 systemd[1]: sshd@24-10.200.20.14:22-10.200.16.10:54710.service: Deactivated successfully.
Jul 9 23:48:59.763906 systemd[1]: session-27.scope: Deactivated successfully.
Jul 9 23:48:59.766155 systemd-logind[1882]: Removed session 27.