Jul 9 23:44:29.142776 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Jul 9 23:44:29.142794 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed Jul 9 22:19:33 -00 2025
Jul 9 23:44:29.142800 kernel: KASLR enabled
Jul 9 23:44:29.142804 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jul 9 23:44:29.142809 kernel: printk: legacy bootconsole [pl11] enabled
Jul 9 23:44:29.142813 kernel: efi: EFI v2.7 by EDK II
Jul 9 23:44:29.142818 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20d018 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Jul 9 23:44:29.142822 kernel: random: crng init done
Jul 9 23:44:29.142826 kernel: secureboot: Secure boot disabled
Jul 9 23:44:29.142830 kernel: ACPI: Early table checksum verification disabled
Jul 9 23:44:29.142834 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jul 9 23:44:29.142837 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 9 23:44:29.142841 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 9 23:44:29.142846 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jul 9 23:44:29.142851 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 9 23:44:29.142855 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 9 23:44:29.142860 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 9 23:44:29.142865 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 9 23:44:29.142869 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 9 23:44:29.142873 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 9 23:44:29.142877 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jul 9 23:44:29.142881 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 9 23:44:29.142885 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jul 9 23:44:29.142890 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 9 23:44:29.142894 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jul 9 23:44:29.142898 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Jul 9 23:44:29.142902 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Jul 9 23:44:29.142906 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jul 9 23:44:29.142911 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jul 9 23:44:29.142916 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jul 9 23:44:29.142920 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jul 9 23:44:29.142924 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jul 9 23:44:29.142928 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jul 9 23:44:29.142932 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jul 9 23:44:29.142937 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jul 9 23:44:29.142941 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jul 9 23:44:29.142945 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Jul 9 23:44:29.142949 kernel: NODE_DATA(0) allocated [mem 0x1bf7fddc0-0x1bf804fff]
Jul 9 23:44:29.142953 kernel: Zone ranges:
Jul 9 23:44:29.142958 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jul 9 23:44:29.142965 kernel: DMA32 empty
Jul 9 23:44:29.142969 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jul 9 23:44:29.142974 kernel: Device empty
Jul 9 23:44:29.142978 kernel: Movable zone start for each node
Jul 9 23:44:29.142982 kernel: Early memory node ranges
Jul 9 23:44:29.142987 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jul 9 23:44:29.142992 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Jul 9 23:44:29.142996 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Jul 9 23:44:29.143000 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Jul 9 23:44:29.143005 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jul 9 23:44:29.143009 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jul 9 23:44:29.143013 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jul 9 23:44:29.143017 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jul 9 23:44:29.143022 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jul 9 23:44:29.143026 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jul 9 23:44:29.143030 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jul 9 23:44:29.143035 kernel: psci: probing for conduit method from ACPI.
Jul 9 23:44:29.143040 kernel: psci: PSCIv1.1 detected in firmware.
Jul 9 23:44:29.143044 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 9 23:44:29.143048 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jul 9 23:44:29.143052 kernel: psci: SMC Calling Convention v1.4
Jul 9 23:44:29.143057 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jul 9 23:44:29.143061 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jul 9 23:44:29.143065 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 9 23:44:29.143070 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 9 23:44:29.143074 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 9 23:44:29.143079 kernel: Detected PIPT I-cache on CPU0
Jul 9 23:44:29.143083 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Jul 9 23:44:29.143088 kernel: CPU features: detected: GIC system register CPU interface
Jul 9 23:44:29.143093 kernel: CPU features: detected: Spectre-v4
Jul 9 23:44:29.143097 kernel: CPU features: detected: Spectre-BHB
Jul 9 23:44:29.143101 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 9 23:44:29.143106 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 9 23:44:29.143110 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Jul 9 23:44:29.143114 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 9 23:44:29.143119 kernel: alternatives: applying boot alternatives
Jul 9 23:44:29.143124 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=da23c3aa7de24c290e5e9aff0a0fccd6a322ecaa9bbfc71c29b2f39446459116
Jul 9 23:44:29.143129 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 9 23:44:29.143133 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 9 23:44:29.143138 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 9 23:44:29.143143 kernel: Fallback order for Node 0: 0
Jul 9 23:44:29.143147 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Jul 9 23:44:29.143151 kernel: Policy zone: Normal
Jul 9 23:44:29.143156 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 9 23:44:29.143160 kernel: software IO TLB: area num 2.
Jul 9 23:44:29.143164 kernel: software IO TLB: mapped [mem 0x000000003a460000-0x000000003e460000] (64MB)
Jul 9 23:44:29.143169 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 9 23:44:29.143173 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 9 23:44:29.143178 kernel: rcu: RCU event tracing is enabled.
Jul 9 23:44:29.143182 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 9 23:44:29.143188 kernel: Trampoline variant of Tasks RCU enabled.
Jul 9 23:44:29.143192 kernel: Tracing variant of Tasks RCU enabled.
Jul 9 23:44:29.143197 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 9 23:44:29.143201 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 9 23:44:29.143205 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 9 23:44:29.143210 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 9 23:44:29.143214 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 9 23:44:29.143219 kernel: GICv3: 960 SPIs implemented
Jul 9 23:44:29.143223 kernel: GICv3: 0 Extended SPIs implemented
Jul 9 23:44:29.143227 kernel: Root IRQ handler: gic_handle_irq
Jul 9 23:44:29.143231 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Jul 9 23:44:29.143236 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Jul 9 23:44:29.143241 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jul 9 23:44:29.143245 kernel: ITS: No ITS available, not enabling LPIs
Jul 9 23:44:29.143250 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 9 23:44:29.143254 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Jul 9 23:44:29.143258 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 9 23:44:29.143263 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Jul 9 23:44:29.143267 kernel: Console: colour dummy device 80x25
Jul 9 23:44:29.143272 kernel: printk: legacy console [tty1] enabled
Jul 9 23:44:29.143276 kernel: ACPI: Core revision 20240827
Jul 9 23:44:29.143281 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Jul 9 23:44:29.143349 kernel: pid_max: default: 32768 minimum: 301
Jul 9 23:44:29.143354 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 9 23:44:29.143358 kernel: landlock: Up and running.
Jul 9 23:44:29.143363 kernel: SELinux: Initializing.
Jul 9 23:44:29.143367 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 9 23:44:29.143372 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 9 23:44:29.143380 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x1a0000e, misc 0x31e1
Jul 9 23:44:29.143386 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0
Jul 9 23:44:29.143391 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 9 23:44:29.143395 kernel: rcu: Hierarchical SRCU implementation.
Jul 9 23:44:29.143400 kernel: rcu: Max phase no-delay instances is 400.
Jul 9 23:44:29.143405 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 9 23:44:29.143410 kernel: Remapping and enabling EFI services.
Jul 9 23:44:29.143415 kernel: smp: Bringing up secondary CPUs ...
Jul 9 23:44:29.143420 kernel: Detected PIPT I-cache on CPU1
Jul 9 23:44:29.143424 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jul 9 23:44:29.143429 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Jul 9 23:44:29.143435 kernel: smp: Brought up 1 node, 2 CPUs
Jul 9 23:44:29.143439 kernel: SMP: Total of 2 processors activated.
Jul 9 23:44:29.143444 kernel: CPU: All CPU(s) started at EL1
Jul 9 23:44:29.143449 kernel: CPU features: detected: 32-bit EL0 Support
Jul 9 23:44:29.143454 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jul 9 23:44:29.143458 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 9 23:44:29.143463 kernel: CPU features: detected: Common not Private translations
Jul 9 23:44:29.143468 kernel: CPU features: detected: CRC32 instructions
Jul 9 23:44:29.143473 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Jul 9 23:44:29.143478 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 9 23:44:29.143483 kernel: CPU features: detected: LSE atomic instructions
Jul 9 23:44:29.143488 kernel: CPU features: detected: Privileged Access Never
Jul 9 23:44:29.143492 kernel: CPU features: detected: Speculation barrier (SB)
Jul 9 23:44:29.143497 kernel: CPU features: detected: TLB range maintenance instructions
Jul 9 23:44:29.143502 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 9 23:44:29.143506 kernel: CPU features: detected: Scalable Vector Extension
Jul 9 23:44:29.143511 kernel: alternatives: applying system-wide alternatives
Jul 9 23:44:29.143516 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Jul 9 23:44:29.143522 kernel: SVE: maximum available vector length 16 bytes per vector
Jul 9 23:44:29.143527 kernel: SVE: default vector length 16 bytes per vector
Jul 9 23:44:29.143531 kernel: Memory: 3975544K/4194160K available (11136K kernel code, 2428K rwdata, 9032K rodata, 39488K init, 1035K bss, 213816K reserved, 0K cma-reserved)
Jul 9 23:44:29.143536 kernel: devtmpfs: initialized
Jul 9 23:44:29.143541 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 9 23:44:29.143546 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 9 23:44:29.143551 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 9 23:44:29.143555 kernel: 0 pages in range for non-PLT usage
Jul 9 23:44:29.143560 kernel: 508448 pages in range for PLT usage
Jul 9 23:44:29.143566 kernel: pinctrl core: initialized pinctrl subsystem
Jul 9 23:44:29.143570 kernel: SMBIOS 3.1.0 present.
Jul 9 23:44:29.143575 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jul 9 23:44:29.143580 kernel: DMI: Memory slots populated: 2/2
Jul 9 23:44:29.143585 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 9 23:44:29.143589 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 9 23:44:29.143594 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 9 23:44:29.143599 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 9 23:44:29.143604 kernel: audit: initializing netlink subsys (disabled)
Jul 9 23:44:29.143609 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Jul 9 23:44:29.143614 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 9 23:44:29.143619 kernel: cpuidle: using governor menu
Jul 9 23:44:29.143623 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 9 23:44:29.143628 kernel: ASID allocator initialised with 32768 entries
Jul 9 23:44:29.143633 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 9 23:44:29.143638 kernel: Serial: AMBA PL011 UART driver
Jul 9 23:44:29.143642 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 9 23:44:29.143647 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 9 23:44:29.143653 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 9 23:44:29.143657 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 9 23:44:29.143662 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 9 23:44:29.143667 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 9 23:44:29.143671 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 9 23:44:29.143676 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 9 23:44:29.143681 kernel: ACPI: Added _OSI(Module Device)
Jul 9 23:44:29.143685 kernel: ACPI: Added _OSI(Processor Device)
Jul 9 23:44:29.143690 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 9 23:44:29.143696 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 9 23:44:29.143700 kernel: ACPI: Interpreter enabled
Jul 9 23:44:29.143705 kernel: ACPI: Using GIC for interrupt routing
Jul 9 23:44:29.143710 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jul 9 23:44:29.143714 kernel: printk: legacy console [ttyAMA0] enabled
Jul 9 23:44:29.143719 kernel: printk: legacy bootconsole [pl11] disabled
Jul 9 23:44:29.143724 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jul 9 23:44:29.143729 kernel: ACPI: CPU0 has been hot-added
Jul 9 23:44:29.143733 kernel: ACPI: CPU1 has been hot-added
Jul 9 23:44:29.143739 kernel: iommu: Default domain type: Translated
Jul 9 23:44:29.143744 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 9 23:44:29.143748 kernel: efivars: Registered efivars operations
Jul 9 23:44:29.143753 kernel: vgaarb: loaded
Jul 9 23:44:29.143758 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 9 23:44:29.143763 kernel: VFS: Disk quotas dquot_6.6.0
Jul 9 23:44:29.143767 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 9 23:44:29.143772 kernel: pnp: PnP ACPI init
Jul 9 23:44:29.143777 kernel: pnp: PnP ACPI: found 0 devices
Jul 9 23:44:29.143782 kernel: NET: Registered PF_INET protocol family
Jul 9 23:44:29.143787 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 9 23:44:29.143792 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 9 23:44:29.143797 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 9 23:44:29.143802 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 9 23:44:29.143806 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 9 23:44:29.143811 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 9 23:44:29.143816 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 9 23:44:29.143820 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 9 23:44:29.143826 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 9 23:44:29.143831 kernel: PCI: CLS 0 bytes, default 64
Jul 9 23:44:29.143836 kernel: kvm [1]: HYP mode not available
Jul 9 23:44:29.143840 kernel: Initialise system trusted keyrings
Jul 9 23:44:29.143845 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 9 23:44:29.143850 kernel: Key type asymmetric registered
Jul 9 23:44:29.143854 kernel: Asymmetric key parser 'x509' registered
Jul 9 23:44:29.143859 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 9 23:44:29.143864 kernel: io scheduler mq-deadline registered
Jul 9 23:44:29.143869 kernel: io scheduler kyber registered
Jul 9 23:44:29.143874 kernel: io scheduler bfq registered
Jul 9 23:44:29.143879 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 9 23:44:29.143883 kernel: thunder_xcv, ver 1.0
Jul 9 23:44:29.143888 kernel: thunder_bgx, ver 1.0
Jul 9 23:44:29.143893 kernel: nicpf, ver 1.0
Jul 9 23:44:29.143897 kernel: nicvf, ver 1.0
Jul 9 23:44:29.144011 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 9 23:44:29.144062 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-09T23:44:28 UTC (1752104668)
Jul 9 23:44:29.144068 kernel: efifb: probing for efifb
Jul 9 23:44:29.144073 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 9 23:44:29.144078 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 9 23:44:29.144083 kernel: efifb: scrolling: redraw
Jul 9 23:44:29.144088 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 9 23:44:29.144092 kernel: Console: switching to colour frame buffer device 128x48
Jul 9 23:44:29.144097 kernel: fb0: EFI VGA frame buffer device
Jul 9 23:44:29.144102 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jul 9 23:44:29.144108 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 9 23:44:29.144112 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 9 23:44:29.144117 kernel: watchdog: NMI not fully supported
Jul 9 23:44:29.144122 kernel: watchdog: Hard watchdog permanently disabled
Jul 9 23:44:29.144127 kernel: NET: Registered PF_INET6 protocol family
Jul 9 23:44:29.144131 kernel: Segment Routing with IPv6
Jul 9 23:44:29.144136 kernel: In-situ OAM (IOAM) with IPv6
Jul 9 23:44:29.144141 kernel: NET: Registered PF_PACKET protocol family
Jul 9 23:44:29.144145 kernel: Key type dns_resolver registered
Jul 9 23:44:29.144151 kernel: registered taskstats version 1
Jul 9 23:44:29.144156 kernel: Loading compiled-in X.509 certificates
Jul 9 23:44:29.144160 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 11eff9deb028731c4f89f27f6fac8d1c08902e5a'
Jul 9 23:44:29.144165 kernel: Demotion targets for Node 0: null
Jul 9 23:44:29.144170 kernel: Key type .fscrypt registered
Jul 9 23:44:29.144174 kernel: Key type fscrypt-provisioning registered
Jul 9 23:44:29.144179 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 9 23:44:29.144184 kernel: ima: Allocated hash algorithm: sha1
Jul 9 23:44:29.144189 kernel: ima: No architecture policies found
Jul 9 23:44:29.144194 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 9 23:44:29.144199 kernel: clk: Disabling unused clocks
Jul 9 23:44:29.144204 kernel: PM: genpd: Disabling unused power domains
Jul 9 23:44:29.144209 kernel: Warning: unable to open an initial console.
Jul 9 23:44:29.144213 kernel: Freeing unused kernel memory: 39488K
Jul 9 23:44:29.144218 kernel: Run /init as init process
Jul 9 23:44:29.144223 kernel: with arguments:
Jul 9 23:44:29.144227 kernel: /init
Jul 9 23:44:29.144232 kernel: with environment:
Jul 9 23:44:29.144237 kernel: HOME=/
Jul 9 23:44:29.144242 kernel: TERM=linux
Jul 9 23:44:29.144247 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 9 23:44:29.144252 systemd[1]: Successfully made /usr/ read-only.
Jul 9 23:44:29.144259 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 9 23:44:29.144265 systemd[1]: Detected virtualization microsoft.
Jul 9 23:44:29.144270 systemd[1]: Detected architecture arm64.
Jul 9 23:44:29.144275 systemd[1]: Running in initrd.
Jul 9 23:44:29.144280 systemd[1]: No hostname configured, using default hostname.
Jul 9 23:44:29.146324 systemd[1]: Hostname set to .
Jul 9 23:44:29.146333 systemd[1]: Initializing machine ID from random generator.
Jul 9 23:44:29.146338 systemd[1]: Queued start job for default target initrd.target.
Jul 9 23:44:29.146344 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 9 23:44:29.146349 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 9 23:44:29.146355 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 9 23:44:29.146365 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 9 23:44:29.146371 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 9 23:44:29.146376 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 9 23:44:29.146382 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 9 23:44:29.146388 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 9 23:44:29.146393 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 9 23:44:29.146398 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 9 23:44:29.146405 systemd[1]: Reached target paths.target - Path Units.
Jul 9 23:44:29.146410 systemd[1]: Reached target slices.target - Slice Units.
Jul 9 23:44:29.146415 systemd[1]: Reached target swap.target - Swaps.
Jul 9 23:44:29.146420 systemd[1]: Reached target timers.target - Timer Units.
Jul 9 23:44:29.146425 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 9 23:44:29.146430 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 9 23:44:29.146436 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 9 23:44:29.146441 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 9 23:44:29.146446 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 9 23:44:29.146453 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 9 23:44:29.146458 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 23:44:29.146463 systemd[1]: Reached target sockets.target - Socket Units.
Jul 9 23:44:29.146468 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 9 23:44:29.146474 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 9 23:44:29.146479 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 9 23:44:29.146485 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 9 23:44:29.146490 systemd[1]: Starting systemd-fsck-usr.service...
Jul 9 23:44:29.146496 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 9 23:44:29.146501 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 9 23:44:29.146527 systemd-journald[226]: Collecting audit messages is disabled.
Jul 9 23:44:29.146540 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 23:44:29.146548 systemd-journald[226]: Journal started
Jul 9 23:44:29.146562 systemd-journald[226]: Runtime Journal (/run/log/journal/4f4cc4fc48be43cf9aa5f0f5862473ef) is 8M, max 78.5M, 70.5M free.
Jul 9 23:44:29.151212 systemd-modules-load[228]: Inserted module 'overlay'
Jul 9 23:44:29.173326 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 9 23:44:29.176894 systemd-modules-load[228]: Inserted module 'br_netfilter'
Jul 9 23:44:29.186251 kernel: Bridge firewalling registered
Jul 9 23:44:29.186271 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 9 23:44:29.193315 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 9 23:44:29.199157 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 9 23:44:29.210183 systemd[1]: Finished systemd-fsck-usr.service.
Jul 9 23:44:29.219641 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 9 23:44:29.228437 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 23:44:29.240872 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 9 23:44:29.264429 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 9 23:44:29.275411 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 9 23:44:29.295378 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 9 23:44:29.303308 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 9 23:44:29.324380 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 9 23:44:29.330080 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 9 23:44:29.341489 systemd-tmpfiles[257]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 9 23:44:29.347422 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 9 23:44:29.363591 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 9 23:44:29.379917 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 9 23:44:29.391509 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 9 23:44:29.409782 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=da23c3aa7de24c290e5e9aff0a0fccd6a322ecaa9bbfc71c29b2f39446459116
Jul 9 23:44:29.449155 systemd-resolved[263]: Positive Trust Anchors:
Jul 9 23:44:29.449171 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 9 23:44:29.449190 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 9 23:44:29.450942 systemd-resolved[263]: Defaulting to hostname 'linux'.
Jul 9 23:44:29.454274 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 9 23:44:29.460780 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 9 23:44:29.472988 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 9 23:44:29.579308 kernel: SCSI subsystem initialized
Jul 9 23:44:29.585297 kernel: Loading iSCSI transport class v2.0-870.
Jul 9 23:44:29.592300 kernel: iscsi: registered transport (tcp)
Jul 9 23:44:29.606891 kernel: iscsi: registered transport (qla4xxx)
Jul 9 23:44:29.606932 kernel: QLogic iSCSI HBA Driver
Jul 9 23:44:29.622151 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 9 23:44:29.642862 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 9 23:44:29.650041 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 9 23:44:29.702105 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 9 23:44:29.708860 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 9 23:44:29.774315 kernel: raid6: neonx8 gen() 18534 MB/s
Jul 9 23:44:29.793294 kernel: raid6: neonx4 gen() 18568 MB/s
Jul 9 23:44:29.813290 kernel: raid6: neonx2 gen() 17102 MB/s
Jul 9 23:44:29.834394 kernel: raid6: neonx1 gen() 15067 MB/s
Jul 9 23:44:29.853394 kernel: raid6: int64x8 gen() 10542 MB/s
Jul 9 23:44:29.872389 kernel: raid6: int64x4 gen() 10617 MB/s
Jul 9 23:44:29.893390 kernel: raid6: int64x2 gen() 8989 MB/s
Jul 9 23:44:29.915583 kernel: raid6: int64x1 gen() 7044 MB/s
Jul 9 23:44:29.915677 kernel: raid6: using algorithm neonx4 gen() 18568 MB/s
Jul 9 23:44:29.939121 kernel: raid6: .... xor() 15152 MB/s, rmw enabled
Jul 9 23:44:29.939204 kernel: raid6: using neon recovery algorithm
Jul 9 23:44:29.946300 kernel: xor: measuring software checksum speed
Jul 9 23:44:29.946367 kernel: 8regs : 26616 MB/sec
Jul 9 23:44:29.952474 kernel: 32regs : 28804 MB/sec
Jul 9 23:44:29.955551 kernel: arm64_neon : 37559 MB/sec
Jul 9 23:44:29.958980 kernel: xor: using function: arm64_neon (37559 MB/sec)
Jul 9 23:44:29.998307 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 9 23:44:30.003975 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 9 23:44:30.015468 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 9 23:44:30.049856 systemd-udevd[474]: Using default interface naming scheme 'v255'.
Jul 9 23:44:30.052800 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 9 23:44:30.069127 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 9 23:44:30.093722 dracut-pre-trigger[486]: rd.md=0: removing MD RAID activation
Jul 9 23:44:30.115109 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 9 23:44:30.127177 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 9 23:44:30.175279 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 9 23:44:30.191060 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 9 23:44:30.253653 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 9 23:44:30.253755 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 23:44:30.275928 kernel: hv_vmbus: Vmbus version:5.3
Jul 9 23:44:30.275594 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 23:44:30.295960 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 23:44:30.317970 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 9 23:44:30.318030 kernel: hv_vmbus: registering driver hyperv_keyboard
Jul 9 23:44:30.318039 kernel: hv_vmbus: registering driver hv_netvsc
Jul 9 23:44:30.318046 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 9 23:44:30.310979 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 9 23:44:30.336434 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jul 9 23:44:30.324419 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 9 23:44:30.363250 kernel: hv_vmbus: registering driver hid_hyperv
Jul 9 23:44:30.363269 kernel: PTP clock support registered
Jul 9 23:44:30.363276 kernel: hv_vmbus: registering driver hv_storvsc
Jul 9 23:44:30.335622 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 23:44:30.408703 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jul 9 23:44:30.408731 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jul 9 23:44:30.408874 kernel: scsi host0: storvsc_host_t
Jul 9 23:44:30.408951 kernel: scsi host1: storvsc_host_t
Jul 9 23:44:30.409018 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jul 9 23:44:30.409036 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jul 9 23:44:30.354459 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 23:44:30.432478 kernel: hv_utils: Registering HyperV Utility Driver
Jul 9 23:44:30.432525 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jul 9 23:44:30.432674 kernel: hv_vmbus: registering driver hv_utils
Jul 9 23:44:30.432682 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jul 9 23:44:30.441810 kernel: hv_netvsc 00224876-f24c-0022-4876-f24c00224876 eth0: VF slot 1 added
Jul 9 23:44:30.442018 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 9 23:44:30.453634 kernel: hv_utils: Heartbeat IC version 3.0
Jul 9 23:44:30.453692 kernel: hv_utils: Shutdown IC version 3.2
Jul 9 23:44:30.457017 kernel: hv_utils: TimeSync IC version 4.0
Jul 9 23:44:30.457054 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jul 9 23:44:30.101406 systemd-resolved[263]: Clock change detected. Flushing caches.
Jul 9 23:44:30.117826 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jul 9 23:44:30.117964 systemd-journald[226]: Time jumped backwards, rotating.
Jul 9 23:44:30.117991 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 9 23:44:30.133540 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#52 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 9 23:44:30.136352 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 23:44:30.151976 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 9 23:44:30.157024 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 9 23:44:30.173590 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jul 9 23:44:30.173797 kernel: hv_vmbus: registering driver hv_pci
Jul 9 23:44:30.173813 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 9 23:44:30.173818 kernel: hv_pci 86735a4a-429a-494f-8ec5-4ea86385823a: PCI VMBus probing: Using version 0x10004
Jul 9 23:44:30.177568 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jul 9 23:44:30.177844 kernel: hv_pci 86735a4a-429a-494f-8ec5-4ea86385823a: PCI host bridge to bus 429a:00
Jul 9 23:44:30.187407 kernel: pci_bus 429a:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jul 9 23:44:30.187611 kernel: pci_bus 429a:00: No busn resource found for root bus, will use [bus 00-ff]
Jul 9 23:44:30.200924 kernel: pci 429a:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Jul 9 23:44:30.207501 kernel: pci 429a:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 9 23:44:30.212701 kernel: pci 429a:00:02.0: enabling Extended Tags
Jul 9 23:44:30.230736 kernel: pci 429a:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 429a:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Jul 9 23:44:30.242486 kernel: pci_bus 429a:00: busn_res: [bus 00-ff] end is updated to 00
Jul 9 23:44:30.242721 kernel: pci 429a:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Jul 9 23:44:30.265509 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#18 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jul 9 23:44:30.289545 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jul 9 23:44:30.313455 kernel: mlx5_core 429a:00:02.0: enabling device (0000 -> 0002)
Jul 9 23:44:30.322833 kernel: mlx5_core 429a:00:02.0: PTM is not supported by PCIe
Jul 9 23:44:30.323017 kernel: mlx5_core 429a:00:02.0: firmware version: 16.30.5006
Jul 9 23:44:30.501335 kernel: hv_netvsc 00224876-f24c-0022-4876-f24c00224876 eth0: VF registering: eth1
Jul 9 23:44:30.501548 kernel: mlx5_core 429a:00:02.0 eth1: joined to eth0
Jul 9 23:44:30.509535 kernel: mlx5_core 429a:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jul 9 23:44:30.522533 kernel: mlx5_core 429a:00:02.0 enP17050s1: renamed from eth1
Jul 9 23:44:30.708045 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jul 9 23:44:30.764996 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 9 23:44:30.799531 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jul 9 23:44:30.826905 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jul 9 23:44:30.832124 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jul 9 23:44:30.844867 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 9 23:44:30.856946 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 9 23:44:30.866444 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 9 23:44:30.876783 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 9 23:44:30.887321 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 9 23:44:30.914098 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 9 23:44:30.935525 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#60 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 9 23:44:30.940528 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 9 23:44:30.951505 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 9 23:44:31.972308 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 9 23:44:31.984281 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 9 23:44:31.984331 disk-uuid[666]: The operation has completed successfully.
Jul 9 23:44:32.051348 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 9 23:44:32.051452 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 9 23:44:32.076040 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 9 23:44:32.095740 sh[824]: Success
Jul 9 23:44:32.131078 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 9 23:44:32.131141 kernel: device-mapper: uevent: version 1.0.3
Jul 9 23:44:32.136485 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 9 23:44:32.146681 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jul 9 23:44:32.336058 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 9 23:44:32.347790 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 9 23:44:32.360541 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 9 23:44:32.389678 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 9 23:44:32.389748 kernel: BTRFS: device fsid 0f8170d9-c2a5-4c49-82bc-4e538bfc9b9b devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (842)
Jul 9 23:44:32.397041 kernel: BTRFS info (device dm-0): first mount of filesystem 0f8170d9-c2a5-4c49-82bc-4e538bfc9b9b
Jul 9 23:44:32.403037 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 9 23:44:32.407279 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 9 23:44:32.737601 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 9 23:44:32.742303 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 9 23:44:32.752631 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 9 23:44:32.753353 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 9 23:44:32.780300 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 9 23:44:32.802539 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (865)
Jul 9 23:44:32.813971 kernel: BTRFS info (device sda6): first mount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce
Jul 9 23:44:32.814003 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 9 23:44:32.817773 kernel: BTRFS info (device sda6): using free-space-tree
Jul 9 23:44:32.843547 kernel: BTRFS info (device sda6): last unmount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce
Jul 9 23:44:32.844210 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 9 23:44:32.850638 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 9 23:44:32.905798 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 9 23:44:32.921933 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 9 23:44:32.957419 systemd-networkd[1011]: lo: Link UP
Jul 9 23:44:32.957430 systemd-networkd[1011]: lo: Gained carrier
Jul 9 23:44:32.959106 systemd-networkd[1011]: Enumeration completed
Jul 9 23:44:32.960417 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 9 23:44:32.964865 systemd-networkd[1011]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 23:44:32.964869 systemd-networkd[1011]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 9 23:44:32.970336 systemd[1]: Reached target network.target - Network.
Jul 9 23:44:33.044527 kernel: mlx5_core 429a:00:02.0 enP17050s1: Link up
Jul 9 23:44:33.078215 systemd-networkd[1011]: enP17050s1: Link UP
Jul 9 23:44:33.081757 kernel: hv_netvsc 00224876-f24c-0022-4876-f24c00224876 eth0: Data path switched to VF: enP17050s1
Jul 9 23:44:33.078270 systemd-networkd[1011]: eth0: Link UP
Jul 9 23:44:33.078357 systemd-networkd[1011]: eth0: Gained carrier
Jul 9 23:44:33.078365 systemd-networkd[1011]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 23:44:33.084982 systemd-networkd[1011]: enP17050s1: Gained carrier
Jul 9 23:44:33.117544 systemd-networkd[1011]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 9 23:44:33.880370 ignition[936]: Ignition 2.21.0
Jul 9 23:44:33.880383 ignition[936]: Stage: fetch-offline
Jul 9 23:44:33.884597 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 9 23:44:33.880458 ignition[936]: no configs at "/usr/lib/ignition/base.d"
Jul 9 23:44:33.892431 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 9 23:44:33.880464 ignition[936]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 9 23:44:33.881645 ignition[936]: parsed url from cmdline: ""
Jul 9 23:44:33.881650 ignition[936]: no config URL provided
Jul 9 23:44:33.881655 ignition[936]: reading system config file "/usr/lib/ignition/user.ign"
Jul 9 23:44:33.881664 ignition[936]: no config at "/usr/lib/ignition/user.ign"
Jul 9 23:44:33.881668 ignition[936]: failed to fetch config: resource requires networking
Jul 9 23:44:33.881879 ignition[936]: Ignition finished successfully
Jul 9 23:44:33.923866 ignition[1022]: Ignition 2.21.0
Jul 9 23:44:33.923872 ignition[1022]: Stage: fetch
Jul 9 23:44:33.924017 ignition[1022]: no configs at "/usr/lib/ignition/base.d"
Jul 9 23:44:33.924024 ignition[1022]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 9 23:44:33.924096 ignition[1022]: parsed url from cmdline: ""
Jul 9 23:44:33.924100 ignition[1022]: no config URL provided
Jul 9 23:44:33.924103 ignition[1022]: reading system config file "/usr/lib/ignition/user.ign"
Jul 9 23:44:33.924109 ignition[1022]: no config at "/usr/lib/ignition/user.ign"
Jul 9 23:44:33.924152 ignition[1022]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jul 9 23:44:34.007366 ignition[1022]: GET result: OK
Jul 9 23:44:34.007445 ignition[1022]: config has been read from IMDS userdata
Jul 9 23:44:34.007470 ignition[1022]: parsing config with SHA512: 738269813d5287f4dac8b8033984fe995f77b6a0855183a1ee62cf7fd8a283ec8e9ee920550ed3a370399ca9e99ffb73893f60f945569b1d41ba216f57e87c47
Jul 9 23:44:34.014417 unknown[1022]: fetched base config from "system"
Jul 9 23:44:34.014433 unknown[1022]: fetched base config from "system"
Jul 9 23:44:34.014691 ignition[1022]: fetch: fetch complete
Jul 9 23:44:34.014437 unknown[1022]: fetched user config from "azure"
Jul 9 23:44:34.014694 ignition[1022]: fetch: fetch passed
Jul 9 23:44:34.019787 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 9 23:44:34.014732 ignition[1022]: Ignition finished successfully
Jul 9 23:44:34.029315 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 9 23:44:34.067612 ignition[1029]: Ignition 2.21.0
Jul 9 23:44:34.067627 ignition[1029]: Stage: kargs
Jul 9 23:44:34.072080 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 9 23:44:34.067789 ignition[1029]: no configs at "/usr/lib/ignition/base.d"
Jul 9 23:44:34.078802 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 9 23:44:34.067796 ignition[1029]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 9 23:44:34.068403 ignition[1029]: kargs: kargs passed
Jul 9 23:44:34.068458 ignition[1029]: Ignition finished successfully
Jul 9 23:44:34.112313 ignition[1036]: Ignition 2.21.0
Jul 9 23:44:34.112329 ignition[1036]: Stage: disks
Jul 9 23:44:34.116236 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 9 23:44:34.112514 ignition[1036]: no configs at "/usr/lib/ignition/base.d"
Jul 9 23:44:34.122708 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 9 23:44:34.112522 ignition[1036]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 9 23:44:34.131607 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 9 23:44:34.113722 ignition[1036]: disks: disks passed
Jul 9 23:44:34.140067 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 9 23:44:34.113786 ignition[1036]: Ignition finished successfully
Jul 9 23:44:34.150002 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 9 23:44:34.159488 systemd[1]: Reached target basic.target - Basic System.
Jul 9 23:44:34.170184 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 9 23:44:34.249382 systemd-fsck[1044]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Jul 9 23:44:34.258191 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 9 23:44:34.263695 systemd-networkd[1011]: eth0: Gained IPv6LL
Jul 9 23:44:34.265895 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 9 23:44:34.454511 kernel: EXT4-fs (sda9): mounted filesystem 961fd3ec-635c-4a87-8aef-ca8f12cd8be8 r/w with ordered data mode. Quota mode: none.
Jul 9 23:44:34.455418 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 9 23:44:34.459444 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 9 23:44:34.486010 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 9 23:44:34.494713 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 9 23:44:34.505182 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 9 23:44:34.516470 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 9 23:44:34.518133 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 9 23:44:34.532544 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 9 23:44:34.544357 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 9 23:44:34.571938 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1058)
Jul 9 23:44:34.571982 kernel: BTRFS info (device sda6): first mount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce
Jul 9 23:44:34.576647 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 9 23:44:34.580000 kernel: BTRFS info (device sda6): using free-space-tree
Jul 9 23:44:34.580623 systemd-networkd[1011]: enP17050s1: Gained IPv6LL
Jul 9 23:44:34.587383 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 9 23:44:34.953704 coreos-metadata[1060]: Jul 09 23:44:34.953 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 9 23:44:34.962202 coreos-metadata[1060]: Jul 09 23:44:34.962 INFO Fetch successful
Jul 9 23:44:34.966971 coreos-metadata[1060]: Jul 09 23:44:34.966 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jul 9 23:44:34.976145 coreos-metadata[1060]: Jul 09 23:44:34.976 INFO Fetch successful
Jul 9 23:44:34.992615 coreos-metadata[1060]: Jul 09 23:44:34.992 INFO wrote hostname ci-4344.1.1-n-bbe652f90c to /sysroot/etc/hostname
Jul 9 23:44:35.001237 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 9 23:44:35.229772 initrd-setup-root[1090]: cut: /sysroot/etc/passwd: No such file or directory
Jul 9 23:44:35.251514 initrd-setup-root[1097]: cut: /sysroot/etc/group: No such file or directory
Jul 9 23:44:35.258090 initrd-setup-root[1104]: cut: /sysroot/etc/shadow: No such file or directory
Jul 9 23:44:35.264406 initrd-setup-root[1111]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 9 23:44:36.042459 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 9 23:44:36.049288 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 9 23:44:36.080285 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 9 23:44:36.094983 kernel: BTRFS info (device sda6): last unmount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce
Jul 9 23:44:36.093030 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 9 23:44:36.122545 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 9 23:44:36.127372 ignition[1179]: INFO : Ignition 2.21.0
Jul 9 23:44:36.127372 ignition[1179]: INFO : Stage: mount
Jul 9 23:44:36.127372 ignition[1179]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 9 23:44:36.127372 ignition[1179]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 9 23:44:36.127372 ignition[1179]: INFO : mount: mount passed
Jul 9 23:44:36.127372 ignition[1179]: INFO : Ignition finished successfully
Jul 9 23:44:36.133516 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 9 23:44:36.140658 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 9 23:44:36.172711 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 9 23:44:36.202611 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1190)
Jul 9 23:44:36.213696 kernel: BTRFS info (device sda6): first mount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce
Jul 9 23:44:36.213763 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 9 23:44:36.217616 kernel: BTRFS info (device sda6): using free-space-tree
Jul 9 23:44:36.220402 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 9 23:44:36.249739 ignition[1207]: INFO : Ignition 2.21.0
Jul 9 23:44:36.253847 ignition[1207]: INFO : Stage: files
Jul 9 23:44:36.253847 ignition[1207]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 9 23:44:36.253847 ignition[1207]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 9 23:44:36.253847 ignition[1207]: DEBUG : files: compiled without relabeling support, skipping
Jul 9 23:44:36.272578 ignition[1207]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 9 23:44:36.272578 ignition[1207]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 9 23:44:36.319664 ignition[1207]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 9 23:44:36.326012 ignition[1207]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 9 23:44:36.326012 ignition[1207]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 9 23:44:36.320691 unknown[1207]: wrote ssh authorized keys file for user: core
Jul 9 23:44:36.343087 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 9 23:44:36.352251 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 9 23:44:36.385443 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 9 23:44:36.634548 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 9 23:44:36.634548 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 9 23:44:36.651846 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 9 23:44:37.146811 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 9 23:44:37.782568 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 9 23:44:37.782568 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 9 23:44:37.800252 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 9 23:44:37.800252 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 9 23:44:37.800252 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 9 23:44:37.800252 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 9 23:44:37.800252 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 9 23:44:37.800252 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 9 23:44:37.800252 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 9 23:44:37.862701 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 9 23:44:37.862701 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 9 23:44:37.862701 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 9 23:44:37.862701 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 9 23:44:37.862701 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 9 23:44:37.862701 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Jul 9 23:44:38.612965 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 9 23:44:39.965583 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 9 23:44:39.965583 ignition[1207]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 9 23:44:39.980560 ignition[1207]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 9 23:44:39.993777 ignition[1207]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 9 23:44:40.004738 ignition[1207]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 9 23:44:40.004738 ignition[1207]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jul 9 23:44:40.004738 ignition[1207]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jul 9 23:44:40.004738 ignition[1207]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 9 23:44:40.004738 ignition[1207]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 9 23:44:40.004738 ignition[1207]: INFO : files: files passed
Jul 9 23:44:40.004738 ignition[1207]: INFO : Ignition finished successfully
Jul 9 23:44:40.004546 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 9 23:44:40.011215 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 9 23:44:40.029261 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 9 23:44:40.042695 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 9 23:44:40.109754 initrd-setup-root-after-ignition[1235]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 9 23:44:40.109754 initrd-setup-root-after-ignition[1235]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 9 23:44:40.044521 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 9 23:44:40.131286 initrd-setup-root-after-ignition[1239]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 9 23:44:40.078011 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 9 23:44:40.087136 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 9 23:44:40.097586 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 9 23:44:40.153034 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 9 23:44:40.153143 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 9 23:44:40.164095 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 9 23:44:40.174729 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 9 23:44:40.186442 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 9 23:44:40.187192 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 9 23:44:40.220649 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 9 23:44:40.227476 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 9 23:44:40.261023 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 9 23:44:40.267168 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 9 23:44:40.277990 systemd[1]: Stopped target timers.target - Timer Units.
Jul 9 23:44:40.287792 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 9 23:44:40.287913 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 9 23:44:40.302094 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 9 23:44:40.312096 systemd[1]: Stopped target basic.target - Basic System.
Jul 9 23:44:40.320588 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 9 23:44:40.329568 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 9 23:44:40.340149 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 9 23:44:40.351701 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 9 23:44:40.366048 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 9 23:44:40.376241 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 9 23:44:40.386527 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 9 23:44:40.395530 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 9 23:44:40.405163 systemd[1]: Stopped target swap.target - Swaps.
Jul 9 23:44:40.413134 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 9 23:44:40.413250 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 9 23:44:40.425112 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 9 23:44:40.429657 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 9 23:44:40.439437 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 9 23:44:40.439503 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 9 23:44:40.449786 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 9 23:44:40.449892 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 9 23:44:40.464066 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 9 23:44:40.464162 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 9 23:44:40.469893 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 9 23:44:40.469974 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 9 23:44:40.480460 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 9 23:44:40.480552 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 9 23:44:40.549264 ignition[1260]: INFO : Ignition 2.21.0
Jul 9 23:44:40.549264 ignition[1260]: INFO : Stage: umount
Jul 9 23:44:40.549264 ignition[1260]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 9 23:44:40.549264 ignition[1260]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 9 23:44:40.549264 ignition[1260]: INFO : umount: umount passed
Jul 9 23:44:40.549264 ignition[1260]: INFO : Ignition finished successfully
Jul 9 23:44:40.490926 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 9 23:44:40.520996 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 9 23:44:40.529385 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 9 23:44:40.529600 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 9 23:44:40.545750 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 9 23:44:40.545865 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 9 23:44:40.562943 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 9 23:44:40.563896 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 9 23:44:40.565523 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 9 23:44:40.578054 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 9 23:44:40.578163 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 9 23:44:40.585178 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 9 23:44:40.585240 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 9 23:44:40.598950 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 9 23:44:40.599016 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 9 23:44:40.610357 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 9 23:44:40.610408 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 9 23:44:40.620703 systemd[1]: Stopped target network.target - Network.
Jul 9 23:44:40.629956 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 9 23:44:40.630011 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 9 23:44:40.640582 systemd[1]: Stopped target paths.target - Path Units.
Jul 9 23:44:40.651124 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 9 23:44:40.654520 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 9 23:44:40.661092 systemd[1]: Stopped target slices.target - Slice Units.
Jul 9 23:44:40.669483 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 9 23:44:40.677353 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 9 23:44:40.677405 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 9 23:44:40.686094 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 9 23:44:40.686128 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 9 23:44:40.696297 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 9 23:44:40.696352 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 9 23:44:40.708135 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 9 23:44:40.708177 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 9 23:44:40.717563 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 9 23:44:40.725402 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 9 23:44:40.739619 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 9 23:44:40.739731 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 9 23:44:40.755946 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 9 23:44:40.756193 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 9 23:44:40.756301 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 9 23:44:40.955374 kernel: hv_netvsc 00224876-f24c-0022-4876-f24c00224876 eth0: Data path switched from VF: enP17050s1
Jul 9 23:44:40.766854 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 9 23:44:40.766948 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 9 23:44:40.784688 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 9 23:44:40.786720 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 9 23:44:40.796600 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 9 23:44:40.796710 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 9 23:44:40.808559 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 9 23:44:40.808644 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 9 23:44:40.821177 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 9 23:44:40.829877 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 9 23:44:40.829945 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 9 23:44:40.838143 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 9 23:44:40.838206 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 9 23:44:40.850862 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 9 23:44:40.850911 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 9 23:44:40.856211 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 9 23:44:40.856259 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 9 23:44:40.865150 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 9 23:44:40.870722 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 9 23:44:40.870786 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 9 23:44:40.885135 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 9 23:44:40.890960 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 9 23:44:40.900508 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 9 23:44:40.900550 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 9 23:44:40.909950 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 9 23:44:40.909978 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 23:44:40.918397 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 9 23:44:40.918450 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 9 23:44:40.931721 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 9 23:44:40.931772 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 9 23:44:40.943307 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 9 23:44:40.943351 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 9 23:44:41.173298 systemd-journald[226]: Received SIGTERM from PID 1 (systemd).
Jul 9 23:44:40.956380 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 9 23:44:40.971716 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 9 23:44:40.971791 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 9 23:44:40.988010 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 9 23:44:40.988071 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 9 23:44:41.003161 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 9 23:44:41.003227 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 9 23:44:41.017570 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 9 23:44:41.017622 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 9 23:44:41.026510 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 9 23:44:41.026589 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 23:44:41.041915 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 9 23:44:41.041966 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jul 9 23:44:41.041990 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 9 23:44:41.042013 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 9 23:44:41.042287 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 9 23:44:41.042377 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 9 23:44:41.051829 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 9 23:44:41.051949 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 9 23:44:41.059330 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 9 23:44:41.069770 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 9 23:44:41.090397 systemd[1]: Switching root.
Jul 9 23:44:41.286361 systemd-journald[226]: Journal stopped
Jul 9 23:44:45.505756 kernel: SELinux: policy capability network_peer_controls=1
Jul 9 23:44:45.505780 kernel: SELinux: policy capability open_perms=1
Jul 9 23:44:45.505788 kernel: SELinux: policy capability extended_socket_class=1
Jul 9 23:44:45.505793 kernel: SELinux: policy capability always_check_network=0
Jul 9 23:44:45.505801 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 9 23:44:45.505807 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 9 23:44:45.505813 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 9 23:44:45.505819 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 9 23:44:45.505824 kernel: SELinux: policy capability userspace_initial_context=0
Jul 9 23:44:45.505830 kernel: audit: type=1403 audit(1752104682.058:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 9 23:44:45.505837 systemd[1]: Successfully loaded SELinux policy in 150.657ms.
Jul 9 23:44:45.505845 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.909ms.
Jul 9 23:44:45.505852 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 9 23:44:45.505859 systemd[1]: Detected virtualization microsoft.
Jul 9 23:44:45.505866 systemd[1]: Detected architecture arm64.
Jul 9 23:44:45.505873 systemd[1]: Detected first boot.
Jul 9 23:44:45.505879 systemd[1]: Hostname set to .
Jul 9 23:44:45.505885 systemd[1]: Initializing machine ID from random generator.
Jul 9 23:44:45.505891 zram_generator::config[1302]: No configuration found.
Jul 9 23:44:45.505898 kernel: NET: Registered PF_VSOCK protocol family
Jul 9 23:44:45.505904 systemd[1]: Populated /etc with preset unit settings.
Jul 9 23:44:45.505911 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 9 23:44:45.505917 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 9 23:44:45.505923 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 9 23:44:45.505929 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 9 23:44:45.505936 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 9 23:44:45.505943 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 9 23:44:45.505949 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 9 23:44:45.505955 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 9 23:44:45.505962 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 9 23:44:45.505968 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 9 23:44:45.505974 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 9 23:44:45.505980 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 9 23:44:45.505986 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 9 23:44:45.505992 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 9 23:44:45.505999 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 9 23:44:45.506004 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 9 23:44:45.506010 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 9 23:44:45.506017 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 9 23:44:45.506024 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 9 23:44:45.506031 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 9 23:44:45.506037 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 9 23:44:45.506044 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 9 23:44:45.506050 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 9 23:44:45.506056 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 9 23:44:45.506063 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 9 23:44:45.506069 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 9 23:44:45.506076 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 9 23:44:45.506082 systemd[1]: Reached target slices.target - Slice Units.
Jul 9 23:44:45.506088 systemd[1]: Reached target swap.target - Swaps.
Jul 9 23:44:45.506094 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 9 23:44:45.506100 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 9 23:44:45.506107 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 9 23:44:45.506113 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 9 23:44:45.506119 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 9 23:44:45.506126 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 23:44:45.506132 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 9 23:44:45.506138 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 9 23:44:45.506145 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 9 23:44:45.506151 systemd[1]: Mounting media.mount - External Media Directory...
Jul 9 23:44:45.506157 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 9 23:44:45.506163 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 9 23:44:45.506170 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 9 23:44:45.506177 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 9 23:44:45.506183 systemd[1]: Reached target machines.target - Containers.
Jul 9 23:44:45.506189 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 9 23:44:45.506197 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 9 23:44:45.506203 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 9 23:44:45.506209 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 9 23:44:45.506215 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 9 23:44:45.506221 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 9 23:44:45.506228 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 9 23:44:45.506234 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 9 23:44:45.506240 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 9 23:44:45.506246 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 9 23:44:45.506254 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 9 23:44:45.506260 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 9 23:44:45.506266 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 9 23:44:45.506272 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 9 23:44:45.506278 kernel: fuse: init (API version 7.41)
Jul 9 23:44:45.506285 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 23:44:45.506291 kernel: loop: module loaded
Jul 9 23:44:45.506297 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 9 23:44:45.506305 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 9 23:44:45.506311 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 9 23:44:45.506317 kernel: ACPI: bus type drm_connector registered
Jul 9 23:44:45.506323 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 9 23:44:45.506330 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 9 23:44:45.506348 systemd-journald[1406]: Collecting audit messages is disabled.
Jul 9 23:44:45.506363 systemd-journald[1406]: Journal started
Jul 9 23:44:45.506377 systemd-journald[1406]: Runtime Journal (/run/log/journal/0e38c03dfd10409ab5e1ee2cb9ce6b7c) is 8M, max 78.5M, 70.5M free.
Jul 9 23:44:44.700232 systemd[1]: Queued start job for default target multi-user.target.
Jul 9 23:44:44.712000 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jul 9 23:44:44.712387 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 9 23:44:44.712672 systemd[1]: systemd-journald.service: Consumed 2.811s CPU time.
Jul 9 23:44:45.528499 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 9 23:44:45.529703 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 9 23:44:45.536716 systemd[1]: Stopped verity-setup.service.
Jul 9 23:44:45.551423 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 9 23:44:45.552132 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 9 23:44:45.556976 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 9 23:44:45.562029 systemd[1]: Mounted media.mount - External Media Directory.
Jul 9 23:44:45.566331 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 9 23:44:45.573196 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 9 23:44:45.578147 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 9 23:44:45.582994 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 9 23:44:45.588899 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 9 23:44:45.595118 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 9 23:44:45.595251 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 9 23:44:45.602420 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 9 23:44:45.602562 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 9 23:44:45.608472 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 9 23:44:45.608614 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 9 23:44:45.614482 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 9 23:44:45.614714 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 9 23:44:45.620620 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 9 23:44:45.620754 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 9 23:44:45.626175 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 9 23:44:45.626302 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 9 23:44:45.631923 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 9 23:44:45.637158 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 9 23:44:45.642635 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 9 23:44:45.649727 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 9 23:44:45.655852 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 9 23:44:45.671174 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 9 23:44:45.678030 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 9 23:44:45.688239 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 9 23:44:45.693443 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 9 23:44:45.693479 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 9 23:44:45.699257 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 9 23:44:45.711604 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 9 23:44:45.716209 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 23:44:45.731749 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 9 23:44:45.746547 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 9 23:44:45.752305 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 9 23:44:45.753419 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 9 23:44:45.759315 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 9 23:44:45.760604 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 9 23:44:45.767645 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 9 23:44:45.778242 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 9 23:44:45.785841 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 9 23:44:45.794276 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 9 23:44:45.810771 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 9 23:44:45.821692 systemd-journald[1406]: Time spent on flushing to /var/log/journal/0e38c03dfd10409ab5e1ee2cb9ce6b7c is 49.765ms for 944 entries.
Jul 9 23:44:45.821692 systemd-journald[1406]: System Journal (/var/log/journal/0e38c03dfd10409ab5e1ee2cb9ce6b7c) is 11.8M, max 2.6G, 2.6G free.
Jul 9 23:44:46.045751 systemd-journald[1406]: Received client request to flush runtime journal.
Jul 9 23:44:46.045819 kernel: loop0: detected capacity change from 0 to 203944
Jul 9 23:44:46.045835 systemd-journald[1406]: /var/log/journal/0e38c03dfd10409ab5e1ee2cb9ce6b7c/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Jul 9 23:44:46.045853 systemd-journald[1406]: Rotating system journal.
Jul 9 23:44:46.045874 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 9 23:44:46.045886 kernel: loop1: detected capacity change from 0 to 28936
Jul 9 23:44:45.831661 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 9 23:44:45.840689 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 9 23:44:45.876918 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 9 23:44:45.945467 systemd-tmpfiles[1442]: ACLs are not supported, ignoring.
Jul 9 23:44:45.945480 systemd-tmpfiles[1442]: ACLs are not supported, ignoring.
Jul 9 23:44:45.953290 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 9 23:44:45.961616 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 9 23:44:46.047908 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 9 23:44:46.074475 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 9 23:44:46.075140 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 9 23:44:46.361111 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 9 23:44:46.366952 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 9 23:44:46.383643 systemd-tmpfiles[1461]: ACLs are not supported, ignoring.
Jul 9 23:44:46.383654 systemd-tmpfiles[1461]: ACLs are not supported, ignoring.
Jul 9 23:44:46.386667 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 9 23:44:46.544516 kernel: loop2: detected capacity change from 0 to 138376
Jul 9 23:44:46.985047 kernel: loop3: detected capacity change from 0 to 107312
Jul 9 23:44:47.142073 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 9 23:44:47.150368 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 9 23:44:47.179660 systemd-udevd[1467]: Using default interface naming scheme 'v255'.
Jul 9 23:44:47.293951 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 9 23:44:47.302812 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 9 23:44:47.354015 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 9 23:44:47.409517 kernel: loop4: detected capacity change from 0 to 203944
Jul 9 23:44:47.419512 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 9 23:44:47.427508 kernel: loop5: detected capacity change from 0 to 28936
Jul 9 23:44:47.439522 kernel: loop6: detected capacity change from 0 to 138376
Jul 9 23:44:47.453856 kernel: loop7: detected capacity change from 0 to 107312
Jul 9 23:44:47.458519 (sd-merge)[1501]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jul 9 23:44:47.460862 (sd-merge)[1501]: Merged extensions into '/usr'.
Jul 9 23:44:47.467540 systemd[1]: Reload requested from client PID 1441 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 9 23:44:47.467553 systemd[1]: Reloading...
Jul 9 23:44:47.499350 kernel: mousedev: PS/2 mouse device common for all mice
Jul 9 23:44:47.499455 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#99 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jul 9 23:44:47.570281 zram_generator::config[1537]: No configuration found.
Jul 9 23:44:47.668695 kernel: hv_vmbus: registering driver hv_balloon
Jul 9 23:44:47.668825 kernel: hv_vmbus: registering driver hyperv_fb
Jul 9 23:44:47.678357 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jul 9 23:44:47.678449 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jul 9 23:44:47.683353 kernel: Console: switching to colour dummy device 80x25
Jul 9 23:44:47.685587 kernel: Console: switching to colour frame buffer device 128x48
Jul 9 23:44:47.707810 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jul 9 23:44:47.707910 kernel: hv_balloon: Memory hot add disabled on ARM64
Jul 9 23:44:47.734265 systemd-networkd[1482]: lo: Link UP
Jul 9 23:44:47.734604 systemd-networkd[1482]: lo: Gained carrier
Jul 9 23:44:47.737411 systemd-networkd[1482]: Enumeration completed
Jul 9 23:44:47.739664 systemd-networkd[1482]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 23:44:47.739671 systemd-networkd[1482]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 9 23:44:47.748341 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 9 23:44:47.795526 kernel: mlx5_core 429a:00:02.0 enP17050s1: Link up
Jul 9 23:44:47.808875 kernel: MACsec IEEE 802.1AE
Jul 9 23:44:47.822532 kernel: hv_netvsc 00224876-f24c-0022-4876-f24c00224876 eth0: Data path switched to VF: enP17050s1
Jul 9 23:44:47.824637 systemd-networkd[1482]: enP17050s1: Link UP
Jul 9 23:44:47.824788 systemd-networkd[1482]: eth0: Link UP
Jul 9 23:44:47.824791 systemd-networkd[1482]: eth0: Gained carrier
Jul 9 23:44:47.824812 systemd-networkd[1482]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 23:44:47.828756 systemd-networkd[1482]: enP17050s1: Gained carrier
Jul 9 23:44:47.838592 systemd-networkd[1482]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 9 23:44:47.886289 systemd[1]: Reloading finished in 417 ms.
Jul 9 23:44:47.917414 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 9 23:44:47.922653 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 9 23:44:47.927573 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 9 23:44:47.958667 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 9 23:44:47.973646 systemd[1]: Starting ensure-sysext.service...
Jul 9 23:44:47.980246 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 9 23:44:47.989907 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 9 23:44:47.997184 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 9 23:44:48.011793 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 9 23:44:48.020185 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 23:44:48.032891 systemd-tmpfiles[1685]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 9 23:44:48.033241 systemd-tmpfiles[1685]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 9 23:44:48.035717 systemd-tmpfiles[1685]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 9 23:44:48.035998 systemd-tmpfiles[1685]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 9 23:44:48.036556 systemd-tmpfiles[1685]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 9 23:44:48.036815 systemd-tmpfiles[1685]: ACLs are not supported, ignoring.
Jul 9 23:44:48.037298 systemd-tmpfiles[1685]: ACLs are not supported, ignoring.
Jul 9 23:44:48.040858 systemd[1]: Reload requested from client PID 1681 ('systemctl') (unit ensure-sysext.service)...
Jul 9 23:44:48.040871 systemd[1]: Reloading...
Jul 9 23:44:48.053784 systemd-tmpfiles[1685]: Detected autofs mount point /boot during canonicalization of boot.
Jul 9 23:44:48.054004 systemd-tmpfiles[1685]: Skipping /boot
Jul 9 23:44:48.065356 systemd-tmpfiles[1685]: Detected autofs mount point /boot during canonicalization of boot.
Jul 9 23:44:48.065546 systemd-tmpfiles[1685]: Skipping /boot
Jul 9 23:44:48.105518 zram_generator::config[1723]: No configuration found.
Jul 9 23:44:48.182179 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 9 23:44:48.261661 systemd[1]: Reloading finished in 220 ms.
Jul 9 23:44:48.289430 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 9 23:44:48.296575 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 9 23:44:48.303347 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 9 23:44:48.323372 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 9 23:44:48.336125 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 9 23:44:48.341573 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 9 23:44:48.343700 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 9 23:44:48.354700 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 9 23:44:48.363722 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 9 23:44:48.376105 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 9 23:44:48.381008 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 23:44:48.381359 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 23:44:48.382784 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 9 23:44:48.392701 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 9 23:44:48.397942 systemd[1]: Reached target time-set.target - System Time Set.
Jul 9 23:44:48.405777 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 9 23:44:48.416292 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 9 23:44:48.417832 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 9 23:44:48.424874 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 9 23:44:48.425034 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 9 23:44:48.430550 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 9 23:44:48.430689 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 9 23:44:48.437294 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 9 23:44:48.437432 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 9 23:44:48.447396 systemd[1]: Finished ensure-sysext.service.
Jul 9 23:44:48.457702 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 9 23:44:48.467994 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 9 23:44:48.468058 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 9 23:44:48.478740 augenrules[1817]: No rules
Jul 9 23:44:48.479856 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 9 23:44:48.480070 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 9 23:44:48.536886 systemd-resolved[1795]: Positive Trust Anchors:
Jul 9 23:44:48.537218 systemd-resolved[1795]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 9 23:44:48.537284 systemd-resolved[1795]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 9 23:44:48.540078 systemd-resolved[1795]: Using system hostname 'ci-4344.1.1-n-bbe652f90c'.
Jul 9 23:44:48.541666 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 9 23:44:48.549169 systemd[1]: Reached target network.target - Network.
Jul 9 23:44:48.553542 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 9 23:44:48.560989 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 9 23:44:48.582139 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 23:44:48.987415 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 9 23:44:48.994983 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 9 23:44:49.236670 systemd-networkd[1482]: enP17050s1: Gained IPv6LL
Jul 9 23:44:49.620706 systemd-networkd[1482]: eth0: Gained IPv6LL
Jul 9 23:44:49.623131 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 9 23:44:49.630368 systemd[1]: Reached target network-online.target - Network is Online.
Jul 9 23:44:51.417242 ldconfig[1436]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 9 23:44:51.435303 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 9 23:44:51.442709 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 9 23:44:51.464800 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 9 23:44:51.470408 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 9 23:44:51.475641 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 9 23:44:51.481738 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 9 23:44:51.488682 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 9 23:44:51.493952 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 9 23:44:51.500331 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 9 23:44:51.506149 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 9 23:44:51.506177 systemd[1]: Reached target paths.target - Path Units.
Jul 9 23:44:51.510306 systemd[1]: Reached target timers.target - Timer Units.
Jul 9 23:44:51.515443 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 9 23:44:51.521820 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 9 23:44:51.527971 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 9 23:44:51.534665 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 9 23:44:51.540904 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 9 23:44:51.548829 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 9 23:44:51.554266 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 9 23:44:51.560205 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 9 23:44:51.567204 systemd[1]: Reached target sockets.target - Socket Units.
Jul 9 23:44:51.572571 systemd[1]: Reached target basic.target - Basic System.
Jul 9 23:44:51.576647 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 9 23:44:51.576670 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 9 23:44:51.578803 systemd[1]: Starting chronyd.service - NTP client/server...
Jul 9 23:44:51.592607 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 9 23:44:51.604774 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 9 23:44:51.612701 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 9 23:44:51.617805 (chronyd)[1834]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jul 9 23:44:51.620572 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 9 23:44:51.626613 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 9 23:44:51.633509 jq[1842]: false
Jul 9 23:44:51.633782 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 9 23:44:51.638072 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 9 23:44:51.640646 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jul 9 23:44:51.648203 KVP[1844]: KVP starting; pid is:1844
Jul 9 23:44:51.648901 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jul 9 23:44:51.649997 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 23:44:51.657834 kernel: hv_utils: KVP IC version 4.0
Jul 9 23:44:51.657704 KVP[1844]: KVP LIC Version: 3.1
Jul 9 23:44:51.658910 chronyd[1851]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jul 9 23:44:51.659099 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 9 23:44:51.665035 extend-filesystems[1843]: Found /dev/sda6
Jul 9 23:44:51.668456 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 9 23:44:51.678527 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 9 23:44:51.683815 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 9 23:44:51.684250 extend-filesystems[1843]: Found /dev/sda9
Jul 9 23:44:51.694970 extend-filesystems[1843]: Checking size of /dev/sda9
Jul 9 23:44:51.700670 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 9 23:44:51.709367 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 9 23:44:51.717576 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 9 23:44:51.720123 extend-filesystems[1843]: Old size kept for /dev/sda9
Jul 9 23:44:51.725630 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 9 23:44:51.728079 systemd[1]: Starting update-engine.service - Update Engine...
Jul 9 23:44:51.736390 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 9 23:44:51.747798 chronyd[1851]: Timezone right/UTC failed leap second check, ignoring
Jul 9 23:44:51.747993 chronyd[1851]: Loaded seccomp filter (level 2)
Jul 9 23:44:51.752783 systemd[1]: Started chronyd.service - NTP client/server.
Jul 9 23:44:51.757143 jq[1877]: true
Jul 9 23:44:51.761925 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 9 23:44:51.769900 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 9 23:44:51.770083 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 9 23:44:51.770304 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 9 23:44:51.770432 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 9 23:44:51.783781 systemd[1]: motdgen.service: Deactivated successfully.
Jul 9 23:44:51.786944 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 9 23:44:51.791742 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 9 23:44:51.810162 update_engine[1874]: I20250709 23:44:51.810081 1874 main.cc:92] Flatcar Update Engine starting
Jul 9 23:44:51.816949 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 9 23:44:51.818534 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 9 23:44:51.854443 (ntainerd)[1905]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 9 23:44:51.858691 jq[1904]: true
Jul 9 23:44:51.882055 systemd-logind[1865]: New seat seat0.
Jul 9 23:44:51.887072 systemd-logind[1865]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 9 23:44:51.887272 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 9 23:44:51.906949 tar[1899]: linux-arm64/helm
Jul 9 23:44:51.934433 sshd_keygen[1876]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 9 23:44:51.973506 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 9 23:44:51.978536 bash[1958]: Updated "/home/core/.ssh/authorized_keys"
Jul 9 23:44:51.980924 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 9 23:44:51.992462 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 9 23:44:51.999480 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 9 23:44:52.001135 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jul 9 23:44:52.022630 dbus-daemon[1837]: [system] SELinux support is enabled
Jul 9 23:44:52.023073 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 9 23:44:52.034277 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 9 23:44:52.034923 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 9 23:44:52.041165 update_engine[1874]: I20250709 23:44:52.040935 1874 update_check_scheduler.cc:74] Next update check in 10m38s
Jul 9 23:44:52.044824 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 9 23:44:52.044844 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 9 23:44:52.060308 dbus-daemon[1837]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 9 23:44:52.060912 systemd[1]: Started update-engine.service - Update Engine.
Jul 9 23:44:52.070840 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 9 23:44:52.084942 systemd[1]: issuegen.service: Deactivated successfully.
Jul 9 23:44:52.085157 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 9 23:44:52.096086 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 9 23:44:52.128153 coreos-metadata[1836]: Jul 09 23:44:52.127 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 9 23:44:52.132184 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jul 9 23:44:52.154029 coreos-metadata[1836]: Jul 09 23:44:52.153 INFO Fetch successful
Jul 9 23:44:52.154741 coreos-metadata[1836]: Jul 09 23:44:52.154 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jul 9 23:44:52.160085 coreos-metadata[1836]: Jul 09 23:44:52.160 INFO Fetch successful
Jul 9 23:44:52.160868 coreos-metadata[1836]: Jul 09 23:44:52.160 INFO Fetching http://168.63.129.16/machine/f29df3ca-3550-43a3-a74a-683d891c3510/36a954b8%2Df514%2D4ada%2Db919%2Da8087316f412.%5Fci%2D4344.1.1%2Dn%2Dbbe652f90c?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jul 9 23:44:52.162587 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 9 23:44:52.169299 coreos-metadata[1836]: Jul 09 23:44:52.169 INFO Fetch successful
Jul 9 23:44:52.169299 coreos-metadata[1836]: Jul 09 23:44:52.169 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jul 9 23:44:52.174620 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 9 23:44:52.184086 coreos-metadata[1836]: Jul 09 23:44:52.181 INFO Fetch successful
Jul 9 23:44:52.186479 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jul 9 23:44:52.196120 systemd[1]: Reached target getty.target - Login Prompts.
Jul 9 23:44:52.227082 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 9 23:44:52.236204 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 9 23:44:52.405635 locksmithd[1993]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 9 23:44:52.410783 tar[1899]: linux-arm64/LICENSE Jul 9 23:44:52.410783 tar[1899]: linux-arm64/README.md Jul 9 23:44:52.424576 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 9 23:44:52.445721 containerd[1905]: time="2025-07-09T23:44:52Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 9 23:44:52.448563 containerd[1905]: time="2025-07-09T23:44:52.448469124Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 9 23:44:52.454566 containerd[1905]: time="2025-07-09T23:44:52.454515556Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.616µs" Jul 9 23:44:52.454566 containerd[1905]: time="2025-07-09T23:44:52.454555692Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 9 23:44:52.454566 containerd[1905]: time="2025-07-09T23:44:52.454570724Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 9 23:44:52.454755 containerd[1905]: time="2025-07-09T23:44:52.454734908Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 9 23:44:52.454755 containerd[1905]: time="2025-07-09T23:44:52.454753220Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 9 23:44:52.454784 containerd[1905]: time="2025-07-09T23:44:52.454772772Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile 
type=io.containerd.snapshotter.v1 Jul 9 23:44:52.454830 containerd[1905]: time="2025-07-09T23:44:52.454816348Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 9 23:44:52.454844 containerd[1905]: time="2025-07-09T23:44:52.454829892Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 9 23:44:52.455126 containerd[1905]: time="2025-07-09T23:44:52.455100852Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 9 23:44:52.455145 containerd[1905]: time="2025-07-09T23:44:52.455124212Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 9 23:44:52.455145 containerd[1905]: time="2025-07-09T23:44:52.455141236Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 9 23:44:52.455170 containerd[1905]: time="2025-07-09T23:44:52.455147772Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 9 23:44:52.455245 containerd[1905]: time="2025-07-09T23:44:52.455232052Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 9 23:44:52.455420 containerd[1905]: time="2025-07-09T23:44:52.455403708Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 9 23:44:52.455441 containerd[1905]: time="2025-07-09T23:44:52.455429900Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 9 23:44:52.455441 containerd[1905]: time="2025-07-09T23:44:52.455438396Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 9 23:44:52.455474 containerd[1905]: time="2025-07-09T23:44:52.455470964Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 9 23:44:52.455679 containerd[1905]: time="2025-07-09T23:44:52.455658420Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 9 23:44:52.455748 containerd[1905]: time="2025-07-09T23:44:52.455731300Z" level=info msg="metadata content store policy set" policy=shared Jul 9 23:44:52.497181 containerd[1905]: time="2025-07-09T23:44:52.496238292Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 9 23:44:52.497181 containerd[1905]: time="2025-07-09T23:44:52.496345364Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 9 23:44:52.497181 containerd[1905]: time="2025-07-09T23:44:52.496361788Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 9 23:44:52.497181 containerd[1905]: time="2025-07-09T23:44:52.496376940Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 9 23:44:52.497181 containerd[1905]: time="2025-07-09T23:44:52.496387340Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 9 23:44:52.497181 containerd[1905]: time="2025-07-09T23:44:52.496394660Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 9 23:44:52.497181 containerd[1905]: time="2025-07-09T23:44:52.496402332Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 
9 23:44:52.497181 containerd[1905]: time="2025-07-09T23:44:52.496410780Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 9 23:44:52.497181 containerd[1905]: time="2025-07-09T23:44:52.496420100Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 9 23:44:52.497181 containerd[1905]: time="2025-07-09T23:44:52.496427116Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 9 23:44:52.497181 containerd[1905]: time="2025-07-09T23:44:52.496434036Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 9 23:44:52.497181 containerd[1905]: time="2025-07-09T23:44:52.496443796Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 9 23:44:52.497181 containerd[1905]: time="2025-07-09T23:44:52.496631692Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 9 23:44:52.497181 containerd[1905]: time="2025-07-09T23:44:52.496651468Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 9 23:44:52.497475 containerd[1905]: time="2025-07-09T23:44:52.496665404Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 9 23:44:52.497475 containerd[1905]: time="2025-07-09T23:44:52.496673708Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 9 23:44:52.497475 containerd[1905]: time="2025-07-09T23:44:52.496681660Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 9 23:44:52.497475 containerd[1905]: time="2025-07-09T23:44:52.496690724Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 9 23:44:52.497475 containerd[1905]: 
time="2025-07-09T23:44:52.496698804Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 9 23:44:52.497475 containerd[1905]: time="2025-07-09T23:44:52.496709316Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 9 23:44:52.497475 containerd[1905]: time="2025-07-09T23:44:52.496717756Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 9 23:44:52.497475 containerd[1905]: time="2025-07-09T23:44:52.496724460Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 9 23:44:52.497475 containerd[1905]: time="2025-07-09T23:44:52.496737924Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 9 23:44:52.497475 containerd[1905]: time="2025-07-09T23:44:52.496801084Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 9 23:44:52.497475 containerd[1905]: time="2025-07-09T23:44:52.496812532Z" level=info msg="Start snapshots syncer" Jul 9 23:44:52.497475 containerd[1905]: time="2025-07-09T23:44:52.496832572Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 9 23:44:52.497664 containerd[1905]: time="2025-07-09T23:44:52.497012628Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 9 23:44:52.497664 containerd[1905]: time="2025-07-09T23:44:52.497044788Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 9 23:44:52.497740 containerd[1905]: time="2025-07-09T23:44:52.497133884Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 9 23:44:52.497740 containerd[1905]: time="2025-07-09T23:44:52.497252020Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 9 23:44:52.497740 containerd[1905]: time="2025-07-09T23:44:52.497269484Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 9 23:44:52.497740 containerd[1905]: time="2025-07-09T23:44:52.497282484Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 9 23:44:52.497740 containerd[1905]: time="2025-07-09T23:44:52.497290252Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 9 23:44:52.497740 containerd[1905]: time="2025-07-09T23:44:52.497298236Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 9 23:44:52.497740 containerd[1905]: time="2025-07-09T23:44:52.497305156Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 9 23:44:52.497740 containerd[1905]: time="2025-07-09T23:44:52.497313076Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 9 23:44:52.497740 containerd[1905]: time="2025-07-09T23:44:52.497340556Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 9 23:44:52.497740 containerd[1905]: time="2025-07-09T23:44:52.497348636Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 9 23:44:52.497740 containerd[1905]: time="2025-07-09T23:44:52.497355940Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 9 23:44:52.497740 containerd[1905]: time="2025-07-09T23:44:52.497380316Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 9 23:44:52.497740 containerd[1905]: time="2025-07-09T23:44:52.497390884Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 9 23:44:52.497740 containerd[1905]: time="2025-07-09T23:44:52.497396844Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 9 23:44:52.497899 containerd[1905]: time="2025-07-09T23:44:52.497402260Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 9 23:44:52.497899 containerd[1905]: time="2025-07-09T23:44:52.497409332Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 9 23:44:52.497899 containerd[1905]: time="2025-07-09T23:44:52.497416516Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 9 23:44:52.497899 containerd[1905]: time="2025-07-09T23:44:52.497423012Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 9 23:44:52.497899 containerd[1905]: time="2025-07-09T23:44:52.497435628Z" level=info msg="runtime interface created" Jul 9 23:44:52.497899 containerd[1905]: time="2025-07-09T23:44:52.497438884Z" level=info msg="created NRI interface" Jul 9 23:44:52.497899 containerd[1905]: time="2025-07-09T23:44:52.497444708Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 9 23:44:52.497899 containerd[1905]: time="2025-07-09T23:44:52.497453196Z" level=info msg="Connect containerd service" Jul 9 23:44:52.497899 containerd[1905]: time="2025-07-09T23:44:52.497473916Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 9 23:44:52.498502 containerd[1905]: 
time="2025-07-09T23:44:52.498311884Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 9 23:44:52.529647 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:44:52.534881 (kubelet)[2036]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 23:44:52.795271 kubelet[2036]: E0709 23:44:52.795137 2036 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 23:44:52.797022 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 23:44:52.797138 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 23:44:52.797636 systemd[1]: kubelet.service: Consumed 551ms CPU time, 254.7M memory peak. Jul 9 23:44:53.081320 containerd[1905]: time="2025-07-09T23:44:53.081167724Z" level=info msg="Start subscribing containerd event" Jul 9 23:44:53.081673 containerd[1905]: time="2025-07-09T23:44:53.081244300Z" level=info msg="Start recovering state" Jul 9 23:44:53.081673 containerd[1905]: time="2025-07-09T23:44:53.081316100Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jul 9 23:44:53.081673 containerd[1905]: time="2025-07-09T23:44:53.081520196Z" level=info msg="Start event monitor" Jul 9 23:44:53.081673 containerd[1905]: time="2025-07-09T23:44:53.081536188Z" level=info msg="Start cni network conf syncer for default" Jul 9 23:44:53.081673 containerd[1905]: time="2025-07-09T23:44:53.081542532Z" level=info msg="Start streaming server" Jul 9 23:44:53.081673 containerd[1905]: time="2025-07-09T23:44:53.081550172Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 9 23:44:53.081673 containerd[1905]: time="2025-07-09T23:44:53.081555676Z" level=info msg="runtime interface starting up..." Jul 9 23:44:53.081673 containerd[1905]: time="2025-07-09T23:44:53.081560300Z" level=info msg="starting plugins..." Jul 9 23:44:53.081673 containerd[1905]: time="2025-07-09T23:44:53.081570564Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 9 23:44:53.081673 containerd[1905]: time="2025-07-09T23:44:53.081556340Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 9 23:44:53.081844 containerd[1905]: time="2025-07-09T23:44:53.081706100Z" level=info msg="containerd successfully booted in 0.636406s" Jul 9 23:44:53.081993 systemd[1]: Started containerd.service - containerd container runtime. Jul 9 23:44:53.088146 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 9 23:44:53.097579 systemd[1]: Startup finished in 1.671s (kernel) + 13.623s (initrd) + 11.189s (userspace) = 26.484s. Jul 9 23:44:53.395619 login[2007]: pam_lastlog(login:session): file /var/log/lastlog is locked/read, retrying Jul 9 23:44:53.395874 login[2006]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:44:53.417234 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 9 23:44:53.418185 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
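The containerd error earlier in this boot ("no network config found in /etc/cni/net.d: cni plugin not initialized") is expected on a node that has not yet joined a cluster: the CRI plugin looks for a CNI network configuration file in that directory, and nothing has installed one yet. For reference, a minimal configuration list that would satisfy the loader looks like the sketch below; the name, bridge, and subnet are purely illustrative, and in practice a CNI provider (flannel, Calico, etc.) drops its own file here after cluster join.

```json
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16"
      }
    }
  ]
}
```

Saved as, say, `/etc/cni/net.d/10-example.conflist`, this would clear the "cni config load failed" condition on the next sync of the CNI conf syncer that containerd starts later in the log.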
Jul 9 23:44:53.423945 systemd-logind[1865]: New session 2 of user core. Jul 9 23:44:53.436515 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 9 23:44:53.439260 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 9 23:44:53.464740 (systemd)[2059]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 9 23:44:53.468127 systemd-logind[1865]: New session c1 of user core. Jul 9 23:44:53.625048 systemd[2059]: Queued start job for default target default.target. Jul 9 23:44:53.635710 systemd[2059]: Created slice app.slice - User Application Slice. Jul 9 23:44:53.635735 systemd[2059]: Reached target paths.target - Paths. Jul 9 23:44:53.635851 systemd[2059]: Reached target timers.target - Timers. Jul 9 23:44:53.636932 systemd[2059]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 9 23:44:53.645405 systemd[2059]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 9 23:44:53.645468 systemd[2059]: Reached target sockets.target - Sockets. Jul 9 23:44:53.645603 systemd[2059]: Reached target basic.target - Basic System. Jul 9 23:44:53.645643 systemd[2059]: Reached target default.target - Main User Target. Jul 9 23:44:53.645669 systemd[2059]: Startup finished in 171ms. Jul 9 23:44:53.645738 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 9 23:44:53.654654 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 9 23:44:53.848975 waagent[2004]: 2025-07-09T23:44:53.848894Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jul 9 23:44:53.858382 waagent[2004]: 2025-07-09T23:44:53.854663Z INFO Daemon Daemon OS: flatcar 4344.1.1 Jul 9 23:44:53.858593 waagent[2004]: 2025-07-09T23:44:53.858551Z INFO Daemon Daemon Python: 3.11.12 Jul 9 23:44:53.862590 waagent[2004]: 2025-07-09T23:44:53.862548Z INFO Daemon Daemon Run daemon Jul 9 23:44:53.867465 waagent[2004]: 2025-07-09T23:44:53.867337Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4344.1.1' Jul 9 23:44:53.874502 waagent[2004]: 2025-07-09T23:44:53.874437Z INFO Daemon Daemon Using waagent for provisioning Jul 9 23:44:53.879099 waagent[2004]: 2025-07-09T23:44:53.879047Z INFO Daemon Daemon Activate resource disk Jul 9 23:44:53.884790 waagent[2004]: 2025-07-09T23:44:53.884722Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 9 23:44:53.894612 waagent[2004]: 2025-07-09T23:44:53.894555Z INFO Daemon Daemon Found device: None Jul 9 23:44:53.899390 waagent[2004]: 2025-07-09T23:44:53.899311Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 9 23:44:53.906775 waagent[2004]: 2025-07-09T23:44:53.906736Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 9 23:44:53.917393 waagent[2004]: 2025-07-09T23:44:53.917347Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 9 23:44:53.922634 waagent[2004]: 2025-07-09T23:44:53.922597Z INFO Daemon Daemon Running default provisioning handler Jul 9 23:44:53.932404 waagent[2004]: 2025-07-09T23:44:53.932346Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Jul 9 23:44:53.944012 waagent[2004]: 2025-07-09T23:44:53.943959Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 9 23:44:53.953274 waagent[2004]: 2025-07-09T23:44:53.953234Z INFO Daemon Daemon cloud-init is enabled: False Jul 9 23:44:53.957483 waagent[2004]: 2025-07-09T23:44:53.957451Z INFO Daemon Daemon Copying ovf-env.xml Jul 9 23:44:54.039180 waagent[2004]: 2025-07-09T23:44:54.038235Z INFO Daemon Daemon Successfully mounted dvd Jul 9 23:44:54.064537 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 9 23:44:54.067520 waagent[2004]: 2025-07-09T23:44:54.066795Z INFO Daemon Daemon Detect protocol endpoint Jul 9 23:44:54.071310 waagent[2004]: 2025-07-09T23:44:54.071264Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 9 23:44:54.075653 waagent[2004]: 2025-07-09T23:44:54.075609Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jul 9 23:44:54.080228 waagent[2004]: 2025-07-09T23:44:54.080196Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 9 23:44:54.084438 waagent[2004]: 2025-07-09T23:44:54.084397Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 9 23:44:54.088957 waagent[2004]: 2025-07-09T23:44:54.088921Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 9 23:44:54.140858 waagent[2004]: 2025-07-09T23:44:54.140802Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 9 23:44:54.145593 waagent[2004]: 2025-07-09T23:44:54.145569Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 9 23:44:54.149803 waagent[2004]: 2025-07-09T23:44:54.149730Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 9 23:44:54.251941 waagent[2004]: 2025-07-09T23:44:54.251851Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 9 23:44:54.256600 waagent[2004]: 2025-07-09T23:44:54.256555Z INFO Daemon Daemon Forcing an update of the goal state. 
Jul 9 23:44:54.264632 waagent[2004]: 2025-07-09T23:44:54.264589Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 9 23:44:54.284178 waagent[2004]: 2025-07-09T23:44:54.284143Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jul 9 23:44:54.288686 waagent[2004]: 2025-07-09T23:44:54.288651Z INFO Daemon Jul 9 23:44:54.291513 waagent[2004]: 2025-07-09T23:44:54.291475Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: cee1c5f1-b0dd-4278-921c-3a2e32052554 eTag: 9330579662438028510 source: Fabric] Jul 9 23:44:54.299826 waagent[2004]: 2025-07-09T23:44:54.299794Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jul 9 23:44:54.304524 waagent[2004]: 2025-07-09T23:44:54.304476Z INFO Daemon Jul 9 23:44:54.306945 waagent[2004]: 2025-07-09T23:44:54.306919Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 9 23:44:54.316050 waagent[2004]: 2025-07-09T23:44:54.316023Z INFO Daemon Daemon Downloading artifacts profile blob Jul 9 23:44:54.379652 waagent[2004]: 2025-07-09T23:44:54.379579Z INFO Daemon Downloaded certificate {'thumbprint': 'E5B05FAEAD7ED0B6E3500714905D173D5ACA1F35', 'hasPrivateKey': False} Jul 9 23:44:54.387334 waagent[2004]: 2025-07-09T23:44:54.387295Z INFO Daemon Downloaded certificate {'thumbprint': 'E10AD49D3CC3A54C08AD41525B2B19641B1ED419', 'hasPrivateKey': True} Jul 9 23:44:54.394651 waagent[2004]: 2025-07-09T23:44:54.394618Z INFO Daemon Fetch goal state completed Jul 9 23:44:54.402999 login[2007]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:44:54.407200 systemd-logind[1865]: New session 1 of user core. Jul 9 23:44:54.407852 waagent[2004]: 2025-07-09T23:44:54.407811Z INFO Daemon Daemon Starting provisioning Jul 9 23:44:54.411691 waagent[2004]: 2025-07-09T23:44:54.411653Z INFO Daemon Daemon Handle ovf-env.xml. 
Jul 9 23:44:54.415151 waagent[2004]: 2025-07-09T23:44:54.415126Z INFO Daemon Daemon Set hostname [ci-4344.1.1-n-bbe652f90c] Jul 9 23:44:54.420659 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 9 23:44:54.461674 waagent[2004]: 2025-07-09T23:44:54.461615Z INFO Daemon Daemon Publish hostname [ci-4344.1.1-n-bbe652f90c] Jul 9 23:44:54.466503 waagent[2004]: 2025-07-09T23:44:54.466455Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 9 23:44:54.470976 waagent[2004]: 2025-07-09T23:44:54.470939Z INFO Daemon Daemon Primary interface is [eth0] Jul 9 23:44:54.480832 systemd-networkd[1482]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 23:44:54.480839 systemd-networkd[1482]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 9 23:44:54.480872 systemd-networkd[1482]: eth0: DHCP lease lost Jul 9 23:44:54.481479 waagent[2004]: 2025-07-09T23:44:54.481422Z INFO Daemon Daemon Create user account if not exists Jul 9 23:44:54.487120 waagent[2004]: 2025-07-09T23:44:54.487076Z INFO Daemon Daemon User core already exists, skip useradd Jul 9 23:44:54.491387 waagent[2004]: 2025-07-09T23:44:54.491334Z INFO Daemon Daemon Configure sudoer Jul 9 23:44:54.499709 waagent[2004]: 2025-07-09T23:44:54.499643Z INFO Daemon Daemon Configure sshd Jul 9 23:44:54.504546 systemd-networkd[1482]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 9 23:44:54.508354 waagent[2004]: 2025-07-09T23:44:54.508294Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 9 23:44:54.524494 waagent[2004]: 2025-07-09T23:44:54.524436Z INFO Daemon Daemon Deploy ssh public key. 
Jul 9 23:44:55.596972 waagent[2004]: 2025-07-09T23:44:55.596929Z INFO Daemon Daemon Provisioning complete Jul 9 23:44:55.612720 waagent[2004]: 2025-07-09T23:44:55.612679Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 9 23:44:55.617799 waagent[2004]: 2025-07-09T23:44:55.617765Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jul 9 23:44:55.625868 waagent[2004]: 2025-07-09T23:44:55.625839Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jul 9 23:44:55.726312 waagent[2113]: 2025-07-09T23:44:55.725811Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jul 9 23:44:55.726312 waagent[2113]: 2025-07-09T23:44:55.725954Z INFO ExtHandler ExtHandler OS: flatcar 4344.1.1 Jul 9 23:44:55.726312 waagent[2113]: 2025-07-09T23:44:55.725993Z INFO ExtHandler ExtHandler Python: 3.11.12 Jul 9 23:44:55.726312 waagent[2113]: 2025-07-09T23:44:55.726030Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jul 9 23:44:55.746535 waagent[2113]: 2025-07-09T23:44:55.746434Z INFO ExtHandler ExtHandler Distro: flatcar-4344.1.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jul 9 23:44:55.746708 waagent[2113]: 2025-07-09T23:44:55.746678Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 9 23:44:55.746747 waagent[2113]: 2025-07-09T23:44:55.746731Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 9 23:44:55.753653 waagent[2113]: 2025-07-09T23:44:55.753596Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 9 23:44:55.759664 waagent[2113]: 2025-07-09T23:44:55.759631Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 9 23:44:55.760107 waagent[2113]: 2025-07-09T23:44:55.760075Z INFO ExtHandler Jul 9 23:44:55.760158 waagent[2113]: 2025-07-09T23:44:55.760141Z INFO 
ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 74d27df1-76b7-409e-983a-4f4a44ef9a42 eTag: 9330579662438028510 source: Fabric] Jul 9 23:44:55.760390 waagent[2113]: 2025-07-09T23:44:55.760364Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jul 9 23:44:55.760817 waagent[2113]: 2025-07-09T23:44:55.760786Z INFO ExtHandler Jul 9 23:44:55.760853 waagent[2113]: 2025-07-09T23:44:55.760838Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 9 23:44:55.764823 waagent[2113]: 2025-07-09T23:44:55.764797Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 9 23:44:55.830163 waagent[2113]: 2025-07-09T23:44:55.830078Z INFO ExtHandler Downloaded certificate {'thumbprint': 'E5B05FAEAD7ED0B6E3500714905D173D5ACA1F35', 'hasPrivateKey': False} Jul 9 23:44:55.830514 waagent[2113]: 2025-07-09T23:44:55.830471Z INFO ExtHandler Downloaded certificate {'thumbprint': 'E10AD49D3CC3A54C08AD41525B2B19641B1ED419', 'hasPrivateKey': True} Jul 9 23:44:55.830877 waagent[2113]: 2025-07-09T23:44:55.830845Z INFO ExtHandler Fetch goal state completed Jul 9 23:44:55.844779 waagent[2113]: 2025-07-09T23:44:55.844717Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Jul 9 23:44:55.848618 waagent[2113]: 2025-07-09T23:44:55.848483Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2113 Jul 9 23:44:55.848689 waagent[2113]: 2025-07-09T23:44:55.848657Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 9 23:44:55.848950 waagent[2113]: 2025-07-09T23:44:55.848920Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jul 9 23:44:55.850068 waagent[2113]: 2025-07-09T23:44:55.850030Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4344.1.1', '', 'Flatcar Container Linux by Kinvolk'] Jul 9 
23:44:55.850398 waagent[2113]: 2025-07-09T23:44:55.850368Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4344.1.1', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jul 9 23:44:55.850549 waagent[2113]: 2025-07-09T23:44:55.850527Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jul 9 23:44:55.850988 waagent[2113]: 2025-07-09T23:44:55.850959Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 9 23:44:55.913858 waagent[2113]: 2025-07-09T23:44:55.913820Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 9 23:44:55.914049 waagent[2113]: 2025-07-09T23:44:55.914020Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 9 23:44:55.918945 waagent[2113]: 2025-07-09T23:44:55.918593Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 9 23:44:55.923847 systemd[1]: Reload requested from client PID 2130 ('systemctl') (unit waagent.service)... Jul 9 23:44:55.924075 systemd[1]: Reloading... Jul 9 23:44:55.992596 zram_generator::config[2168]: No configuration found. Jul 9 23:44:56.066609 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 23:44:56.150428 systemd[1]: Reloading finished in 226 ms. 
Jul 9 23:44:56.173778 waagent[2113]: 2025-07-09T23:44:56.173710Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 9 23:44:56.173890 waagent[2113]: 2025-07-09T23:44:56.173855Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 9 23:44:57.724605 waagent[2113]: 2025-07-09T23:44:57.724526Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 9 23:44:57.724906 waagent[2113]: 2025-07-09T23:44:57.724858Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jul 9 23:44:57.725561 waagent[2113]: 2025-07-09T23:44:57.725487Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 9 23:44:57.725642 waagent[2113]: 2025-07-09T23:44:57.725608Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 9 23:44:57.725710 waagent[2113]: 2025-07-09T23:44:57.725687Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 9 23:44:57.725872 waagent[2113]: 2025-07-09T23:44:57.725845Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jul 9 23:44:57.726218 waagent[2113]: 2025-07-09T23:44:57.726182Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Jul 9 23:44:57.726338 waagent[2113]: 2025-07-09T23:44:57.726307Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 9 23:44:57.726382 waagent[2113]: 2025-07-09T23:44:57.726363Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 9 23:44:57.726473 waagent[2113]: 2025-07-09T23:44:57.726451Z INFO EnvHandler ExtHandler Configure routes Jul 9 23:44:57.726519 waagent[2113]: 2025-07-09T23:44:57.726508Z INFO EnvHandler ExtHandler Gateway:None Jul 9 23:44:57.726574 waagent[2113]: 2025-07-09T23:44:57.726541Z INFO EnvHandler ExtHandler Routes:None Jul 9 23:44:57.726618 waagent[2113]: 2025-07-09T23:44:57.726600Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 9 23:44:57.726618 waagent[2113]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 9 23:44:57.726618 waagent[2113]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jul 9 23:44:57.726618 waagent[2113]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 9 23:44:57.726618 waagent[2113]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 9 23:44:57.726618 waagent[2113]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 9 23:44:57.726618 waagent[2113]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 9 23:44:57.727019 waagent[2113]: 2025-07-09T23:44:57.726983Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 9 23:44:57.727218 waagent[2113]: 2025-07-09T23:44:57.727180Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 9 23:44:57.727787 waagent[2113]: 2025-07-09T23:44:57.727699Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 9 23:44:57.727787 waagent[2113]: 2025-07-09T23:44:57.727736Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jul 9 23:44:57.727856 waagent[2113]: 2025-07-09T23:44:57.727834Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 9 23:44:57.733668 waagent[2113]: 2025-07-09T23:44:57.733625Z INFO ExtHandler ExtHandler Jul 9 23:44:57.733834 waagent[2113]: 2025-07-09T23:44:57.733804Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 2e30d400-c1b5-464d-ae21-99c28aff2576 correlation d23be20d-b924-40e8-bd0e-714476eb9558 created: 2025-07-09T23:43:42.305679Z] Jul 9 23:44:57.734205 waagent[2113]: 2025-07-09T23:44:57.734169Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jul 9 23:44:57.734731 waagent[2113]: 2025-07-09T23:44:57.734696Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jul 9 23:44:57.760695 waagent[2113]: 2025-07-09T23:44:57.760648Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jul 9 23:44:57.760695 waagent[2113]: Try `iptables -h' or 'iptables --help' for more information.) 
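The firewall that EnvHandler reports just below (and whose packet counters the failed `iptables ... --zero` query above was trying to read) amounts to three rules for the wireserver address 168.63.129.16: permit DNS, permit traffic owned by root (the agent itself), and drop any other new connection. A hedged recreation as plain iptables commands, assuming the `security` table the agent queries above; the agent programs these itself and exact flags may differ by version:

```
# Allow DNS queries to the Azure wireserver.
iptables -w -t security -A OUTPUT -d 168.63.129.16 -p tcp --dport 53 -j ACCEPT
# Allow traffic owned by root (UID 0), i.e. the agent's own requests.
iptables -w -t security -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT
# Drop any other new or invalid connection to the wireserver.
iptables -w -t security -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP
```

This matches the ACCEPT/ACCEPT/DROP counters printed in the "Created firewall rules" listing that follows.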
Jul 9 23:44:57.761200 waagent[2113]: 2025-07-09T23:44:57.761165Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: EE23D69C-B29E-42D2-88F1-663F918DA9D8;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jul 9 23:44:57.769297 waagent[2113]: 2025-07-09T23:44:57.769248Z INFO MonitorHandler ExtHandler Network interfaces: Jul 9 23:44:57.769297 waagent[2113]: Executing ['ip', '-a', '-o', 'link']: Jul 9 23:44:57.769297 waagent[2113]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 9 23:44:57.769297 waagent[2113]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:76:f2:4c brd ff:ff:ff:ff:ff:ff Jul 9 23:44:57.769297 waagent[2113]: 3: enP17050s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:76:f2:4c brd ff:ff:ff:ff:ff:ff\ altname enP17050p0s2 Jul 9 23:44:57.769297 waagent[2113]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 9 23:44:57.769297 waagent[2113]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 9 23:44:57.769297 waagent[2113]: 2: eth0 inet 10.200.20.40/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 9 23:44:57.769297 waagent[2113]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 9 23:44:57.769297 waagent[2113]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jul 9 23:44:57.769297 waagent[2113]: 2: eth0 inet6 fe80::222:48ff:fe76:f24c/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 9 23:44:57.769297 waagent[2113]: 3: enP17050s1 inet6 fe80::222:48ff:fe76:f24c/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 9 23:44:57.797982 waagent[2113]: 2025-07-09T23:44:57.797926Z INFO EnvHandler 
ExtHandler Created firewall rules for the Azure Fabric: Jul 9 23:44:57.797982 waagent[2113]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 9 23:44:57.797982 waagent[2113]: pkts bytes target prot opt in out source destination Jul 9 23:44:57.797982 waagent[2113]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 9 23:44:57.797982 waagent[2113]: pkts bytes target prot opt in out source destination Jul 9 23:44:57.797982 waagent[2113]: Chain OUTPUT (policy ACCEPT 3 packets, 534 bytes) Jul 9 23:44:57.797982 waagent[2113]: pkts bytes target prot opt in out source destination Jul 9 23:44:57.797982 waagent[2113]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 9 23:44:57.797982 waagent[2113]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 9 23:44:57.797982 waagent[2113]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 9 23:44:57.801675 waagent[2113]: 2025-07-09T23:44:57.801564Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 9 23:44:57.801675 waagent[2113]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 9 23:44:57.801675 waagent[2113]: pkts bytes target prot opt in out source destination Jul 9 23:44:57.801675 waagent[2113]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 9 23:44:57.801675 waagent[2113]: pkts bytes target prot opt in out source destination Jul 9 23:44:57.801675 waagent[2113]: Chain OUTPUT (policy ACCEPT 3 packets, 534 bytes) Jul 9 23:44:57.801675 waagent[2113]: pkts bytes target prot opt in out source destination Jul 9 23:44:57.801675 waagent[2113]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 9 23:44:57.801675 waagent[2113]: 4 416 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 9 23:44:57.801675 waagent[2113]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 9 23:44:57.802148 waagent[2113]: 2025-07-09T23:44:57.802120Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 9 23:45:03.047771 systemd[1]: 
kubelet.service: Scheduled restart job, restart counter is at 1. Jul 9 23:45:03.049146 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:45:03.147756 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:45:03.153966 (kubelet)[2264]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 23:45:03.263624 kubelet[2264]: E0709 23:45:03.263549 2264 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 23:45:03.266060 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 23:45:03.266176 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 23:45:03.266669 systemd[1]: kubelet.service: Consumed 113ms CPU time, 105.7M memory peak. Jul 9 23:45:13.516790 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 9 23:45:13.518307 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:45:13.872300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 9 23:45:13.876771 (kubelet)[2278]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 23:45:13.902973 kubelet[2278]: E0709 23:45:13.902894 2278 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 23:45:13.905247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 23:45:13.905477 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 23:45:13.906119 systemd[1]: kubelet.service: Consumed 109ms CPU time, 107.4M memory peak. Jul 9 23:45:15.532366 chronyd[1851]: Selected source PHC0 Jul 9 23:45:24.152392 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 9 23:45:24.154336 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:45:24.509579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:45:24.515767 (kubelet)[2293]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 23:45:24.541298 kubelet[2293]: E0709 23:45:24.541234 2293 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 23:45:24.543476 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 23:45:24.543724 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
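The repeated kubelet failures above are the expected crash loop before `kubeadm` (or equivalent) writes `/var/lib/kubelet/config.yaml`; systemd reschedules the unit roughly every 10.5 s, consistent with a `Restart=`/`RestartSec=10` configuration (an assumption here; the unit file itself is not shown in the log). The cadence can be checked directly from the "Scheduled restart job" timestamps:

```python
from datetime import datetime

# "Scheduled restart job" timestamps taken from the log above
# (restart counters 1, 2, and 3).
restarts = ["23:45:03.047771", "23:45:13.516790", "23:45:24.152392"]
times = [datetime.strptime(t, "%H:%M:%S.%f") for t in restarts]
gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
print(gaps)  # roughly 10.5 s between scheduled restarts
```

The extra ~0.5 s over the nominal restart delay is the time each attempt spends starting, failing, and being torn down before the timer is re-armed.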
Jul 9 23:45:24.544312 systemd[1]: kubelet.service: Consumed 107ms CPU time, 107.2M memory peak. Jul 9 23:45:29.256108 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 9 23:45:29.257674 systemd[1]: Started sshd@0-10.200.20.40:22-10.200.16.10:50232.service - OpenSSH per-connection server daemon (10.200.16.10:50232). Jul 9 23:45:29.839670 sshd[2301]: Accepted publickey for core from 10.200.16.10 port 50232 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:45:29.840788 sshd-session[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:45:29.844901 systemd-logind[1865]: New session 3 of user core. Jul 9 23:45:29.851653 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 9 23:45:30.252185 systemd[1]: Started sshd@1-10.200.20.40:22-10.200.16.10:34262.service - OpenSSH per-connection server daemon (10.200.16.10:34262). Jul 9 23:45:30.733233 sshd[2306]: Accepted publickey for core from 10.200.16.10 port 34262 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:45:30.734404 sshd-session[2306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:45:30.738301 systemd-logind[1865]: New session 4 of user core. Jul 9 23:45:30.746767 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 9 23:45:31.083193 sshd[2308]: Connection closed by 10.200.16.10 port 34262 Jul 9 23:45:31.083951 sshd-session[2306]: pam_unix(sshd:session): session closed for user core Jul 9 23:45:31.087034 systemd-logind[1865]: Session 4 logged out. Waiting for processes to exit. Jul 9 23:45:31.087597 systemd[1]: sshd@1-10.200.20.40:22-10.200.16.10:34262.service: Deactivated successfully. Jul 9 23:45:31.089012 systemd[1]: session-4.scope: Deactivated successfully. Jul 9 23:45:31.090615 systemd-logind[1865]: Removed session 4. 
Jul 9 23:45:31.161833 systemd[1]: Started sshd@2-10.200.20.40:22-10.200.16.10:34268.service - OpenSSH per-connection server daemon (10.200.16.10:34268). Jul 9 23:45:31.621636 sshd[2314]: Accepted publickey for core from 10.200.16.10 port 34268 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:45:31.622813 sshd-session[2314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:45:31.626807 systemd-logind[1865]: New session 5 of user core. Jul 9 23:45:31.632637 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 9 23:45:31.955522 sshd[2316]: Connection closed by 10.200.16.10 port 34268 Jul 9 23:45:31.956087 sshd-session[2314]: pam_unix(sshd:session): session closed for user core Jul 9 23:45:31.959638 systemd[1]: sshd@2-10.200.20.40:22-10.200.16.10:34268.service: Deactivated successfully. Jul 9 23:45:31.961042 systemd[1]: session-5.scope: Deactivated successfully. Jul 9 23:45:31.962990 systemd-logind[1865]: Session 5 logged out. Waiting for processes to exit. Jul 9 23:45:31.964062 systemd-logind[1865]: Removed session 5. Jul 9 23:45:32.044332 systemd[1]: Started sshd@3-10.200.20.40:22-10.200.16.10:34282.service - OpenSSH per-connection server daemon (10.200.16.10:34282). Jul 9 23:45:32.520156 sshd[2322]: Accepted publickey for core from 10.200.16.10 port 34282 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:45:32.521282 sshd-session[2322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:45:32.525272 systemd-logind[1865]: New session 6 of user core. Jul 9 23:45:32.529597 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 9 23:45:32.859065 sshd[2324]: Connection closed by 10.200.16.10 port 34282 Jul 9 23:45:32.859554 sshd-session[2322]: pam_unix(sshd:session): session closed for user core Jul 9 23:45:32.862757 systemd[1]: sshd@3-10.200.20.40:22-10.200.16.10:34282.service: Deactivated successfully. 
Jul 9 23:45:32.864173 systemd[1]: session-6.scope: Deactivated successfully. Jul 9 23:45:32.864781 systemd-logind[1865]: Session 6 logged out. Waiting for processes to exit. Jul 9 23:45:32.866121 systemd-logind[1865]: Removed session 6. Jul 9 23:45:32.944152 systemd[1]: Started sshd@4-10.200.20.40:22-10.200.16.10:34288.service - OpenSSH per-connection server daemon (10.200.16.10:34288). Jul 9 23:45:33.402184 sshd[2330]: Accepted publickey for core from 10.200.16.10 port 34288 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:45:33.403285 sshd-session[2330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:45:33.406873 systemd-logind[1865]: New session 7 of user core. Jul 9 23:45:33.414738 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 9 23:45:33.795465 sudo[2333]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 9 23:45:33.795706 sudo[2333]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 23:45:33.820675 sudo[2333]: pam_unix(sudo:session): session closed for user root Jul 9 23:45:33.896532 sshd[2332]: Connection closed by 10.200.16.10 port 34288 Jul 9 23:45:33.897215 sshd-session[2330]: pam_unix(sshd:session): session closed for user core Jul 9 23:45:33.900116 systemd[1]: session-7.scope: Deactivated successfully. Jul 9 23:45:33.901021 systemd[1]: sshd@4-10.200.20.40:22-10.200.16.10:34288.service: Deactivated successfully. Jul 9 23:45:33.902370 systemd-logind[1865]: Session 7 logged out. Waiting for processes to exit. Jul 9 23:45:33.904126 systemd-logind[1865]: Removed session 7. Jul 9 23:45:33.981420 systemd[1]: Started sshd@5-10.200.20.40:22-10.200.16.10:34300.service - OpenSSH per-connection server daemon (10.200.16.10:34300). 
Jul 9 23:45:34.468252 sshd[2339]: Accepted publickey for core from 10.200.16.10 port 34300 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:45:34.469415 sshd-session[2339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:45:34.473353 systemd-logind[1865]: New session 8 of user core. Jul 9 23:45:34.479690 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 9 23:45:34.652240 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 9 23:45:34.653649 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:45:34.735812 sudo[2346]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 9 23:45:34.736018 sudo[2346]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 23:45:34.804012 sudo[2346]: pam_unix(sudo:session): session closed for user root Jul 9 23:45:34.808265 sudo[2345]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 9 23:45:34.808863 sudo[2345]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 23:45:34.818761 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 9 23:45:34.860195 augenrules[2368]: No rules Jul 9 23:45:34.862848 systemd[1]: audit-rules.service: Deactivated successfully. Jul 9 23:45:34.863040 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 9 23:45:34.863934 sudo[2345]: pam_unix(sudo:session): session closed for user root Jul 9 23:45:34.947178 sshd[2341]: Connection closed by 10.200.16.10 port 34300 Jul 9 23:45:34.947726 sshd-session[2339]: pam_unix(sshd:session): session closed for user core Jul 9 23:45:34.950334 systemd[1]: sshd@5-10.200.20.40:22-10.200.16.10:34300.service: Deactivated successfully. Jul 9 23:45:34.951782 systemd[1]: session-8.scope: Deactivated successfully. 
Jul 9 23:45:34.953667 systemd-logind[1865]: Session 8 logged out. Waiting for processes to exit. Jul 9 23:45:34.954829 systemd-logind[1865]: Removed session 8. Jul 9 23:45:34.980463 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:45:34.987790 (kubelet)[2381]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 23:45:35.014996 kubelet[2381]: E0709 23:45:35.014943 2381 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 23:45:35.017165 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 23:45:35.017366 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 23:45:35.017965 systemd[1]: kubelet.service: Consumed 108ms CPU time, 105.4M memory peak. Jul 9 23:45:35.036149 systemd[1]: Started sshd@6-10.200.20.40:22-10.200.16.10:34312.service - OpenSSH per-connection server daemon (10.200.16.10:34312). Jul 9 23:45:35.531655 sshd[2389]: Accepted publickey for core from 10.200.16.10 port 34312 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:45:35.532760 sshd-session[2389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:45:35.536398 systemd-logind[1865]: New session 9 of user core. Jul 9 23:45:35.546646 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 9 23:45:35.807108 sudo[2392]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 9 23:45:35.807314 sudo[2392]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 23:45:35.813516 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB Jul 9 23:45:36.776733 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 9 23:45:36.786778 (dockerd)[2409]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 9 23:45:37.454431 dockerd[2409]: time="2025-07-09T23:45:37.452778341Z" level=info msg="Starting up" Jul 9 23:45:37.456187 dockerd[2409]: time="2025-07-09T23:45:37.456154581Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 9 23:45:37.498083 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3196651933-merged.mount: Deactivated successfully. Jul 9 23:45:37.581347 dockerd[2409]: time="2025-07-09T23:45:37.581158420Z" level=info msg="Loading containers: start." Jul 9 23:45:37.607518 kernel: Initializing XFRM netlink socket Jul 9 23:45:37.613917 update_engine[1874]: I20250709 23:45:37.613514 1874 update_attempter.cc:509] Updating boot flags... Jul 9 23:45:37.911661 systemd-networkd[1482]: docker0: Link UP Jul 9 23:45:37.935091 dockerd[2409]: time="2025-07-09T23:45:37.935036179Z" level=info msg="Loading containers: done." 
Jul 9 23:45:37.962189 dockerd[2409]: time="2025-07-09T23:45:37.962130871Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 9 23:45:37.962360 dockerd[2409]: time="2025-07-09T23:45:37.962235315Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 9 23:45:37.962360 dockerd[2409]: time="2025-07-09T23:45:37.962352798Z" level=info msg="Initializing buildkit" Jul 9 23:45:38.024184 dockerd[2409]: time="2025-07-09T23:45:38.024118530Z" level=info msg="Completed buildkit initialization" Jul 9 23:45:38.029030 dockerd[2409]: time="2025-07-09T23:45:38.028987323Z" level=info msg="Daemon has completed initialization" Jul 9 23:45:38.029030 dockerd[2409]: time="2025-07-09T23:45:38.029064526Z" level=info msg="API listen on /run/docker.sock" Jul 9 23:45:38.029255 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 9 23:45:38.495613 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4201179399-merged.mount: Deactivated successfully. Jul 9 23:45:38.581648 containerd[1905]: time="2025-07-09T23:45:38.581551400Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 9 23:45:39.673227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount944822074.mount: Deactivated successfully. 
Jul 9 23:45:41.137564 containerd[1905]: time="2025-07-09T23:45:41.136841718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:45:41.141565 containerd[1905]: time="2025-07-09T23:45:41.141513700Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651793" Jul 9 23:45:41.151362 containerd[1905]: time="2025-07-09T23:45:41.151295466Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:45:41.156314 containerd[1905]: time="2025-07-09T23:45:41.156202290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:45:41.157040 containerd[1905]: time="2025-07-09T23:45:41.156814795Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 2.575225266s" Jul 9 23:45:41.157040 containerd[1905]: time="2025-07-09T23:45:41.156859069Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 9 23:45:41.158857 containerd[1905]: time="2025-07-09T23:45:41.158809772Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 9 23:45:42.753539 containerd[1905]: time="2025-07-09T23:45:42.752983663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:45:42.758270 containerd[1905]: time="2025-07-09T23:45:42.758229405Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459677" Jul 9 23:45:42.763269 containerd[1905]: time="2025-07-09T23:45:42.763218840Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:45:42.770772 containerd[1905]: time="2025-07-09T23:45:42.770698928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:45:42.771276 containerd[1905]: time="2025-07-09T23:45:42.771154867Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.612090861s" Jul 9 23:45:42.771276 containerd[1905]: time="2025-07-09T23:45:42.771181916Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 9 23:45:42.771603 containerd[1905]: time="2025-07-09T23:45:42.771582076Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 9 23:45:44.240698 containerd[1905]: time="2025-07-09T23:45:44.240642156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:45:44.244241 containerd[1905]: time="2025-07-09T23:45:44.244083256Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125066" Jul 9 23:45:44.247785 containerd[1905]: time="2025-07-09T23:45:44.247760941Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:45:44.254400 containerd[1905]: time="2025-07-09T23:45:44.254370130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:45:44.255196 containerd[1905]: time="2025-07-09T23:45:44.254844334Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.483237656s" Jul 9 23:45:44.255196 containerd[1905]: time="2025-07-09T23:45:44.254873439Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 9 23:45:44.255560 containerd[1905]: time="2025-07-09T23:45:44.255540482Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 9 23:45:45.152119 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jul 9 23:45:45.153384 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:45:45.547101 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 9 23:45:45.556756 (kubelet)[2740]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 23:45:45.581119 kubelet[2740]: E0709 23:45:45.581050 2740 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 23:45:45.583293 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 23:45:45.583567 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 23:45:45.585575 systemd[1]: kubelet.service: Consumed 105ms CPU time, 106.9M memory peak. Jul 9 23:45:46.069589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1285681858.mount: Deactivated successfully. Jul 9 23:45:46.360547 containerd[1905]: time="2025-07-09T23:45:46.359883526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:45:46.365407 containerd[1905]: time="2025-07-09T23:45:46.365376509Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915957" Jul 9 23:45:46.368905 containerd[1905]: time="2025-07-09T23:45:46.368881330Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:45:46.374985 containerd[1905]: time="2025-07-09T23:45:46.374958924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:45:46.375488 containerd[1905]: time="2025-07-09T23:45:46.375273070Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 2.119707683s" Jul 9 23:45:46.375488 containerd[1905]: time="2025-07-09T23:45:46.375307471Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 9 23:45:46.375776 containerd[1905]: time="2025-07-09T23:45:46.375753294Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 9 23:45:47.082738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2205832105.mount: Deactivated successfully. Jul 9 23:45:48.778472 containerd[1905]: time="2025-07-09T23:45:48.778415239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:45:48.781987 containerd[1905]: time="2025-07-09T23:45:48.781949690Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jul 9 23:45:48.787281 containerd[1905]: time="2025-07-09T23:45:48.787234778Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:45:48.793904 containerd[1905]: time="2025-07-09T23:45:48.793836653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:45:48.794507 containerd[1905]: time="2025-07-09T23:45:48.794361306Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id 
\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.418551106s" Jul 9 23:45:48.794507 containerd[1905]: time="2025-07-09T23:45:48.794390011Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 9 23:45:48.794949 containerd[1905]: time="2025-07-09T23:45:48.794925200Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 9 23:45:49.443018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount736064412.mount: Deactivated successfully. Jul 9 23:45:49.491449 containerd[1905]: time="2025-07-09T23:45:49.491397344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 23:45:49.496583 containerd[1905]: time="2025-07-09T23:45:49.496541218Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jul 9 23:45:49.509678 containerd[1905]: time="2025-07-09T23:45:49.509641029Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 23:45:49.534512 containerd[1905]: time="2025-07-09T23:45:49.534455181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 23:45:49.534917 containerd[1905]: time="2025-07-09T23:45:49.534794178Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 739.843265ms" Jul 9 23:45:49.534917 containerd[1905]: time="2025-07-09T23:45:49.534821115Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 9 23:45:49.535234 containerd[1905]: time="2025-07-09T23:45:49.535212842Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 9 23:45:50.465392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2793549608.mount: Deactivated successfully. Jul 9 23:45:53.129714 containerd[1905]: time="2025-07-09T23:45:53.129656167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:45:53.135286 containerd[1905]: time="2025-07-09T23:45:53.135244411Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465" Jul 9 23:45:53.140429 containerd[1905]: time="2025-07-09T23:45:53.140372444Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:45:53.145029 containerd[1905]: time="2025-07-09T23:45:53.144981001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:45:53.145873 containerd[1905]: time="2025-07-09T23:45:53.145741711Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag 
\"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.610502612s" Jul 9 23:45:53.145873 containerd[1905]: time="2025-07-09T23:45:53.145770856Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 9 23:45:55.082288 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:45:55.082399 systemd[1]: kubelet.service: Consumed 105ms CPU time, 106.9M memory peak. Jul 9 23:45:55.086925 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:45:55.104745 systemd[1]: Reload requested from client PID 2891 ('systemctl') (unit session-9.scope)... Jul 9 23:45:55.104757 systemd[1]: Reloading... Jul 9 23:45:55.205518 zram_generator::config[2937]: No configuration found. Jul 9 23:45:55.273949 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 23:45:55.358592 systemd[1]: Reloading finished in 253 ms. Jul 9 23:45:55.405991 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 9 23:45:55.406193 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 9 23:45:55.406554 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:45:55.406690 systemd[1]: kubelet.service: Consumed 75ms CPU time, 95.2M memory peak. Jul 9 23:45:55.409233 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:45:55.640776 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 9 23:45:55.643570 (kubelet)[3005]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 9 23:45:55.669555 kubelet[3005]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 23:45:55.669555 kubelet[3005]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 9 23:45:55.669555 kubelet[3005]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 23:45:55.669555 kubelet[3005]: I0709 23:45:55.669355 3005 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 9 23:45:55.932976 kubelet[3005]: I0709 23:45:55.932934 3005 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 9 23:45:55.932976 kubelet[3005]: I0709 23:45:55.932968 3005 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 9 23:45:55.933182 kubelet[3005]: I0709 23:45:55.933163 3005 server.go:934] "Client rotation is on, will bootstrap in background" Jul 9 23:45:55.944901 kubelet[3005]: E0709 23:45:55.944843 3005 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:45:55.945612 kubelet[3005]: I0709 
23:45:55.945513 3005 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 9 23:45:55.953535 kubelet[3005]: I0709 23:45:55.953503 3005 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 9 23:45:55.956792 kubelet[3005]: I0709 23:45:55.956773 3005 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 9 23:45:55.957208 kubelet[3005]: I0709 23:45:55.957188 3005 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 9 23:45:55.957333 kubelet[3005]: I0709 23:45:55.957309 3005 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 9 23:45:55.957461 kubelet[3005]: I0709 23:45:55.957331 3005 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-n-bbe652f90c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan
","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 9 23:45:55.957550 kubelet[3005]: I0709 23:45:55.957468 3005 topology_manager.go:138] "Creating topology manager with none policy" Jul 9 23:45:55.957550 kubelet[3005]: I0709 23:45:55.957475 3005 container_manager_linux.go:300] "Creating device plugin manager" Jul 9 23:45:55.957611 kubelet[3005]: I0709 23:45:55.957596 3005 state_mem.go:36] "Initialized new in-memory state store" Jul 9 23:45:55.959248 kubelet[3005]: I0709 23:45:55.959056 3005 kubelet.go:408] "Attempting to sync node with API server" Jul 9 23:45:55.959248 kubelet[3005]: I0709 23:45:55.959079 3005 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 9 23:45:55.959248 kubelet[3005]: I0709 23:45:55.959098 3005 kubelet.go:314] "Adding apiserver pod source" Jul 9 23:45:55.959248 kubelet[3005]: I0709 23:45:55.959110 3005 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 9 23:45:55.961528 kubelet[3005]: W0709 23:45:55.961107 3005 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-n-bbe652f90c&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Jul 9 23:45:55.961528 kubelet[3005]: E0709 23:45:55.961162 3005 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-n-bbe652f90c&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:45:55.961528 kubelet[3005]: W0709 23:45:55.961440 3005 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Jul 9 23:45:55.961528 kubelet[3005]: E0709 23:45:55.961470 3005 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:45:55.961777 kubelet[3005]: I0709 23:45:55.961762 3005 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 9 23:45:55.962172 kubelet[3005]: I0709 23:45:55.962149 3005 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 9 23:45:55.962220 kubelet[3005]: W0709 23:45:55.962193 3005 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 9 23:45:55.963292 kubelet[3005]: I0709 23:45:55.963252 3005 server.go:1274] "Started kubelet" Jul 9 23:45:55.966222 kubelet[3005]: E0709 23:45:55.965390 3005 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.40:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.40:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.1.1-n-bbe652f90c.1850b9fe14760685 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.1-n-bbe652f90c,UID:ci-4344.1.1-n-bbe652f90c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.1-n-bbe652f90c,},FirstTimestamp:2025-07-09 23:45:55.963233925 +0000 UTC m=+0.317264042,LastTimestamp:2025-07-09 23:45:55.963233925 +0000 UTC m=+0.317264042,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.1-n-bbe652f90c,}" Jul 9 23:45:55.966404 kubelet[3005]: I0709 23:45:55.966378 3005 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 9 23:45:55.966530 kubelet[3005]: I0709 23:45:55.966519 3005 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 9 23:45:55.967115 kubelet[3005]: I0709 23:45:55.967089 3005 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 9 23:45:55.967857 kubelet[3005]: I0709 23:45:55.967839 3005 server.go:449] "Adding debug handlers to kubelet server" Jul 9 23:45:55.968634 kubelet[3005]: I0709 23:45:55.968596 3005 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 9 23:45:55.968886 kubelet[3005]: I0709 23:45:55.968869 3005 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 9 
23:45:55.970166 kubelet[3005]: I0709 23:45:55.970149 3005 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 9 23:45:55.971135 kubelet[3005]: I0709 23:45:55.971118 3005 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 9 23:45:55.971260 kubelet[3005]: I0709 23:45:55.971251 3005 reconciler.go:26] "Reconciler: start to sync state" Jul 9 23:45:55.971655 kubelet[3005]: E0709 23:45:55.971629 3005 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-bbe652f90c\" not found" Jul 9 23:45:55.971655 kubelet[3005]: W0709 23:45:55.971616 3005 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Jul 9 23:45:55.971741 kubelet[3005]: E0709 23:45:55.971669 3005 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:45:55.972489 kubelet[3005]: E0709 23:45:55.972461 3005 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-n-bbe652f90c?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="200ms" Jul 9 23:45:55.972988 kubelet[3005]: E0709 23:45:55.972973 3005 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 9 23:45:55.973289 kubelet[3005]: I0709 23:45:55.973276 3005 factory.go:221] Registration of the systemd container factory successfully Jul 9 23:45:55.973429 kubelet[3005]: I0709 23:45:55.973414 3005 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 9 23:45:55.974362 kubelet[3005]: I0709 23:45:55.974346 3005 factory.go:221] Registration of the containerd container factory successfully Jul 9 23:45:55.983151 kubelet[3005]: I0709 23:45:55.983103 3005 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 9 23:45:55.984828 kubelet[3005]: I0709 23:45:55.984636 3005 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 9 23:45:55.984828 kubelet[3005]: I0709 23:45:55.984660 3005 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 9 23:45:55.984828 kubelet[3005]: I0709 23:45:55.984677 3005 kubelet.go:2321] "Starting kubelet main sync loop" Jul 9 23:45:55.984828 kubelet[3005]: E0709 23:45:55.984720 3005 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 9 23:45:55.990621 kubelet[3005]: W0709 23:45:55.990440 3005 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Jul 9 23:45:55.990621 kubelet[3005]: E0709 23:45:55.990555 3005 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:45:55.991544 kubelet[3005]: I0709 23:45:55.991243 3005 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 9 23:45:55.991544 kubelet[3005]: I0709 23:45:55.991279 3005 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 9 23:45:55.991544 kubelet[3005]: I0709 23:45:55.991297 3005 state_mem.go:36] "Initialized new in-memory state store" Jul 9 23:45:56.071902 kubelet[3005]: E0709 23:45:56.071859 3005 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-bbe652f90c\" not found" Jul 9 23:45:56.085135 kubelet[3005]: E0709 23:45:56.085108 3005 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 9 23:45:56.167872 kubelet[3005]: I0709 23:45:56.167835 3005 policy_none.go:49] "None policy: Start" Jul 9 23:45:56.168796 kubelet[3005]: I0709 23:45:56.168679 3005 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 9 23:45:56.168796 kubelet[3005]: I0709 23:45:56.168763 3005 state_mem.go:35] "Initializing new in-memory state store" Jul 9 23:45:56.173209 kubelet[3005]: E0709 23:45:56.173171 3005 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-n-bbe652f90c?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="400ms" Jul 9 23:45:56.173265 kubelet[3005]: E0709 23:45:56.173219 3005 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-bbe652f90c\" not found" Jul 9 23:45:56.179435 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jul 9 23:45:56.192123 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 9 23:45:56.195923 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 9 23:45:56.206262 kubelet[3005]: I0709 23:45:56.206224 3005 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 9 23:45:56.206448 kubelet[3005]: I0709 23:45:56.206430 3005 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 23:45:56.206481 kubelet[3005]: I0709 23:45:56.206445 3005 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 23:45:56.207536 kubelet[3005]: I0709 23:45:56.207317 3005 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 9 23:45:56.209766 kubelet[3005]: E0709 23:45:56.209711 3005 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.1.1-n-bbe652f90c\" not found" Jul 9 23:45:56.295556 systemd[1]: Created slice kubepods-burstable-pod055ecf32cf27b21b12ead501ee504ee3.slice - libcontainer container kubepods-burstable-pod055ecf32cf27b21b12ead501ee504ee3.slice. Jul 9 23:45:56.309075 kubelet[3005]: I0709 23:45:56.309009 3005 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-n-bbe652f90c" Jul 9 23:45:56.309450 kubelet[3005]: E0709 23:45:56.309419 3005 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4344.1.1-n-bbe652f90c" Jul 9 23:45:56.314367 systemd[1]: Created slice kubepods-burstable-podeb2112dc2e45d1d2e8c4346bbdade7fc.slice - libcontainer container kubepods-burstable-podeb2112dc2e45d1d2e8c4346bbdade7fc.slice. 
Jul 9 23:45:56.324229 systemd[1]: Created slice kubepods-burstable-pod374f12f67918e2b884fa54198315a751.slice - libcontainer container kubepods-burstable-pod374f12f67918e2b884fa54198315a751.slice. Jul 9 23:45:56.373922 kubelet[3005]: I0709 23:45:56.373883 3005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/eb2112dc2e45d1d2e8c4346bbdade7fc-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-n-bbe652f90c\" (UID: \"eb2112dc2e45d1d2e8c4346bbdade7fc\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-bbe652f90c" Jul 9 23:45:56.373922 kubelet[3005]: I0709 23:45:56.373922 3005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb2112dc2e45d1d2e8c4346bbdade7fc-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-n-bbe652f90c\" (UID: \"eb2112dc2e45d1d2e8c4346bbdade7fc\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-bbe652f90c" Jul 9 23:45:56.374074 kubelet[3005]: I0709 23:45:56.373936 3005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eb2112dc2e45d1d2e8c4346bbdade7fc-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-n-bbe652f90c\" (UID: \"eb2112dc2e45d1d2e8c4346bbdade7fc\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-bbe652f90c" Jul 9 23:45:56.374074 kubelet[3005]: I0709 23:45:56.373951 3005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/055ecf32cf27b21b12ead501ee504ee3-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-n-bbe652f90c\" (UID: \"055ecf32cf27b21b12ead501ee504ee3\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-bbe652f90c" Jul 9 23:45:56.374074 kubelet[3005]: I0709 23:45:56.373964 3005 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/055ecf32cf27b21b12ead501ee504ee3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-n-bbe652f90c\" (UID: \"055ecf32cf27b21b12ead501ee504ee3\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-bbe652f90c" Jul 9 23:45:56.374074 kubelet[3005]: I0709 23:45:56.373983 3005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eb2112dc2e45d1d2e8c4346bbdade7fc-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-bbe652f90c\" (UID: \"eb2112dc2e45d1d2e8c4346bbdade7fc\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-bbe652f90c" Jul 9 23:45:56.374074 kubelet[3005]: I0709 23:45:56.373993 3005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eb2112dc2e45d1d2e8c4346bbdade7fc-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-bbe652f90c\" (UID: \"eb2112dc2e45d1d2e8c4346bbdade7fc\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-bbe652f90c" Jul 9 23:45:56.374152 kubelet[3005]: I0709 23:45:56.374003 3005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/374f12f67918e2b884fa54198315a751-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-n-bbe652f90c\" (UID: \"374f12f67918e2b884fa54198315a751\") " pod="kube-system/kube-scheduler-ci-4344.1.1-n-bbe652f90c" Jul 9 23:45:56.374152 kubelet[3005]: I0709 23:45:56.374013 3005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/055ecf32cf27b21b12ead501ee504ee3-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-n-bbe652f90c\" (UID: \"055ecf32cf27b21b12ead501ee504ee3\") " 
pod="kube-system/kube-apiserver-ci-4344.1.1-n-bbe652f90c" Jul 9 23:45:56.511067 kubelet[3005]: I0709 23:45:56.510975 3005 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-n-bbe652f90c" Jul 9 23:45:56.511583 kubelet[3005]: E0709 23:45:56.511416 3005 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4344.1.1-n-bbe652f90c" Jul 9 23:45:56.573924 kubelet[3005]: E0709 23:45:56.573883 3005 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-n-bbe652f90c?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="800ms" Jul 9 23:45:56.612814 containerd[1905]: time="2025-07-09T23:45:56.612769603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-n-bbe652f90c,Uid:055ecf32cf27b21b12ead501ee504ee3,Namespace:kube-system,Attempt:0,}" Jul 9 23:45:56.617534 containerd[1905]: time="2025-07-09T23:45:56.617364553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-n-bbe652f90c,Uid:eb2112dc2e45d1d2e8c4346bbdade7fc,Namespace:kube-system,Attempt:0,}" Jul 9 23:45:56.627215 containerd[1905]: time="2025-07-09T23:45:56.627176660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-n-bbe652f90c,Uid:374f12f67918e2b884fa54198315a751,Namespace:kube-system,Attempt:0,}" Jul 9 23:45:56.791959 containerd[1905]: time="2025-07-09T23:45:56.791660390Z" level=info msg="connecting to shim f33d58f586d58c6ac4a3e6521ac526be7bf276d9b8524631a326ce38a2b6f3b4" address="unix:///run/containerd/s/f69e622d546ad77e02423ca2a7ce044a331232de807e490f5d3232b050131971" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:45:56.818680 systemd[1]: Started 
cri-containerd-f33d58f586d58c6ac4a3e6521ac526be7bf276d9b8524631a326ce38a2b6f3b4.scope - libcontainer container f33d58f586d58c6ac4a3e6521ac526be7bf276d9b8524631a326ce38a2b6f3b4. Jul 9 23:45:56.825364 containerd[1905]: time="2025-07-09T23:45:56.825327863Z" level=info msg="connecting to shim 8e2d736b1c7a8068291f626c5694d68772bdb24c1cd2cfafca4e9653874e05ba" address="unix:///run/containerd/s/250a435293cfe5c4a4b46b74b920bb1f2c7ffae79a3d19125908a8be1d5593d8" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:45:56.828013 containerd[1905]: time="2025-07-09T23:45:56.827970330Z" level=info msg="connecting to shim 7a7e3fc11030f04fc811428d87ec9fdbb47291f90f72715a4f4cd06348a957e5" address="unix:///run/containerd/s/fd8827d90d73c453fbdb57e31f9421c321eb554a16a08cce977bc16b31f0b687" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:45:56.845844 kubelet[3005]: W0709 23:45:56.845394 3005 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-n-bbe652f90c&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Jul 9 23:45:56.845844 kubelet[3005]: E0709 23:45:56.845556 3005 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-n-bbe652f90c&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:45:56.858656 systemd[1]: Started cri-containerd-7a7e3fc11030f04fc811428d87ec9fdbb47291f90f72715a4f4cd06348a957e5.scope - libcontainer container 7a7e3fc11030f04fc811428d87ec9fdbb47291f90f72715a4f4cd06348a957e5. 
Jul 9 23:45:56.859727 systemd[1]: Started cri-containerd-8e2d736b1c7a8068291f626c5694d68772bdb24c1cd2cfafca4e9653874e05ba.scope - libcontainer container 8e2d736b1c7a8068291f626c5694d68772bdb24c1cd2cfafca4e9653874e05ba. Jul 9 23:45:56.879960 containerd[1905]: time="2025-07-09T23:45:56.879891385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-n-bbe652f90c,Uid:055ecf32cf27b21b12ead501ee504ee3,Namespace:kube-system,Attempt:0,} returns sandbox id \"f33d58f586d58c6ac4a3e6521ac526be7bf276d9b8524631a326ce38a2b6f3b4\"" Jul 9 23:45:56.885581 containerd[1905]: time="2025-07-09T23:45:56.885539708Z" level=info msg="CreateContainer within sandbox \"f33d58f586d58c6ac4a3e6521ac526be7bf276d9b8524631a326ce38a2b6f3b4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 9 23:45:56.911340 containerd[1905]: time="2025-07-09T23:45:56.911299469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-n-bbe652f90c,Uid:eb2112dc2e45d1d2e8c4346bbdade7fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a7e3fc11030f04fc811428d87ec9fdbb47291f90f72715a4f4cd06348a957e5\"" Jul 9 23:45:56.914113 containerd[1905]: time="2025-07-09T23:45:56.913754753Z" level=info msg="CreateContainer within sandbox \"7a7e3fc11030f04fc811428d87ec9fdbb47291f90f72715a4f4cd06348a957e5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 9 23:45:56.914187 kubelet[3005]: I0709 23:45:56.913764 3005 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-n-bbe652f90c" Jul 9 23:45:56.914187 kubelet[3005]: E0709 23:45:56.914084 3005 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4344.1.1-n-bbe652f90c" Jul 9 23:45:56.916415 containerd[1905]: time="2025-07-09T23:45:56.916389444Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-n-bbe652f90c,Uid:374f12f67918e2b884fa54198315a751,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e2d736b1c7a8068291f626c5694d68772bdb24c1cd2cfafca4e9653874e05ba\"" Jul 9 23:45:56.918324 containerd[1905]: time="2025-07-09T23:45:56.918295630Z" level=info msg="CreateContainer within sandbox \"8e2d736b1c7a8068291f626c5694d68772bdb24c1cd2cfafca4e9653874e05ba\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 9 23:45:56.928947 containerd[1905]: time="2025-07-09T23:45:56.928883827Z" level=info msg="Container 1fb874b76fcc83e15117f1672d37676796a9629ab412d43386036c9332fc62ac: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:45:56.984887 containerd[1905]: time="2025-07-09T23:45:56.984843798Z" level=info msg="Container d7976ff02d4fd5327e42b831db693061ecfd7e4fe159cce308e80de5d8e3151d: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:45:57.101982 kubelet[3005]: W0709 23:45:57.101805 3005 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Jul 9 23:45:57.101982 kubelet[3005]: E0709 23:45:57.101870 3005 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:45:57.171670 kubelet[3005]: W0709 23:45:57.171599 3005 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Jul 9 23:45:57.171810 kubelet[3005]: E0709 
23:45:57.171683 3005 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:45:57.334115 kubelet[3005]: W0709 23:45:57.334074 3005 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Jul 9 23:45:57.334115 kubelet[3005]: E0709 23:45:57.334118 3005 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:45:57.374874 kubelet[3005]: E0709 23:45:57.374776 3005 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-n-bbe652f90c?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="1.6s" Jul 9 23:45:57.716575 kubelet[3005]: I0709 23:45:57.716546 3005 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-n-bbe652f90c" Jul 9 23:45:57.716933 kubelet[3005]: E0709 23:45:57.716903 3005 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4344.1.1-n-bbe652f90c" Jul 9 23:45:57.981565 kubelet[3005]: E0709 23:45:57.981444 3005 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting 
a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:45:58.072931 kubelet[3005]: E0709 23:45:58.072823 3005 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.40:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.40:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.1.1-n-bbe652f90c.1850b9fe14760685 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.1-n-bbe652f90c,UID:ci-4344.1.1-n-bbe652f90c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.1-n-bbe652f90c,},FirstTimestamp:2025-07-09 23:45:55.963233925 +0000 UTC m=+0.317264042,LastTimestamp:2025-07-09 23:45:55.963233925 +0000 UTC m=+0.317264042,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.1-n-bbe652f90c,}" Jul 9 23:45:58.262902 containerd[1905]: time="2025-07-09T23:45:58.262791747Z" level=info msg="CreateContainer within sandbox \"f33d58f586d58c6ac4a3e6521ac526be7bf276d9b8524631a326ce38a2b6f3b4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1fb874b76fcc83e15117f1672d37676796a9629ab412d43386036c9332fc62ac\"" Jul 9 23:45:58.263949 containerd[1905]: time="2025-07-09T23:45:58.263623775Z" level=info msg="StartContainer for \"1fb874b76fcc83e15117f1672d37676796a9629ab412d43386036c9332fc62ac\"" Jul 9 23:45:58.265800 containerd[1905]: time="2025-07-09T23:45:58.265772681Z" level=info msg="connecting to shim 1fb874b76fcc83e15117f1672d37676796a9629ab412d43386036c9332fc62ac" 
address="unix:///run/containerd/s/f69e622d546ad77e02423ca2a7ce044a331232de807e490f5d3232b050131971" protocol=ttrpc version=3 Jul 9 23:45:58.277034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount467837824.mount: Deactivated successfully. Jul 9 23:45:58.278453 containerd[1905]: time="2025-07-09T23:45:58.278403805Z" level=info msg="Container fe84e5d981f48995cc83648b3cdc306ac6f58961e850590d5b6f472dcdf0b0f0: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:45:58.295641 systemd[1]: Started cri-containerd-1fb874b76fcc83e15117f1672d37676796a9629ab412d43386036c9332fc62ac.scope - libcontainer container 1fb874b76fcc83e15117f1672d37676796a9629ab412d43386036c9332fc62ac. Jul 9 23:45:58.298661 containerd[1905]: time="2025-07-09T23:45:58.298565420Z" level=info msg="CreateContainer within sandbox \"7a7e3fc11030f04fc811428d87ec9fdbb47291f90f72715a4f4cd06348a957e5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d7976ff02d4fd5327e42b831db693061ecfd7e4fe159cce308e80de5d8e3151d\"" Jul 9 23:45:58.299083 containerd[1905]: time="2025-07-09T23:45:58.299059406Z" level=info msg="StartContainer for \"d7976ff02d4fd5327e42b831db693061ecfd7e4fe159cce308e80de5d8e3151d\"" Jul 9 23:45:58.302750 containerd[1905]: time="2025-07-09T23:45:58.302548630Z" level=info msg="connecting to shim d7976ff02d4fd5327e42b831db693061ecfd7e4fe159cce308e80de5d8e3151d" address="unix:///run/containerd/s/fd8827d90d73c453fbdb57e31f9421c321eb554a16a08cce977bc16b31f0b687" protocol=ttrpc version=3 Jul 9 23:45:58.314278 containerd[1905]: time="2025-07-09T23:45:58.314032186Z" level=info msg="CreateContainer within sandbox \"8e2d736b1c7a8068291f626c5694d68772bdb24c1cd2cfafca4e9653874e05ba\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fe84e5d981f48995cc83648b3cdc306ac6f58961e850590d5b6f472dcdf0b0f0\"" Jul 9 23:45:58.314951 containerd[1905]: time="2025-07-09T23:45:58.314923409Z" level=info msg="StartContainer for 
\"fe84e5d981f48995cc83648b3cdc306ac6f58961e850590d5b6f472dcdf0b0f0\"" Jul 9 23:45:58.317751 containerd[1905]: time="2025-07-09T23:45:58.317663479Z" level=info msg="connecting to shim fe84e5d981f48995cc83648b3cdc306ac6f58961e850590d5b6f472dcdf0b0f0" address="unix:///run/containerd/s/250a435293cfe5c4a4b46b74b920bb1f2c7ffae79a3d19125908a8be1d5593d8" protocol=ttrpc version=3 Jul 9 23:45:58.318638 systemd[1]: Started cri-containerd-d7976ff02d4fd5327e42b831db693061ecfd7e4fe159cce308e80de5d8e3151d.scope - libcontainer container d7976ff02d4fd5327e42b831db693061ecfd7e4fe159cce308e80de5d8e3151d. Jul 9 23:45:58.353636 systemd[1]: Started cri-containerd-fe84e5d981f48995cc83648b3cdc306ac6f58961e850590d5b6f472dcdf0b0f0.scope - libcontainer container fe84e5d981f48995cc83648b3cdc306ac6f58961e850590d5b6f472dcdf0b0f0. Jul 9 23:45:58.354244 containerd[1905]: time="2025-07-09T23:45:58.353485651Z" level=info msg="StartContainer for \"1fb874b76fcc83e15117f1672d37676796a9629ab412d43386036c9332fc62ac\" returns successfully" Jul 9 23:45:58.377239 containerd[1905]: time="2025-07-09T23:45:58.377198349Z" level=info msg="StartContainer for \"d7976ff02d4fd5327e42b831db693061ecfd7e4fe159cce308e80de5d8e3151d\" returns successfully" Jul 9 23:45:58.426246 containerd[1905]: time="2025-07-09T23:45:58.426158702Z" level=info msg="StartContainer for \"fe84e5d981f48995cc83648b3cdc306ac6f58961e850590d5b6f472dcdf0b0f0\" returns successfully" Jul 9 23:45:59.319716 kubelet[3005]: I0709 23:45:59.319672 3005 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-n-bbe652f90c" Jul 9 23:45:59.560413 kubelet[3005]: E0709 23:45:59.560357 3005 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344.1.1-n-bbe652f90c\" not found" node="ci-4344.1.1-n-bbe652f90c" Jul 9 23:45:59.653998 kubelet[3005]: I0709 23:45:59.653817 3005 kubelet_node_status.go:75] "Successfully registered node" node="ci-4344.1.1-n-bbe652f90c" Jul 9 23:45:59.653998 kubelet[3005]: 
E0709 23:45:59.653855 3005 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4344.1.1-n-bbe652f90c\": node \"ci-4344.1.1-n-bbe652f90c\" not found" Jul 9 23:45:59.964888 kubelet[3005]: I0709 23:45:59.964645 3005 apiserver.go:52] "Watching apiserver" Jul 9 23:45:59.971382 kubelet[3005]: I0709 23:45:59.971354 3005 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 9 23:46:00.014689 kubelet[3005]: E0709 23:46:00.014649 3005 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4344.1.1-n-bbe652f90c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.1.1-n-bbe652f90c" Jul 9 23:46:00.015252 kubelet[3005]: E0709 23:46:00.015024 3005 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4344.1.1-n-bbe652f90c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.1.1-n-bbe652f90c" Jul 9 23:46:00.015252 kubelet[3005]: E0709 23:46:00.015162 3005 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4344.1.1-n-bbe652f90c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-bbe652f90c" Jul 9 23:46:01.017676 kubelet[3005]: W0709 23:46:01.017569 3005 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 9 23:46:01.674241 systemd[1]: Reload requested from client PID 3271 ('systemctl') (unit session-9.scope)... Jul 9 23:46:01.674256 systemd[1]: Reloading... Jul 9 23:46:01.754644 zram_generator::config[3320]: No configuration found. 
Jul 9 23:46:01.821156 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 23:46:01.912635 systemd[1]: Reloading finished in 238 ms. Jul 9 23:46:01.944376 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:46:01.953406 systemd[1]: kubelet.service: Deactivated successfully. Jul 9 23:46:01.953637 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:46:01.953701 systemd[1]: kubelet.service: Consumed 591ms CPU time, 125.6M memory peak. Jul 9 23:46:01.955363 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:46:02.060662 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:46:02.067832 (kubelet)[3381]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 9 23:46:02.091555 kubelet[3381]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 23:46:02.091555 kubelet[3381]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 9 23:46:02.091555 kubelet[3381]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 9 23:46:02.092128 kubelet[3381]: I0709 23:46:02.092009 3381 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 9 23:46:02.096937 kubelet[3381]: I0709 23:46:02.096908 3381 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 9 23:46:02.097519 kubelet[3381]: I0709 23:46:02.097039 3381 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 9 23:46:02.097519 kubelet[3381]: I0709 23:46:02.097198 3381 server.go:934] "Client rotation is on, will bootstrap in background" Jul 9 23:46:02.098323 kubelet[3381]: I0709 23:46:02.098301 3381 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 9 23:46:02.144945 kubelet[3381]: I0709 23:46:02.144874 3381 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 9 23:46:02.150724 kubelet[3381]: I0709 23:46:02.150698 3381 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 9 23:46:02.153710 kubelet[3381]: I0709 23:46:02.153686 3381 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 9 23:46:02.154155 kubelet[3381]: I0709 23:46:02.153818 3381 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 9 23:46:02.154155 kubelet[3381]: I0709 23:46:02.153942 3381 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 9 23:46:02.154155 kubelet[3381]: I0709 23:46:02.153966 3381 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-n-bbe652f90c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyMa
nagerPolicyOptions":null,"CgroupVersion":2} Jul 9 23:46:02.154155 kubelet[3381]: I0709 23:46:02.154110 3381 topology_manager.go:138] "Creating topology manager with none policy" Jul 9 23:46:02.155500 kubelet[3381]: I0709 23:46:02.154118 3381 container_manager_linux.go:300] "Creating device plugin manager" Jul 9 23:46:02.155500 kubelet[3381]: I0709 23:46:02.154147 3381 state_mem.go:36] "Initialized new in-memory state store" Jul 9 23:46:02.155500 kubelet[3381]: I0709 23:46:02.154338 3381 kubelet.go:408] "Attempting to sync node with API server" Jul 9 23:46:02.155500 kubelet[3381]: I0709 23:46:02.154349 3381 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 9 23:46:02.155500 kubelet[3381]: I0709 23:46:02.154363 3381 kubelet.go:314] "Adding apiserver pod source" Jul 9 23:46:02.155500 kubelet[3381]: I0709 23:46:02.154374 3381 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 9 23:46:02.157516 kubelet[3381]: I0709 23:46:02.157293 3381 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 9 23:46:02.157745 kubelet[3381]: I0709 23:46:02.157727 3381 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 9 23:46:02.160183 kubelet[3381]: I0709 23:46:02.158287 3381 server.go:1274] "Started kubelet" Jul 9 23:46:02.160888 kubelet[3381]: I0709 23:46:02.160792 3381 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 9 23:46:02.165561 kubelet[3381]: I0709 23:46:02.165530 3381 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 9 23:46:02.171566 kubelet[3381]: I0709 23:46:02.165949 3381 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 9 23:46:02.171923 kubelet[3381]: I0709 23:46:02.171756 3381 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 9 23:46:02.171923 
kubelet[3381]: I0709 23:46:02.166616 3381 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 9 23:46:02.172177 kubelet[3381]: I0709 23:46:02.172150 3381 server.go:449] "Adding debug handlers to kubelet server" Jul 9 23:46:02.172801 kubelet[3381]: E0709 23:46:02.166730 3381 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-bbe652f90c\" not found" Jul 9 23:46:02.172801 kubelet[3381]: I0709 23:46:02.169673 3381 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 9 23:46:02.173860 kubelet[3381]: I0709 23:46:02.173488 3381 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 9 23:46:02.173860 kubelet[3381]: I0709 23:46:02.173579 3381 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 9 23:46:02.173860 kubelet[3381]: I0709 23:46:02.173597 3381 kubelet.go:2321] "Starting kubelet main sync loop" Jul 9 23:46:02.173860 kubelet[3381]: E0709 23:46:02.173632 3381 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 9 23:46:02.173860 kubelet[3381]: I0709 23:46:02.166546 3381 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 9 23:46:02.173860 kubelet[3381]: I0709 23:46:02.166626 3381 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 9 23:46:02.174001 kubelet[3381]: I0709 23:46:02.173941 3381 reconciler.go:26] "Reconciler: start to sync state" Jul 9 23:46:02.179710 kubelet[3381]: I0709 23:46:02.179654 3381 factory.go:221] Registration of the systemd container factory successfully Jul 9 23:46:02.180164 kubelet[3381]: I0709 23:46:02.180058 3381 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: 
connect: no such file or directory Jul 9 23:46:02.181173 kubelet[3381]: E0709 23:46:02.181144 3381 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 9 23:46:02.189044 kubelet[3381]: I0709 23:46:02.189018 3381 factory.go:221] Registration of the containerd container factory successfully Jul 9 23:46:02.236663 kubelet[3381]: I0709 23:46:02.236321 3381 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 9 23:46:02.236663 kubelet[3381]: I0709 23:46:02.236342 3381 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 9 23:46:02.236663 kubelet[3381]: I0709 23:46:02.236360 3381 state_mem.go:36] "Initialized new in-memory state store" Jul 9 23:46:02.236663 kubelet[3381]: I0709 23:46:02.236578 3381 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 9 23:46:02.236663 kubelet[3381]: I0709 23:46:02.236597 3381 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 9 23:46:02.236663 kubelet[3381]: I0709 23:46:02.236612 3381 policy_none.go:49] "None policy: Start" Jul 9 23:46:02.237528 kubelet[3381]: I0709 23:46:02.237065 3381 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 9 23:46:02.237528 kubelet[3381]: I0709 23:46:02.237106 3381 state_mem.go:35] "Initializing new in-memory state store" Jul 9 23:46:02.237528 kubelet[3381]: I0709 23:46:02.237281 3381 state_mem.go:75] "Updated machine memory state" Jul 9 23:46:02.241137 kubelet[3381]: I0709 23:46:02.241113 3381 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 9 23:46:02.241619 kubelet[3381]: I0709 23:46:02.241552 3381 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 23:46:02.241619 kubelet[3381]: I0709 23:46:02.241567 3381 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 23:46:02.242025 kubelet[3381]: I0709 
23:46:02.242002 3381 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 9 23:46:02.281734 kubelet[3381]: W0709 23:46:02.281697 3381 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 9 23:46:02.286170 kubelet[3381]: W0709 23:46:02.286147 3381 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 9 23:46:02.286277 kubelet[3381]: W0709 23:46:02.286270 3381 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 9 23:46:02.286326 kubelet[3381]: E0709 23:46:02.286300 3381 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4344.1.1-n-bbe652f90c\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.1-n-bbe652f90c" Jul 9 23:46:02.345325 kubelet[3381]: I0709 23:46:02.345297 3381 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-n-bbe652f90c" Jul 9 23:46:02.356032 kubelet[3381]: I0709 23:46:02.356002 3381 kubelet_node_status.go:111] "Node was previously registered" node="ci-4344.1.1-n-bbe652f90c" Jul 9 23:46:02.356309 kubelet[3381]: I0709 23:46:02.356211 3381 kubelet_node_status.go:75] "Successfully registered node" node="ci-4344.1.1-n-bbe652f90c" Jul 9 23:46:02.374455 kubelet[3381]: I0709 23:46:02.374429 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/055ecf32cf27b21b12ead501ee504ee3-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-n-bbe652f90c\" (UID: \"055ecf32cf27b21b12ead501ee504ee3\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-bbe652f90c" Jul 9 23:46:02.374738 kubelet[3381]: I0709 23:46:02.374533 3381 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eb2112dc2e45d1d2e8c4346bbdade7fc-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-bbe652f90c\" (UID: \"eb2112dc2e45d1d2e8c4346bbdade7fc\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-bbe652f90c" Jul 9 23:46:02.374738 kubelet[3381]: I0709 23:46:02.374554 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eb2112dc2e45d1d2e8c4346bbdade7fc-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-bbe652f90c\" (UID: \"eb2112dc2e45d1d2e8c4346bbdade7fc\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-bbe652f90c" Jul 9 23:46:02.374738 kubelet[3381]: I0709 23:46:02.374565 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb2112dc2e45d1d2e8c4346bbdade7fc-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-n-bbe652f90c\" (UID: \"eb2112dc2e45d1d2e8c4346bbdade7fc\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-bbe652f90c" Jul 9 23:46:02.374738 kubelet[3381]: I0709 23:46:02.374580 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eb2112dc2e45d1d2e8c4346bbdade7fc-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-n-bbe652f90c\" (UID: \"eb2112dc2e45d1d2e8c4346bbdade7fc\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-bbe652f90c" Jul 9 23:46:02.374738 kubelet[3381]: I0709 23:46:02.374593 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/374f12f67918e2b884fa54198315a751-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-n-bbe652f90c\" (UID: 
\"374f12f67918e2b884fa54198315a751\") " pod="kube-system/kube-scheduler-ci-4344.1.1-n-bbe652f90c" Jul 9 23:46:02.374837 kubelet[3381]: I0709 23:46:02.374603 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/055ecf32cf27b21b12ead501ee504ee3-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-n-bbe652f90c\" (UID: \"055ecf32cf27b21b12ead501ee504ee3\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-bbe652f90c" Jul 9 23:46:02.374837 kubelet[3381]: I0709 23:46:02.374614 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/055ecf32cf27b21b12ead501ee504ee3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-n-bbe652f90c\" (UID: \"055ecf32cf27b21b12ead501ee504ee3\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-bbe652f90c" Jul 9 23:46:02.374837 kubelet[3381]: I0709 23:46:02.374624 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/eb2112dc2e45d1d2e8c4346bbdade7fc-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-n-bbe652f90c\" (UID: \"eb2112dc2e45d1d2e8c4346bbdade7fc\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-bbe652f90c" Jul 9 23:46:02.694018 sudo[3414]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 9 23:46:02.694730 sudo[3414]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 9 23:46:03.050879 sudo[3414]: pam_unix(sudo:session): session closed for user root Jul 9 23:46:03.155517 kubelet[3381]: I0709 23:46:03.155428 3381 apiserver.go:52] "Watching apiserver" Jul 9 23:46:03.174099 kubelet[3381]: I0709 23:46:03.174043 3381 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 9 23:46:03.187886 
kubelet[3381]: I0709 23:46:03.187788 3381 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.1.1-n-bbe652f90c" podStartSLOduration=1.187774294 podStartE2EDuration="1.187774294s" podCreationTimestamp="2025-07-09 23:46:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:46:03.187643354 +0000 UTC m=+1.117017102" watchObservedRunningTime="2025-07-09 23:46:03.187774294 +0000 UTC m=+1.117148034" Jul 9 23:46:03.215006 kubelet[3381]: I0709 23:46:03.214603 3381 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.1.1-n-bbe652f90c" podStartSLOduration=2.214586267 podStartE2EDuration="2.214586267s" podCreationTimestamp="2025-07-09 23:46:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:46:03.201542345 +0000 UTC m=+1.130916117" watchObservedRunningTime="2025-07-09 23:46:03.214586267 +0000 UTC m=+1.143960015" Jul 9 23:46:03.239472 kubelet[3381]: W0709 23:46:03.239019 3381 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 9 23:46:03.239472 kubelet[3381]: E0709 23:46:03.239077 3381 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4344.1.1-n-bbe652f90c\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.1-n-bbe652f90c" Jul 9 23:46:03.239800 kubelet[3381]: I0709 23:46:03.239765 3381 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-bbe652f90c" podStartSLOduration=1.239752183 podStartE2EDuration="1.239752183s" podCreationTimestamp="2025-07-09 23:46:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:46:03.21553026 +0000 UTC m=+1.144904008" watchObservedRunningTime="2025-07-09 23:46:03.239752183 +0000 UTC m=+1.169125923" Jul 9 23:46:04.367710 sudo[2392]: pam_unix(sudo:session): session closed for user root Jul 9 23:46:04.445106 sshd[2391]: Connection closed by 10.200.16.10 port 34312 Jul 9 23:46:04.445690 sshd-session[2389]: pam_unix(sshd:session): session closed for user core Jul 9 23:46:04.449301 systemd-logind[1865]: Session 9 logged out. Waiting for processes to exit. Jul 9 23:46:04.449896 systemd[1]: sshd@6-10.200.20.40:22-10.200.16.10:34312.service: Deactivated successfully. Jul 9 23:46:04.452569 systemd[1]: session-9.scope: Deactivated successfully. Jul 9 23:46:04.452760 systemd[1]: session-9.scope: Consumed 2.909s CPU time, 267.8M memory peak. Jul 9 23:46:04.455153 systemd-logind[1865]: Removed session 9. Jul 9 23:46:07.103822 kubelet[3381]: I0709 23:46:07.103724 3381 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 9 23:46:07.104476 containerd[1905]: time="2025-07-09T23:46:07.104377121Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 9 23:46:07.104796 kubelet[3381]: I0709 23:46:07.104602 3381 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 9 23:46:07.966030 systemd[1]: Created slice kubepods-besteffort-podbbdbcd1b_665d_4624_a9ef_cbfdf0435409.slice - libcontainer container kubepods-besteffort-podbbdbcd1b_665d_4624_a9ef_cbfdf0435409.slice. 
Jul 9 23:46:07.970574 kubelet[3381]: W0709 23:46:07.969941 3381 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4344.1.1-n-bbe652f90c" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4344.1.1-n-bbe652f90c' and this object Jul 9 23:46:07.970574 kubelet[3381]: E0709 23:46:07.969990 3381 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4344.1.1-n-bbe652f90c\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4344.1.1-n-bbe652f90c' and this object" logger="UnhandledError" Jul 9 23:46:07.970574 kubelet[3381]: W0709 23:46:07.969939 3381 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4344.1.1-n-bbe652f90c" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4344.1.1-n-bbe652f90c' and this object Jul 9 23:46:07.970574 kubelet[3381]: W0709 23:46:07.970067 3381 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4344.1.1-n-bbe652f90c" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4344.1.1-n-bbe652f90c' and this object Jul 9 23:46:07.970574 kubelet[3381]: E0709 23:46:07.970079 3381 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4344.1.1-n-bbe652f90c\" cannot list resource 
\"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4344.1.1-n-bbe652f90c' and this object" logger="UnhandledError" Jul 9 23:46:07.970732 kubelet[3381]: E0709 23:46:07.970061 3381 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4344.1.1-n-bbe652f90c\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4344.1.1-n-bbe652f90c' and this object" logger="UnhandledError" Jul 9 23:46:07.982651 systemd[1]: Created slice kubepods-burstable-pod0654bc50_8b19_462c_8cd1_7cd980bf5074.slice - libcontainer container kubepods-burstable-pod0654bc50_8b19_462c_8cd1_7cd980bf5074.slice. Jul 9 23:46:08.005481 kubelet[3381]: I0709 23:46:08.005431 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-cni-path\") pod \"cilium-5d7lx\" (UID: \"0654bc50-8b19-462c-8cd1-7cd980bf5074\") " pod="kube-system/cilium-5d7lx" Jul 9 23:46:08.005481 kubelet[3381]: I0709 23:46:08.005470 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bbdbcd1b-665d-4624-a9ef-cbfdf0435409-xtables-lock\") pod \"kube-proxy-lr5dl\" (UID: \"bbdbcd1b-665d-4624-a9ef-cbfdf0435409\") " pod="kube-system/kube-proxy-lr5dl" Jul 9 23:46:08.005481 kubelet[3381]: I0709 23:46:08.005482 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bbdbcd1b-665d-4624-a9ef-cbfdf0435409-lib-modules\") pod \"kube-proxy-lr5dl\" (UID: \"bbdbcd1b-665d-4624-a9ef-cbfdf0435409\") " pod="kube-system/kube-proxy-lr5dl" Jul 9 23:46:08.005481 
kubelet[3381]: I0709 23:46:08.005500 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-lib-modules\") pod \"cilium-5d7lx\" (UID: \"0654bc50-8b19-462c-8cd1-7cd980bf5074\") " pod="kube-system/cilium-5d7lx" Jul 9 23:46:08.005683 kubelet[3381]: I0709 23:46:08.005511 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0654bc50-8b19-462c-8cd1-7cd980bf5074-cilium-config-path\") pod \"cilium-5d7lx\" (UID: \"0654bc50-8b19-462c-8cd1-7cd980bf5074\") " pod="kube-system/cilium-5d7lx" Jul 9 23:46:08.005683 kubelet[3381]: I0709 23:46:08.005522 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v9qm\" (UniqueName: \"kubernetes.io/projected/0654bc50-8b19-462c-8cd1-7cd980bf5074-kube-api-access-9v9qm\") pod \"cilium-5d7lx\" (UID: \"0654bc50-8b19-462c-8cd1-7cd980bf5074\") " pod="kube-system/cilium-5d7lx" Jul 9 23:46:08.005683 kubelet[3381]: I0709 23:46:08.005532 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bbdbcd1b-665d-4624-a9ef-cbfdf0435409-kube-proxy\") pod \"kube-proxy-lr5dl\" (UID: \"bbdbcd1b-665d-4624-a9ef-cbfdf0435409\") " pod="kube-system/kube-proxy-lr5dl" Jul 9 23:46:08.005683 kubelet[3381]: I0709 23:46:08.005540 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-bpf-maps\") pod \"cilium-5d7lx\" (UID: \"0654bc50-8b19-462c-8cd1-7cd980bf5074\") " pod="kube-system/cilium-5d7lx" Jul 9 23:46:08.005683 kubelet[3381]: I0709 23:46:08.005548 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-hostproc\") pod \"cilium-5d7lx\" (UID: \"0654bc50-8b19-462c-8cd1-7cd980bf5074\") " pod="kube-system/cilium-5d7lx" Jul 9 23:46:08.005683 kubelet[3381]: I0709 23:46:08.005557 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-host-proc-sys-net\") pod \"cilium-5d7lx\" (UID: \"0654bc50-8b19-462c-8cd1-7cd980bf5074\") " pod="kube-system/cilium-5d7lx" Jul 9 23:46:08.005774 kubelet[3381]: I0709 23:46:08.005566 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-host-proc-sys-kernel\") pod \"cilium-5d7lx\" (UID: \"0654bc50-8b19-462c-8cd1-7cd980bf5074\") " pod="kube-system/cilium-5d7lx" Jul 9 23:46:08.005774 kubelet[3381]: I0709 23:46:08.005578 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-xtables-lock\") pod \"cilium-5d7lx\" (UID: \"0654bc50-8b19-462c-8cd1-7cd980bf5074\") " pod="kube-system/cilium-5d7lx" Jul 9 23:46:08.005774 kubelet[3381]: I0709 23:46:08.005588 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-etc-cni-netd\") pod \"cilium-5d7lx\" (UID: \"0654bc50-8b19-462c-8cd1-7cd980bf5074\") " pod="kube-system/cilium-5d7lx" Jul 9 23:46:08.005774 kubelet[3381]: I0709 23:46:08.005597 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98zr2\" (UniqueName: 
\"kubernetes.io/projected/bbdbcd1b-665d-4624-a9ef-cbfdf0435409-kube-api-access-98zr2\") pod \"kube-proxy-lr5dl\" (UID: \"bbdbcd1b-665d-4624-a9ef-cbfdf0435409\") " pod="kube-system/kube-proxy-lr5dl" Jul 9 23:46:08.005774 kubelet[3381]: I0709 23:46:08.005606 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-cilium-cgroup\") pod \"cilium-5d7lx\" (UID: \"0654bc50-8b19-462c-8cd1-7cd980bf5074\") " pod="kube-system/cilium-5d7lx" Jul 9 23:46:08.005848 kubelet[3381]: I0709 23:46:08.005615 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0654bc50-8b19-462c-8cd1-7cd980bf5074-clustermesh-secrets\") pod \"cilium-5d7lx\" (UID: \"0654bc50-8b19-462c-8cd1-7cd980bf5074\") " pod="kube-system/cilium-5d7lx" Jul 9 23:46:08.005848 kubelet[3381]: I0709 23:46:08.005624 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-cilium-run\") pod \"cilium-5d7lx\" (UID: \"0654bc50-8b19-462c-8cd1-7cd980bf5074\") " pod="kube-system/cilium-5d7lx" Jul 9 23:46:08.005848 kubelet[3381]: I0709 23:46:08.005635 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0654bc50-8b19-462c-8cd1-7cd980bf5074-hubble-tls\") pod \"cilium-5d7lx\" (UID: \"0654bc50-8b19-462c-8cd1-7cd980bf5074\") " pod="kube-system/cilium-5d7lx" Jul 9 23:46:08.118162 systemd[1]: Created slice kubepods-besteffort-podbb5d4317_56cd_4f56_a0db_75b5b9e53b4a.slice - libcontainer container kubepods-besteffort-podbb5d4317_56cd_4f56_a0db_75b5b9e53b4a.slice. 
Jul 9 23:46:08.207243 kubelet[3381]: I0709 23:46:08.207192 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj6c8\" (UniqueName: \"kubernetes.io/projected/bb5d4317-56cd-4f56-a0db-75b5b9e53b4a-kube-api-access-sj6c8\") pod \"cilium-operator-5d85765b45-m94g8\" (UID: \"bb5d4317-56cd-4f56-a0db-75b5b9e53b4a\") " pod="kube-system/cilium-operator-5d85765b45-m94g8" Jul 9 23:46:08.207243 kubelet[3381]: I0709 23:46:08.207248 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bb5d4317-56cd-4f56-a0db-75b5b9e53b4a-cilium-config-path\") pod \"cilium-operator-5d85765b45-m94g8\" (UID: \"bb5d4317-56cd-4f56-a0db-75b5b9e53b4a\") " pod="kube-system/cilium-operator-5d85765b45-m94g8" Jul 9 23:46:08.279435 containerd[1905]: time="2025-07-09T23:46:08.279250950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lr5dl,Uid:bbdbcd1b-665d-4624-a9ef-cbfdf0435409,Namespace:kube-system,Attempt:0,}" Jul 9 23:46:08.335663 containerd[1905]: time="2025-07-09T23:46:08.335625245Z" level=info msg="connecting to shim 0681776537e1038678a4ca75e52bab31444d56603f06397188980a889bc5a1fd" address="unix:///run/containerd/s/402ced404ac5a69629dd6d7bcc074d3c119ae0425402141516060640f85d929c" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:46:08.354647 systemd[1]: Started cri-containerd-0681776537e1038678a4ca75e52bab31444d56603f06397188980a889bc5a1fd.scope - libcontainer container 0681776537e1038678a4ca75e52bab31444d56603f06397188980a889bc5a1fd. 
Jul 9 23:46:08.377038 containerd[1905]: time="2025-07-09T23:46:08.376970511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lr5dl,Uid:bbdbcd1b-665d-4624-a9ef-cbfdf0435409,Namespace:kube-system,Attempt:0,} returns sandbox id \"0681776537e1038678a4ca75e52bab31444d56603f06397188980a889bc5a1fd\"" Jul 9 23:46:08.380402 containerd[1905]: time="2025-07-09T23:46:08.380051259Z" level=info msg="CreateContainer within sandbox \"0681776537e1038678a4ca75e52bab31444d56603f06397188980a889bc5a1fd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 9 23:46:08.407184 containerd[1905]: time="2025-07-09T23:46:08.407144300Z" level=info msg="Container f53e9e6c715d8885d6233ffbac572eeed5470ace18a93a6cef2120c71ef4ef7c: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:46:08.437405 containerd[1905]: time="2025-07-09T23:46:08.437360778Z" level=info msg="CreateContainer within sandbox \"0681776537e1038678a4ca75e52bab31444d56603f06397188980a889bc5a1fd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f53e9e6c715d8885d6233ffbac572eeed5470ace18a93a6cef2120c71ef4ef7c\"" Jul 9 23:46:08.438688 containerd[1905]: time="2025-07-09T23:46:08.438663880Z" level=info msg="StartContainer for \"f53e9e6c715d8885d6233ffbac572eeed5470ace18a93a6cef2120c71ef4ef7c\"" Jul 9 23:46:08.441483 containerd[1905]: time="2025-07-09T23:46:08.441286259Z" level=info msg="connecting to shim f53e9e6c715d8885d6233ffbac572eeed5470ace18a93a6cef2120c71ef4ef7c" address="unix:///run/containerd/s/402ced404ac5a69629dd6d7bcc074d3c119ae0425402141516060640f85d929c" protocol=ttrpc version=3 Jul 9 23:46:08.459651 systemd[1]: Started cri-containerd-f53e9e6c715d8885d6233ffbac572eeed5470ace18a93a6cef2120c71ef4ef7c.scope - libcontainer container f53e9e6c715d8885d6233ffbac572eeed5470ace18a93a6cef2120c71ef4ef7c. 
Jul 9 23:46:08.493601 containerd[1905]: time="2025-07-09T23:46:08.493545194Z" level=info msg="StartContainer for \"f53e9e6c715d8885d6233ffbac572eeed5470ace18a93a6cef2120c71ef4ef7c\" returns successfully" Jul 9 23:46:09.108466 kubelet[3381]: E0709 23:46:09.108422 3381 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jul 9 23:46:09.108466 kubelet[3381]: E0709 23:46:09.108458 3381 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-5d7lx: failed to sync secret cache: timed out waiting for the condition Jul 9 23:46:09.108659 kubelet[3381]: E0709 23:46:09.108542 3381 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0654bc50-8b19-462c-8cd1-7cd980bf5074-hubble-tls podName:0654bc50-8b19-462c-8cd1-7cd980bf5074 nodeName:}" failed. No retries permitted until 2025-07-09 23:46:09.608521082 +0000 UTC m=+7.537894822 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/0654bc50-8b19-462c-8cd1-7cd980bf5074-hubble-tls") pod "cilium-5d7lx" (UID: "0654bc50-8b19-462c-8cd1-7cd980bf5074") : failed to sync secret cache: timed out waiting for the condition
Jul 9 23:46:09.245126 kubelet[3381]: I0709 23:46:09.245054 3381 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lr5dl" podStartSLOduration=2.245026356 podStartE2EDuration="2.245026356s" podCreationTimestamp="2025-07-09 23:46:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:46:09.244533283 +0000 UTC m=+7.173907207" watchObservedRunningTime="2025-07-09 23:46:09.245026356 +0000 UTC m=+7.174400096"
Jul 9 23:46:09.323020 containerd[1905]: time="2025-07-09T23:46:09.322758700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-m94g8,Uid:bb5d4317-56cd-4f56-a0db-75b5b9e53b4a,Namespace:kube-system,Attempt:0,}"
Jul 9 23:46:09.396063 containerd[1905]: time="2025-07-09T23:46:09.395914821Z" level=info msg="connecting to shim a20e9af7ba30940da76573b3e284de2327689504262fe41387abe5e30b7d0067" address="unix:///run/containerd/s/0971b51c6c5606308ef31fd6ebfeac3e182c3f024d5f5b40172d7d7ae0a40f11" namespace=k8s.io protocol=ttrpc version=3
Jul 9 23:46:09.417640 systemd[1]: Started cri-containerd-a20e9af7ba30940da76573b3e284de2327689504262fe41387abe5e30b7d0067.scope - libcontainer container a20e9af7ba30940da76573b3e284de2327689504262fe41387abe5e30b7d0067.
Jul 9 23:46:09.455054 containerd[1905]: time="2025-07-09T23:46:09.455011627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-m94g8,Uid:bb5d4317-56cd-4f56-a0db-75b5b9e53b4a,Namespace:kube-system,Attempt:0,} returns sandbox id \"a20e9af7ba30940da76573b3e284de2327689504262fe41387abe5e30b7d0067\""
Jul 9 23:46:09.458686 containerd[1905]: time="2025-07-09T23:46:09.458593448Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 9 23:46:09.786265 containerd[1905]: time="2025-07-09T23:46:09.786216422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5d7lx,Uid:0654bc50-8b19-462c-8cd1-7cd980bf5074,Namespace:kube-system,Attempt:0,}"
Jul 9 23:46:09.873896 containerd[1905]: time="2025-07-09T23:46:09.873798701Z" level=info msg="connecting to shim 9a8dd2bb8de2be0ff5e34aa7d317be6fdad94e12769eeb494b18a3867d803757" address="unix:///run/containerd/s/eeacda6a52f92bab20ca158311962d9cb699fb426b6e1be6b58810cdc9bd7763" namespace=k8s.io protocol=ttrpc version=3
Jul 9 23:46:09.897638 systemd[1]: Started cri-containerd-9a8dd2bb8de2be0ff5e34aa7d317be6fdad94e12769eeb494b18a3867d803757.scope - libcontainer container 9a8dd2bb8de2be0ff5e34aa7d317be6fdad94e12769eeb494b18a3867d803757.
Jul 9 23:46:09.921270 containerd[1905]: time="2025-07-09T23:46:09.921202403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5d7lx,Uid:0654bc50-8b19-462c-8cd1-7cd980bf5074,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a8dd2bb8de2be0ff5e34aa7d317be6fdad94e12769eeb494b18a3867d803757\""
Jul 9 23:46:12.106607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1028179030.mount: Deactivated successfully.
Jul 9 23:46:13.664429 containerd[1905]: time="2025-07-09T23:46:13.664371542Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:46:13.667948 containerd[1905]: time="2025-07-09T23:46:13.667902569Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Jul 9 23:46:13.678056 containerd[1905]: time="2025-07-09T23:46:13.677982447Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:46:13.679003 containerd[1905]: time="2025-07-09T23:46:13.678971066Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.220349906s"
Jul 9 23:46:13.679003 containerd[1905]: time="2025-07-09T23:46:13.679006099Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jul 9 23:46:13.681040 containerd[1905]: time="2025-07-09T23:46:13.680744911Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 9 23:46:13.681552 containerd[1905]: time="2025-07-09T23:46:13.681524835Z" level=info msg="CreateContainer within sandbox \"a20e9af7ba30940da76573b3e284de2327689504262fe41387abe5e30b7d0067\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 9 23:46:13.748391 containerd[1905]: time="2025-07-09T23:46:13.746100926Z" level=info msg="Container f8cf4ff4dca75a33df3fdf6d387ff3773cdb3092ef54536d51ec29dc02e293c8: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:46:13.747643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1297421756.mount: Deactivated successfully.
Jul 9 23:46:13.784719 containerd[1905]: time="2025-07-09T23:46:13.784675667Z" level=info msg="CreateContainer within sandbox \"a20e9af7ba30940da76573b3e284de2327689504262fe41387abe5e30b7d0067\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f8cf4ff4dca75a33df3fdf6d387ff3773cdb3092ef54536d51ec29dc02e293c8\""
Jul 9 23:46:13.785422 containerd[1905]: time="2025-07-09T23:46:13.785337506Z" level=info msg="StartContainer for \"f8cf4ff4dca75a33df3fdf6d387ff3773cdb3092ef54536d51ec29dc02e293c8\""
Jul 9 23:46:13.787475 containerd[1905]: time="2025-07-09T23:46:13.787433314Z" level=info msg="connecting to shim f8cf4ff4dca75a33df3fdf6d387ff3773cdb3092ef54536d51ec29dc02e293c8" address="unix:///run/containerd/s/0971b51c6c5606308ef31fd6ebfeac3e182c3f024d5f5b40172d7d7ae0a40f11" protocol=ttrpc version=3
Jul 9 23:46:13.804643 systemd[1]: Started cri-containerd-f8cf4ff4dca75a33df3fdf6d387ff3773cdb3092ef54536d51ec29dc02e293c8.scope - libcontainer container f8cf4ff4dca75a33df3fdf6d387ff3773cdb3092ef54536d51ec29dc02e293c8.
Jul 9 23:46:13.832765 containerd[1905]: time="2025-07-09T23:46:13.832732472Z" level=info msg="StartContainer for \"f8cf4ff4dca75a33df3fdf6d387ff3773cdb3092ef54536d51ec29dc02e293c8\" returns successfully"
Jul 9 23:46:16.676011 kubelet[3381]: I0709 23:46:16.675830 3381 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-m94g8" podStartSLOduration=4.453017482 podStartE2EDuration="8.675815128s" podCreationTimestamp="2025-07-09 23:46:08 +0000 UTC" firstStartedPulling="2025-07-09 23:46:09.457073619 +0000 UTC m=+7.386447359" lastFinishedPulling="2025-07-09 23:46:13.679871265 +0000 UTC m=+11.609245005" observedRunningTime="2025-07-09 23:46:14.27640552 +0000 UTC m=+12.205779260" watchObservedRunningTime="2025-07-09 23:46:16.675815128 +0000 UTC m=+14.605188916"
Jul 9 23:46:23.765399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3650623458.mount: Deactivated successfully.
Jul 9 23:46:26.101430 containerd[1905]: time="2025-07-09T23:46:26.101369431Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:46:26.106400 containerd[1905]: time="2025-07-09T23:46:26.106249727Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Jul 9 23:46:26.111776 containerd[1905]: time="2025-07-09T23:46:26.111723218Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:46:26.112881 containerd[1905]: time="2025-07-09T23:46:26.112766894Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 12.431992078s"
Jul 9 23:46:26.112881 containerd[1905]: time="2025-07-09T23:46:26.112800119Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jul 9 23:46:26.114963 containerd[1905]: time="2025-07-09T23:46:26.114929648Z" level=info msg="CreateContainer within sandbox \"9a8dd2bb8de2be0ff5e34aa7d317be6fdad94e12769eeb494b18a3867d803757\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 9 23:46:26.145836 containerd[1905]: time="2025-07-09T23:46:26.145797963Z" level=info msg="Container 51cf16359cc442f534529be49f8b817c80b711bff714d09e52be4927607640d6: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:46:26.168664 containerd[1905]: time="2025-07-09T23:46:26.168613529Z" level=info msg="CreateContainer within sandbox \"9a8dd2bb8de2be0ff5e34aa7d317be6fdad94e12769eeb494b18a3867d803757\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"51cf16359cc442f534529be49f8b817c80b711bff714d09e52be4927607640d6\""
Jul 9 23:46:26.169224 containerd[1905]: time="2025-07-09T23:46:26.169199253Z" level=info msg="StartContainer for \"51cf16359cc442f534529be49f8b817c80b711bff714d09e52be4927607640d6\""
Jul 9 23:46:26.170299 containerd[1905]: time="2025-07-09T23:46:26.170273289Z" level=info msg="connecting to shim 51cf16359cc442f534529be49f8b817c80b711bff714d09e52be4927607640d6" address="unix:///run/containerd/s/eeacda6a52f92bab20ca158311962d9cb699fb426b6e1be6b58810cdc9bd7763" protocol=ttrpc version=3
Jul 9 23:46:26.196667 systemd[1]: Started cri-containerd-51cf16359cc442f534529be49f8b817c80b711bff714d09e52be4927607640d6.scope - libcontainer container 51cf16359cc442f534529be49f8b817c80b711bff714d09e52be4927607640d6.
Jul 9 23:46:26.223613 containerd[1905]: time="2025-07-09T23:46:26.223511962Z" level=info msg="StartContainer for \"51cf16359cc442f534529be49f8b817c80b711bff714d09e52be4927607640d6\" returns successfully"
Jul 9 23:46:26.230728 systemd[1]: cri-containerd-51cf16359cc442f534529be49f8b817c80b711bff714d09e52be4927607640d6.scope: Deactivated successfully.
Jul 9 23:46:26.234287 containerd[1905]: time="2025-07-09T23:46:26.234167767Z" level=info msg="received exit event container_id:\"51cf16359cc442f534529be49f8b817c80b711bff714d09e52be4927607640d6\" id:\"51cf16359cc442f534529be49f8b817c80b711bff714d09e52be4927607640d6\" pid:3843 exited_at:{seconds:1752104786 nanos:232387474}"
Jul 9 23:46:26.234599 containerd[1905]: time="2025-07-09T23:46:26.234389567Z" level=info msg="TaskExit event in podsandbox handler container_id:\"51cf16359cc442f534529be49f8b817c80b711bff714d09e52be4927607640d6\" id:\"51cf16359cc442f534529be49f8b817c80b711bff714d09e52be4927607640d6\" pid:3843 exited_at:{seconds:1752104786 nanos:232387474}"
Jul 9 23:46:27.144120 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51cf16359cc442f534529be49f8b817c80b711bff714d09e52be4927607640d6-rootfs.mount: Deactivated successfully.
Jul 9 23:46:28.280863 containerd[1905]: time="2025-07-09T23:46:28.280822223Z" level=info msg="CreateContainer within sandbox \"9a8dd2bb8de2be0ff5e34aa7d317be6fdad94e12769eeb494b18a3867d803757\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 9 23:46:28.312106 containerd[1905]: time="2025-07-09T23:46:28.312065247Z" level=info msg="Container b96544f5e55e3de1912f7ee267b7c327d2f64af7971c5fefbe2d8b3fa5fa20c4: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:46:28.332555 containerd[1905]: time="2025-07-09T23:46:28.332489796Z" level=info msg="CreateContainer within sandbox \"9a8dd2bb8de2be0ff5e34aa7d317be6fdad94e12769eeb494b18a3867d803757\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b96544f5e55e3de1912f7ee267b7c327d2f64af7971c5fefbe2d8b3fa5fa20c4\""
Jul 9 23:46:28.333244 containerd[1905]: time="2025-07-09T23:46:28.333218461Z" level=info msg="StartContainer for \"b96544f5e55e3de1912f7ee267b7c327d2f64af7971c5fefbe2d8b3fa5fa20c4\""
Jul 9 23:46:28.334144 containerd[1905]: time="2025-07-09T23:46:28.334082779Z" level=info msg="connecting to shim b96544f5e55e3de1912f7ee267b7c327d2f64af7971c5fefbe2d8b3fa5fa20c4" address="unix:///run/containerd/s/eeacda6a52f92bab20ca158311962d9cb699fb426b6e1be6b58810cdc9bd7763" protocol=ttrpc version=3
Jul 9 23:46:28.354659 systemd[1]: Started cri-containerd-b96544f5e55e3de1912f7ee267b7c327d2f64af7971c5fefbe2d8b3fa5fa20c4.scope - libcontainer container b96544f5e55e3de1912f7ee267b7c327d2f64af7971c5fefbe2d8b3fa5fa20c4.
Jul 9 23:46:28.380432 containerd[1905]: time="2025-07-09T23:46:28.380393016Z" level=info msg="StartContainer for \"b96544f5e55e3de1912f7ee267b7c327d2f64af7971c5fefbe2d8b3fa5fa20c4\" returns successfully"
Jul 9 23:46:28.390356 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 9 23:46:28.390558 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 9 23:46:28.391640 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jul 9 23:46:28.393568 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 9 23:46:28.395165 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 9 23:46:28.397653 systemd[1]: cri-containerd-b96544f5e55e3de1912f7ee267b7c327d2f64af7971c5fefbe2d8b3fa5fa20c4.scope: Deactivated successfully.
Jul 9 23:46:28.398294 containerd[1905]: time="2025-07-09T23:46:28.398261877Z" level=info msg="received exit event container_id:\"b96544f5e55e3de1912f7ee267b7c327d2f64af7971c5fefbe2d8b3fa5fa20c4\" id:\"b96544f5e55e3de1912f7ee267b7c327d2f64af7971c5fefbe2d8b3fa5fa20c4\" pid:3890 exited_at:{seconds:1752104788 nanos:398042766}"
Jul 9 23:46:28.398573 containerd[1905]: time="2025-07-09T23:46:28.398310583Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b96544f5e55e3de1912f7ee267b7c327d2f64af7971c5fefbe2d8b3fa5fa20c4\" id:\"b96544f5e55e3de1912f7ee267b7c327d2f64af7971c5fefbe2d8b3fa5fa20c4\" pid:3890 exited_at:{seconds:1752104788 nanos:398042766}"
Jul 9 23:46:28.418632 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 9 23:46:29.283717 containerd[1905]: time="2025-07-09T23:46:29.283577054Z" level=info msg="CreateContainer within sandbox \"9a8dd2bb8de2be0ff5e34aa7d317be6fdad94e12769eeb494b18a3867d803757\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 9 23:46:29.313111 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b96544f5e55e3de1912f7ee267b7c327d2f64af7971c5fefbe2d8b3fa5fa20c4-rootfs.mount: Deactivated successfully.
Jul 9 23:46:29.320589 containerd[1905]: time="2025-07-09T23:46:29.320538979Z" level=info msg="Container 804a16e97e896988e0ee03efe2106eb52e5c1f3e2ab928c063704bc720d430f7: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:46:29.323027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4114598874.mount: Deactivated successfully.
Jul 9 23:46:29.342429 containerd[1905]: time="2025-07-09T23:46:29.342321159Z" level=info msg="CreateContainer within sandbox \"9a8dd2bb8de2be0ff5e34aa7d317be6fdad94e12769eeb494b18a3867d803757\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"804a16e97e896988e0ee03efe2106eb52e5c1f3e2ab928c063704bc720d430f7\""
Jul 9 23:46:29.343428 containerd[1905]: time="2025-07-09T23:46:29.343172628Z" level=info msg="StartContainer for \"804a16e97e896988e0ee03efe2106eb52e5c1f3e2ab928c063704bc720d430f7\""
Jul 9 23:46:29.345361 containerd[1905]: time="2025-07-09T23:46:29.345329222Z" level=info msg="connecting to shim 804a16e97e896988e0ee03efe2106eb52e5c1f3e2ab928c063704bc720d430f7" address="unix:///run/containerd/s/eeacda6a52f92bab20ca158311962d9cb699fb426b6e1be6b58810cdc9bd7763" protocol=ttrpc version=3
Jul 9 23:46:29.364665 systemd[1]: Started cri-containerd-804a16e97e896988e0ee03efe2106eb52e5c1f3e2ab928c063704bc720d430f7.scope - libcontainer container 804a16e97e896988e0ee03efe2106eb52e5c1f3e2ab928c063704bc720d430f7.
Jul 9 23:46:29.392105 systemd[1]: cri-containerd-804a16e97e896988e0ee03efe2106eb52e5c1f3e2ab928c063704bc720d430f7.scope: Deactivated successfully.
Jul 9 23:46:29.393901 containerd[1905]: time="2025-07-09T23:46:29.393866991Z" level=info msg="TaskExit event in podsandbox handler container_id:\"804a16e97e896988e0ee03efe2106eb52e5c1f3e2ab928c063704bc720d430f7\" id:\"804a16e97e896988e0ee03efe2106eb52e5c1f3e2ab928c063704bc720d430f7\" pid:3935 exited_at:{seconds:1752104789 nanos:393641888}"
Jul 9 23:46:29.396499 containerd[1905]: time="2025-07-09T23:46:29.395548425Z" level=info msg="received exit event container_id:\"804a16e97e896988e0ee03efe2106eb52e5c1f3e2ab928c063704bc720d430f7\" id:\"804a16e97e896988e0ee03efe2106eb52e5c1f3e2ab928c063704bc720d430f7\" pid:3935 exited_at:{seconds:1752104789 nanos:393641888}"
Jul 9 23:46:29.397255 containerd[1905]: time="2025-07-09T23:46:29.397220739Z" level=info msg="StartContainer for \"804a16e97e896988e0ee03efe2106eb52e5c1f3e2ab928c063704bc720d430f7\" returns successfully"
Jul 9 23:46:30.288471 containerd[1905]: time="2025-07-09T23:46:30.288380213Z" level=info msg="CreateContainer within sandbox \"9a8dd2bb8de2be0ff5e34aa7d317be6fdad94e12769eeb494b18a3867d803757\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 9 23:46:30.313037 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-804a16e97e896988e0ee03efe2106eb52e5c1f3e2ab928c063704bc720d430f7-rootfs.mount: Deactivated successfully.
Jul 9 23:46:30.340953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1613999223.mount: Deactivated successfully.
Jul 9 23:46:30.341389 containerd[1905]: time="2025-07-09T23:46:30.340946808Z" level=info msg="Container 6dbe85701104dcb789450f93c5a25c6b9ec2bcb5d02d2ca782991c40bbe4521c: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:46:30.366314 containerd[1905]: time="2025-07-09T23:46:30.366235764Z" level=info msg="CreateContainer within sandbox \"9a8dd2bb8de2be0ff5e34aa7d317be6fdad94e12769eeb494b18a3867d803757\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6dbe85701104dcb789450f93c5a25c6b9ec2bcb5d02d2ca782991c40bbe4521c\""
Jul 9 23:46:30.367203 containerd[1905]: time="2025-07-09T23:46:30.367155172Z" level=info msg="StartContainer for \"6dbe85701104dcb789450f93c5a25c6b9ec2bcb5d02d2ca782991c40bbe4521c\""
Jul 9 23:46:30.368132 containerd[1905]: time="2025-07-09T23:46:30.368049082Z" level=info msg="connecting to shim 6dbe85701104dcb789450f93c5a25c6b9ec2bcb5d02d2ca782991c40bbe4521c" address="unix:///run/containerd/s/eeacda6a52f92bab20ca158311962d9cb699fb426b6e1be6b58810cdc9bd7763" protocol=ttrpc version=3
Jul 9 23:46:30.387668 systemd[1]: Started cri-containerd-6dbe85701104dcb789450f93c5a25c6b9ec2bcb5d02d2ca782991c40bbe4521c.scope - libcontainer container 6dbe85701104dcb789450f93c5a25c6b9ec2bcb5d02d2ca782991c40bbe4521c.
Jul 9 23:46:30.408940 systemd[1]: cri-containerd-6dbe85701104dcb789450f93c5a25c6b9ec2bcb5d02d2ca782991c40bbe4521c.scope: Deactivated successfully.
Jul 9 23:46:30.412120 containerd[1905]: time="2025-07-09T23:46:30.409741217Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6dbe85701104dcb789450f93c5a25c6b9ec2bcb5d02d2ca782991c40bbe4521c\" id:\"6dbe85701104dcb789450f93c5a25c6b9ec2bcb5d02d2ca782991c40bbe4521c\" pid:3977 exited_at:{seconds:1752104790 nanos:408771616}"
Jul 9 23:46:30.414641 containerd[1905]: time="2025-07-09T23:46:30.414530133Z" level=info msg="received exit event container_id:\"6dbe85701104dcb789450f93c5a25c6b9ec2bcb5d02d2ca782991c40bbe4521c\" id:\"6dbe85701104dcb789450f93c5a25c6b9ec2bcb5d02d2ca782991c40bbe4521c\" pid:3977 exited_at:{seconds:1752104790 nanos:408771616}"
Jul 9 23:46:30.420821 containerd[1905]: time="2025-07-09T23:46:30.420789684Z" level=info msg="StartContainer for \"6dbe85701104dcb789450f93c5a25c6b9ec2bcb5d02d2ca782991c40bbe4521c\" returns successfully"
Jul 9 23:46:31.293383 containerd[1905]: time="2025-07-09T23:46:31.293341432Z" level=info msg="CreateContainer within sandbox \"9a8dd2bb8de2be0ff5e34aa7d317be6fdad94e12769eeb494b18a3867d803757\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 9 23:46:31.313001 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6dbe85701104dcb789450f93c5a25c6b9ec2bcb5d02d2ca782991c40bbe4521c-rootfs.mount: Deactivated successfully.
Jul 9 23:46:31.327975 containerd[1905]: time="2025-07-09T23:46:31.327936691Z" level=info msg="Container f2dc6afa23a8a7430803c6cc1ae4e6aa0c0b3d266ee544c16f687ae1baa4df60: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:46:31.349936 containerd[1905]: time="2025-07-09T23:46:31.349895140Z" level=info msg="CreateContainer within sandbox \"9a8dd2bb8de2be0ff5e34aa7d317be6fdad94e12769eeb494b18a3867d803757\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f2dc6afa23a8a7430803c6cc1ae4e6aa0c0b3d266ee544c16f687ae1baa4df60\""
Jul 9 23:46:31.350407 containerd[1905]: time="2025-07-09T23:46:31.350387933Z" level=info msg="StartContainer for \"f2dc6afa23a8a7430803c6cc1ae4e6aa0c0b3d266ee544c16f687ae1baa4df60\""
Jul 9 23:46:31.351650 containerd[1905]: time="2025-07-09T23:46:31.351611223Z" level=info msg="connecting to shim f2dc6afa23a8a7430803c6cc1ae4e6aa0c0b3d266ee544c16f687ae1baa4df60" address="unix:///run/containerd/s/eeacda6a52f92bab20ca158311962d9cb699fb426b6e1be6b58810cdc9bd7763" protocol=ttrpc version=3
Jul 9 23:46:31.369638 systemd[1]: Started cri-containerd-f2dc6afa23a8a7430803c6cc1ae4e6aa0c0b3d266ee544c16f687ae1baa4df60.scope - libcontainer container f2dc6afa23a8a7430803c6cc1ae4e6aa0c0b3d266ee544c16f687ae1baa4df60.
Jul 9 23:46:31.418532 containerd[1905]: time="2025-07-09T23:46:31.418469061Z" level=info msg="StartContainer for \"f2dc6afa23a8a7430803c6cc1ae4e6aa0c0b3d266ee544c16f687ae1baa4df60\" returns successfully"
Jul 9 23:46:31.468264 containerd[1905]: time="2025-07-09T23:46:31.468181519Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2dc6afa23a8a7430803c6cc1ae4e6aa0c0b3d266ee544c16f687ae1baa4df60\" id:\"9de560dcf9567256b54bf8d489187ecad802e54ff96e4a7030d233f9df98c618\" pid:4050 exited_at:{seconds:1752104791 nanos:467737280}"
Jul 9 23:46:31.483739 kubelet[3381]: I0709 23:46:31.483697 3381 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jul 9 23:46:31.526884 systemd[1]: Created slice kubepods-burstable-pod38bc82b9_f125_46b9_9526_ac5ebc52c845.slice - libcontainer container kubepods-burstable-pod38bc82b9_f125_46b9_9526_ac5ebc52c845.slice.
Jul 9 23:46:31.536324 systemd[1]: Created slice kubepods-burstable-podef5a29c1_f9c2_42db_993e_598fd473c675.slice - libcontainer container kubepods-burstable-podef5a29c1_f9c2_42db_993e_598fd473c675.slice.
Jul 9 23:46:31.544178 kubelet[3381]: I0709 23:46:31.543707 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwkpm\" (UniqueName: \"kubernetes.io/projected/ef5a29c1-f9c2-42db-993e-598fd473c675-kube-api-access-wwkpm\") pod \"coredns-7c65d6cfc9-cr4gb\" (UID: \"ef5a29c1-f9c2-42db-993e-598fd473c675\") " pod="kube-system/coredns-7c65d6cfc9-cr4gb"
Jul 9 23:46:31.544178 kubelet[3381]: I0709 23:46:31.544138 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg8zv\" (UniqueName: \"kubernetes.io/projected/38bc82b9-f125-46b9-9526-ac5ebc52c845-kube-api-access-wg8zv\") pod \"coredns-7c65d6cfc9-tnjmt\" (UID: \"38bc82b9-f125-46b9-9526-ac5ebc52c845\") " pod="kube-system/coredns-7c65d6cfc9-tnjmt"
Jul 9 23:46:31.544392 kubelet[3381]: I0709 23:46:31.544318 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef5a29c1-f9c2-42db-993e-598fd473c675-config-volume\") pod \"coredns-7c65d6cfc9-cr4gb\" (UID: \"ef5a29c1-f9c2-42db-993e-598fd473c675\") " pod="kube-system/coredns-7c65d6cfc9-cr4gb"
Jul 9 23:46:31.544392 kubelet[3381]: I0709 23:46:31.544344 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38bc82b9-f125-46b9-9526-ac5ebc52c845-config-volume\") pod \"coredns-7c65d6cfc9-tnjmt\" (UID: \"38bc82b9-f125-46b9-9526-ac5ebc52c845\") " pod="kube-system/coredns-7c65d6cfc9-tnjmt"
Jul 9 23:46:31.832347 containerd[1905]: time="2025-07-09T23:46:31.831845286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tnjmt,Uid:38bc82b9-f125-46b9-9526-ac5ebc52c845,Namespace:kube-system,Attempt:0,}"
Jul 9 23:46:31.839442 containerd[1905]: time="2025-07-09T23:46:31.839410697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-cr4gb,Uid:ef5a29c1-f9c2-42db-993e-598fd473c675,Namespace:kube-system,Attempt:0,}"
Jul 9 23:46:32.315786 kubelet[3381]: I0709 23:46:32.313130 3381 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5d7lx" podStartSLOduration=9.121873832 podStartE2EDuration="25.313111031s" podCreationTimestamp="2025-07-09 23:46:07 +0000 UTC" firstStartedPulling="2025-07-09 23:46:09.922432598 +0000 UTC m=+7.851806338" lastFinishedPulling="2025-07-09 23:46:26.113669797 +0000 UTC m=+24.043043537" observedRunningTime="2025-07-09 23:46:32.312329037 +0000 UTC m=+30.241702777" watchObservedRunningTime="2025-07-09 23:46:32.313111031 +0000 UTC m=+30.242484771"
Jul 9 23:46:33.382428 systemd-networkd[1482]: cilium_host: Link UP
Jul 9 23:46:33.383268 systemd-networkd[1482]: cilium_net: Link UP
Jul 9 23:46:33.383386 systemd-networkd[1482]: cilium_net: Gained carrier
Jul 9 23:46:33.383462 systemd-networkd[1482]: cilium_host: Gained carrier
Jul 9 23:46:33.542389 systemd-networkd[1482]: cilium_vxlan: Link UP
Jul 9 23:46:33.542555 systemd-networkd[1482]: cilium_vxlan: Gained carrier
Jul 9 23:46:33.788312 kernel: NET: Registered PF_ALG protocol family
Jul 9 23:46:33.940747 systemd-networkd[1482]: cilium_net: Gained IPv6LL
Jul 9 23:46:34.303136 systemd-networkd[1482]: lxc_health: Link UP
Jul 9 23:46:34.312463 systemd-networkd[1482]: lxc_health: Gained carrier
Jul 9 23:46:34.388753 systemd-networkd[1482]: cilium_host: Gained IPv6LL
Jul 9 23:46:34.864856 systemd-networkd[1482]: lxc69a914943b21: Link UP
Jul 9 23:46:34.876074 kernel: eth0: renamed from tmpb5459
Jul 9 23:46:34.879296 systemd-networkd[1482]: lxc69a914943b21: Gained carrier
Jul 9 23:46:34.883089 systemd-networkd[1482]: lxcf4cf246b10fd: Link UP
Jul 9 23:46:34.894521 kernel: eth0: renamed from tmp3d1ac
Jul 9 23:46:34.898377 systemd-networkd[1482]: lxcf4cf246b10fd: Gained carrier
Jul 9 23:46:35.220697 systemd-networkd[1482]: cilium_vxlan: Gained IPv6LL
Jul 9 23:46:35.732751 systemd-networkd[1482]: lxc_health: Gained IPv6LL
Jul 9 23:46:36.565684 systemd-networkd[1482]: lxcf4cf246b10fd: Gained IPv6LL
Jul 9 23:46:36.628660 systemd-networkd[1482]: lxc69a914943b21: Gained IPv6LL
Jul 9 23:46:37.466155 containerd[1905]: time="2025-07-09T23:46:37.466077970Z" level=info msg="connecting to shim 3d1acd7870da29efb483e8da8529eabac05f92c1c40a861494488510896dcc07" address="unix:///run/containerd/s/ed2c32ea4f3b5329a4af959daeea6bea762799f82ff4047dd7dfbed32fddcad3" namespace=k8s.io protocol=ttrpc version=3
Jul 9 23:46:37.488645 systemd[1]: Started cri-containerd-3d1acd7870da29efb483e8da8529eabac05f92c1c40a861494488510896dcc07.scope - libcontainer container 3d1acd7870da29efb483e8da8529eabac05f92c1c40a861494488510896dcc07.
Jul 9 23:46:37.498332 containerd[1905]: time="2025-07-09T23:46:37.498103192Z" level=info msg="connecting to shim b5459273f1003b55ef9b5be31596d7e9e40cf5dd5a4601e5e59e51aa31a065de" address="unix:///run/containerd/s/523c6ed69be8a062a4bf46059400fec61d6956fd62a7563439bc3efe3746b5bb" namespace=k8s.io protocol=ttrpc version=3
Jul 9 23:46:37.521636 systemd[1]: Started cri-containerd-b5459273f1003b55ef9b5be31596d7e9e40cf5dd5a4601e5e59e51aa31a065de.scope - libcontainer container b5459273f1003b55ef9b5be31596d7e9e40cf5dd5a4601e5e59e51aa31a065de.
Jul 9 23:46:37.529709 containerd[1905]: time="2025-07-09T23:46:37.529655293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-cr4gb,Uid:ef5a29c1-f9c2-42db-993e-598fd473c675,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d1acd7870da29efb483e8da8529eabac05f92c1c40a861494488510896dcc07\""
Jul 9 23:46:37.532891 containerd[1905]: time="2025-07-09T23:46:37.532857955Z" level=info msg="CreateContainer within sandbox \"3d1acd7870da29efb483e8da8529eabac05f92c1c40a861494488510896dcc07\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 9 23:46:37.568087 containerd[1905]: time="2025-07-09T23:46:37.568049109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tnjmt,Uid:38bc82b9-f125-46b9-9526-ac5ebc52c845,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5459273f1003b55ef9b5be31596d7e9e40cf5dd5a4601e5e59e51aa31a065de\""
Jul 9 23:46:37.570547 containerd[1905]: time="2025-07-09T23:46:37.570518689Z" level=info msg="CreateContainer within sandbox \"b5459273f1003b55ef9b5be31596d7e9e40cf5dd5a4601e5e59e51aa31a065de\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 9 23:46:37.584008 containerd[1905]: time="2025-07-09T23:46:37.583977557Z" level=info msg="Container 8cf23e622640ee4dae012ba84365f2e58a6e8cf268b37cc798f6fd263a7629ab: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:46:37.637744 containerd[1905]: time="2025-07-09T23:46:37.637623318Z" level=info msg="CreateContainer within sandbox \"3d1acd7870da29efb483e8da8529eabac05f92c1c40a861494488510896dcc07\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8cf23e622640ee4dae012ba84365f2e58a6e8cf268b37cc798f6fd263a7629ab\""
Jul 9 23:46:37.638183 containerd[1905]: time="2025-07-09T23:46:37.638135351Z" level=info msg="StartContainer for \"8cf23e622640ee4dae012ba84365f2e58a6e8cf268b37cc798f6fd263a7629ab\""
Jul 9 23:46:37.640092 containerd[1905]: time="2025-07-09T23:46:37.640061657Z" level=info msg="connecting to shim 8cf23e622640ee4dae012ba84365f2e58a6e8cf268b37cc798f6fd263a7629ab" address="unix:///run/containerd/s/ed2c32ea4f3b5329a4af959daeea6bea762799f82ff4047dd7dfbed32fddcad3" protocol=ttrpc version=3
Jul 9 23:46:37.643225 containerd[1905]: time="2025-07-09T23:46:37.643201620Z" level=info msg="Container 870d2a81545795f3368852530128f8415f17c9886678cdd4fcdfd1c4aef7abc3: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:46:37.659650 systemd[1]: Started cri-containerd-8cf23e622640ee4dae012ba84365f2e58a6e8cf268b37cc798f6fd263a7629ab.scope - libcontainer container 8cf23e622640ee4dae012ba84365f2e58a6e8cf268b37cc798f6fd263a7629ab.
Jul 9 23:46:37.668194 containerd[1905]: time="2025-07-09T23:46:37.668098543Z" level=info msg="CreateContainer within sandbox \"b5459273f1003b55ef9b5be31596d7e9e40cf5dd5a4601e5e59e51aa31a065de\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"870d2a81545795f3368852530128f8415f17c9886678cdd4fcdfd1c4aef7abc3\""
Jul 9 23:46:37.670528 containerd[1905]: time="2025-07-09T23:46:37.669840442Z" level=info msg="StartContainer for \"870d2a81545795f3368852530128f8415f17c9886678cdd4fcdfd1c4aef7abc3\""
Jul 9 23:46:37.670528 containerd[1905]: time="2025-07-09T23:46:37.670442519Z" level=info msg="connecting to shim 870d2a81545795f3368852530128f8415f17c9886678cdd4fcdfd1c4aef7abc3" address="unix:///run/containerd/s/523c6ed69be8a062a4bf46059400fec61d6956fd62a7563439bc3efe3746b5bb" protocol=ttrpc version=3
Jul 9 23:46:37.692757 systemd[1]: Started cri-containerd-870d2a81545795f3368852530128f8415f17c9886678cdd4fcdfd1c4aef7abc3.scope - libcontainer container 870d2a81545795f3368852530128f8415f17c9886678cdd4fcdfd1c4aef7abc3.
Jul 9 23:46:37.698183 containerd[1905]: time="2025-07-09T23:46:37.697994956Z" level=info msg="StartContainer for \"8cf23e622640ee4dae012ba84365f2e58a6e8cf268b37cc798f6fd263a7629ab\" returns successfully"
Jul 9 23:46:37.741232 containerd[1905]: time="2025-07-09T23:46:37.741123598Z" level=info msg="StartContainer for \"870d2a81545795f3368852530128f8415f17c9886678cdd4fcdfd1c4aef7abc3\" returns successfully"
Jul 9 23:46:38.330061 kubelet[3381]: I0709 23:46:38.329979 3381 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-tnjmt" podStartSLOduration=30.329858422 podStartE2EDuration="30.329858422s" podCreationTimestamp="2025-07-09 23:46:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:46:38.328250319 +0000 UTC m=+36.257624059" watchObservedRunningTime="2025-07-09 23:46:38.329858422 +0000 UTC m=+36.259232170"
Jul 9 23:46:38.463121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount723028504.mount: Deactivated successfully.
Jul 9 23:47:37.000711 systemd[1]: Started sshd@7-10.200.20.40:22-10.200.16.10:32858.service - OpenSSH per-connection server daemon (10.200.16.10:32858).
Jul 9 23:47:37.476059 sshd[4699]: Accepted publickey for core from 10.200.16.10 port 32858 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:47:37.477210 sshd-session[4699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:47:37.482038 systemd-logind[1865]: New session 10 of user core.
Jul 9 23:47:37.491631 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 9 23:47:37.867810 sshd[4702]: Connection closed by 10.200.16.10 port 32858
Jul 9 23:47:37.868335 sshd-session[4699]: pam_unix(sshd:session): session closed for user core
Jul 9 23:47:37.871570 systemd[1]: sshd@7-10.200.20.40:22-10.200.16.10:32858.service: Deactivated successfully.
Jul 9 23:47:37.873107 systemd[1]: session-10.scope: Deactivated successfully.
Jul 9 23:47:37.873804 systemd-logind[1865]: Session 10 logged out. Waiting for processes to exit.
Jul 9 23:47:37.875181 systemd-logind[1865]: Removed session 10.
Jul 9 23:47:42.957032 systemd[1]: Started sshd@8-10.200.20.40:22-10.200.16.10:53028.service - OpenSSH per-connection server daemon (10.200.16.10:53028).
Jul 9 23:47:43.410912 sshd[4716]: Accepted publickey for core from 10.200.16.10 port 53028 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:47:43.412011 sshd-session[4716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:47:43.415781 systemd-logind[1865]: New session 11 of user core.
Jul 9 23:47:43.421619 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 9 23:47:43.796635 sshd[4718]: Connection closed by 10.200.16.10 port 53028
Jul 9 23:47:43.796983 sshd-session[4716]: pam_unix(sshd:session): session closed for user core
Jul 9 23:47:43.799359 systemd[1]: sshd@8-10.200.20.40:22-10.200.16.10:53028.service: Deactivated successfully.
Jul 9 23:47:43.802777 systemd[1]: session-11.scope: Deactivated successfully.
Jul 9 23:47:43.805253 systemd-logind[1865]: Session 11 logged out. Waiting for processes to exit.
Jul 9 23:47:43.807001 systemd-logind[1865]: Removed session 11.
Jul 9 23:47:48.894695 systemd[1]: Started sshd@9-10.200.20.40:22-10.200.16.10:53034.service - OpenSSH per-connection server daemon (10.200.16.10:53034).
Jul 9 23:47:49.387382 sshd[4730]: Accepted publickey for core from 10.200.16.10 port 53034 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:47:49.388534 sshd-session[4730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:47:49.392558 systemd-logind[1865]: New session 12 of user core.
Jul 9 23:47:49.401624 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 9 23:47:49.789052 sshd[4732]: Connection closed by 10.200.16.10 port 53034
Jul 9 23:47:49.789430 sshd-session[4730]: pam_unix(sshd:session): session closed for user core
Jul 9 23:47:49.792528 systemd-logind[1865]: Session 12 logged out. Waiting for processes to exit.
Jul 9 23:47:49.793321 systemd[1]: sshd@9-10.200.20.40:22-10.200.16.10:53034.service: Deactivated successfully.
Jul 9 23:47:49.795956 systemd[1]: session-12.scope: Deactivated successfully.
Jul 9 23:47:49.799256 systemd-logind[1865]: Removed session 12.
Jul 9 23:47:54.875466 systemd[1]: Started sshd@10-10.200.20.40:22-10.200.16.10:37438.service - OpenSSH per-connection server daemon (10.200.16.10:37438).
Jul 9 23:47:55.348779 sshd[4744]: Accepted publickey for core from 10.200.16.10 port 37438 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:47:55.349916 sshd-session[4744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:47:55.353855 systemd-logind[1865]: New session 13 of user core.
Jul 9 23:47:55.357634 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 9 23:47:55.727047 sshd[4746]: Connection closed by 10.200.16.10 port 37438
Jul 9 23:47:55.726589 sshd-session[4744]: pam_unix(sshd:session): session closed for user core
Jul 9 23:47:55.729622 systemd[1]: sshd@10-10.200.20.40:22-10.200.16.10:37438.service: Deactivated successfully.
Jul 9 23:47:55.731787 systemd[1]: session-13.scope: Deactivated successfully.
Jul 9 23:47:55.735675 systemd-logind[1865]: Session 13 logged out. Waiting for processes to exit.
Jul 9 23:47:55.736803 systemd-logind[1865]: Removed session 13.
Jul 9 23:47:55.824685 systemd[1]: Started sshd@11-10.200.20.40:22-10.200.16.10:37442.service - OpenSSH per-connection server daemon (10.200.16.10:37442).
Jul 9 23:47:56.320410 sshd[4758]: Accepted publickey for core from 10.200.16.10 port 37442 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:47:56.321582 sshd-session[4758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:47:56.325889 systemd-logind[1865]: New session 14 of user core.
Jul 9 23:47:56.328616 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 9 23:47:56.735643 sshd[4760]: Connection closed by 10.200.16.10 port 37442
Jul 9 23:47:56.735549 sshd-session[4758]: pam_unix(sshd:session): session closed for user core
Jul 9 23:47:56.738607 systemd[1]: sshd@11-10.200.20.40:22-10.200.16.10:37442.service: Deactivated successfully.
Jul 9 23:47:56.740025 systemd[1]: session-14.scope: Deactivated successfully.
Jul 9 23:47:56.741464 systemd-logind[1865]: Session 14 logged out. Waiting for processes to exit.
Jul 9 23:47:56.743145 systemd-logind[1865]: Removed session 14.
Jul 9 23:47:56.811326 systemd[1]: Started sshd@12-10.200.20.40:22-10.200.16.10:37444.service - OpenSSH per-connection server daemon (10.200.16.10:37444).
Jul 9 23:47:57.248339 sshd[4770]: Accepted publickey for core from 10.200.16.10 port 37444 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:47:57.249461 sshd-session[4770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:47:57.253173 systemd-logind[1865]: New session 15 of user core.
Jul 9 23:47:57.259610 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 9 23:47:57.617514 sshd[4772]: Connection closed by 10.200.16.10 port 37444
Jul 9 23:47:57.621583 sshd-session[4770]: pam_unix(sshd:session): session closed for user core
Jul 9 23:47:57.624772 systemd[1]: sshd@12-10.200.20.40:22-10.200.16.10:37444.service: Deactivated successfully.
Jul 9 23:47:57.627356 systemd[1]: session-15.scope: Deactivated successfully.
Jul 9 23:47:57.628193 systemd-logind[1865]: Session 15 logged out.
Waiting for processes to exit. Jul 9 23:47:57.629379 systemd-logind[1865]: Removed session 15. Jul 9 23:48:02.707317 systemd[1]: Started sshd@13-10.200.20.40:22-10.200.16.10:55158.service - OpenSSH per-connection server daemon (10.200.16.10:55158). Jul 9 23:48:03.187184 sshd[4785]: Accepted publickey for core from 10.200.16.10 port 55158 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:48:03.188261 sshd-session[4785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:03.192311 systemd-logind[1865]: New session 16 of user core. Jul 9 23:48:03.198627 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 9 23:48:03.569935 sshd[4787]: Connection closed by 10.200.16.10 port 55158 Jul 9 23:48:03.569362 sshd-session[4785]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:03.572622 systemd[1]: sshd@13-10.200.20.40:22-10.200.16.10:55158.service: Deactivated successfully. Jul 9 23:48:03.574044 systemd[1]: session-16.scope: Deactivated successfully. Jul 9 23:48:03.575526 systemd-logind[1865]: Session 16 logged out. Waiting for processes to exit. Jul 9 23:48:03.577456 systemd-logind[1865]: Removed session 16. Jul 9 23:48:03.662697 systemd[1]: Started sshd@14-10.200.20.40:22-10.200.16.10:55160.service - OpenSSH per-connection server daemon (10.200.16.10:55160). Jul 9 23:48:04.145813 sshd[4799]: Accepted publickey for core from 10.200.16.10 port 55160 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:48:04.146876 sshd-session[4799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:04.150759 systemd-logind[1865]: New session 17 of user core. Jul 9 23:48:04.156618 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jul 9 23:48:04.563199 sshd[4801]: Connection closed by 10.200.16.10 port 55160 Jul 9 23:48:04.562360 sshd-session[4799]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:04.565387 systemd[1]: sshd@14-10.200.20.40:22-10.200.16.10:55160.service: Deactivated successfully. Jul 9 23:48:04.567675 systemd[1]: session-17.scope: Deactivated successfully. Jul 9 23:48:04.568624 systemd-logind[1865]: Session 17 logged out. Waiting for processes to exit. Jul 9 23:48:04.570370 systemd-logind[1865]: Removed session 17. Jul 9 23:48:04.650821 systemd[1]: Started sshd@15-10.200.20.40:22-10.200.16.10:55170.service - OpenSSH per-connection server daemon (10.200.16.10:55170). Jul 9 23:48:05.142426 sshd[4811]: Accepted publickey for core from 10.200.16.10 port 55170 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:48:05.143655 sshd-session[4811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:05.147304 systemd-logind[1865]: New session 18 of user core. Jul 9 23:48:05.154728 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 9 23:48:06.571535 sshd[4813]: Connection closed by 10.200.16.10 port 55170 Jul 9 23:48:06.571849 sshd-session[4811]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:06.575606 systemd-logind[1865]: Session 18 logged out. Waiting for processes to exit. Jul 9 23:48:06.576135 systemd[1]: sshd@15-10.200.20.40:22-10.200.16.10:55170.service: Deactivated successfully. Jul 9 23:48:06.580173 systemd[1]: session-18.scope: Deactivated successfully. Jul 9 23:48:06.580761 systemd[1]: session-18.scope: Consumed 275ms CPU time, 68.9M memory peak. Jul 9 23:48:06.582865 systemd-logind[1865]: Removed session 18. Jul 9 23:48:06.662717 systemd[1]: Started sshd@16-10.200.20.40:22-10.200.16.10:55180.service - OpenSSH per-connection server daemon (10.200.16.10:55180). 
Jul 9 23:48:07.152962 sshd[4830]: Accepted publickey for core from 10.200.16.10 port 55180 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:48:07.154086 sshd-session[4830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:07.158058 systemd-logind[1865]: New session 19 of user core. Jul 9 23:48:07.169613 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 9 23:48:07.629531 sshd[4832]: Connection closed by 10.200.16.10 port 55180 Jul 9 23:48:07.630690 sshd-session[4830]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:07.633657 systemd[1]: sshd@16-10.200.20.40:22-10.200.16.10:55180.service: Deactivated successfully. Jul 9 23:48:07.635159 systemd[1]: session-19.scope: Deactivated successfully. Jul 9 23:48:07.635977 systemd-logind[1865]: Session 19 logged out. Waiting for processes to exit. Jul 9 23:48:07.637213 systemd-logind[1865]: Removed session 19. Jul 9 23:48:07.711565 systemd[1]: Started sshd@17-10.200.20.40:22-10.200.16.10:55196.service - OpenSSH per-connection server daemon (10.200.16.10:55196). Jul 9 23:48:08.190060 sshd[4841]: Accepted publickey for core from 10.200.16.10 port 55196 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:48:08.192788 sshd-session[4841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:08.196720 systemd-logind[1865]: New session 20 of user core. Jul 9 23:48:08.206651 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 9 23:48:08.568152 sshd[4844]: Connection closed by 10.200.16.10 port 55196 Jul 9 23:48:08.568961 sshd-session[4841]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:08.571982 systemd[1]: sshd@17-10.200.20.40:22-10.200.16.10:55196.service: Deactivated successfully. Jul 9 23:48:08.573720 systemd[1]: session-20.scope: Deactivated successfully. Jul 9 23:48:08.575050 systemd-logind[1865]: Session 20 logged out. 
Waiting for processes to exit. Jul 9 23:48:08.576459 systemd-logind[1865]: Removed session 20. Jul 9 23:48:13.664466 systemd[1]: Started sshd@18-10.200.20.40:22-10.200.16.10:50482.service - OpenSSH per-connection server daemon (10.200.16.10:50482). Jul 9 23:48:14.161578 sshd[4862]: Accepted publickey for core from 10.200.16.10 port 50482 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:48:14.162678 sshd-session[4862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:14.166303 systemd-logind[1865]: New session 21 of user core. Jul 9 23:48:14.172624 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 9 23:48:14.561623 sshd[4864]: Connection closed by 10.200.16.10 port 50482 Jul 9 23:48:14.562348 sshd-session[4862]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:14.565786 systemd[1]: sshd@18-10.200.20.40:22-10.200.16.10:50482.service: Deactivated successfully. Jul 9 23:48:14.567378 systemd[1]: session-21.scope: Deactivated successfully. Jul 9 23:48:14.568089 systemd-logind[1865]: Session 21 logged out. Waiting for processes to exit. Jul 9 23:48:14.569598 systemd-logind[1865]: Removed session 21. Jul 9 23:48:19.663692 systemd[1]: Started sshd@19-10.200.20.40:22-10.200.16.10:43662.service - OpenSSH per-connection server daemon (10.200.16.10:43662). Jul 9 23:48:20.159698 sshd[4876]: Accepted publickey for core from 10.200.16.10 port 43662 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:48:20.160740 sshd-session[4876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:20.164597 systemd-logind[1865]: New session 22 of user core. Jul 9 23:48:20.169698 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jul 9 23:48:20.551204 sshd[4878]: Connection closed by 10.200.16.10 port 43662 Jul 9 23:48:20.551889 sshd-session[4876]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:20.555170 systemd[1]: sshd@19-10.200.20.40:22-10.200.16.10:43662.service: Deactivated successfully. Jul 9 23:48:20.557831 systemd[1]: session-22.scope: Deactivated successfully. Jul 9 23:48:20.558597 systemd-logind[1865]: Session 22 logged out. Waiting for processes to exit. Jul 9 23:48:20.559698 systemd-logind[1865]: Removed session 22. Jul 9 23:48:25.641421 systemd[1]: Started sshd@20-10.200.20.40:22-10.200.16.10:43664.service - OpenSSH per-connection server daemon (10.200.16.10:43664). Jul 9 23:48:26.138314 sshd[4889]: Accepted publickey for core from 10.200.16.10 port 43664 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:48:26.139397 sshd-session[4889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:26.143133 systemd-logind[1865]: New session 23 of user core. Jul 9 23:48:26.152623 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 9 23:48:26.537318 sshd[4891]: Connection closed by 10.200.16.10 port 43664 Jul 9 23:48:26.537884 sshd-session[4889]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:26.540876 systemd[1]: sshd@20-10.200.20.40:22-10.200.16.10:43664.service: Deactivated successfully. Jul 9 23:48:26.542217 systemd[1]: session-23.scope: Deactivated successfully. Jul 9 23:48:26.543255 systemd-logind[1865]: Session 23 logged out. Waiting for processes to exit. Jul 9 23:48:26.544363 systemd-logind[1865]: Removed session 23. Jul 9 23:48:26.629315 systemd[1]: Started sshd@21-10.200.20.40:22-10.200.16.10:43672.service - OpenSSH per-connection server daemon (10.200.16.10:43672). 
Jul 9 23:48:27.124407 sshd[4902]: Accepted publickey for core from 10.200.16.10 port 43672 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:48:27.125516 sshd-session[4902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:27.129216 systemd-logind[1865]: New session 24 of user core. Jul 9 23:48:27.137817 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 9 23:48:28.671119 kubelet[3381]: I0709 23:48:28.671049 3381 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-cr4gb" podStartSLOduration=140.67097188 podStartE2EDuration="2m20.67097188s" podCreationTimestamp="2025-07-09 23:46:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:46:38.362224752 +0000 UTC m=+36.291598492" watchObservedRunningTime="2025-07-09 23:48:28.67097188 +0000 UTC m=+146.600345620" Jul 9 23:48:28.689892 containerd[1905]: time="2025-07-09T23:48:28.689855475Z" level=info msg="StopContainer for \"f8cf4ff4dca75a33df3fdf6d387ff3773cdb3092ef54536d51ec29dc02e293c8\" with timeout 30 (s)" Jul 9 23:48:28.690806 containerd[1905]: time="2025-07-09T23:48:28.690779059Z" level=info msg="Stop container \"f8cf4ff4dca75a33df3fdf6d387ff3773cdb3092ef54536d51ec29dc02e293c8\" with signal terminated" Jul 9 23:48:28.697088 containerd[1905]: time="2025-07-09T23:48:28.697050924Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 9 23:48:28.701642 containerd[1905]: time="2025-07-09T23:48:28.701615241Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2dc6afa23a8a7430803c6cc1ae4e6aa0c0b3d266ee544c16f687ae1baa4df60\" 
id:\"8d005c4d93747653049fbf5c1bc63a0beda68a10914809cb32686a3a1619b089\" pid:4924 exited_at:{seconds:1752104908 nanos:701250573}" Jul 9 23:48:28.702766 systemd[1]: cri-containerd-f8cf4ff4dca75a33df3fdf6d387ff3773cdb3092ef54536d51ec29dc02e293c8.scope: Deactivated successfully. Jul 9 23:48:28.704960 containerd[1905]: time="2025-07-09T23:48:28.704724540Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f8cf4ff4dca75a33df3fdf6d387ff3773cdb3092ef54536d51ec29dc02e293c8\" id:\"f8cf4ff4dca75a33df3fdf6d387ff3773cdb3092ef54536d51ec29dc02e293c8\" pid:3779 exited_at:{seconds:1752104908 nanos:704374664}" Jul 9 23:48:28.704960 containerd[1905]: time="2025-07-09T23:48:28.704780206Z" level=info msg="received exit event container_id:\"f8cf4ff4dca75a33df3fdf6d387ff3773cdb3092ef54536d51ec29dc02e293c8\" id:\"f8cf4ff4dca75a33df3fdf6d387ff3773cdb3092ef54536d51ec29dc02e293c8\" pid:3779 exited_at:{seconds:1752104908 nanos:704374664}" Jul 9 23:48:28.705406 containerd[1905]: time="2025-07-09T23:48:28.705384035Z" level=info msg="StopContainer for \"f2dc6afa23a8a7430803c6cc1ae4e6aa0c0b3d266ee544c16f687ae1baa4df60\" with timeout 2 (s)" Jul 9 23:48:28.705653 containerd[1905]: time="2025-07-09T23:48:28.705629980Z" level=info msg="Stop container \"f2dc6afa23a8a7430803c6cc1ae4e6aa0c0b3d266ee544c16f687ae1baa4df60\" with signal terminated" Jul 9 23:48:28.712548 systemd-networkd[1482]: lxc_health: Link DOWN Jul 9 23:48:28.713080 systemd-networkd[1482]: lxc_health: Lost carrier Jul 9 23:48:28.729408 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8cf4ff4dca75a33df3fdf6d387ff3773cdb3092ef54536d51ec29dc02e293c8-rootfs.mount: Deactivated successfully. Jul 9 23:48:28.730559 systemd[1]: cri-containerd-f2dc6afa23a8a7430803c6cc1ae4e6aa0c0b3d266ee544c16f687ae1baa4df60.scope: Deactivated successfully. 
Jul 9 23:48:28.731687 systemd[1]: cri-containerd-f2dc6afa23a8a7430803c6cc1ae4e6aa0c0b3d266ee544c16f687ae1baa4df60.scope: Consumed 4.414s CPU time, 121.6M memory peak, 128K read from disk, 12.9M written to disk. Jul 9 23:48:28.732843 containerd[1905]: time="2025-07-09T23:48:28.732814622Z" level=info msg="received exit event container_id:\"f2dc6afa23a8a7430803c6cc1ae4e6aa0c0b3d266ee544c16f687ae1baa4df60\" id:\"f2dc6afa23a8a7430803c6cc1ae4e6aa0c0b3d266ee544c16f687ae1baa4df60\" pid:4017 exited_at:{seconds:1752104908 nanos:732580478}" Jul 9 23:48:28.733121 containerd[1905]: time="2025-07-09T23:48:28.733097640Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2dc6afa23a8a7430803c6cc1ae4e6aa0c0b3d266ee544c16f687ae1baa4df60\" id:\"f2dc6afa23a8a7430803c6cc1ae4e6aa0c0b3d266ee544c16f687ae1baa4df60\" pid:4017 exited_at:{seconds:1752104908 nanos:732580478}" Jul 9 23:48:28.749963 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2dc6afa23a8a7430803c6cc1ae4e6aa0c0b3d266ee544c16f687ae1baa4df60-rootfs.mount: Deactivated successfully. 
Jul 9 23:48:28.783177 containerd[1905]: time="2025-07-09T23:48:28.782941432Z" level=info msg="StopContainer for \"f8cf4ff4dca75a33df3fdf6d387ff3773cdb3092ef54536d51ec29dc02e293c8\" returns successfully" Jul 9 23:48:28.783177 containerd[1905]: time="2025-07-09T23:48:28.783135390Z" level=info msg="StopContainer for \"f2dc6afa23a8a7430803c6cc1ae4e6aa0c0b3d266ee544c16f687ae1baa4df60\" returns successfully" Jul 9 23:48:28.784050 containerd[1905]: time="2025-07-09T23:48:28.783832414Z" level=info msg="StopPodSandbox for \"9a8dd2bb8de2be0ff5e34aa7d317be6fdad94e12769eeb494b18a3867d803757\"" Jul 9 23:48:28.784050 containerd[1905]: time="2025-07-09T23:48:28.783885632Z" level=info msg="Container to stop \"b96544f5e55e3de1912f7ee267b7c327d2f64af7971c5fefbe2d8b3fa5fa20c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 23:48:28.784050 containerd[1905]: time="2025-07-09T23:48:28.783894280Z" level=info msg="Container to stop \"804a16e97e896988e0ee03efe2106eb52e5c1f3e2ab928c063704bc720d430f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 23:48:28.784050 containerd[1905]: time="2025-07-09T23:48:28.783902001Z" level=info msg="Container to stop \"f2dc6afa23a8a7430803c6cc1ae4e6aa0c0b3d266ee544c16f687ae1baa4df60\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 23:48:28.784050 containerd[1905]: time="2025-07-09T23:48:28.783909905Z" level=info msg="Container to stop \"51cf16359cc442f534529be49f8b817c80b711bff714d09e52be4927607640d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 23:48:28.784050 containerd[1905]: time="2025-07-09T23:48:28.783918705Z" level=info msg="Container to stop \"6dbe85701104dcb789450f93c5a25c6b9ec2bcb5d02d2ca782991c40bbe4521c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 23:48:28.784050 containerd[1905]: time="2025-07-09T23:48:28.783831238Z" level=info msg="StopPodSandbox for 
\"a20e9af7ba30940da76573b3e284de2327689504262fe41387abe5e30b7d0067\"" Jul 9 23:48:28.784050 containerd[1905]: time="2025-07-09T23:48:28.784020205Z" level=info msg="Container to stop \"f8cf4ff4dca75a33df3fdf6d387ff3773cdb3092ef54536d51ec29dc02e293c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 23:48:28.788827 systemd[1]: cri-containerd-9a8dd2bb8de2be0ff5e34aa7d317be6fdad94e12769eeb494b18a3867d803757.scope: Deactivated successfully. Jul 9 23:48:28.791350 systemd[1]: cri-containerd-a20e9af7ba30940da76573b3e284de2327689504262fe41387abe5e30b7d0067.scope: Deactivated successfully. Jul 9 23:48:28.791934 containerd[1905]: time="2025-07-09T23:48:28.791836419Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9a8dd2bb8de2be0ff5e34aa7d317be6fdad94e12769eeb494b18a3867d803757\" id:\"9a8dd2bb8de2be0ff5e34aa7d317be6fdad94e12769eeb494b18a3867d803757\" pid:3735 exit_status:137 exited_at:{seconds:1752104908 nanos:791081688}" Jul 9 23:48:28.811806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a20e9af7ba30940da76573b3e284de2327689504262fe41387abe5e30b7d0067-rootfs.mount: Deactivated successfully. Jul 9 23:48:28.817002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a8dd2bb8de2be0ff5e34aa7d317be6fdad94e12769eeb494b18a3867d803757-rootfs.mount: Deactivated successfully. 
Jul 9 23:48:28.829074 containerd[1905]: time="2025-07-09T23:48:28.828999053Z" level=info msg="received exit event sandbox_id:\"9a8dd2bb8de2be0ff5e34aa7d317be6fdad94e12769eeb494b18a3867d803757\" exit_status:137 exited_at:{seconds:1752104908 nanos:791081688}" Jul 9 23:48:28.829505 containerd[1905]: time="2025-07-09T23:48:28.829474165Z" level=info msg="shim disconnected" id=9a8dd2bb8de2be0ff5e34aa7d317be6fdad94e12769eeb494b18a3867d803757 namespace=k8s.io Jul 9 23:48:28.830387 containerd[1905]: time="2025-07-09T23:48:28.830221711Z" level=warning msg="cleaning up after shim disconnected" id=9a8dd2bb8de2be0ff5e34aa7d317be6fdad94e12769eeb494b18a3867d803757 namespace=k8s.io Jul 9 23:48:28.830387 containerd[1905]: time="2025-07-09T23:48:28.830262872Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 23:48:28.833437 containerd[1905]: time="2025-07-09T23:48:28.832621906Z" level=info msg="shim disconnected" id=a20e9af7ba30940da76573b3e284de2327689504262fe41387abe5e30b7d0067 namespace=k8s.io Jul 9 23:48:28.833437 containerd[1905]: time="2025-07-09T23:48:28.832644323Z" level=warning msg="cleaning up after shim disconnected" id=a20e9af7ba30940da76573b3e284de2327689504262fe41387abe5e30b7d0067 namespace=k8s.io Jul 9 23:48:28.833437 containerd[1905]: time="2025-07-09T23:48:28.832684124Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 23:48:28.833070 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9a8dd2bb8de2be0ff5e34aa7d317be6fdad94e12769eeb494b18a3867d803757-shm.mount: Deactivated successfully. 
Jul 9 23:48:28.834699 containerd[1905]: time="2025-07-09T23:48:28.833969400Z" level=info msg="TearDown network for sandbox \"9a8dd2bb8de2be0ff5e34aa7d317be6fdad94e12769eeb494b18a3867d803757\" successfully" Jul 9 23:48:28.834699 containerd[1905]: time="2025-07-09T23:48:28.833986185Z" level=info msg="StopPodSandbox for \"9a8dd2bb8de2be0ff5e34aa7d317be6fdad94e12769eeb494b18a3867d803757\" returns successfully" Jul 9 23:48:28.850313 containerd[1905]: time="2025-07-09T23:48:28.850273763Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a20e9af7ba30940da76573b3e284de2327689504262fe41387abe5e30b7d0067\" id:\"a20e9af7ba30940da76573b3e284de2327689504262fe41387abe5e30b7d0067\" pid:3689 exit_status:137 exited_at:{seconds:1752104908 nanos:792912744}" Jul 9 23:48:28.850768 containerd[1905]: time="2025-07-09T23:48:28.850744051Z" level=info msg="TearDown network for sandbox \"a20e9af7ba30940da76573b3e284de2327689504262fe41387abe5e30b7d0067\" successfully" Jul 9 23:48:28.850877 containerd[1905]: time="2025-07-09T23:48:28.850859351Z" level=info msg="StopPodSandbox for \"a20e9af7ba30940da76573b3e284de2327689504262fe41387abe5e30b7d0067\" returns successfully" Jul 9 23:48:28.851145 containerd[1905]: time="2025-07-09T23:48:28.851116960Z" level=info msg="received exit event sandbox_id:\"a20e9af7ba30940da76573b3e284de2327689504262fe41387abe5e30b7d0067\" exit_status:137 exited_at:{seconds:1752104908 nanos:792912744}" Jul 9 23:48:29.004338 kubelet[3381]: I0709 23:48:29.004204 3381 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0654bc50-8b19-462c-8cd1-7cd980bf5074-cilium-config-path\") pod \"0654bc50-8b19-462c-8cd1-7cd980bf5074\" (UID: \"0654bc50-8b19-462c-8cd1-7cd980bf5074\") " Jul 9 23:48:29.004338 kubelet[3381]: I0709 23:48:29.004245 3381 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-cilium-run\") pod \"0654bc50-8b19-462c-8cd1-7cd980bf5074\" (UID: \"0654bc50-8b19-462c-8cd1-7cd980bf5074\") " Jul 9 23:48:29.004338 kubelet[3381]: I0709 23:48:29.004261 3381 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-etc-cni-netd\") pod \"0654bc50-8b19-462c-8cd1-7cd980bf5074\" (UID: \"0654bc50-8b19-462c-8cd1-7cd980bf5074\") " Jul 9 23:48:29.004338 kubelet[3381]: I0709 23:48:29.004277 3381 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj6c8\" (UniqueName: \"kubernetes.io/projected/bb5d4317-56cd-4f56-a0db-75b5b9e53b4a-kube-api-access-sj6c8\") pod \"bb5d4317-56cd-4f56-a0db-75b5b9e53b4a\" (UID: \"bb5d4317-56cd-4f56-a0db-75b5b9e53b4a\") " Jul 9 23:48:29.004338 kubelet[3381]: I0709 23:48:29.004287 3381 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-lib-modules\") pod \"0654bc50-8b19-462c-8cd1-7cd980bf5074\" (UID: \"0654bc50-8b19-462c-8cd1-7cd980bf5074\") " Jul 9 23:48:29.004338 kubelet[3381]: I0709 23:48:29.004296 3381 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-xtables-lock\") pod \"0654bc50-8b19-462c-8cd1-7cd980bf5074\" (UID: \"0654bc50-8b19-462c-8cd1-7cd980bf5074\") " Jul 9 23:48:29.005654 kubelet[3381]: I0709 23:48:29.004306 3381 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0654bc50-8b19-462c-8cd1-7cd980bf5074-hubble-tls\") pod \"0654bc50-8b19-462c-8cd1-7cd980bf5074\" (UID: \"0654bc50-8b19-462c-8cd1-7cd980bf5074\") " Jul 9 23:48:29.005654 kubelet[3381]: I0709 23:48:29.004316 3381 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-cni-path\") pod \"0654bc50-8b19-462c-8cd1-7cd980bf5074\" (UID: \"0654bc50-8b19-462c-8cd1-7cd980bf5074\") " Jul 9 23:48:29.005654 kubelet[3381]: I0709 23:48:29.004347 3381 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9v9qm\" (UniqueName: \"kubernetes.io/projected/0654bc50-8b19-462c-8cd1-7cd980bf5074-kube-api-access-9v9qm\") pod \"0654bc50-8b19-462c-8cd1-7cd980bf5074\" (UID: \"0654bc50-8b19-462c-8cd1-7cd980bf5074\") " Jul 9 23:48:29.005654 kubelet[3381]: I0709 23:48:29.004367 3381 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-cilium-cgroup\") pod \"0654bc50-8b19-462c-8cd1-7cd980bf5074\" (UID: \"0654bc50-8b19-462c-8cd1-7cd980bf5074\") " Jul 9 23:48:29.005654 kubelet[3381]: I0709 23:48:29.004378 3381 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bb5d4317-56cd-4f56-a0db-75b5b9e53b4a-cilium-config-path\") pod \"bb5d4317-56cd-4f56-a0db-75b5b9e53b4a\" (UID: \"bb5d4317-56cd-4f56-a0db-75b5b9e53b4a\") " Jul 9 23:48:29.005654 kubelet[3381]: I0709 23:48:29.004389 3381 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0654bc50-8b19-462c-8cd1-7cd980bf5074-clustermesh-secrets\") pod \"0654bc50-8b19-462c-8cd1-7cd980bf5074\" (UID: \"0654bc50-8b19-462c-8cd1-7cd980bf5074\") " Jul 9 23:48:29.005752 kubelet[3381]: I0709 23:48:29.004398 3381 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-bpf-maps\") pod \"0654bc50-8b19-462c-8cd1-7cd980bf5074\" (UID: 
\"0654bc50-8b19-462c-8cd1-7cd980bf5074\") " Jul 9 23:48:29.005752 kubelet[3381]: I0709 23:48:29.004407 3381 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-hostproc\") pod \"0654bc50-8b19-462c-8cd1-7cd980bf5074\" (UID: \"0654bc50-8b19-462c-8cd1-7cd980bf5074\") " Jul 9 23:48:29.005752 kubelet[3381]: I0709 23:48:29.004416 3381 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-host-proc-sys-net\") pod \"0654bc50-8b19-462c-8cd1-7cd980bf5074\" (UID: \"0654bc50-8b19-462c-8cd1-7cd980bf5074\") " Jul 9 23:48:29.005752 kubelet[3381]: I0709 23:48:29.004427 3381 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-host-proc-sys-kernel\") pod \"0654bc50-8b19-462c-8cd1-7cd980bf5074\" (UID: \"0654bc50-8b19-462c-8cd1-7cd980bf5074\") " Jul 9 23:48:29.005752 kubelet[3381]: I0709 23:48:29.004519 3381 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0654bc50-8b19-462c-8cd1-7cd980bf5074" (UID: "0654bc50-8b19-462c-8cd1-7cd980bf5074"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 9 23:48:29.005826 kubelet[3381]: I0709 23:48:29.005018 3381 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-cni-path" (OuterVolumeSpecName: "cni-path") pod "0654bc50-8b19-462c-8cd1-7cd980bf5074" (UID: "0654bc50-8b19-462c-8cd1-7cd980bf5074"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 9 23:48:29.005826 kubelet[3381]: I0709 23:48:29.005051 3381 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0654bc50-8b19-462c-8cd1-7cd980bf5074" (UID: "0654bc50-8b19-462c-8cd1-7cd980bf5074"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 9 23:48:29.005826 kubelet[3381]: I0709 23:48:29.005061 3381 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0654bc50-8b19-462c-8cd1-7cd980bf5074" (UID: "0654bc50-8b19-462c-8cd1-7cd980bf5074"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 9 23:48:29.005826 kubelet[3381]: I0709 23:48:29.005742 3381 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0654bc50-8b19-462c-8cd1-7cd980bf5074-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0654bc50-8b19-462c-8cd1-7cd980bf5074" (UID: "0654bc50-8b19-462c-8cd1-7cd980bf5074"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 9 23:48:29.006477 kubelet[3381]: I0709 23:48:29.006275 3381 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0654bc50-8b19-462c-8cd1-7cd980bf5074" (UID: "0654bc50-8b19-462c-8cd1-7cd980bf5074"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 9 23:48:29.006477 kubelet[3381]: I0709 23:48:29.006305 3381 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0654bc50-8b19-462c-8cd1-7cd980bf5074" (UID: "0654bc50-8b19-462c-8cd1-7cd980bf5074"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 9 23:48:29.007357 kubelet[3381]: I0709 23:48:29.007333 3381 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0654bc50-8b19-462c-8cd1-7cd980bf5074" (UID: "0654bc50-8b19-462c-8cd1-7cd980bf5074"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 9 23:48:29.007691 kubelet[3381]: I0709 23:48:29.007669 3381 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0654bc50-8b19-462c-8cd1-7cd980bf5074" (UID: "0654bc50-8b19-462c-8cd1-7cd980bf5074"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 9 23:48:29.008524 kubelet[3381]: I0709 23:48:29.007949 3381 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-hostproc" (OuterVolumeSpecName: "hostproc") pod "0654bc50-8b19-462c-8cd1-7cd980bf5074" (UID: "0654bc50-8b19-462c-8cd1-7cd980bf5074"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 9 23:48:29.008524 kubelet[3381]: I0709 23:48:29.007977 3381 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0654bc50-8b19-462c-8cd1-7cd980bf5074" (UID: "0654bc50-8b19-462c-8cd1-7cd980bf5074"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 9 23:48:29.008524 kubelet[3381]: I0709 23:48:29.008372 3381 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0654bc50-8b19-462c-8cd1-7cd980bf5074-kube-api-access-9v9qm" (OuterVolumeSpecName: "kube-api-access-9v9qm") pod "0654bc50-8b19-462c-8cd1-7cd980bf5074" (UID: "0654bc50-8b19-462c-8cd1-7cd980bf5074"). InnerVolumeSpecName "kube-api-access-9v9qm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 9 23:48:29.009820 kubelet[3381]: I0709 23:48:29.009792 3381 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb5d4317-56cd-4f56-a0db-75b5b9e53b4a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bb5d4317-56cd-4f56-a0db-75b5b9e53b4a" (UID: "bb5d4317-56cd-4f56-a0db-75b5b9e53b4a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 9 23:48:29.010433 kubelet[3381]: I0709 23:48:29.010411 3381 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0654bc50-8b19-462c-8cd1-7cd980bf5074-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0654bc50-8b19-462c-8cd1-7cd980bf5074" (UID: "0654bc50-8b19-462c-8cd1-7cd980bf5074"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 9 23:48:29.010629 kubelet[3381]: I0709 23:48:29.010614 3381 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb5d4317-56cd-4f56-a0db-75b5b9e53b4a-kube-api-access-sj6c8" (OuterVolumeSpecName: "kube-api-access-sj6c8") pod "bb5d4317-56cd-4f56-a0db-75b5b9e53b4a" (UID: "bb5d4317-56cd-4f56-a0db-75b5b9e53b4a"). InnerVolumeSpecName "kube-api-access-sj6c8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 9 23:48:29.011071 kubelet[3381]: I0709 23:48:29.011044 3381 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0654bc50-8b19-462c-8cd1-7cd980bf5074-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0654bc50-8b19-462c-8cd1-7cd980bf5074" (UID: "0654bc50-8b19-462c-8cd1-7cd980bf5074"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 9 23:48:29.104984 kubelet[3381]: I0709 23:48:29.104944 3381 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-bpf-maps\") on node \"ci-4344.1.1-n-bbe652f90c\" DevicePath \"\"" Jul 9 23:48:29.104984 kubelet[3381]: I0709 23:48:29.104978 3381 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-hostproc\") on node \"ci-4344.1.1-n-bbe652f90c\" DevicePath \"\"" Jul 9 23:48:29.104984 kubelet[3381]: I0709 23:48:29.104986 3381 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-host-proc-sys-net\") on node \"ci-4344.1.1-n-bbe652f90c\" DevicePath \"\"" Jul 9 23:48:29.104984 kubelet[3381]: I0709 23:48:29.104993 3381 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-host-proc-sys-kernel\") on node \"ci-4344.1.1-n-bbe652f90c\" DevicePath \"\"" Jul 9 23:48:29.104984 kubelet[3381]: I0709 23:48:29.105001 3381 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-cilium-run\") on node \"ci-4344.1.1-n-bbe652f90c\" DevicePath \"\"" Jul 9 23:48:29.105211 kubelet[3381]: I0709 23:48:29.105007 3381 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0654bc50-8b19-462c-8cd1-7cd980bf5074-cilium-config-path\") on node \"ci-4344.1.1-n-bbe652f90c\" DevicePath \"\"" Jul 9 23:48:29.105211 kubelet[3381]: I0709 23:48:29.105013 3381 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-lib-modules\") on node \"ci-4344.1.1-n-bbe652f90c\" DevicePath \"\"" Jul 9 23:48:29.105211 kubelet[3381]: I0709 23:48:29.105021 3381 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-etc-cni-netd\") on node \"ci-4344.1.1-n-bbe652f90c\" DevicePath \"\"" Jul 9 23:48:29.105211 kubelet[3381]: I0709 23:48:29.105026 3381 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sj6c8\" (UniqueName: \"kubernetes.io/projected/bb5d4317-56cd-4f56-a0db-75b5b9e53b4a-kube-api-access-sj6c8\") on node \"ci-4344.1.1-n-bbe652f90c\" DevicePath \"\"" Jul 9 23:48:29.105211 kubelet[3381]: I0709 23:48:29.105034 3381 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-xtables-lock\") on node \"ci-4344.1.1-n-bbe652f90c\" DevicePath \"\"" Jul 9 23:48:29.105211 kubelet[3381]: I0709 23:48:29.105039 3381 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/0654bc50-8b19-462c-8cd1-7cd980bf5074-hubble-tls\") on node \"ci-4344.1.1-n-bbe652f90c\" DevicePath \"\"" Jul 9 23:48:29.105211 kubelet[3381]: I0709 23:48:29.105044 3381 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-cni-path\") on node \"ci-4344.1.1-n-bbe652f90c\" DevicePath \"\"" Jul 9 23:48:29.105211 kubelet[3381]: I0709 23:48:29.105049 3381 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9v9qm\" (UniqueName: \"kubernetes.io/projected/0654bc50-8b19-462c-8cd1-7cd980bf5074-kube-api-access-9v9qm\") on node \"ci-4344.1.1-n-bbe652f90c\" DevicePath \"\"" Jul 9 23:48:29.105327 kubelet[3381]: I0709 23:48:29.105054 3381 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0654bc50-8b19-462c-8cd1-7cd980bf5074-cilium-cgroup\") on node \"ci-4344.1.1-n-bbe652f90c\" DevicePath \"\"" Jul 9 23:48:29.105327 kubelet[3381]: I0709 23:48:29.105062 3381 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0654bc50-8b19-462c-8cd1-7cd980bf5074-clustermesh-secrets\") on node \"ci-4344.1.1-n-bbe652f90c\" DevicePath \"\"" Jul 9 23:48:29.105327 kubelet[3381]: I0709 23:48:29.105067 3381 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bb5d4317-56cd-4f56-a0db-75b5b9e53b4a-cilium-config-path\") on node \"ci-4344.1.1-n-bbe652f90c\" DevicePath \"\"" Jul 9 23:48:29.518018 kubelet[3381]: I0709 23:48:29.517986 3381 scope.go:117] "RemoveContainer" containerID="f2dc6afa23a8a7430803c6cc1ae4e6aa0c0b3d266ee544c16f687ae1baa4df60" Jul 9 23:48:29.519897 containerd[1905]: time="2025-07-09T23:48:29.519766265Z" level=info msg="RemoveContainer for \"f2dc6afa23a8a7430803c6cc1ae4e6aa0c0b3d266ee544c16f687ae1baa4df60\"" Jul 9 23:48:29.526267 systemd[1]: Removed slice 
kubepods-burstable-pod0654bc50_8b19_462c_8cd1_7cd980bf5074.slice - libcontainer container kubepods-burstable-pod0654bc50_8b19_462c_8cd1_7cd980bf5074.slice. Jul 9 23:48:29.526340 systemd[1]: kubepods-burstable-pod0654bc50_8b19_462c_8cd1_7cd980bf5074.slice: Consumed 4.476s CPU time, 122M memory peak, 128K read from disk, 12.9M written to disk. Jul 9 23:48:29.535080 systemd[1]: Removed slice kubepods-besteffort-podbb5d4317_56cd_4f56_a0db_75b5b9e53b4a.slice - libcontainer container kubepods-besteffort-podbb5d4317_56cd_4f56_a0db_75b5b9e53b4a.slice. Jul 9 23:48:29.540240 containerd[1905]: time="2025-07-09T23:48:29.540128752Z" level=info msg="RemoveContainer for \"f2dc6afa23a8a7430803c6cc1ae4e6aa0c0b3d266ee544c16f687ae1baa4df60\" returns successfully" Jul 9 23:48:29.540476 kubelet[3381]: I0709 23:48:29.540453 3381 scope.go:117] "RemoveContainer" containerID="6dbe85701104dcb789450f93c5a25c6b9ec2bcb5d02d2ca782991c40bbe4521c" Jul 9 23:48:29.541710 containerd[1905]: time="2025-07-09T23:48:29.541685573Z" level=info msg="RemoveContainer for \"6dbe85701104dcb789450f93c5a25c6b9ec2bcb5d02d2ca782991c40bbe4521c\"" Jul 9 23:48:29.554055 containerd[1905]: time="2025-07-09T23:48:29.554015335Z" level=info msg="RemoveContainer for \"6dbe85701104dcb789450f93c5a25c6b9ec2bcb5d02d2ca782991c40bbe4521c\" returns successfully" Jul 9 23:48:29.554351 kubelet[3381]: I0709 23:48:29.554276 3381 scope.go:117] "RemoveContainer" containerID="804a16e97e896988e0ee03efe2106eb52e5c1f3e2ab928c063704bc720d430f7" Jul 9 23:48:29.556220 containerd[1905]: time="2025-07-09T23:48:29.556186162Z" level=info msg="RemoveContainer for \"804a16e97e896988e0ee03efe2106eb52e5c1f3e2ab928c063704bc720d430f7\"" Jul 9 23:48:29.569760 containerd[1905]: time="2025-07-09T23:48:29.569722165Z" level=info msg="RemoveContainer for \"804a16e97e896988e0ee03efe2106eb52e5c1f3e2ab928c063704bc720d430f7\" returns successfully" Jul 9 23:48:29.570018 kubelet[3381]: I0709 23:48:29.569932 3381 scope.go:117] "RemoveContainer" 
containerID="b96544f5e55e3de1912f7ee267b7c327d2f64af7971c5fefbe2d8b3fa5fa20c4" Jul 9 23:48:29.571250 containerd[1905]: time="2025-07-09T23:48:29.571226081Z" level=info msg="RemoveContainer for \"b96544f5e55e3de1912f7ee267b7c327d2f64af7971c5fefbe2d8b3fa5fa20c4\"" Jul 9 23:48:29.587773 containerd[1905]: time="2025-07-09T23:48:29.587598382Z" level=info msg="RemoveContainer for \"b96544f5e55e3de1912f7ee267b7c327d2f64af7971c5fefbe2d8b3fa5fa20c4\" returns successfully" Jul 9 23:48:29.588008 kubelet[3381]: I0709 23:48:29.587981 3381 scope.go:117] "RemoveContainer" containerID="51cf16359cc442f534529be49f8b817c80b711bff714d09e52be4927607640d6" Jul 9 23:48:29.589235 containerd[1905]: time="2025-07-09T23:48:29.589211845Z" level=info msg="RemoveContainer for \"51cf16359cc442f534529be49f8b817c80b711bff714d09e52be4927607640d6\"" Jul 9 23:48:29.613924 containerd[1905]: time="2025-07-09T23:48:29.613871968Z" level=info msg="RemoveContainer for \"51cf16359cc442f534529be49f8b817c80b711bff714d09e52be4927607640d6\" returns successfully" Jul 9 23:48:29.614288 kubelet[3381]: I0709 23:48:29.614258 3381 scope.go:117] "RemoveContainer" containerID="f2dc6afa23a8a7430803c6cc1ae4e6aa0c0b3d266ee544c16f687ae1baa4df60" Jul 9 23:48:29.614849 containerd[1905]: time="2025-07-09T23:48:29.614581713Z" level=error msg="ContainerStatus for \"f2dc6afa23a8a7430803c6cc1ae4e6aa0c0b3d266ee544c16f687ae1baa4df60\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f2dc6afa23a8a7430803c6cc1ae4e6aa0c0b3d266ee544c16f687ae1baa4df60\": not found" Jul 9 23:48:29.614923 kubelet[3381]: E0709 23:48:29.614708 3381 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f2dc6afa23a8a7430803c6cc1ae4e6aa0c0b3d266ee544c16f687ae1baa4df60\": not found" containerID="f2dc6afa23a8a7430803c6cc1ae4e6aa0c0b3d266ee544c16f687ae1baa4df60" Jul 9 23:48:29.614923 kubelet[3381]: I0709 23:48:29.614731 3381 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f2dc6afa23a8a7430803c6cc1ae4e6aa0c0b3d266ee544c16f687ae1baa4df60"} err="failed to get container status \"f2dc6afa23a8a7430803c6cc1ae4e6aa0c0b3d266ee544c16f687ae1baa4df60\": rpc error: code = NotFound desc = an error occurred when try to find container \"f2dc6afa23a8a7430803c6cc1ae4e6aa0c0b3d266ee544c16f687ae1baa4df60\": not found" Jul 9 23:48:29.614923 kubelet[3381]: I0709 23:48:29.614788 3381 scope.go:117] "RemoveContainer" containerID="6dbe85701104dcb789450f93c5a25c6b9ec2bcb5d02d2ca782991c40bbe4521c" Jul 9 23:48:29.614977 containerd[1905]: time="2025-07-09T23:48:29.614942213Z" level=error msg="ContainerStatus for \"6dbe85701104dcb789450f93c5a25c6b9ec2bcb5d02d2ca782991c40bbe4521c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6dbe85701104dcb789450f93c5a25c6b9ec2bcb5d02d2ca782991c40bbe4521c\": not found" Jul 9 23:48:29.615188 kubelet[3381]: E0709 23:48:29.615120 3381 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6dbe85701104dcb789450f93c5a25c6b9ec2bcb5d02d2ca782991c40bbe4521c\": not found" containerID="6dbe85701104dcb789450f93c5a25c6b9ec2bcb5d02d2ca782991c40bbe4521c" Jul 9 23:48:29.615226 kubelet[3381]: I0709 23:48:29.615186 3381 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6dbe85701104dcb789450f93c5a25c6b9ec2bcb5d02d2ca782991c40bbe4521c"} err="failed to get container status \"6dbe85701104dcb789450f93c5a25c6b9ec2bcb5d02d2ca782991c40bbe4521c\": rpc error: code = NotFound desc = an error occurred when try to find container \"6dbe85701104dcb789450f93c5a25c6b9ec2bcb5d02d2ca782991c40bbe4521c\": not found" Jul 9 23:48:29.615226 kubelet[3381]: I0709 23:48:29.615207 3381 scope.go:117] "RemoveContainer" 
containerID="804a16e97e896988e0ee03efe2106eb52e5c1f3e2ab928c063704bc720d430f7" Jul 9 23:48:29.615362 containerd[1905]: time="2025-07-09T23:48:29.615335923Z" level=error msg="ContainerStatus for \"804a16e97e896988e0ee03efe2106eb52e5c1f3e2ab928c063704bc720d430f7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"804a16e97e896988e0ee03efe2106eb52e5c1f3e2ab928c063704bc720d430f7\": not found" Jul 9 23:48:29.615472 kubelet[3381]: E0709 23:48:29.615451 3381 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"804a16e97e896988e0ee03efe2106eb52e5c1f3e2ab928c063704bc720d430f7\": not found" containerID="804a16e97e896988e0ee03efe2106eb52e5c1f3e2ab928c063704bc720d430f7" Jul 9 23:48:29.615564 kubelet[3381]: I0709 23:48:29.615544 3381 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"804a16e97e896988e0ee03efe2106eb52e5c1f3e2ab928c063704bc720d430f7"} err="failed to get container status \"804a16e97e896988e0ee03efe2106eb52e5c1f3e2ab928c063704bc720d430f7\": rpc error: code = NotFound desc = an error occurred when try to find container \"804a16e97e896988e0ee03efe2106eb52e5c1f3e2ab928c063704bc720d430f7\": not found" Jul 9 23:48:29.615588 kubelet[3381]: I0709 23:48:29.615565 3381 scope.go:117] "RemoveContainer" containerID="b96544f5e55e3de1912f7ee267b7c327d2f64af7971c5fefbe2d8b3fa5fa20c4" Jul 9 23:48:29.615815 containerd[1905]: time="2025-07-09T23:48:29.615793931Z" level=error msg="ContainerStatus for \"b96544f5e55e3de1912f7ee267b7c327d2f64af7971c5fefbe2d8b3fa5fa20c4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b96544f5e55e3de1912f7ee267b7c327d2f64af7971c5fefbe2d8b3fa5fa20c4\": not found" Jul 9 23:48:29.616029 kubelet[3381]: E0709 23:48:29.615901 3381 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"b96544f5e55e3de1912f7ee267b7c327d2f64af7971c5fefbe2d8b3fa5fa20c4\": not found" containerID="b96544f5e55e3de1912f7ee267b7c327d2f64af7971c5fefbe2d8b3fa5fa20c4" Jul 9 23:48:29.616029 kubelet[3381]: I0709 23:48:29.615919 3381 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b96544f5e55e3de1912f7ee267b7c327d2f64af7971c5fefbe2d8b3fa5fa20c4"} err="failed to get container status \"b96544f5e55e3de1912f7ee267b7c327d2f64af7971c5fefbe2d8b3fa5fa20c4\": rpc error: code = NotFound desc = an error occurred when try to find container \"b96544f5e55e3de1912f7ee267b7c327d2f64af7971c5fefbe2d8b3fa5fa20c4\": not found" Jul 9 23:48:29.616029 kubelet[3381]: I0709 23:48:29.615932 3381 scope.go:117] "RemoveContainer" containerID="51cf16359cc442f534529be49f8b817c80b711bff714d09e52be4927607640d6" Jul 9 23:48:29.616181 containerd[1905]: time="2025-07-09T23:48:29.616136623Z" level=error msg="ContainerStatus for \"51cf16359cc442f534529be49f8b817c80b711bff714d09e52be4927607640d6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"51cf16359cc442f534529be49f8b817c80b711bff714d09e52be4927607640d6\": not found" Jul 9 23:48:29.616290 kubelet[3381]: E0709 23:48:29.616270 3381 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"51cf16359cc442f534529be49f8b817c80b711bff714d09e52be4927607640d6\": not found" containerID="51cf16359cc442f534529be49f8b817c80b711bff714d09e52be4927607640d6" Jul 9 23:48:29.616320 kubelet[3381]: I0709 23:48:29.616292 3381 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"51cf16359cc442f534529be49f8b817c80b711bff714d09e52be4927607640d6"} err="failed to get container status \"51cf16359cc442f534529be49f8b817c80b711bff714d09e52be4927607640d6\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"51cf16359cc442f534529be49f8b817c80b711bff714d09e52be4927607640d6\": not found" Jul 9 23:48:29.616344 kubelet[3381]: I0709 23:48:29.616322 3381 scope.go:117] "RemoveContainer" containerID="f8cf4ff4dca75a33df3fdf6d387ff3773cdb3092ef54536d51ec29dc02e293c8" Jul 9 23:48:29.617712 containerd[1905]: time="2025-07-09T23:48:29.617688972Z" level=info msg="RemoveContainer for \"f8cf4ff4dca75a33df3fdf6d387ff3773cdb3092ef54536d51ec29dc02e293c8\"" Jul 9 23:48:29.634620 containerd[1905]: time="2025-07-09T23:48:29.634587283Z" level=info msg="RemoveContainer for \"f8cf4ff4dca75a33df3fdf6d387ff3773cdb3092ef54536d51ec29dc02e293c8\" returns successfully" Jul 9 23:48:29.634842 kubelet[3381]: I0709 23:48:29.634817 3381 scope.go:117] "RemoveContainer" containerID="f8cf4ff4dca75a33df3fdf6d387ff3773cdb3092ef54536d51ec29dc02e293c8" Jul 9 23:48:29.635033 containerd[1905]: time="2025-07-09T23:48:29.635005226Z" level=error msg="ContainerStatus for \"f8cf4ff4dca75a33df3fdf6d387ff3773cdb3092ef54536d51ec29dc02e293c8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f8cf4ff4dca75a33df3fdf6d387ff3773cdb3092ef54536d51ec29dc02e293c8\": not found" Jul 9 23:48:29.635196 kubelet[3381]: E0709 23:48:29.635177 3381 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f8cf4ff4dca75a33df3fdf6d387ff3773cdb3092ef54536d51ec29dc02e293c8\": not found" containerID="f8cf4ff4dca75a33df3fdf6d387ff3773cdb3092ef54536d51ec29dc02e293c8" Jul 9 23:48:29.635281 kubelet[3381]: I0709 23:48:29.635199 3381 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f8cf4ff4dca75a33df3fdf6d387ff3773cdb3092ef54536d51ec29dc02e293c8"} err="failed to get container status \"f8cf4ff4dca75a33df3fdf6d387ff3773cdb3092ef54536d51ec29dc02e293c8\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"f8cf4ff4dca75a33df3fdf6d387ff3773cdb3092ef54536d51ec29dc02e293c8\": not found" Jul 9 23:48:29.727577 systemd[1]: var-lib-kubelet-pods-0654bc50\x2d8b19\x2d462c\x2d8cd1\x2d7cd980bf5074-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 9 23:48:29.727657 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a20e9af7ba30940da76573b3e284de2327689504262fe41387abe5e30b7d0067-shm.mount: Deactivated successfully. Jul 9 23:48:29.727699 systemd[1]: var-lib-kubelet-pods-0654bc50\x2d8b19\x2d462c\x2d8cd1\x2d7cd980bf5074-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 9 23:48:29.727735 systemd[1]: var-lib-kubelet-pods-bb5d4317\x2d56cd\x2d4f56\x2da0db\x2d75b5b9e53b4a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsj6c8.mount: Deactivated successfully. Jul 9 23:48:29.727776 systemd[1]: var-lib-kubelet-pods-0654bc50\x2d8b19\x2d462c\x2d8cd1\x2d7cd980bf5074-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9v9qm.mount: Deactivated successfully. Jul 9 23:48:30.176764 kubelet[3381]: I0709 23:48:30.176712 3381 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0654bc50-8b19-462c-8cd1-7cd980bf5074" path="/var/lib/kubelet/pods/0654bc50-8b19-462c-8cd1-7cd980bf5074/volumes" Jul 9 23:48:30.177149 kubelet[3381]: I0709 23:48:30.177129 3381 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb5d4317-56cd-4f56-a0db-75b5b9e53b4a" path="/var/lib/kubelet/pods/bb5d4317-56cd-4f56-a0db-75b5b9e53b4a/volumes" Jul 9 23:48:30.713676 sshd[4904]: Connection closed by 10.200.16.10 port 43672 Jul 9 23:48:30.714320 sshd-session[4902]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:30.718227 systemd[1]: sshd@21-10.200.20.40:22-10.200.16.10:43672.service: Deactivated successfully. Jul 9 23:48:30.720003 systemd[1]: session-24.scope: Deactivated successfully. Jul 9 23:48:30.720738 systemd-logind[1865]: Session 24 logged out. 
Waiting for processes to exit. Jul 9 23:48:30.722044 systemd-logind[1865]: Removed session 24. Jul 9 23:48:30.805793 systemd[1]: Started sshd@22-10.200.20.40:22-10.200.16.10:44698.service - OpenSSH per-connection server daemon (10.200.16.10:44698). Jul 9 23:48:31.298939 sshd[5058]: Accepted publickey for core from 10.200.16.10 port 44698 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:48:31.300100 sshd-session[5058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:31.303880 systemd-logind[1865]: New session 25 of user core. Jul 9 23:48:31.311631 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 9 23:48:32.188227 kubelet[3381]: E0709 23:48:32.188184 3381 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bb5d4317-56cd-4f56-a0db-75b5b9e53b4a" containerName="cilium-operator" Jul 9 23:48:32.188227 kubelet[3381]: E0709 23:48:32.188210 3381 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0654bc50-8b19-462c-8cd1-7cd980bf5074" containerName="apply-sysctl-overwrites" Jul 9 23:48:32.188227 kubelet[3381]: E0709 23:48:32.188216 3381 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0654bc50-8b19-462c-8cd1-7cd980bf5074" containerName="cilium-agent" Jul 9 23:48:32.188227 kubelet[3381]: E0709 23:48:32.188222 3381 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0654bc50-8b19-462c-8cd1-7cd980bf5074" containerName="mount-cgroup" Jul 9 23:48:32.188227 kubelet[3381]: E0709 23:48:32.188226 3381 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0654bc50-8b19-462c-8cd1-7cd980bf5074" containerName="mount-bpf-fs" Jul 9 23:48:32.188227 kubelet[3381]: E0709 23:48:32.188231 3381 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0654bc50-8b19-462c-8cd1-7cd980bf5074" containerName="clean-cilium-state" Jul 9 23:48:32.188688 kubelet[3381]: I0709 23:48:32.188249 3381 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="bb5d4317-56cd-4f56-a0db-75b5b9e53b4a" containerName="cilium-operator" Jul 9 23:48:32.188688 kubelet[3381]: I0709 23:48:32.188254 3381 memory_manager.go:354] "RemoveStaleState removing state" podUID="0654bc50-8b19-462c-8cd1-7cd980bf5074" containerName="cilium-agent" Jul 9 23:48:32.197852 systemd[1]: Created slice kubepods-burstable-podea3b21a3_a1a8_4d4c_9c12_cffbef84048f.slice - libcontainer container kubepods-burstable-podea3b21a3_a1a8_4d4c_9c12_cffbef84048f.slice. Jul 9 23:48:32.259175 sshd[5060]: Connection closed by 10.200.16.10 port 44698 Jul 9 23:48:32.259547 sshd-session[5058]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:32.262963 systemd[1]: sshd@22-10.200.20.40:22-10.200.16.10:44698.service: Deactivated successfully. Jul 9 23:48:32.265191 systemd[1]: session-25.scope: Deactivated successfully. Jul 9 23:48:32.267609 systemd-logind[1865]: Session 25 logged out. Waiting for processes to exit. Jul 9 23:48:32.268804 systemd-logind[1865]: Removed session 25. 
Jul 9 23:48:32.275592 kubelet[3381]: E0709 23:48:32.275553 3381 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 9 23:48:32.319005 kubelet[3381]: I0709 23:48:32.318958 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txr26\" (UniqueName: \"kubernetes.io/projected/ea3b21a3-a1a8-4d4c-9c12-cffbef84048f-kube-api-access-txr26\") pod \"cilium-c2pvp\" (UID: \"ea3b21a3-a1a8-4d4c-9c12-cffbef84048f\") " pod="kube-system/cilium-c2pvp" Jul 9 23:48:32.319005 kubelet[3381]: I0709 23:48:32.319007 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ea3b21a3-a1a8-4d4c-9c12-cffbef84048f-cilium-config-path\") pod \"cilium-c2pvp\" (UID: \"ea3b21a3-a1a8-4d4c-9c12-cffbef84048f\") " pod="kube-system/cilium-c2pvp" Jul 9 23:48:32.319171 kubelet[3381]: I0709 23:48:32.319022 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ea3b21a3-a1a8-4d4c-9c12-cffbef84048f-cilium-ipsec-secrets\") pod \"cilium-c2pvp\" (UID: \"ea3b21a3-a1a8-4d4c-9c12-cffbef84048f\") " pod="kube-system/cilium-c2pvp" Jul 9 23:48:32.319171 kubelet[3381]: I0709 23:48:32.319032 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ea3b21a3-a1a8-4d4c-9c12-cffbef84048f-hubble-tls\") pod \"cilium-c2pvp\" (UID: \"ea3b21a3-a1a8-4d4c-9c12-cffbef84048f\") " pod="kube-system/cilium-c2pvp" Jul 9 23:48:32.319171 kubelet[3381]: I0709 23:48:32.319045 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/ea3b21a3-a1a8-4d4c-9c12-cffbef84048f-lib-modules\") pod \"cilium-c2pvp\" (UID: \"ea3b21a3-a1a8-4d4c-9c12-cffbef84048f\") " pod="kube-system/cilium-c2pvp" Jul 9 23:48:32.319171 kubelet[3381]: I0709 23:48:32.319054 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ea3b21a3-a1a8-4d4c-9c12-cffbef84048f-host-proc-sys-net\") pod \"cilium-c2pvp\" (UID: \"ea3b21a3-a1a8-4d4c-9c12-cffbef84048f\") " pod="kube-system/cilium-c2pvp" Jul 9 23:48:32.319171 kubelet[3381]: I0709 23:48:32.319064 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ea3b21a3-a1a8-4d4c-9c12-cffbef84048f-hostproc\") pod \"cilium-c2pvp\" (UID: \"ea3b21a3-a1a8-4d4c-9c12-cffbef84048f\") " pod="kube-system/cilium-c2pvp" Jul 9 23:48:32.319171 kubelet[3381]: I0709 23:48:32.319074 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ea3b21a3-a1a8-4d4c-9c12-cffbef84048f-cni-path\") pod \"cilium-c2pvp\" (UID: \"ea3b21a3-a1a8-4d4c-9c12-cffbef84048f\") " pod="kube-system/cilium-c2pvp" Jul 9 23:48:32.319264 kubelet[3381]: I0709 23:48:32.319087 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ea3b21a3-a1a8-4d4c-9c12-cffbef84048f-host-proc-sys-kernel\") pod \"cilium-c2pvp\" (UID: \"ea3b21a3-a1a8-4d4c-9c12-cffbef84048f\") " pod="kube-system/cilium-c2pvp" Jul 9 23:48:32.319264 kubelet[3381]: I0709 23:48:32.319100 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ea3b21a3-a1a8-4d4c-9c12-cffbef84048f-bpf-maps\") pod \"cilium-c2pvp\" (UID: \"ea3b21a3-a1a8-4d4c-9c12-cffbef84048f\") 
" pod="kube-system/cilium-c2pvp" Jul 9 23:48:32.319264 kubelet[3381]: I0709 23:48:32.319110 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ea3b21a3-a1a8-4d4c-9c12-cffbef84048f-clustermesh-secrets\") pod \"cilium-c2pvp\" (UID: \"ea3b21a3-a1a8-4d4c-9c12-cffbef84048f\") " pod="kube-system/cilium-c2pvp" Jul 9 23:48:32.319264 kubelet[3381]: I0709 23:48:32.319120 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ea3b21a3-a1a8-4d4c-9c12-cffbef84048f-cilium-run\") pod \"cilium-c2pvp\" (UID: \"ea3b21a3-a1a8-4d4c-9c12-cffbef84048f\") " pod="kube-system/cilium-c2pvp" Jul 9 23:48:32.319264 kubelet[3381]: I0709 23:48:32.319143 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ea3b21a3-a1a8-4d4c-9c12-cffbef84048f-cilium-cgroup\") pod \"cilium-c2pvp\" (UID: \"ea3b21a3-a1a8-4d4c-9c12-cffbef84048f\") " pod="kube-system/cilium-c2pvp" Jul 9 23:48:32.319264 kubelet[3381]: I0709 23:48:32.319154 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea3b21a3-a1a8-4d4c-9c12-cffbef84048f-etc-cni-netd\") pod \"cilium-c2pvp\" (UID: \"ea3b21a3-a1a8-4d4c-9c12-cffbef84048f\") " pod="kube-system/cilium-c2pvp" Jul 9 23:48:32.319347 kubelet[3381]: I0709 23:48:32.319163 3381 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea3b21a3-a1a8-4d4c-9c12-cffbef84048f-xtables-lock\") pod \"cilium-c2pvp\" (UID: \"ea3b21a3-a1a8-4d4c-9c12-cffbef84048f\") " pod="kube-system/cilium-c2pvp" Jul 9 23:48:32.352729 systemd[1]: Started sshd@23-10.200.20.40:22-10.200.16.10:44706.service - OpenSSH 
per-connection server daemon (10.200.16.10:44706). Jul 9 23:48:32.502440 containerd[1905]: time="2025-07-09T23:48:32.501904979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c2pvp,Uid:ea3b21a3-a1a8-4d4c-9c12-cffbef84048f,Namespace:kube-system,Attempt:0,}" Jul 9 23:48:32.556050 containerd[1905]: time="2025-07-09T23:48:32.555965885Z" level=info msg="connecting to shim 0ddca02cd8613c5adcb32583b4f02e337aea9e2b54105c70e58d48505c13fa3b" address="unix:///run/containerd/s/29f3f850a7ef692070d910584993adb680381f6279f9093d9db457bedde8d619" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:48:32.574639 systemd[1]: Started cri-containerd-0ddca02cd8613c5adcb32583b4f02e337aea9e2b54105c70e58d48505c13fa3b.scope - libcontainer container 0ddca02cd8613c5adcb32583b4f02e337aea9e2b54105c70e58d48505c13fa3b. Jul 9 23:48:32.596952 containerd[1905]: time="2025-07-09T23:48:32.596913258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c2pvp,Uid:ea3b21a3-a1a8-4d4c-9c12-cffbef84048f,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ddca02cd8613c5adcb32583b4f02e337aea9e2b54105c70e58d48505c13fa3b\"" Jul 9 23:48:32.599926 containerd[1905]: time="2025-07-09T23:48:32.599519292Z" level=info msg="CreateContainer within sandbox \"0ddca02cd8613c5adcb32583b4f02e337aea9e2b54105c70e58d48505c13fa3b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 9 23:48:32.636440 containerd[1905]: time="2025-07-09T23:48:32.636061224Z" level=info msg="Container 9915c0349c6858bee10b9f7d3dd6466e4c51a4bb3a84e72874034886224ecf65: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:48:32.656200 containerd[1905]: time="2025-07-09T23:48:32.656153158Z" level=info msg="CreateContainer within sandbox \"0ddca02cd8613c5adcb32583b4f02e337aea9e2b54105c70e58d48505c13fa3b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9915c0349c6858bee10b9f7d3dd6466e4c51a4bb3a84e72874034886224ecf65\"" Jul 9 23:48:32.657577 containerd[1905]: 
time="2025-07-09T23:48:32.656801980Z" level=info msg="StartContainer for \"9915c0349c6858bee10b9f7d3dd6466e4c51a4bb3a84e72874034886224ecf65\"" Jul 9 23:48:32.657577 containerd[1905]: time="2025-07-09T23:48:32.657423722Z" level=info msg="connecting to shim 9915c0349c6858bee10b9f7d3dd6466e4c51a4bb3a84e72874034886224ecf65" address="unix:///run/containerd/s/29f3f850a7ef692070d910584993adb680381f6279f9093d9db457bedde8d619" protocol=ttrpc version=3 Jul 9 23:48:32.675648 systemd[1]: Started cri-containerd-9915c0349c6858bee10b9f7d3dd6466e4c51a4bb3a84e72874034886224ecf65.scope - libcontainer container 9915c0349c6858bee10b9f7d3dd6466e4c51a4bb3a84e72874034886224ecf65. Jul 9 23:48:32.704456 systemd[1]: cri-containerd-9915c0349c6858bee10b9f7d3dd6466e4c51a4bb3a84e72874034886224ecf65.scope: Deactivated successfully. Jul 9 23:48:32.706732 containerd[1905]: time="2025-07-09T23:48:32.706686446Z" level=info msg="received exit event container_id:\"9915c0349c6858bee10b9f7d3dd6466e4c51a4bb3a84e72874034886224ecf65\" id:\"9915c0349c6858bee10b9f7d3dd6466e4c51a4bb3a84e72874034886224ecf65\" pid:5133 exited_at:{seconds:1752104912 nanos:706441061}" Jul 9 23:48:32.707060 containerd[1905]: time="2025-07-09T23:48:32.707039442Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9915c0349c6858bee10b9f7d3dd6466e4c51a4bb3a84e72874034886224ecf65\" id:\"9915c0349c6858bee10b9f7d3dd6466e4c51a4bb3a84e72874034886224ecf65\" pid:5133 exited_at:{seconds:1752104912 nanos:706441061}" Jul 9 23:48:32.707786 containerd[1905]: time="2025-07-09T23:48:32.707640534Z" level=info msg="StartContainer for \"9915c0349c6858bee10b9f7d3dd6466e4c51a4bb3a84e72874034886224ecf65\" returns successfully" Jul 9 23:48:32.810939 sshd[5070]: Accepted publickey for core from 10.200.16.10 port 44706 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:48:32.812094 sshd-session[5070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:32.816245 systemd-logind[1865]: 
New session 26 of user core. Jul 9 23:48:32.820621 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 9 23:48:33.150397 sshd[5170]: Connection closed by 10.200.16.10 port 44706 Jul 9 23:48:33.151079 sshd-session[5070]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:33.154243 systemd[1]: sshd@23-10.200.20.40:22-10.200.16.10:44706.service: Deactivated successfully. Jul 9 23:48:33.155770 systemd[1]: session-26.scope: Deactivated successfully. Jul 9 23:48:33.156400 systemd-logind[1865]: Session 26 logged out. Waiting for processes to exit. Jul 9 23:48:33.158046 systemd-logind[1865]: Removed session 26. Jul 9 23:48:33.243727 systemd[1]: Started sshd@24-10.200.20.40:22-10.200.16.10:44710.service - OpenSSH per-connection server daemon (10.200.16.10:44710). Jul 9 23:48:33.537599 containerd[1905]: time="2025-07-09T23:48:33.537429120Z" level=info msg="CreateContainer within sandbox \"0ddca02cd8613c5adcb32583b4f02e337aea9e2b54105c70e58d48505c13fa3b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 9 23:48:33.576102 containerd[1905]: time="2025-07-09T23:48:33.576013020Z" level=info msg="Container c9cb4c924ffe4635001ca781cd5c1fbc016be170d2184f4db6958c0d26916279: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:48:33.576470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2390571829.mount: Deactivated successfully. 
Jul 9 23:48:33.596153 containerd[1905]: time="2025-07-09T23:48:33.596090697Z" level=info msg="CreateContainer within sandbox \"0ddca02cd8613c5adcb32583b4f02e337aea9e2b54105c70e58d48505c13fa3b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c9cb4c924ffe4635001ca781cd5c1fbc016be170d2184f4db6958c0d26916279\"" Jul 9 23:48:33.597596 containerd[1905]: time="2025-07-09T23:48:33.597564796Z" level=info msg="StartContainer for \"c9cb4c924ffe4635001ca781cd5c1fbc016be170d2184f4db6958c0d26916279\"" Jul 9 23:48:33.598368 containerd[1905]: time="2025-07-09T23:48:33.598336806Z" level=info msg="connecting to shim c9cb4c924ffe4635001ca781cd5c1fbc016be170d2184f4db6958c0d26916279" address="unix:///run/containerd/s/29f3f850a7ef692070d910584993adb680381f6279f9093d9db457bedde8d619" protocol=ttrpc version=3 Jul 9 23:48:33.617637 systemd[1]: Started cri-containerd-c9cb4c924ffe4635001ca781cd5c1fbc016be170d2184f4db6958c0d26916279.scope - libcontainer container c9cb4c924ffe4635001ca781cd5c1fbc016be170d2184f4db6958c0d26916279. Jul 9 23:48:33.649056 containerd[1905]: time="2025-07-09T23:48:33.648866390Z" level=info msg="StartContainer for \"c9cb4c924ffe4635001ca781cd5c1fbc016be170d2184f4db6958c0d26916279\" returns successfully" Jul 9 23:48:33.651481 systemd[1]: cri-containerd-c9cb4c924ffe4635001ca781cd5c1fbc016be170d2184f4db6958c0d26916279.scope: Deactivated successfully. 
Jul 9 23:48:33.653472 containerd[1905]: time="2025-07-09T23:48:33.653253261Z" level=info msg="received exit event container_id:\"c9cb4c924ffe4635001ca781cd5c1fbc016be170d2184f4db6958c0d26916279\" id:\"c9cb4c924ffe4635001ca781cd5c1fbc016be170d2184f4db6958c0d26916279\" pid:5192 exited_at:{seconds:1752104913 nanos:652785717}" Jul 9 23:48:33.653472 containerd[1905]: time="2025-07-09T23:48:33.653410987Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c9cb4c924ffe4635001ca781cd5c1fbc016be170d2184f4db6958c0d26916279\" id:\"c9cb4c924ffe4635001ca781cd5c1fbc016be170d2184f4db6958c0d26916279\" pid:5192 exited_at:{seconds:1752104913 nanos:652785717}" Jul 9 23:48:33.728082 sshd[5178]: Accepted publickey for core from 10.200.16.10 port 44710 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:48:33.729482 sshd-session[5178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:33.733576 systemd-logind[1865]: New session 27 of user core. Jul 9 23:48:33.743616 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 9 23:48:34.423625 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9cb4c924ffe4635001ca781cd5c1fbc016be170d2184f4db6958c0d26916279-rootfs.mount: Deactivated successfully. 
Jul 9 23:48:34.541652 containerd[1905]: time="2025-07-09T23:48:34.541339524Z" level=info msg="CreateContainer within sandbox \"0ddca02cd8613c5adcb32583b4f02e337aea9e2b54105c70e58d48505c13fa3b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 9 23:48:34.576630 containerd[1905]: time="2025-07-09T23:48:34.575918645Z" level=info msg="Container 2a4ffc194c9812ce901e4902c0462d243d52160013a870a79b2a787f1a9c5d68: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:48:34.603520 containerd[1905]: time="2025-07-09T23:48:34.602846430Z" level=info msg="CreateContainer within sandbox \"0ddca02cd8613c5adcb32583b4f02e337aea9e2b54105c70e58d48505c13fa3b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2a4ffc194c9812ce901e4902c0462d243d52160013a870a79b2a787f1a9c5d68\"" Jul 9 23:48:34.604776 containerd[1905]: time="2025-07-09T23:48:34.604754104Z" level=info msg="StartContainer for \"2a4ffc194c9812ce901e4902c0462d243d52160013a870a79b2a787f1a9c5d68\"" Jul 9 23:48:34.607307 containerd[1905]: time="2025-07-09T23:48:34.607283023Z" level=info msg="connecting to shim 2a4ffc194c9812ce901e4902c0462d243d52160013a870a79b2a787f1a9c5d68" address="unix:///run/containerd/s/29f3f850a7ef692070d910584993adb680381f6279f9093d9db457bedde8d619" protocol=ttrpc version=3 Jul 9 23:48:34.627044 systemd[1]: Started cri-containerd-2a4ffc194c9812ce901e4902c0462d243d52160013a870a79b2a787f1a9c5d68.scope - libcontainer container 2a4ffc194c9812ce901e4902c0462d243d52160013a870a79b2a787f1a9c5d68. Jul 9 23:48:34.667456 systemd[1]: cri-containerd-2a4ffc194c9812ce901e4902c0462d243d52160013a870a79b2a787f1a9c5d68.scope: Deactivated successfully. 
Jul 9 23:48:34.670157 containerd[1905]: time="2025-07-09T23:48:34.670057517Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a4ffc194c9812ce901e4902c0462d243d52160013a870a79b2a787f1a9c5d68\" id:\"2a4ffc194c9812ce901e4902c0462d243d52160013a870a79b2a787f1a9c5d68\" pid:5244 exited_at:{seconds:1752104914 nanos:669155726}" Jul 9 23:48:34.670157 containerd[1905]: time="2025-07-09T23:48:34.670151809Z" level=info msg="received exit event container_id:\"2a4ffc194c9812ce901e4902c0462d243d52160013a870a79b2a787f1a9c5d68\" id:\"2a4ffc194c9812ce901e4902c0462d243d52160013a870a79b2a787f1a9c5d68\" pid:5244 exited_at:{seconds:1752104914 nanos:669155726}" Jul 9 23:48:34.673813 containerd[1905]: time="2025-07-09T23:48:34.673721164Z" level=info msg="StartContainer for \"2a4ffc194c9812ce901e4902c0462d243d52160013a870a79b2a787f1a9c5d68\" returns successfully" Jul 9 23:48:34.695467 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a4ffc194c9812ce901e4902c0462d243d52160013a870a79b2a787f1a9c5d68-rootfs.mount: Deactivated successfully. 
Jul 9 23:48:35.547100 containerd[1905]: time="2025-07-09T23:48:35.546760035Z" level=info msg="CreateContainer within sandbox \"0ddca02cd8613c5adcb32583b4f02e337aea9e2b54105c70e58d48505c13fa3b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 9 23:48:35.578911 containerd[1905]: time="2025-07-09T23:48:35.578373773Z" level=info msg="Container bcc820f9f596afbd2a0979b4a685267988b1ea1587e3c70ff03d80ab657c9d54: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:48:35.596240 containerd[1905]: time="2025-07-09T23:48:35.596199356Z" level=info msg="CreateContainer within sandbox \"0ddca02cd8613c5adcb32583b4f02e337aea9e2b54105c70e58d48505c13fa3b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bcc820f9f596afbd2a0979b4a685267988b1ea1587e3c70ff03d80ab657c9d54\"" Jul 9 23:48:35.597444 containerd[1905]: time="2025-07-09T23:48:35.596875220Z" level=info msg="StartContainer for \"bcc820f9f596afbd2a0979b4a685267988b1ea1587e3c70ff03d80ab657c9d54\"" Jul 9 23:48:35.598151 containerd[1905]: time="2025-07-09T23:48:35.598128319Z" level=info msg="connecting to shim bcc820f9f596afbd2a0979b4a685267988b1ea1587e3c70ff03d80ab657c9d54" address="unix:///run/containerd/s/29f3f850a7ef692070d910584993adb680381f6279f9093d9db457bedde8d619" protocol=ttrpc version=3 Jul 9 23:48:35.618630 systemd[1]: Started cri-containerd-bcc820f9f596afbd2a0979b4a685267988b1ea1587e3c70ff03d80ab657c9d54.scope - libcontainer container bcc820f9f596afbd2a0979b4a685267988b1ea1587e3c70ff03d80ab657c9d54. Jul 9 23:48:35.639952 systemd[1]: cri-containerd-bcc820f9f596afbd2a0979b4a685267988b1ea1587e3c70ff03d80ab657c9d54.scope: Deactivated successfully. 
Jul 9 23:48:35.642099 containerd[1905]: time="2025-07-09T23:48:35.641994081Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcc820f9f596afbd2a0979b4a685267988b1ea1587e3c70ff03d80ab657c9d54\" id:\"bcc820f9f596afbd2a0979b4a685267988b1ea1587e3c70ff03d80ab657c9d54\" pid:5284 exited_at:{seconds:1752104915 nanos:641646789}" Jul 9 23:48:35.647536 containerd[1905]: time="2025-07-09T23:48:35.646026044Z" level=info msg="received exit event container_id:\"bcc820f9f596afbd2a0979b4a685267988b1ea1587e3c70ff03d80ab657c9d54\" id:\"bcc820f9f596afbd2a0979b4a685267988b1ea1587e3c70ff03d80ab657c9d54\" pid:5284 exited_at:{seconds:1752104915 nanos:641646789}" Jul 9 23:48:35.648528 containerd[1905]: time="2025-07-09T23:48:35.648450183Z" level=info msg="StartContainer for \"bcc820f9f596afbd2a0979b4a685267988b1ea1587e3c70ff03d80ab657c9d54\" returns successfully" Jul 9 23:48:35.665714 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bcc820f9f596afbd2a0979b4a685267988b1ea1587e3c70ff03d80ab657c9d54-rootfs.mount: Deactivated successfully. 
Jul 9 23:48:35.825090 kubelet[3381]: I0709 23:48:35.824962 3381 setters.go:600] "Node became not ready" node="ci-4344.1.1-n-bbe652f90c" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-09T23:48:35Z","lastTransitionTime":"2025-07-09T23:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 9 23:48:36.551312 containerd[1905]: time="2025-07-09T23:48:36.551147169Z" level=info msg="CreateContainer within sandbox \"0ddca02cd8613c5adcb32583b4f02e337aea9e2b54105c70e58d48505c13fa3b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 9 23:48:36.587796 containerd[1905]: time="2025-07-09T23:48:36.586445137Z" level=info msg="Container 1a54d0346b67840839eea566419bf98530f101efb0b5a7940b9258b19142ca2e: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:48:36.605199 containerd[1905]: time="2025-07-09T23:48:36.605158169Z" level=info msg="CreateContainer within sandbox \"0ddca02cd8613c5adcb32583b4f02e337aea9e2b54105c70e58d48505c13fa3b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1a54d0346b67840839eea566419bf98530f101efb0b5a7940b9258b19142ca2e\"" Jul 9 23:48:36.606197 containerd[1905]: time="2025-07-09T23:48:36.606174692Z" level=info msg="StartContainer for \"1a54d0346b67840839eea566419bf98530f101efb0b5a7940b9258b19142ca2e\"" Jul 9 23:48:36.607089 containerd[1905]: time="2025-07-09T23:48:36.607047361Z" level=info msg="connecting to shim 1a54d0346b67840839eea566419bf98530f101efb0b5a7940b9258b19142ca2e" address="unix:///run/containerd/s/29f3f850a7ef692070d910584993adb680381f6279f9093d9db457bedde8d619" protocol=ttrpc version=3 Jul 9 23:48:36.627651 systemd[1]: Started cri-containerd-1a54d0346b67840839eea566419bf98530f101efb0b5a7940b9258b19142ca2e.scope - libcontainer container 1a54d0346b67840839eea566419bf98530f101efb0b5a7940b9258b19142ca2e. 
Jul 9 23:48:36.662041 containerd[1905]: time="2025-07-09T23:48:36.661890358Z" level=info msg="StartContainer for \"1a54d0346b67840839eea566419bf98530f101efb0b5a7940b9258b19142ca2e\" returns successfully" Jul 9 23:48:36.733932 containerd[1905]: time="2025-07-09T23:48:36.733887957Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1a54d0346b67840839eea566419bf98530f101efb0b5a7940b9258b19142ca2e\" id:\"fa75c7ef41ded92707d4e9ee80a644dd294f5f17ad565a8aa8d6fbb43bc45faf\" pid:5356 exited_at:{seconds:1752104916 nanos:733324745}" Jul 9 23:48:36.968615 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jul 9 23:48:37.567935 kubelet[3381]: I0709 23:48:37.567654 3381 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-c2pvp" podStartSLOduration=5.567635943 podStartE2EDuration="5.567635943s" podCreationTimestamp="2025-07-09 23:48:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:48:37.567610598 +0000 UTC m=+155.496984338" watchObservedRunningTime="2025-07-09 23:48:37.567635943 +0000 UTC m=+155.497009683" Jul 9 23:48:38.128735 containerd[1905]: time="2025-07-09T23:48:38.128581788Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1a54d0346b67840839eea566419bf98530f101efb0b5a7940b9258b19142ca2e\" id:\"961ff8fba85a524ec32c4873fcd5a66c66345d470f092859f00528c14b54b6fe\" pid:5429 exit_status:1 exited_at:{seconds:1752104918 nanos:128130917}" Jul 9 23:48:39.368558 systemd-networkd[1482]: lxc_health: Link UP Jul 9 23:48:39.369589 systemd-networkd[1482]: lxc_health: Gained carrier Jul 9 23:48:40.213347 containerd[1905]: time="2025-07-09T23:48:40.213296354Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1a54d0346b67840839eea566419bf98530f101efb0b5a7940b9258b19142ca2e\" id:\"e8989d7a4c2309eb9e7def28399eeeae36577b2a9c827fc1dc903825dadc373c\" pid:5884 
exited_at:{seconds:1752104920 nanos:212960990}" Jul 9 23:48:40.596699 systemd-networkd[1482]: lxc_health: Gained IPv6LL Jul 9 23:48:42.292851 containerd[1905]: time="2025-07-09T23:48:42.292806693Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1a54d0346b67840839eea566419bf98530f101efb0b5a7940b9258b19142ca2e\" id:\"0cacbb1317556eec7487048fea62ce9fe36553a633062ea138817bcf0a7e70b0\" pid:5917 exited_at:{seconds:1752104922 nanos:292426680}" Jul 9 23:48:44.377298 containerd[1905]: time="2025-07-09T23:48:44.377255276Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1a54d0346b67840839eea566419bf98530f101efb0b5a7940b9258b19142ca2e\" id:\"beab5a03aa438bcc16d86e7ecf868710315ff19386ff632f4a669ab2d06aa9b5\" pid:5940 exited_at:{seconds:1752104924 nanos:376561428}" Jul 9 23:48:44.455342 sshd[5225]: Connection closed by 10.200.16.10 port 44710 Jul 9 23:48:44.455695 sshd-session[5178]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:44.459173 systemd[1]: sshd@24-10.200.20.40:22-10.200.16.10:44710.service: Deactivated successfully. Jul 9 23:48:44.460626 systemd[1]: session-27.scope: Deactivated successfully. Jul 9 23:48:44.461321 systemd-logind[1865]: Session 27 logged out. Waiting for processes to exit. Jul 9 23:48:44.463479 systemd-logind[1865]: Removed session 27.