Jul 7 00:00:48.043087 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Jul 7 00:00:48.043105 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Sun Jul 6 21:57:11 -00 2025
Jul 7 00:00:48.043112 kernel: KASLR enabled
Jul 7 00:00:48.043115 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jul 7 00:00:48.043120 kernel: printk: legacy bootconsole [pl11] enabled
Jul 7 00:00:48.043124 kernel: efi: EFI v2.7 by EDK II
Jul 7 00:00:48.043129 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e018 RNG=0x3fd5f998 MEMRESERVE=0x3e471598
Jul 7 00:00:48.043133 kernel: random: crng init done
Jul 7 00:00:48.043137 kernel: secureboot: Secure boot disabled
Jul 7 00:00:48.043141 kernel: ACPI: Early table checksum verification disabled
Jul 7 00:00:48.043144 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jul 7 00:00:48.043148 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 00:00:48.043152 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 00:00:48.043157 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jul 7 00:00:48.043162 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 00:00:48.043166 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 00:00:48.043171 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 00:00:48.043176 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 00:00:48.043180 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 00:00:48.043184 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 00:00:48.043188 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jul 7 00:00:48.043192 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 00:00:48.043196 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jul 7 00:00:48.043201 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 7 00:00:48.043205 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jul 7 00:00:48.043209 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Jul 7 00:00:48.043213 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Jul 7 00:00:48.043217 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jul 7 00:00:48.043221 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jul 7 00:00:48.043227 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jul 7 00:00:48.043231 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jul 7 00:00:48.043235 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jul 7 00:00:48.043239 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jul 7 00:00:48.043243 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jul 7 00:00:48.043247 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jul 7 00:00:48.043251 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jul 7 00:00:48.043256 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Jul 7 00:00:48.043260 kernel: NODE_DATA(0) allocated [mem 0x1bf7fda00-0x1bf804fff]
Jul 7 00:00:48.043264 kernel: Zone ranges:
Jul 7 00:00:48.043268 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jul 7 00:00:48.043275 kernel: DMA32 empty
Jul 7 00:00:48.043279 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jul 7 00:00:48.043284 kernel: Device empty
Jul 7 00:00:48.043288 kernel: Movable zone start for each node
Jul 7 00:00:48.043292 kernel: Early memory node ranges
Jul 7 00:00:48.043297 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jul 7 00:00:48.043302 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Jul 7 00:00:48.043306 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Jul 7 00:00:48.043310 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Jul 7 00:00:48.043314 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jul 7 00:00:48.043319 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jul 7 00:00:48.043323 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jul 7 00:00:48.043327 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jul 7 00:00:48.043331 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jul 7 00:00:48.043336 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jul 7 00:00:48.043340 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jul 7 00:00:48.043344 kernel: cma: Reserved 16 MiB at 0x000000003d400000 on node -1
Jul 7 00:00:48.043349 kernel: psci: probing for conduit method from ACPI.
Jul 7 00:00:48.043354 kernel: psci: PSCIv1.1 detected in firmware.
Jul 7 00:00:48.043358 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 7 00:00:48.043362 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jul 7 00:00:48.043366 kernel: psci: SMC Calling Convention v1.4
Jul 7 00:00:48.043371 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jul 7 00:00:48.043375 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jul 7 00:00:48.043379 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 7 00:00:48.043384 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 7 00:00:48.043388 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 7 00:00:48.043392 kernel: Detected PIPT I-cache on CPU0
Jul 7 00:00:48.043398 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Jul 7 00:00:48.043402 kernel: CPU features: detected: GIC system register CPU interface
Jul 7 00:00:48.043406 kernel: CPU features: detected: Spectre-v4
Jul 7 00:00:48.043411 kernel: CPU features: detected: Spectre-BHB
Jul 7 00:00:48.043415 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 7 00:00:48.043419 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 7 00:00:48.043424 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Jul 7 00:00:48.043428 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 7 00:00:48.043433 kernel: alternatives: applying boot alternatives
Jul 7 00:00:48.043438 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=d1bbaf8ae8f23de11dc703e14022523825f85f007c0c35003d7559228cbdda22
Jul 7 00:00:48.043443 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 00:00:48.043448 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 7 00:00:48.043452 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 00:00:48.043457 kernel: Fallback order for Node 0: 0
Jul 7 00:00:48.043461 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Jul 7 00:00:48.043465 kernel: Policy zone: Normal
Jul 7 00:00:48.043470 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 00:00:48.043474 kernel: software IO TLB: area num 2.
Jul 7 00:00:48.043478 kernel: software IO TLB: mapped [mem 0x0000000036200000-0x000000003a200000] (64MB)
Jul 7 00:00:48.043483 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 7 00:00:48.043487 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 00:00:48.043492 kernel: rcu: RCU event tracing is enabled.
Jul 7 00:00:48.043497 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 7 00:00:48.043502 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 00:00:48.043506 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 00:00:48.043511 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 00:00:48.043515 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 7 00:00:48.043520 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 00:00:48.043524 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 00:00:48.043528 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 7 00:00:48.043533 kernel: GICv3: 960 SPIs implemented
Jul 7 00:00:48.043537 kernel: GICv3: 0 Extended SPIs implemented
Jul 7 00:00:48.043541 kernel: Root IRQ handler: gic_handle_irq
Jul 7 00:00:48.043546 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Jul 7 00:00:48.043551 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Jul 7 00:00:48.043555 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jul 7 00:00:48.043559 kernel: ITS: No ITS available, not enabling LPIs
Jul 7 00:00:48.043564 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 00:00:48.043568 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Jul 7 00:00:48.043573 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 7 00:00:48.043577 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Jul 7 00:00:48.043582 kernel: Console: colour dummy device 80x25
Jul 7 00:00:48.043587 kernel: printk: legacy console [tty1] enabled
Jul 7 00:00:48.043591 kernel: ACPI: Core revision 20240827
Jul 7 00:00:48.043596 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Jul 7 00:00:48.043601 kernel: pid_max: default: 32768 minimum: 301
Jul 7 00:00:48.043606 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 7 00:00:48.043610 kernel: landlock: Up and running.
Jul 7 00:00:48.043615 kernel: SELinux: Initializing.
Jul 7 00:00:48.043620 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 00:00:48.043628 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 00:00:48.043657 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x1a0000e, misc 0x31e1
Jul 7 00:00:48.043662 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0
Jul 7 00:00:48.043667 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 7 00:00:48.043671 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 00:00:48.043676 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 00:00:48.043682 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 7 00:00:48.043687 kernel: Remapping and enabling EFI services.
Jul 7 00:00:48.043692 kernel: smp: Bringing up secondary CPUs ...
Jul 7 00:00:48.043696 kernel: Detected PIPT I-cache on CPU1
Jul 7 00:00:48.043701 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jul 7 00:00:48.043707 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Jul 7 00:00:48.043712 kernel: smp: Brought up 1 node, 2 CPUs
Jul 7 00:00:48.043716 kernel: SMP: Total of 2 processors activated.
Jul 7 00:00:48.043721 kernel: CPU: All CPU(s) started at EL1
Jul 7 00:00:48.043726 kernel: CPU features: detected: 32-bit EL0 Support
Jul 7 00:00:48.043730 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jul 7 00:00:48.043735 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 7 00:00:48.043740 kernel: CPU features: detected: Common not Private translations
Jul 7 00:00:48.043745 kernel: CPU features: detected: CRC32 instructions
Jul 7 00:00:48.043750 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Jul 7 00:00:48.043755 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 7 00:00:48.043760 kernel: CPU features: detected: LSE atomic instructions
Jul 7 00:00:48.043764 kernel: CPU features: detected: Privileged Access Never
Jul 7 00:00:48.043769 kernel: CPU features: detected: Speculation barrier (SB)
Jul 7 00:00:48.043774 kernel: CPU features: detected: TLB range maintenance instructions
Jul 7 00:00:48.043779 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 7 00:00:48.043783 kernel: CPU features: detected: Scalable Vector Extension
Jul 7 00:00:48.043788 kernel: alternatives: applying system-wide alternatives
Jul 7 00:00:48.043794 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Jul 7 00:00:48.043798 kernel: SVE: maximum available vector length 16 bytes per vector
Jul 7 00:00:48.043803 kernel: SVE: default vector length 16 bytes per vector
Jul 7 00:00:48.043808 kernel: Memory: 3959092K/4194160K available (11136K kernel code, 2436K rwdata, 9076K rodata, 39488K init, 1038K bss, 213880K reserved, 16384K cma-reserved)
Jul 7 00:00:48.043813 kernel: devtmpfs: initialized
Jul 7 00:00:48.043817 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 00:00:48.043822 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 7 00:00:48.043827 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 7 00:00:48.043831 kernel: 0 pages in range for non-PLT usage
Jul 7 00:00:48.043837 kernel: 508432 pages in range for PLT usage
Jul 7 00:00:48.043842 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 00:00:48.043846 kernel: SMBIOS 3.1.0 present.
Jul 7 00:00:48.043851 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jul 7 00:00:48.043856 kernel: DMI: Memory slots populated: 2/2
Jul 7 00:00:48.043860 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 00:00:48.043865 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 7 00:00:48.043870 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 7 00:00:48.043875 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 7 00:00:48.043880 kernel: audit: initializing netlink subsys (disabled)
Jul 7 00:00:48.043885 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Jul 7 00:00:48.043890 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 00:00:48.043894 kernel: cpuidle: using governor menu
Jul 7 00:00:48.043899 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 7 00:00:48.043904 kernel: ASID allocator initialised with 32768 entries
Jul 7 00:00:48.043908 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 00:00:48.043913 kernel: Serial: AMBA PL011 UART driver
Jul 7 00:00:48.043918 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 7 00:00:48.043923 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 7 00:00:48.043928 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 7 00:00:48.043933 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 7 00:00:48.043937 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 00:00:48.043942 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 00:00:48.043947 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 7 00:00:48.043952 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 7 00:00:48.043956 kernel: ACPI: Added _OSI(Module Device)
Jul 7 00:00:48.043961 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 00:00:48.043967 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 00:00:48.043971 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 7 00:00:48.043976 kernel: ACPI: Interpreter enabled
Jul 7 00:00:48.043981 kernel: ACPI: Using GIC for interrupt routing
Jul 7 00:00:48.043986 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jul 7 00:00:48.043990 kernel: printk: legacy console [ttyAMA0] enabled
Jul 7 00:00:48.043995 kernel: printk: legacy bootconsole [pl11] disabled
Jul 7 00:00:48.044000 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jul 7 00:00:48.044004 kernel: ACPI: CPU0 has been hot-added
Jul 7 00:00:48.044010 kernel: ACPI: CPU1 has been hot-added
Jul 7 00:00:48.044015 kernel: iommu: Default domain type: Translated
Jul 7 00:00:48.044019 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 7 00:00:48.044024 kernel: efivars: Registered efivars operations
Jul 7 00:00:48.044029 kernel: vgaarb: loaded
Jul 7 00:00:48.044033 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 7 00:00:48.044038 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 00:00:48.044043 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 00:00:48.044047 kernel: pnp: PnP ACPI init
Jul 7 00:00:48.044053 kernel: pnp: PnP ACPI: found 0 devices
Jul 7 00:00:48.044057 kernel: NET: Registered PF_INET protocol family
Jul 7 00:00:48.044062 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 7 00:00:48.044067 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 7 00:00:48.044072 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 00:00:48.044076 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 7 00:00:48.044081 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 7 00:00:48.044086 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 7 00:00:48.044091 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 00:00:48.044096 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 00:00:48.044101 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 00:00:48.044106 kernel: PCI: CLS 0 bytes, default 64
Jul 7 00:00:48.044110 kernel: kvm [1]: HYP mode not available
Jul 7 00:00:48.044115 kernel: Initialise system trusted keyrings
Jul 7 00:00:48.044120 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 7 00:00:48.044124 kernel: Key type asymmetric registered
Jul 7 00:00:48.044129 kernel: Asymmetric key parser 'x509' registered
Jul 7 00:00:48.044134 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 7 00:00:48.044139 kernel: io scheduler mq-deadline registered
Jul 7 00:00:48.044144 kernel: io scheduler kyber registered
Jul 7 00:00:48.044148 kernel: io scheduler bfq registered
Jul 7 00:00:48.044153 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 00:00:48.044158 kernel: thunder_xcv, ver 1.0
Jul 7 00:00:48.044162 kernel: thunder_bgx, ver 1.0
Jul 7 00:00:48.044167 kernel: nicpf, ver 1.0
Jul 7 00:00:48.044172 kernel: nicvf, ver 1.0
Jul 7 00:00:48.044297 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 7 00:00:48.044351 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-07T00:00:47 UTC (1751846447)
Jul 7 00:00:48.044357 kernel: efifb: probing for efifb
Jul 7 00:00:48.044362 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 7 00:00:48.044367 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 7 00:00:48.044372 kernel: efifb: scrolling: redraw
Jul 7 00:00:48.044376 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 7 00:00:48.044381 kernel: Console: switching to colour frame buffer device 128x48
Jul 7 00:00:48.044386 kernel: fb0: EFI VGA frame buffer device
Jul 7 00:00:48.044391 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jul 7 00:00:48.044396 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 7 00:00:48.044401 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 7 00:00:48.044405 kernel: NET: Registered PF_INET6 protocol family
Jul 7 00:00:48.044410 kernel: watchdog: NMI not fully supported
Jul 7 00:00:48.044415 kernel: watchdog: Hard watchdog permanently disabled
Jul 7 00:00:48.044420 kernel: Segment Routing with IPv6
Jul 7 00:00:48.044424 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 00:00:48.044429 kernel: NET: Registered PF_PACKET protocol family
Jul 7 00:00:48.044434 kernel: Key type dns_resolver registered
Jul 7 00:00:48.044439 kernel: registered taskstats version 1
Jul 7 00:00:48.044444 kernel: Loading compiled-in X.509 certificates
Jul 7 00:00:48.044449 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: f8c1d02496b1c3f2ac4a0c4b5b2a55d3dc0ca718'
Jul 7 00:00:48.044453 kernel: Demotion targets for Node 0: null
Jul 7 00:00:48.044458 kernel: Key type .fscrypt registered
Jul 7 00:00:48.044463 kernel: Key type fscrypt-provisioning registered
Jul 7 00:00:48.044468 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 7 00:00:48.044472 kernel: ima: Allocated hash algorithm: sha1
Jul 7 00:00:48.044478 kernel: ima: No architecture policies found
Jul 7 00:00:48.044483 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 7 00:00:48.044487 kernel: clk: Disabling unused clocks
Jul 7 00:00:48.044492 kernel: PM: genpd: Disabling unused power domains
Jul 7 00:00:48.044497 kernel: Warning: unable to open an initial console.
Jul 7 00:00:48.044502 kernel: Freeing unused kernel memory: 39488K
Jul 7 00:00:48.044506 kernel: Run /init as init process
Jul 7 00:00:48.044511 kernel: with arguments:
Jul 7 00:00:48.044516 kernel: /init
Jul 7 00:00:48.044521 kernel: with environment:
Jul 7 00:00:48.044525 kernel: HOME=/
Jul 7 00:00:48.044530 kernel: TERM=linux
Jul 7 00:00:48.044534 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 00:00:48.044540 systemd[1]: Successfully made /usr/ read-only.
Jul 7 00:00:48.044547 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 00:00:48.044553 systemd[1]: Detected virtualization microsoft.
Jul 7 00:00:48.044558 systemd[1]: Detected architecture arm64.
Jul 7 00:00:48.044563 systemd[1]: Running in initrd.
Jul 7 00:00:48.044568 systemd[1]: No hostname configured, using default hostname.
Jul 7 00:00:48.044573 systemd[1]: Hostname set to .
Jul 7 00:00:48.044578 systemd[1]: Initializing machine ID from random generator.
Jul 7 00:00:48.044583 systemd[1]: Queued start job for default target initrd.target.
Jul 7 00:00:48.044589 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 00:00:48.044594 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 00:00:48.044599 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 00:00:48.044605 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 00:00:48.044610 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 00:00:48.044616 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 00:00:48.044622 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 00:00:48.044627 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 00:00:48.044644 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 00:00:48.044650 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 00:00:48.044655 systemd[1]: Reached target paths.target - Path Units.
Jul 7 00:00:48.044661 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 00:00:48.044666 systemd[1]: Reached target swap.target - Swaps.
Jul 7 00:00:48.044671 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 00:00:48.044676 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 00:00:48.044681 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 00:00:48.044686 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 00:00:48.044692 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 7 00:00:48.044698 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 00:00:48.044703 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 00:00:48.044708 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 00:00:48.044713 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 00:00:48.044718 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 00:00:48.044723 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 00:00:48.044729 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 00:00:48.044734 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 7 00:00:48.044740 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 00:00:48.044745 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 00:00:48.044750 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 00:00:48.044769 systemd-journald[224]: Collecting audit messages is disabled.
Jul 7 00:00:48.044785 systemd-journald[224]: Journal started
Jul 7 00:00:48.044799 systemd-journald[224]: Runtime Journal (/run/log/journal/bdac7a57878b4b9380971ae185076947) is 8M, max 78.5M, 70.5M free.
Jul 7 00:00:48.058334 systemd-modules-load[226]: Inserted module 'overlay'
Jul 7 00:00:48.067060 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:00:48.078645 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 00:00:48.078684 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 00:00:48.084156 systemd-modules-load[226]: Inserted module 'br_netfilter'
Jul 7 00:00:48.087943 kernel: Bridge firewalling registered
Jul 7 00:00:48.092153 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 00:00:48.096881 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 00:00:48.113679 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 00:00:48.117092 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 00:00:48.125492 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:00:48.137503 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 00:00:48.146241 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 00:00:48.163387 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 00:00:48.171342 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 00:00:48.192986 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 00:00:48.200248 systemd-tmpfiles[246]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 7 00:00:48.202241 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:00:48.211352 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 00:00:48.222071 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 00:00:48.232700 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 7 00:00:48.260804 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 00:00:48.275219 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 00:00:48.287801 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=d1bbaf8ae8f23de11dc703e14022523825f85f007c0c35003d7559228cbdda22 Jul 7 00:00:48.315861 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 00:00:48.334239 systemd-resolved[262]: Positive Trust Anchors: Jul 7 00:00:48.334252 systemd-resolved[262]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 00:00:48.334272 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 00:00:48.335950 systemd-resolved[262]: Defaulting to hostname 'linux'. Jul 7 00:00:48.342907 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 00:00:48.352800 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:00:48.435658 kernel: SCSI subsystem initialized Jul 7 00:00:48.440640 kernel: Loading iSCSI transport class v2.0-870. Jul 7 00:00:48.448661 kernel: iscsi: registered transport (tcp) Jul 7 00:00:48.461386 kernel: iscsi: registered transport (qla4xxx) Jul 7 00:00:48.461431 kernel: QLogic iSCSI HBA Driver Jul 7 00:00:48.475089 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 00:00:48.495249 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 00:00:48.501547 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 00:00:48.549040 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 7 00:00:48.558042 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jul 7 00:00:48.612651 kernel: raid6: neonx8 gen() 18537 MB/s Jul 7 00:00:48.630639 kernel: raid6: neonx4 gen() 18556 MB/s Jul 7 00:00:48.649638 kernel: raid6: neonx2 gen() 17074 MB/s Jul 7 00:00:48.668638 kernel: raid6: neonx1 gen() 15052 MB/s Jul 7 00:00:48.687724 kernel: raid6: int64x8 gen() 10530 MB/s Jul 7 00:00:48.706721 kernel: raid6: int64x4 gen() 10608 MB/s Jul 7 00:00:48.725725 kernel: raid6: int64x2 gen() 8980 MB/s Jul 7 00:00:48.747844 kernel: raid6: int64x1 gen() 7013 MB/s Jul 7 00:00:48.747914 kernel: raid6: using algorithm neonx4 gen() 18556 MB/s Jul 7 00:00:48.769915 kernel: raid6: .... xor() 15147 MB/s, rmw enabled Jul 7 00:00:48.769989 kernel: raid6: using neon recovery algorithm Jul 7 00:00:48.777756 kernel: xor: measuring software checksum speed Jul 7 00:00:48.777764 kernel: 8regs : 28625 MB/sec Jul 7 00:00:48.780287 kernel: 32regs : 28838 MB/sec Jul 7 00:00:48.782791 kernel: arm64_neon : 37690 MB/sec Jul 7 00:00:48.785790 kernel: xor: using function: arm64_neon (37690 MB/sec) Jul 7 00:00:48.823646 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 7 00:00:48.828891 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 7 00:00:48.837741 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:00:48.867295 systemd-udevd[473]: Using default interface naming scheme 'v255'. Jul 7 00:00:48.871123 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 00:00:48.883349 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 7 00:00:48.911023 dracut-pre-trigger[482]: rd.md=0: removing MD RAID activation Jul 7 00:00:48.931141 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 00:00:48.937141 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 00:00:48.986845 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Jul 7 00:00:48.997584 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 00:00:49.065655 kernel: hv_vmbus: Vmbus version:5.3
Jul 7 00:00:49.068129 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 00:00:49.072289 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:00:49.082601 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:00:49.122689 kernel: hv_vmbus: registering driver hid_hyperv
Jul 7 00:00:49.122712 kernel: hv_vmbus: registering driver hyperv_keyboard
Jul 7 00:00:49.122729 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jul 7 00:00:49.122736 kernel: hv_vmbus: registering driver hv_netvsc
Jul 7 00:00:49.122742 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jul 7 00:00:49.122749 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jul 7 00:00:49.122904 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 7 00:00:49.126271 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:00:49.139598 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 7 00:00:49.139613 kernel: hv_vmbus: registering driver hv_storvsc
Jul 7 00:00:49.143809 kernel: scsi host0: storvsc_host_t
Jul 7 00:00:49.147237 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 7 00:00:49.160259 kernel: scsi host1: storvsc_host_t
Jul 7 00:00:49.160372 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jul 7 00:00:49.166661 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jul 7 00:00:49.180658 kernel: PTP clock support registered
Jul 7 00:00:49.175223 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:00:49.203764 kernel: hv_utils: Registering HyperV Utility Driver
Jul 7 00:00:49.203779 kernel: hv_vmbus: registering driver hv_utils
Jul 7 00:00:49.203794 kernel: hv_utils: Heartbeat IC version 3.0
Jul 7 00:00:49.203801 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jul 7 00:00:49.203938 kernel: hv_utils: Shutdown IC version 3.2
Jul 7 00:00:49.203945 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jul 7 00:00:49.208189 kernel: hv_utils: TimeSync IC version 4.0
Jul 7 00:00:49.208227 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 7 00:00:48.817008 systemd-resolved[262]: Clock change detected. Flushing caches.
Jul 7 00:00:48.846300 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jul 7 00:00:48.846417 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jul 7 00:00:48.846483 kernel: hv_netvsc 002248b5-bc21-0022-48b5-bc21002248b5 eth0: VF slot 1 added
Jul 7 00:00:48.846551 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#6 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 7 00:00:48.846608 systemd-journald[224]: Time jumped backwards, rotating.
Jul 7 00:00:48.846635 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#13 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 7 00:00:48.851213 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 00:00:48.855203 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 7 00:00:48.863858 kernel: hv_vmbus: registering driver hv_pci
Jul 7 00:00:48.863886 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jul 7 00:00:48.864017 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 7 00:00:48.864024 kernel: hv_pci 755872d3-1185-4703-a027-f69760acc9c2: PCI VMBus probing: Using version 0x10004
Jul 7 00:00:48.871211 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jul 7 00:00:48.880481 kernel: hv_pci 755872d3-1185-4703-a027-f69760acc9c2: PCI host bridge to bus 1185:00
Jul 7 00:00:48.880624 kernel: pci_bus 1185:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jul 7 00:00:48.880717 kernel: pci_bus 1185:00: No busn resource found for root bus, will use [bus 00-ff]
Jul 7 00:00:48.890379 kernel: pci 1185:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Jul 7 00:00:48.895217 kernel: pci 1185:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 7 00:00:48.899196 kernel: pci 1185:00:02.0: enabling Extended Tags
Jul 7 00:00:48.918678 kernel: pci 1185:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 1185:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Jul 7 00:00:48.918867 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#56 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jul 7 00:00:48.918945 kernel: pci_bus 1185:00: busn_res: [bus 00-ff] end is updated to 00
Jul 7 00:00:48.927614 kernel: pci 1185:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Jul 7 00:00:48.947205 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#26 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jul 7 00:00:48.993943 kernel: mlx5_core 1185:00:02.0: enabling device (0000 -> 0002)
Jul 7 00:00:49.001465 kernel: mlx5_core 1185:00:02.0: PTM is not supported by PCIe
Jul 7 00:00:49.001563 kernel: mlx5_core 1185:00:02.0: firmware version: 16.30.5006
Jul 7 00:00:49.169921 kernel: hv_netvsc 002248b5-bc21-0022-48b5-bc21002248b5 eth0: VF registering: eth1
Jul 7 00:00:49.170161 kernel: mlx5_core 1185:00:02.0 eth1: joined to eth0
Jul 7 00:00:49.176302 kernel: mlx5_core 1185:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jul 7 00:00:49.183213 kernel: mlx5_core 1185:00:02.0 enP4485s1: renamed from eth1
Jul 7 00:00:49.332773 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jul 7 00:00:49.411290 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jul 7 00:00:49.416799 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jul 7 00:00:49.441318 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 7 00:00:49.451507 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 00:00:49.544540 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jul 7 00:00:49.655473 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 00:00:49.660554 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 00:00:49.668863 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 00:00:49.677920 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 00:00:49.687288 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 00:00:49.709729 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 00:00:50.482773 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#189 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 7 00:00:50.494233 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 00:00:50.494267 disk-uuid[645]: The operation has completed successfully.
Jul 7 00:00:50.550330 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 00:00:50.550420 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 00:00:50.582013 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 00:00:50.598398 sh[822]: Success
Jul 7 00:00:50.629180 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 00:00:50.629238 kernel: device-mapper: uevent: version 1.0.3
Jul 7 00:00:50.634072 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 7 00:00:50.643211 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jul 7 00:00:50.798158 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 00:00:50.807576 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 00:00:50.811810 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 00:00:50.840810 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 7 00:00:50.840853 kernel: BTRFS: device fsid 2cfafe0a-eb24-4e1d-b9c9-dec7de7e4c4d devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (840)
Jul 7 00:00:50.850210 kernel: BTRFS info (device dm-0): first mount of filesystem 2cfafe0a-eb24-4e1d-b9c9-dec7de7e4c4d
Jul 7 00:00:50.850252 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 7 00:00:50.853164 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 7 00:00:51.060236 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 00:00:51.064238 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 00:00:51.072624 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 00:00:51.073301 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 00:00:51.093828 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 00:00:51.117543 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (863)
Jul 7 00:00:51.117565 kernel: BTRFS info (device sda6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 7 00:00:51.123065 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 00:00:51.126236 kernel: BTRFS info (device sda6): using free-space-tree
Jul 7 00:00:51.149227 kernel: BTRFS info (device sda6): last unmount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 7 00:00:51.149608 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 00:00:51.155861 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 00:00:51.213245 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 00:00:51.223546 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 00:00:51.255488 systemd-networkd[1009]: lo: Link UP
Jul 7 00:00:51.255496 systemd-networkd[1009]: lo: Gained carrier
Jul 7 00:00:51.256966 systemd-networkd[1009]: Enumeration completed
Jul 7 00:00:51.257547 systemd-networkd[1009]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:00:51.257550 systemd-networkd[1009]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 00:00:51.258447 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 00:00:51.269015 systemd[1]: Reached target network.target - Network.
Jul 7 00:00:51.314208 kernel: mlx5_core 1185:00:02.0 enP4485s1: Link up
Jul 7 00:00:51.350220 kernel: hv_netvsc 002248b5-bc21-0022-48b5-bc21002248b5 eth0: Data path switched to VF: enP4485s1
Jul 7 00:00:51.350666 systemd-networkd[1009]: enP4485s1: Link UP
Jul 7 00:00:51.350723 systemd-networkd[1009]: eth0: Link UP
Jul 7 00:00:51.350810 systemd-networkd[1009]: eth0: Gained carrier
Jul 7 00:00:51.350818 systemd-networkd[1009]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:00:51.368452 systemd-networkd[1009]: enP4485s1: Gained carrier
Jul 7 00:00:51.382220 systemd-networkd[1009]: eth0: DHCPv4 address 10.200.20.4/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 7 00:00:51.923486 ignition[928]: Ignition 2.21.0
Jul 7 00:00:51.923500 ignition[928]: Stage: fetch-offline
Jul 7 00:00:51.926837 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 00:00:51.923567 ignition[928]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:00:51.935012 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 7 00:00:51.923572 ignition[928]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 00:00:51.923667 ignition[928]: parsed url from cmdline: ""
Jul 7 00:00:51.923670 ignition[928]: no config URL provided
Jul 7 00:00:51.923673 ignition[928]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 00:00:51.923677 ignition[928]: no config at "/usr/lib/ignition/user.ign"
Jul 7 00:00:51.923680 ignition[928]: failed to fetch config: resource requires networking
Jul 7 00:00:51.923889 ignition[928]: Ignition finished successfully
Jul 7 00:00:51.965519 ignition[1020]: Ignition 2.21.0
Jul 7 00:00:51.965525 ignition[1020]: Stage: fetch
Jul 7 00:00:51.966550 ignition[1020]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:00:51.966559 ignition[1020]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 00:00:51.967285 ignition[1020]: parsed url from cmdline: ""
Jul 7 00:00:51.967289 ignition[1020]: no config URL provided
Jul 7 00:00:51.967295 ignition[1020]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 00:00:51.967304 ignition[1020]: no config at "/usr/lib/ignition/user.ign"
Jul 7 00:00:51.967347 ignition[1020]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jul 7 00:00:52.066752 ignition[1020]: GET result: OK
Jul 7 00:00:52.066864 ignition[1020]: config has been read from IMDS userdata
Jul 7 00:00:52.066886 ignition[1020]: parsing config with SHA512: 521e94b8ca692c2b4c05ad17860f0e1734d78094e42d711e58282b949abdf222cacb0ee36fe9e4526062a231b46cb1b214db74f8aa890d9f78c2f94aa2b5fbe2
Jul 7 00:00:52.070251 unknown[1020]: fetched base config from "system"
Jul 7 00:00:52.070556 ignition[1020]: fetch: fetch complete
Jul 7 00:00:52.070255 unknown[1020]: fetched base config from "system"
Jul 7 00:00:52.070560 ignition[1020]: fetch: fetch passed
Jul 7 00:00:52.070258 unknown[1020]: fetched user config from "azure"
Jul 7 00:00:52.070603 ignition[1020]: Ignition finished successfully
Jul 7 00:00:52.072519 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 7 00:00:52.077681 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 00:00:52.112128 ignition[1026]: Ignition 2.21.0
Jul 7 00:00:52.112140 ignition[1026]: Stage: kargs
Jul 7 00:00:52.115969 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 00:00:52.112443 ignition[1026]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:00:52.120870 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 00:00:52.112451 ignition[1026]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 00:00:52.113329 ignition[1026]: kargs: kargs passed
Jul 7 00:00:52.113393 ignition[1026]: Ignition finished successfully
Jul 7 00:00:52.150667 ignition[1033]: Ignition 2.21.0
Jul 7 00:00:52.150678 ignition[1033]: Stage: disks
Jul 7 00:00:52.154161 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 00:00:52.150889 ignition[1033]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:00:52.160209 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 00:00:52.150896 ignition[1033]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 00:00:52.168025 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 00:00:52.151612 ignition[1033]: disks: disks passed
Jul 7 00:00:52.176438 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 00:00:52.151654 ignition[1033]: Ignition finished successfully
Jul 7 00:00:52.184950 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 00:00:52.193467 systemd[1]: Reached target basic.target - Basic System.
Jul 7 00:00:52.202412 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 00:00:52.277740 systemd-fsck[1041]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Jul 7 00:00:52.286072 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 00:00:52.292028 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 00:00:52.448229 kernel: EXT4-fs (sda9): mounted filesystem 8d88df29-f94d-4ab8-8fb6-af875603e6d4 r/w with ordered data mode. Quota mode: none.
Jul 7 00:00:52.448785 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 00:00:52.452642 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 00:00:52.471904 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 00:00:52.476724 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 00:00:52.494361 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 7 00:00:52.504447 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 00:00:52.504594 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 00:00:52.511087 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 00:00:52.540268 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1055)
Jul 7 00:00:52.533573 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 00:00:52.535493 systemd-networkd[1009]: enP4485s1: Gained IPv6LL
Jul 7 00:00:52.560323 kernel: BTRFS info (device sda6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 7 00:00:52.560343 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 00:00:52.560350 kernel: BTRFS info (device sda6): using free-space-tree
Jul 7 00:00:52.563174 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 00:00:52.913060 coreos-metadata[1057]: Jul 07 00:00:52.912 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 7 00:00:52.919117 coreos-metadata[1057]: Jul 07 00:00:52.919 INFO Fetch successful
Jul 7 00:00:52.919117 coreos-metadata[1057]: Jul 07 00:00:52.919 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jul 7 00:00:52.930861 coreos-metadata[1057]: Jul 07 00:00:52.930 INFO Fetch successful
Jul 7 00:00:52.943310 coreos-metadata[1057]: Jul 07 00:00:52.943 INFO wrote hostname ci-4372.0.1-a-609ca7abb9 to /sysroot/etc/hostname
Jul 7 00:00:52.949915 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 7 00:00:52.982312 systemd-networkd[1009]: eth0: Gained IPv6LL
Jul 7 00:00:53.102517 initrd-setup-root[1086]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 00:00:53.145702 initrd-setup-root[1093]: cut: /sysroot/etc/group: No such file or directory
Jul 7 00:00:53.164381 initrd-setup-root[1100]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 00:00:53.172081 initrd-setup-root[1107]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 00:00:53.942081 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 7 00:00:53.948057 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 00:00:53.971590 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 00:00:53.983117 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 00:00:53.991537 kernel: BTRFS info (device sda6): last unmount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 7 00:00:54.008690 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 7 00:00:54.018126 ignition[1174]: INFO : Ignition 2.21.0
Jul 7 00:00:54.018126 ignition[1174]: INFO : Stage: mount
Jul 7 00:00:54.025389 ignition[1174]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 00:00:54.025389 ignition[1174]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 00:00:54.025389 ignition[1174]: INFO : mount: mount passed
Jul 7 00:00:54.025389 ignition[1174]: INFO : Ignition finished successfully
Jul 7 00:00:54.024462 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 7 00:00:54.034651 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 7 00:00:54.063538 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 00:00:54.081205 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1187)
Jul 7 00:00:54.081453 kernel: BTRFS info (device sda6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 7 00:00:54.090724 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 00:00:54.093838 kernel: BTRFS info (device sda6): using free-space-tree
Jul 7 00:00:54.096056 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 00:00:54.123554 ignition[1204]: INFO : Ignition 2.21.0
Jul 7 00:00:54.123554 ignition[1204]: INFO : Stage: files
Jul 7 00:00:54.130930 ignition[1204]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 00:00:54.130930 ignition[1204]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 00:00:54.130930 ignition[1204]: DEBUG : files: compiled without relabeling support, skipping
Jul 7 00:00:54.144005 ignition[1204]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 7 00:00:54.144005 ignition[1204]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 7 00:00:54.180092 ignition[1204]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 7 00:00:54.185595 ignition[1204]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 7 00:00:54.185595 ignition[1204]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 7 00:00:54.180477 unknown[1204]: wrote ssh authorized keys file for user: core
Jul 7 00:00:54.199534 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 7 00:00:54.206931 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 7 00:00:54.249070 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 7 00:00:54.366460 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 7 00:00:54.375595 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 7 00:00:54.375595 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 7 00:00:54.838976 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 7 00:00:54.903851 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 7 00:00:54.910770 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 7 00:00:54.910770 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 00:00:54.910770 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 00:00:54.910770 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 00:00:54.910770 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 00:00:54.910770 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 00:00:54.910770 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 00:00:54.910770 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 00:00:54.965600 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 00:00:54.965600 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 00:00:54.965600 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 00:00:54.965600 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 00:00:54.965600 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 00:00:54.965600 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Jul 7 00:00:55.644768 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 7 00:00:55.834375 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 00:00:55.834375 ignition[1204]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 7 00:00:55.859532 ignition[1204]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 00:00:55.876943 ignition[1204]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 00:00:55.876943 ignition[1204]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 7 00:00:55.898371 ignition[1204]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 00:00:55.898371 ignition[1204]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 00:00:55.898371 ignition[1204]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 00:00:55.898371 ignition[1204]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 00:00:55.898371 ignition[1204]: INFO : files: files passed
Jul 7 00:00:55.898371 ignition[1204]: INFO : Ignition finished successfully
Jul 7 00:00:55.878552 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 00:00:55.889890 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 00:00:55.919020 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 7 00:00:55.934723 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 7 00:00:55.958809 initrd-setup-root-after-ignition[1233]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 00:00:55.958809 initrd-setup-root-after-ignition[1233]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 00:00:55.934794 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 00:00:55.990165 initrd-setup-root-after-ignition[1237]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 00:00:55.955820 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 00:00:55.964003 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 7 00:00:55.974806 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 7 00:00:56.014509 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 00:00:56.015374 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 00:00:56.023916 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 00:00:56.028303 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 7 00:00:56.037278 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 7 00:00:56.037901 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 7 00:00:56.072701 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 00:00:56.079577 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 7 00:00:56.103291 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 7 00:00:56.108101 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 00:00:56.116880 systemd[1]: Stopped target timers.target - Timer Units.
Jul 7 00:00:56.125256 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 7 00:00:56.125352 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 00:00:56.136654 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 7 00:00:56.141198 systemd[1]: Stopped target basic.target - Basic System.
Jul 7 00:00:56.149447 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 7 00:00:56.158033 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 00:00:56.165693 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 7 00:00:56.174267 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 00:00:56.183087 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 7 00:00:56.191348 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 00:00:56.200185 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 7 00:00:56.208049 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 7 00:00:56.216586 systemd[1]: Stopped target swap.target - Swaps.
Jul 7 00:00:56.223467 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 7 00:00:56.223570 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 00:00:56.234313 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 7 00:00:56.238777 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 00:00:56.247392 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 7 00:00:56.251211 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 00:00:56.256267 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 7 00:00:56.256360 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 7 00:00:56.269525 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 7 00:00:56.269602 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 00:00:56.274857 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 7 00:00:56.274927 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 7 00:00:56.282313 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 7 00:00:56.340253 ignition[1258]: INFO : Ignition 2.21.0
Jul 7 00:00:56.340253 ignition[1258]: INFO : Stage: umount
Jul 7 00:00:56.340253 ignition[1258]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 00:00:56.340253 ignition[1258]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 00:00:56.340253 ignition[1258]: INFO : umount: umount passed
Jul 7 00:00:56.340253 ignition[1258]: INFO : Ignition finished successfully
Jul 7 00:00:56.282383 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 7 00:00:56.296423 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 7 00:00:56.309467 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 7 00:00:56.309578 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 00:00:56.322926 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 7 00:00:56.333223 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 7 00:00:56.333356 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 00:00:56.345950 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 7 00:00:56.346031 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 00:00:56.361813 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 7 00:00:56.362679 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 7 00:00:56.362767 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 7 00:00:56.372802 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 7 00:00:56.374234 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 7 00:00:56.383518 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 7 00:00:56.383593 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 7 00:00:56.392270 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 7 00:00:56.392318 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 7 00:00:56.399897 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 7 00:00:56.399926 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 7 00:00:56.407501 systemd[1]: Stopped target network.target - Network. Jul 7 00:00:56.415727 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 7 00:00:56.415760 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 00:00:56.424247 systemd[1]: Stopped target paths.target - Path Units. Jul 7 00:00:56.432415 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 7 00:00:56.436441 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 00:00:56.441339 systemd[1]: Stopped target slices.target - Slice Units. 
Jul 7 00:00:56.448639 systemd[1]: Stopped target sockets.target - Socket Units. Jul 7 00:00:56.456482 systemd[1]: iscsid.socket: Deactivated successfully. Jul 7 00:00:56.456526 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 00:00:56.464022 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 7 00:00:56.464049 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 00:00:56.471550 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 7 00:00:56.471599 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 00:00:56.479068 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 7 00:00:56.479097 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 00:00:56.487729 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 00:00:56.496037 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 00:00:56.517996 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 00:00:56.518100 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 00:00:56.528979 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 7 00:00:56.529206 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 7 00:00:56.529283 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 7 00:00:56.546007 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 7 00:00:56.546491 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 7 00:00:56.707842 kernel: hv_netvsc 002248b5-bc21-0022-48b5-bc21002248b5 eth0: Data path switched from VF: enP4485s1 Jul 7 00:00:56.553593 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 00:00:56.553629 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Jul 7 00:00:56.563073 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 7 00:00:56.577751 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 00:00:56.577808 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 00:00:56.585957 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 00:00:56.585994 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:00:56.594034 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 00:00:56.594069 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 00:00:56.598421 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 00:00:56.598453 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 00:00:56.610036 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:00:56.618339 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 7 00:00:56.618402 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 7 00:00:56.630763 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 00:00:56.638116 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 00:00:56.646538 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 7 00:00:56.646573 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 00:00:56.654986 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 7 00:00:56.655006 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:00:56.662901 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 00:00:56.662957 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Jul 7 00:00:56.676328 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 7 00:00:56.676382 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 00:00:56.694142 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 00:00:56.694193 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 00:00:56.710326 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 00:00:56.717440 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 7 00:00:56.717498 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 00:00:56.730164 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 00:00:56.889651 systemd-journald[224]: Received SIGTERM from PID 1 (systemd). Jul 7 00:00:56.730218 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 00:00:56.735738 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 00:00:56.735778 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:00:56.744799 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 7 00:00:56.744840 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 7 00:00:56.744864 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 7 00:00:56.745932 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 00:00:56.745992 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 00:00:56.758925 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 00:00:56.759035 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Jul 7 00:00:56.764516 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 7 00:00:56.764594 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 00:00:56.789562 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 00:00:56.789646 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 7 00:00:56.800050 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 00:00:56.809396 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 00:00:56.827825 systemd[1]: Switching root. Jul 7 00:00:56.961399 systemd-journald[224]: Journal stopped Jul 7 00:01:00.444486 kernel: SELinux: policy capability network_peer_controls=1 Jul 7 00:01:00.444507 kernel: SELinux: policy capability open_perms=1 Jul 7 00:01:00.444515 kernel: SELinux: policy capability extended_socket_class=1 Jul 7 00:01:00.444521 kernel: SELinux: policy capability always_check_network=0 Jul 7 00:01:00.444527 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 7 00:01:00.444532 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 7 00:01:00.444538 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 7 00:01:00.444543 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 7 00:01:00.444549 kernel: SELinux: policy capability userspace_initial_context=0 Jul 7 00:01:00.444556 systemd[1]: Successfully loaded SELinux policy in 134.198ms. Jul 7 00:01:00.444563 kernel: audit: type=1403 audit(1751846457.627:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 7 00:01:00.444569 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.970ms. 
Jul 7 00:01:00.444576 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 7 00:01:00.444582 systemd[1]: Detected virtualization microsoft. Jul 7 00:01:00.444588 systemd[1]: Detected architecture arm64. Jul 7 00:01:00.444595 systemd[1]: Detected first boot. Jul 7 00:01:00.444601 systemd[1]: Hostname set to . Jul 7 00:01:00.444607 systemd[1]: Initializing machine ID from random generator. Jul 7 00:01:00.444613 zram_generator::config[1301]: No configuration found. Jul 7 00:01:00.444619 kernel: NET: Registered PF_VSOCK protocol family Jul 7 00:01:00.444625 systemd[1]: Populated /etc with preset unit settings. Jul 7 00:01:00.444631 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 7 00:01:00.444638 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 7 00:01:00.444644 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 7 00:01:00.444650 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 7 00:01:00.444656 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 7 00:01:00.444664 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 7 00:01:00.444670 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 7 00:01:00.444676 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 7 00:01:00.444683 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 7 00:01:00.444689 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Jul 7 00:01:00.444695 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 7 00:01:00.444701 systemd[1]: Created slice user.slice - User and Session Slice. Jul 7 00:01:00.444707 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 00:01:00.444713 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 00:01:00.444719 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 7 00:01:00.444725 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 7 00:01:00.444731 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 7 00:01:00.444738 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 00:01:00.444744 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 7 00:01:00.444752 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 00:01:00.444758 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 00:01:00.444764 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 7 00:01:00.444771 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 7 00:01:00.444777 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 7 00:01:00.444784 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 7 00:01:00.444791 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 00:01:00.444797 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 00:01:00.444803 systemd[1]: Reached target slices.target - Slice Units. Jul 7 00:01:00.444809 systemd[1]: Reached target swap.target - Swaps. 
Jul 7 00:01:00.444815 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 7 00:01:00.444821 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 7 00:01:00.444829 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 7 00:01:00.444835 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 00:01:00.444842 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 00:01:00.444848 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:01:00.444854 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 7 00:01:00.444860 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 7 00:01:00.444867 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 7 00:01:00.444873 systemd[1]: Mounting media.mount - External Media Directory... Jul 7 00:01:00.444879 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 7 00:01:00.444885 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 7 00:01:00.444893 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 7 00:01:00.444900 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 7 00:01:00.444906 systemd[1]: Reached target machines.target - Containers. Jul 7 00:01:00.444912 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 7 00:01:00.444919 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:01:00.444925 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Jul 7 00:01:00.444931 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 7 00:01:00.444938 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:01:00.444944 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 00:01:00.444950 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:01:00.444956 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 7 00:01:00.444962 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:01:00.444969 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 7 00:01:00.444976 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 7 00:01:00.444982 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 7 00:01:00.444988 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 7 00:01:00.444994 systemd[1]: Stopped systemd-fsck-usr.service. Jul 7 00:01:00.445001 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:01:00.445007 kernel: fuse: init (API version 7.41) Jul 7 00:01:00.445013 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 00:01:00.445019 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 00:01:00.445026 kernel: loop: module loaded Jul 7 00:01:00.445032 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 00:01:00.445038 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Jul 7 00:01:00.445044 kernel: ACPI: bus type drm_connector registered Jul 7 00:01:00.445050 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 7 00:01:00.445057 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 00:01:00.445063 systemd[1]: verity-setup.service: Deactivated successfully. Jul 7 00:01:00.445069 systemd[1]: Stopped verity-setup.service. Jul 7 00:01:00.445076 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 7 00:01:00.445082 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 7 00:01:00.445088 systemd[1]: Mounted media.mount - External Media Directory. Jul 7 00:01:00.445094 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 7 00:01:00.445100 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 7 00:01:00.445107 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 7 00:01:00.445113 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 7 00:01:00.445119 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 00:01:00.445138 systemd-journald[1402]: Collecting audit messages is disabled. Jul 7 00:01:00.445152 systemd-journald[1402]: Journal started Jul 7 00:01:00.445168 systemd-journald[1402]: Runtime Journal (/run/log/journal/02fa393fd4774cf99744959b0b3a6565) is 8M, max 78.5M, 70.5M free. Jul 7 00:00:59.675294 systemd[1]: Queued start job for default target multi-user.target. Jul 7 00:00:59.682542 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 7 00:00:59.682900 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 7 00:00:59.683161 systemd[1]: systemd-journald.service: Consumed 2.329s CPU time. Jul 7 00:01:00.455442 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 00:01:00.456331 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jul 7 00:01:00.456486 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 7 00:01:00.461283 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:01:00.461405 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:01:00.466266 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 00:01:00.466381 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 00:01:00.470632 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:01:00.470747 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:01:00.476134 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 7 00:01:00.476260 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 7 00:01:00.480704 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:01:00.480817 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:01:00.485373 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 00:01:00.490037 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 00:01:00.495136 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 7 00:01:00.500331 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 7 00:01:00.505645 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 00:01:00.520024 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 00:01:00.525605 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 7 00:01:00.535617 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jul 7 00:01:00.540142 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 7 00:01:00.540170 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 00:01:00.544834 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 7 00:01:00.555720 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 7 00:01:00.559831 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:01:00.560810 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 7 00:01:00.565836 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 7 00:01:00.570302 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 00:01:00.571159 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 7 00:01:00.575457 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 00:01:00.576525 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:01:00.583334 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 7 00:01:00.590113 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 7 00:01:00.598636 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 7 00:01:00.603370 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 7 00:01:00.613031 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 7 00:01:00.617966 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Jul 7 00:01:00.626676 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 7 00:01:00.635461 systemd-journald[1402]: Time spent on flushing to /var/log/journal/02fa393fd4774cf99744959b0b3a6565 is 45.105ms for 939 entries. Jul 7 00:01:00.635461 systemd-journald[1402]: System Journal (/var/log/journal/02fa393fd4774cf99744959b0b3a6565) is 11.8M, max 2.6G, 2.6G free. Jul 7 00:01:00.719574 systemd-journald[1402]: Received client request to flush runtime journal. Jul 7 00:01:00.719627 systemd-journald[1402]: /var/log/journal/02fa393fd4774cf99744959b0b3a6565/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Jul 7 00:01:00.719644 systemd-journald[1402]: Rotating system journal. Jul 7 00:01:00.719660 kernel: loop0: detected capacity change from 0 to 107312 Jul 7 00:01:00.663978 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:01:00.692384 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 7 00:01:00.692963 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 7 00:01:00.721261 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 7 00:01:00.744168 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 7 00:01:00.750322 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 00:01:00.828633 systemd-tmpfiles[1455]: ACLs are not supported, ignoring. Jul 7 00:01:00.828966 systemd-tmpfiles[1455]: ACLs are not supported, ignoring. Jul 7 00:01:00.844886 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 7 00:01:00.976229 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 7 00:01:01.026209 kernel: loop1: detected capacity change from 0 to 138376 Jul 7 00:01:01.347207 kernel: loop2: detected capacity change from 0 to 28936 Jul 7 00:01:01.587915 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 7 00:01:01.594509 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:01:01.607208 kernel: loop3: detected capacity change from 0 to 203944 Jul 7 00:01:01.623231 kernel: loop4: detected capacity change from 0 to 107312 Jul 7 00:01:01.624030 systemd-udevd[1463]: Using default interface naming scheme 'v255'. Jul 7 00:01:01.630201 kernel: loop5: detected capacity change from 0 to 138376 Jul 7 00:01:01.637253 kernel: loop6: detected capacity change from 0 to 28936 Jul 7 00:01:01.643201 kernel: loop7: detected capacity change from 0 to 203944 Jul 7 00:01:01.645444 (sd-merge)[1465]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jul 7 00:01:01.645778 (sd-merge)[1465]: Merged extensions into '/usr'. Jul 7 00:01:01.648014 systemd[1]: Reload requested from client PID 1440 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 00:01:01.648026 systemd[1]: Reloading... Jul 7 00:01:01.697228 zram_generator::config[1488]: No configuration found. Jul 7 00:01:01.772266 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:01:01.853065 systemd[1]: Reloading finished in 204 ms. Jul 7 00:01:01.868161 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 00:01:01.873467 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 00:01:01.888123 systemd[1]: Starting ensure-sysext.service... 
Jul 7 00:01:01.895477 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 00:01:01.902267 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 00:01:01.925683 systemd[1]: Reload requested from client PID 1570 ('systemctl') (unit ensure-sysext.service)... Jul 7 00:01:01.925694 systemd[1]: Reloading... Jul 7 00:01:01.969674 systemd-tmpfiles[1575]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 7 00:01:01.969716 systemd-tmpfiles[1575]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 7 00:01:01.969912 systemd-tmpfiles[1575]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 7 00:01:01.970052 systemd-tmpfiles[1575]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 00:01:01.971519 systemd-tmpfiles[1575]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 00:01:01.971671 systemd-tmpfiles[1575]: ACLs are not supported, ignoring. Jul 7 00:01:01.971701 systemd-tmpfiles[1575]: ACLs are not supported, ignoring. Jul 7 00:01:01.983451 systemd-tmpfiles[1575]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 00:01:01.983460 systemd-tmpfiles[1575]: Skipping /boot Jul 7 00:01:02.002339 systemd-tmpfiles[1575]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 00:01:02.002350 systemd-tmpfiles[1575]: Skipping /boot Jul 7 00:01:02.026213 zram_generator::config[1605]: No configuration found. 
Jul 7 00:01:02.095206 kernel: mousedev: PS/2 mouse device common for all mice Jul 7 00:01:02.109207 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#47 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 7 00:01:02.162636 kernel: hv_vmbus: registering driver hv_balloon Jul 7 00:01:02.167544 kernel: hv_vmbus: registering driver hyperv_fb Jul 7 00:01:02.167564 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jul 7 00:01:02.167573 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jul 7 00:01:02.172022 kernel: Console: switching to colour dummy device 80x25 Jul 7 00:01:02.176099 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jul 7 00:01:02.180631 kernel: Console: switching to colour frame buffer device 128x48 Jul 7 00:01:02.181202 kernel: hv_balloon: Memory hot add disabled on ARM64 Jul 7 00:01:02.185905 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:01:02.297298 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 7 00:01:02.297452 systemd[1]: Reloading finished in 371 ms. Jul 7 00:01:02.308192 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 00:01:02.358537 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 7 00:01:02.359206 kernel: MACsec IEEE 802.1AE Jul 7 00:01:02.366866 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 00:01:02.379552 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 00:01:02.384793 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:01:02.396724 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jul 7 00:01:02.405877 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:01:02.416285 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:01:02.420707 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:01:02.422432 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 00:01:02.427501 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:01:02.428793 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 00:01:02.436929 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 00:01:02.445909 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 00:01:02.459438 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 00:01:02.467402 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:01:02.474914 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:01:02.480366 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:01:02.486833 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:01:02.486965 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:01:02.494368 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:01:02.494491 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:01:02.502099 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jul 7 00:01:02.508332 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 00:01:02.518583 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 00:01:02.521834 augenrules[1794]: No rules Jul 7 00:01:02.523769 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 00:01:02.523933 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 00:01:02.534637 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 00:01:02.540668 systemd[1]: Finished ensure-sysext.service. Jul 7 00:01:02.550099 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 00:01:02.553993 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:01:02.559838 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:01:02.566558 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 00:01:02.575292 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:01:02.580313 augenrules[1804]: /sbin/augenrules: No change Jul 7 00:01:02.584284 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:01:02.588286 augenrules[1828]: No rules Jul 7 00:01:02.589474 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:01:02.589615 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:01:02.589719 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 00:01:02.597649 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 00:01:02.598049 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jul 7 00:01:02.605637 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:01:02.605767 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:01:02.612564 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 00:01:02.612687 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 00:01:02.617115 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:01:02.617246 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:01:02.622288 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:01:02.622403 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:01:02.629839 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 00:01:02.629913 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 00:01:02.663612 systemd-resolved[1769]: Positive Trust Anchors: Jul 7 00:01:02.664239 systemd-resolved[1769]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 00:01:02.664263 systemd-resolved[1769]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 00:01:02.667279 systemd-resolved[1769]: Using system hostname 'ci-4372.0.1-a-609ca7abb9'. 
Jul 7 00:01:02.668464 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 00:01:02.670880 systemd-networkd[1572]: lo: Link UP Jul 7 00:01:02.671071 systemd-networkd[1572]: lo: Gained carrier Jul 7 00:01:02.672582 systemd-networkd[1572]: Enumeration completed Jul 7 00:01:02.672883 systemd-networkd[1572]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:01:02.672945 systemd-networkd[1572]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 00:01:02.673331 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 00:01:02.677602 systemd[1]: Reached target network.target - Network. Jul 7 00:01:02.681159 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:01:02.686978 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 7 00:01:02.694314 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 00:01:02.738218 kernel: mlx5_core 1185:00:02.0 enP4485s1: Link up Jul 7 00:01:02.761203 kernel: hv_netvsc 002248b5-bc21-0022-48b5-bc21002248b5 eth0: Data path switched to VF: enP4485s1 Jul 7 00:01:02.763638 systemd-networkd[1572]: enP4485s1: Link UP Jul 7 00:01:02.763740 systemd-networkd[1572]: eth0: Link UP Jul 7 00:01:02.763746 systemd-networkd[1572]: eth0: Gained carrier Jul 7 00:01:02.763757 systemd-networkd[1572]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:01:02.763784 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Jul 7 00:01:02.773529 systemd-networkd[1572]: enP4485s1: Gained carrier Jul 7 00:01:02.784221 systemd-networkd[1572]: eth0: DHCPv4 address 10.200.20.4/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 7 00:01:02.894826 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:01:03.135229 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 00:01:03.141607 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 00:01:04.118332 systemd-networkd[1572]: eth0: Gained IPv6LL Jul 7 00:01:04.120592 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 00:01:04.126232 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 00:01:04.438343 systemd-networkd[1572]: enP4485s1: Gained IPv6LL Jul 7 00:01:04.688180 ldconfig[1435]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 00:01:04.699557 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 00:01:04.705778 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 00:01:04.717625 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 00:01:04.722361 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 00:01:04.726493 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 00:01:04.731273 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 00:01:04.736276 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 00:01:04.740532 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Jul 7 00:01:04.745612 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 00:01:04.750644 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 00:01:04.750667 systemd[1]: Reached target paths.target - Path Units. Jul 7 00:01:04.754165 systemd[1]: Reached target timers.target - Timer Units. Jul 7 00:01:04.770653 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 00:01:04.776207 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 00:01:04.781329 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 7 00:01:04.786543 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 7 00:01:04.791616 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 7 00:01:04.804701 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 00:01:04.808971 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 7 00:01:04.813825 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 00:01:04.817907 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 00:01:04.821985 systemd[1]: Reached target basic.target - Basic System. Jul 7 00:01:04.825509 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 00:01:04.825528 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 00:01:04.827026 systemd[1]: Starting chronyd.service - NTP client/server... Jul 7 00:01:04.840266 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 00:01:04.845940 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
Jul 7 00:01:04.853299 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 00:01:04.862279 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 00:01:04.867989 (chronyd)[1852]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jul 7 00:01:04.873273 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 00:01:04.878314 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 00:01:04.882938 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 00:01:04.886173 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jul 7 00:01:04.890538 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jul 7 00:01:04.892206 jq[1860]: false Jul 7 00:01:04.894061 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:01:04.894779 KVP[1862]: KVP starting; pid is:1862 Jul 7 00:01:04.898980 chronyd[1868]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jul 7 00:01:04.902286 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 00:01:04.902789 KVP[1862]: KVP LIC Version: 3.1 Jul 7 00:01:04.905963 kernel: hv_utils: KVP IC version 4.0 Jul 7 00:01:04.913164 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 00:01:04.917171 extend-filesystems[1861]: Found /dev/sda6 Jul 7 00:01:04.921066 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 00:01:04.928461 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 00:01:04.938546 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jul 7 00:01:04.939060 chronyd[1868]: Timezone right/UTC failed leap second check, ignoring Jul 7 00:01:04.943985 chronyd[1868]: Loaded seccomp filter (level 2) Jul 7 00:01:04.947850 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 00:01:04.955424 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 00:01:04.955984 extend-filesystems[1861]: Found /dev/sda9 Jul 7 00:01:04.959050 extend-filesystems[1861]: Checking size of /dev/sda9 Jul 7 00:01:04.962614 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 00:01:04.963288 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 00:01:04.976080 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 00:01:04.983597 systemd[1]: Started chronyd.service - NTP client/server. Jul 7 00:01:04.992227 jq[1893]: true Jul 7 00:01:04.994552 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 7 00:01:05.002108 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 00:01:05.002723 extend-filesystems[1861]: Old size kept for /dev/sda9 Jul 7 00:01:05.003057 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 00:01:05.005500 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 00:01:05.005636 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 00:01:05.014726 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 00:01:05.014884 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 00:01:05.021990 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jul 7 00:01:05.027988 update_engine[1888]: I20250707 00:01:05.026377 1888 main.cc:92] Flatcar Update Engine starting Jul 7 00:01:05.029608 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 00:01:05.029763 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 00:01:05.045154 systemd-logind[1880]: New seat seat0. Jul 7 00:01:05.048342 systemd-logind[1880]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 7 00:01:05.048996 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 00:01:05.064338 (ntainerd)[1907]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 00:01:05.070756 jq[1904]: true Jul 7 00:01:05.104210 tar[1903]: linux-arm64/helm Jul 7 00:01:05.143062 dbus-daemon[1855]: [system] SELinux support is enabled Jul 7 00:01:05.143224 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 00:01:05.148712 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 00:01:05.148737 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 00:01:05.150631 dbus-daemon[1855]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 7 00:01:05.153830 update_engine[1888]: I20250707 00:01:05.153788 1888 update_check_scheduler.cc:74] Next update check in 7m27s Jul 7 00:01:05.153966 sshd_keygen[1894]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 00:01:05.154952 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Jul 7 00:01:05.154974 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 00:01:05.161769 systemd[1]: Started update-engine.service - Update Engine. Jul 7 00:01:05.168460 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 00:01:05.223976 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 00:01:05.239089 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 00:01:05.245302 bash[1947]: Updated "/home/core/.ssh/authorized_keys" Jul 7 00:01:05.247933 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jul 7 00:01:05.249378 coreos-metadata[1854]: Jul 07 00:01:05.249 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 7 00:01:05.256471 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 00:01:05.257732 coreos-metadata[1854]: Jul 07 00:01:05.257 INFO Fetch successful Jul 7 00:01:05.257732 coreos-metadata[1854]: Jul 07 00:01:05.257 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jul 7 00:01:05.264432 coreos-metadata[1854]: Jul 07 00:01:05.264 INFO Fetch successful Jul 7 00:01:05.264432 coreos-metadata[1854]: Jul 07 00:01:05.264 INFO Fetching http://168.63.129.16/machine/0fd7c400-e653-4190-8813-eb79fb081454/582862b0%2D5918%2D4235%2Da4de%2D378f0f5d290c.%5Fci%2D4372.0.1%2Da%2D609ca7abb9?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jul 7 00:01:05.271438 coreos-metadata[1854]: Jul 07 00:01:05.271 INFO Fetch successful Jul 7 00:01:05.271438 coreos-metadata[1854]: Jul 07 00:01:05.271 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jul 7 00:01:05.278766 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 7 00:01:05.287616 coreos-metadata[1854]: Jul 07 00:01:05.287 INFO Fetch successful Jul 7 00:01:05.296266 systemd[1]: issuegen.service: Deactivated successfully. 
Jul 7 00:01:05.296414 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 00:01:05.310757 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 00:01:05.319288 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jul 7 00:01:05.334936 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 7 00:01:05.341985 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 00:01:05.349293 locksmithd[1960]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 00:01:05.351061 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 00:01:05.357399 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 00:01:05.365341 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 7 00:01:05.372172 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 00:01:05.537128 containerd[1907]: time="2025-07-07T00:01:05Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 7 00:01:05.542339 containerd[1907]: time="2025-07-07T00:01:05.542302808Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 7 00:01:05.556656 containerd[1907]: time="2025-07-07T00:01:05.556449056Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.456µs" Jul 7 00:01:05.556656 containerd[1907]: time="2025-07-07T00:01:05.556473800Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 7 00:01:05.556656 containerd[1907]: time="2025-07-07T00:01:05.556488560Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 7 
00:01:05.556656 containerd[1907]: time="2025-07-07T00:01:05.556617528Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 7 00:01:05.556656 containerd[1907]: time="2025-07-07T00:01:05.556632472Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 7 00:01:05.556656 containerd[1907]: time="2025-07-07T00:01:05.556651568Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 00:01:05.556788 containerd[1907]: time="2025-07-07T00:01:05.556690760Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 00:01:05.556788 containerd[1907]: time="2025-07-07T00:01:05.556698416Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 00:01:05.556886 containerd[1907]: time="2025-07-07T00:01:05.556862088Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 00:01:05.556886 containerd[1907]: time="2025-07-07T00:01:05.556879800Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 00:01:05.556918 containerd[1907]: time="2025-07-07T00:01:05.556887280Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 00:01:05.556918 containerd[1907]: time="2025-07-07T00:01:05.556892888Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 7 00:01:05.557058 containerd[1907]: time="2025-07-07T00:01:05.556960976Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 7 00:01:05.557159 containerd[1907]: time="2025-07-07T00:01:05.557103912Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 00:01:05.557159 containerd[1907]: time="2025-07-07T00:01:05.557128016Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 00:01:05.557159 containerd[1907]: time="2025-07-07T00:01:05.557134608Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 7 00:01:05.557159 containerd[1907]: time="2025-07-07T00:01:05.557156200Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 7 00:01:05.557574 containerd[1907]: time="2025-07-07T00:01:05.557308776Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 7 00:01:05.557574 containerd[1907]: time="2025-07-07T00:01:05.557362688Z" level=info msg="metadata content store policy set" policy=shared Jul 7 00:01:05.584377 containerd[1907]: time="2025-07-07T00:01:05.583131312Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 7 00:01:05.584377 containerd[1907]: time="2025-07-07T00:01:05.583174360Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 7 00:01:05.584377 containerd[1907]: time="2025-07-07T00:01:05.583201528Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 7 00:01:05.584377 containerd[1907]: time="2025-07-07T00:01:05.583210440Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 7 00:01:05.584377 containerd[1907]: 
time="2025-07-07T00:01:05.583218392Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 7 00:01:05.584377 containerd[1907]: time="2025-07-07T00:01:05.583226864Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 7 00:01:05.584377 containerd[1907]: time="2025-07-07T00:01:05.583234840Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 7 00:01:05.584377 containerd[1907]: time="2025-07-07T00:01:05.583242088Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 7 00:01:05.584377 containerd[1907]: time="2025-07-07T00:01:05.583249720Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 7 00:01:05.584377 containerd[1907]: time="2025-07-07T00:01:05.583256392Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 7 00:01:05.584377 containerd[1907]: time="2025-07-07T00:01:05.583261768Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 7 00:01:05.584377 containerd[1907]: time="2025-07-07T00:01:05.583272416Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 7 00:01:05.584377 containerd[1907]: time="2025-07-07T00:01:05.583380688Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 7 00:01:05.584377 containerd[1907]: time="2025-07-07T00:01:05.583395112Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 7 00:01:05.584604 containerd[1907]: time="2025-07-07T00:01:05.583417144Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 7 00:01:05.584604 containerd[1907]: 
time="2025-07-07T00:01:05.583424952Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 7 00:01:05.584604 containerd[1907]: time="2025-07-07T00:01:05.583432016Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 7 00:01:05.584604 containerd[1907]: time="2025-07-07T00:01:05.583438448Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 7 00:01:05.584604 containerd[1907]: time="2025-07-07T00:01:05.583446392Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 7 00:01:05.584604 containerd[1907]: time="2025-07-07T00:01:05.583452768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 7 00:01:05.584604 containerd[1907]: time="2025-07-07T00:01:05.583459088Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 7 00:01:05.584604 containerd[1907]: time="2025-07-07T00:01:05.583465168Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 7 00:01:05.584604 containerd[1907]: time="2025-07-07T00:01:05.583471568Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 7 00:01:05.584604 containerd[1907]: time="2025-07-07T00:01:05.583517696Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 7 00:01:05.584604 containerd[1907]: time="2025-07-07T00:01:05.583527184Z" level=info msg="Start snapshots syncer" Jul 7 00:01:05.584604 containerd[1907]: time="2025-07-07T00:01:05.583543016Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 7 00:01:05.584763 containerd[1907]: time="2025-07-07T00:01:05.583677024Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 7 00:01:05.584763 containerd[1907]: time="2025-07-07T00:01:05.583708544Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 7 00:01:05.584844 containerd[1907]: time="2025-07-07T00:01:05.583762728Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 7 00:01:05.584844 containerd[1907]: time="2025-07-07T00:01:05.583850568Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 7 00:01:05.584844 containerd[1907]: time="2025-07-07T00:01:05.583865584Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 7 00:01:05.584844 containerd[1907]: time="2025-07-07T00:01:05.583872560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 7 00:01:05.584844 containerd[1907]: time="2025-07-07T00:01:05.583879384Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 7 00:01:05.584844 containerd[1907]: time="2025-07-07T00:01:05.583889640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 7 00:01:05.584844 containerd[1907]: time="2025-07-07T00:01:05.583897016Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 7 00:01:05.584844 containerd[1907]: time="2025-07-07T00:01:05.583903360Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 7 00:01:05.584844 containerd[1907]: time="2025-07-07T00:01:05.583923000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 7 00:01:05.584844 containerd[1907]: time="2025-07-07T00:01:05.583929512Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 7 00:01:05.584844 containerd[1907]: time="2025-07-07T00:01:05.583935544Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 7 00:01:05.584844 containerd[1907]: time="2025-07-07T00:01:05.583958840Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 00:01:05.584844 containerd[1907]: time="2025-07-07T00:01:05.583966808Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 00:01:05.584844 containerd[1907]: time="2025-07-07T00:01:05.583971992Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 00:01:05.584997 containerd[1907]: time="2025-07-07T00:01:05.583977216Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 00:01:05.584997 containerd[1907]: time="2025-07-07T00:01:05.583981768Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 7 00:01:05.584997 containerd[1907]: time="2025-07-07T00:01:05.583989064Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 7 00:01:05.584997 containerd[1907]: time="2025-07-07T00:01:05.583995216Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 7 00:01:05.584997 containerd[1907]: time="2025-07-07T00:01:05.584005624Z" level=info msg="runtime interface created" Jul 7 00:01:05.584997 containerd[1907]: time="2025-07-07T00:01:05.584008688Z" level=info msg="created NRI interface" Jul 7 00:01:05.584997 containerd[1907]: time="2025-07-07T00:01:05.584013352Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 7 00:01:05.584997 containerd[1907]: time="2025-07-07T00:01:05.584021616Z" level=info msg="Connect containerd service" Jul 7 00:01:05.584997 containerd[1907]: time="2025-07-07T00:01:05.584041568Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 00:01:05.587697 containerd[1907]: 
time="2025-07-07T00:01:05.587507736Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 00:01:05.604743 tar[1903]: linux-arm64/LICENSE Jul 7 00:01:05.604815 tar[1903]: linux-arm64/README.md Jul 7 00:01:05.620460 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 00:01:05.736047 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:01:05.741200 (kubelet)[2039]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:01:05.977636 kubelet[2039]: E0707 00:01:05.977527 2039 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:01:05.980096 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:01:05.980314 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:01:05.980877 systemd[1]: kubelet.service: Consumed 537ms CPU time, 255.2M memory peak. 
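The kubelet failure above (and its repeats at restart counters 1 and 2 later in this log) comes from a missing `/var/lib/kubelet/config.yaml`: on a freshly provisioned node that file only exists after `kubeadm init` or `kubeadm join` writes it, so systemd keeps restarting the unit until the node is bootstrapped. A minimal sketch of the same readiness check (the helper name is hypothetical, not kubelet's actual code):

```python
from pathlib import Path

# Path taken from the log's error message; the helper itself is illustrative.
KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

def kubelet_config_ready(path: Path = KUBELET_CONFIG) -> bool:
    """Return True once kubeadm has written the kubelet config file.

    Until then, kubelet exits non-zero (status=1/FAILURE in the log)
    and systemd schedules a restart.
    """
    return path.is_file()
```

This is expected behavior on a node awaiting `kubeadm join`, not an error to fix on the host.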
Jul 7 00:01:06.039920 containerd[1907]: time="2025-07-07T00:01:06.039817184Z" level=info msg="Start subscribing containerd event" Jul 7 00:01:06.039920 containerd[1907]: time="2025-07-07T00:01:06.039881248Z" level=info msg="Start recovering state" Jul 7 00:01:06.040014 containerd[1907]: time="2025-07-07T00:01:06.039963216Z" level=info msg="Start event monitor" Jul 7 00:01:06.040014 containerd[1907]: time="2025-07-07T00:01:06.039975240Z" level=info msg="Start cni network conf syncer for default" Jul 7 00:01:06.040014 containerd[1907]: time="2025-07-07T00:01:06.039981904Z" level=info msg="Start streaming server" Jul 7 00:01:06.040014 containerd[1907]: time="2025-07-07T00:01:06.039987864Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 7 00:01:06.040014 containerd[1907]: time="2025-07-07T00:01:06.039992648Z" level=info msg="runtime interface starting up..." Jul 7 00:01:06.040014 containerd[1907]: time="2025-07-07T00:01:06.039996432Z" level=info msg="starting plugins..." Jul 7 00:01:06.040014 containerd[1907]: time="2025-07-07T00:01:06.040006056Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 7 00:01:06.040381 containerd[1907]: time="2025-07-07T00:01:06.040346896Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 00:01:06.040404 containerd[1907]: time="2025-07-07T00:01:06.040400208Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 00:01:06.041040 containerd[1907]: time="2025-07-07T00:01:06.041015536Z" level=info msg="containerd successfully booted in 0.505155s" Jul 7 00:01:06.041087 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 00:01:06.046359 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 00:01:06.052370 systemd[1]: Startup finished in 1.632s (kernel) + 10.270s (initrd) + 8.558s (userspace) = 20.461s. 
Jul 7 00:01:06.289029 login[2019]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:06.290087 login[2020]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:06.295111 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 00:01:06.296404 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 00:01:06.302256 systemd-logind[1880]: New session 2 of user core. Jul 7 00:01:06.305516 systemd-logind[1880]: New session 1 of user core. Jul 7 00:01:06.312418 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 00:01:06.314843 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 00:01:06.337025 (systemd)[2060]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 00:01:06.339487 systemd-logind[1880]: New session c1 of user core. Jul 7 00:01:06.477166 systemd[2060]: Queued start job for default target default.target. Jul 7 00:01:06.484313 systemd[2060]: Created slice app.slice - User Application Slice. Jul 7 00:01:06.484455 systemd[2060]: Reached target paths.target - Paths. Jul 7 00:01:06.484569 systemd[2060]: Reached target timers.target - Timers. Jul 7 00:01:06.485758 systemd[2060]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 00:01:06.492794 systemd[2060]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 00:01:06.492839 systemd[2060]: Reached target sockets.target - Sockets. Jul 7 00:01:06.492968 systemd[2060]: Reached target basic.target - Basic System. Jul 7 00:01:06.493001 systemd[2060]: Reached target default.target - Main User Target. Jul 7 00:01:06.493019 systemd[2060]: Startup finished in 149ms. Jul 7 00:01:06.493086 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 00:01:06.503297 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jul 7 00:01:06.503840 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 00:01:06.613303 waagent[2013]: 2025-07-07T00:01:06.609270Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jul 7 00:01:06.613767 waagent[2013]: 2025-07-07T00:01:06.613726Z INFO Daemon Daemon OS: flatcar 4372.0.1 Jul 7 00:01:06.617152 waagent[2013]: 2025-07-07T00:01:06.617115Z INFO Daemon Daemon Python: 3.11.12 Jul 7 00:01:06.622193 waagent[2013]: 2025-07-07T00:01:06.621271Z INFO Daemon Daemon Run daemon Jul 7 00:01:06.624336 waagent[2013]: 2025-07-07T00:01:06.624199Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4372.0.1' Jul 7 00:01:06.630525 waagent[2013]: 2025-07-07T00:01:06.630493Z INFO Daemon Daemon Using waagent for provisioning Jul 7 00:01:06.634292 waagent[2013]: 2025-07-07T00:01:06.634260Z INFO Daemon Daemon Activate resource disk Jul 7 00:01:06.637441 waagent[2013]: 2025-07-07T00:01:06.637416Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 7 00:01:06.645408 waagent[2013]: 2025-07-07T00:01:06.645373Z INFO Daemon Daemon Found device: None Jul 7 00:01:06.648411 waagent[2013]: 2025-07-07T00:01:06.648384Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 7 00:01:06.654243 waagent[2013]: 2025-07-07T00:01:06.654215Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 7 00:01:06.662568 waagent[2013]: 2025-07-07T00:01:06.662531Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 7 00:01:06.666713 waagent[2013]: 2025-07-07T00:01:06.666683Z INFO Daemon Daemon Running default provisioning handler Jul 7 00:01:06.675394 waagent[2013]: 2025-07-07T00:01:06.675342Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned 
non-zero exit status 4. Jul 7 00:01:06.685660 waagent[2013]: 2025-07-07T00:01:06.685612Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 7 00:01:06.692568 waagent[2013]: 2025-07-07T00:01:06.692534Z INFO Daemon Daemon cloud-init is enabled: False Jul 7 00:01:06.696062 waagent[2013]: 2025-07-07T00:01:06.696036Z INFO Daemon Daemon Copying ovf-env.xml Jul 7 00:01:06.804160 waagent[2013]: 2025-07-07T00:01:06.804074Z INFO Daemon Daemon Successfully mounted dvd Jul 7 00:01:06.825822 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 7 00:01:06.832174 waagent[2013]: 2025-07-07T00:01:06.827810Z INFO Daemon Daemon Detect protocol endpoint Jul 7 00:01:06.832469 waagent[2013]: 2025-07-07T00:01:06.832433Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 7 00:01:06.836652 waagent[2013]: 2025-07-07T00:01:06.836620Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jul 7 00:01:06.841302 waagent[2013]: 2025-07-07T00:01:06.841270Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 7 00:01:06.845290 waagent[2013]: 2025-07-07T00:01:06.845257Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 7 00:01:06.849307 waagent[2013]: 2025-07-07T00:01:06.849276Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 7 00:01:06.886437 waagent[2013]: 2025-07-07T00:01:06.886323Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 7 00:01:06.891532 waagent[2013]: 2025-07-07T00:01:06.891503Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 7 00:01:06.895543 waagent[2013]: 2025-07-07T00:01:06.895503Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 7 00:01:07.086460 waagent[2013]: 2025-07-07T00:01:07.086386Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 7 00:01:07.091165 waagent[2013]: 2025-07-07T00:01:07.091132Z INFO Daemon Daemon Forcing an update of the goal state. 
Jul 7 00:01:07.098498 waagent[2013]: 2025-07-07T00:01:07.098465Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 7 00:01:07.122836 waagent[2013]: 2025-07-07T00:01:07.122790Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jul 7 00:01:07.127411 waagent[2013]: 2025-07-07T00:01:07.127366Z INFO Daemon Jul 7 00:01:07.129675 waagent[2013]: 2025-07-07T00:01:07.129639Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 900b2935-4338-4511-8a0e-cfdbb1db866f eTag: 16979315717173079622 source: Fabric] Jul 7 00:01:07.137967 waagent[2013]: 2025-07-07T00:01:07.137891Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jul 7 00:01:07.142764 waagent[2013]: 2025-07-07T00:01:07.142731Z INFO Daemon Jul 7 00:01:07.144763 waagent[2013]: 2025-07-07T00:01:07.144733Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 7 00:01:07.155340 waagent[2013]: 2025-07-07T00:01:07.155311Z INFO Daemon Daemon Downloading artifacts profile blob Jul 7 00:01:07.289331 waagent[2013]: 2025-07-07T00:01:07.289278Z INFO Daemon Downloaded certificate {'thumbprint': 'C8E3ED64A0929350B8820BC0B37C0650D0808764', 'hasPrivateKey': False} Jul 7 00:01:07.296986 waagent[2013]: 2025-07-07T00:01:07.296947Z INFO Daemon Downloaded certificate {'thumbprint': '85564A95BE8DACFB0F4B750E33B39EA6D3CF6752', 'hasPrivateKey': True} Jul 7 00:01:07.304603 waagent[2013]: 2025-07-07T00:01:07.304568Z INFO Daemon Fetch goal state completed Jul 7 00:01:07.343907 waagent[2013]: 2025-07-07T00:01:07.343844Z INFO Daemon Daemon Starting provisioning Jul 7 00:01:07.347869 waagent[2013]: 2025-07-07T00:01:07.347831Z INFO Daemon Daemon Handle ovf-env.xml. 
Jul 7 00:01:07.351875 waagent[2013]: 2025-07-07T00:01:07.351847Z INFO Daemon Daemon Set hostname [ci-4372.0.1-a-609ca7abb9] Jul 7 00:01:07.357680 waagent[2013]: 2025-07-07T00:01:07.357637Z INFO Daemon Daemon Publish hostname [ci-4372.0.1-a-609ca7abb9] Jul 7 00:01:07.362254 waagent[2013]: 2025-07-07T00:01:07.362223Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 7 00:01:07.367035 waagent[2013]: 2025-07-07T00:01:07.367006Z INFO Daemon Daemon Primary interface is [eth0] Jul 7 00:01:07.376627 systemd-networkd[1572]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:01:07.376633 systemd-networkd[1572]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 00:01:07.377001 waagent[2013]: 2025-07-07T00:01:07.376866Z INFO Daemon Daemon Create user account if not exists Jul 7 00:01:07.376672 systemd-networkd[1572]: eth0: DHCP lease lost Jul 7 00:01:07.381053 waagent[2013]: 2025-07-07T00:01:07.381020Z INFO Daemon Daemon User core already exists, skip useradd Jul 7 00:01:07.385388 waagent[2013]: 2025-07-07T00:01:07.385352Z INFO Daemon Daemon Configure sudoer Jul 7 00:01:07.393338 waagent[2013]: 2025-07-07T00:01:07.393266Z INFO Daemon Daemon Configure sshd Jul 7 00:01:07.394244 systemd-networkd[1572]: eth0: DHCPv4 address 10.200.20.4/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 7 00:01:07.404168 waagent[2013]: 2025-07-07T00:01:07.404124Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 7 00:01:07.413133 waagent[2013]: 2025-07-07T00:01:07.413093Z INFO Daemon Daemon Deploy ssh public key. 
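The "Configure sshd" step above reports a drop-in snippet that disables password-based authentication and enables keep-alive probing of clients. A typical snippet of that kind looks like the following (an assumed illustration; the agent's actual file path and contents may differ):

```
# Illustrative sshd drop-in, in the spirit of what waagent describes above.
# The path and values here are assumptions, not the agent's verbatim output.
PasswordAuthentication no
ChallengeResponseAuthentication no
ClientAliveInterval 180
```

With password logins off, the "Deploy ssh public key" step that follows is what makes the `core` account reachable.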
Jul 7 00:01:08.526780 waagent[2013]: 2025-07-07T00:01:08.526721Z INFO Daemon Daemon Provisioning complete Jul 7 00:01:08.539143 waagent[2013]: 2025-07-07T00:01:08.539114Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 7 00:01:08.543511 waagent[2013]: 2025-07-07T00:01:08.543483Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jul 7 00:01:08.550570 waagent[2013]: 2025-07-07T00:01:08.550545Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jul 7 00:01:08.643559 waagent[2116]: 2025-07-07T00:01:08.643178Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jul 7 00:01:08.643559 waagent[2116]: 2025-07-07T00:01:08.643292Z INFO ExtHandler ExtHandler OS: flatcar 4372.0.1 Jul 7 00:01:08.643559 waagent[2116]: 2025-07-07T00:01:08.643329Z INFO ExtHandler ExtHandler Python: 3.11.12 Jul 7 00:01:08.643559 waagent[2116]: 2025-07-07T00:01:08.643361Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jul 7 00:01:08.671675 waagent[2116]: 2025-07-07T00:01:08.671637Z INFO ExtHandler ExtHandler Distro: flatcar-4372.0.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jul 7 00:01:08.671878 waagent[2116]: 2025-07-07T00:01:08.671854Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 7 00:01:08.671999 waagent[2116]: 2025-07-07T00:01:08.671976Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 7 00:01:08.680119 waagent[2116]: 2025-07-07T00:01:08.680075Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 7 00:01:08.686573 waagent[2116]: 2025-07-07T00:01:08.686545Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 7 00:01:08.686965 waagent[2116]: 2025-07-07T00:01:08.686936Z INFO ExtHandler Jul 7 00:01:08.687079 waagent[2116]: 2025-07-07T00:01:08.687056Z INFO 
ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: ba543ad1-d141-4d3e-a333-879d17ffb446 eTag: 16979315717173079622 source: Fabric] Jul 7 00:01:08.687380 waagent[2116]: 2025-07-07T00:01:08.687348Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jul 7 00:01:08.687835 waagent[2116]: 2025-07-07T00:01:08.687807Z INFO ExtHandler Jul 7 00:01:08.687956 waagent[2116]: 2025-07-07T00:01:08.687933Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 7 00:01:08.691131 waagent[2116]: 2025-07-07T00:01:08.691106Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 7 00:01:08.764759 waagent[2116]: 2025-07-07T00:01:08.764724Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C8E3ED64A0929350B8820BC0B37C0650D0808764', 'hasPrivateKey': False} Jul 7 00:01:08.765067 waagent[2116]: 2025-07-07T00:01:08.765041Z INFO ExtHandler Downloaded certificate {'thumbprint': '85564A95BE8DACFB0F4B750E33B39EA6D3CF6752', 'hasPrivateKey': True} Jul 7 00:01:08.765456 waagent[2116]: 2025-07-07T00:01:08.765423Z INFO ExtHandler Fetch goal state completed Jul 7 00:01:08.775956 waagent[2116]: 2025-07-07T00:01:08.775929Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Jul 7 00:01:08.779057 waagent[2116]: 2025-07-07T00:01:08.778996Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2116 Jul 7 00:01:08.779233 waagent[2116]: 2025-07-07T00:01:08.779207Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 7 00:01:08.779542 waagent[2116]: 2025-07-07T00:01:08.779516Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jul 7 00:01:08.780605 waagent[2116]: 2025-07-07T00:01:08.780573Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4372.0.1', '', 'Flatcar Container Linux by Kinvolk'] Jul 7 
00:01:08.780974 waagent[2116]: 2025-07-07T00:01:08.780945Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4372.0.1', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jul 7 00:01:08.781142 waagent[2116]: 2025-07-07T00:01:08.781117Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jul 7 00:01:08.781658 waagent[2116]: 2025-07-07T00:01:08.781630Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 7 00:01:08.829944 waagent[2116]: 2025-07-07T00:01:08.829922Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 7 00:01:08.830132 waagent[2116]: 2025-07-07T00:01:08.830106Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 7 00:01:08.834283 waagent[2116]: 2025-07-07T00:01:08.834263Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 7 00:01:08.838484 systemd[1]: Reload requested from client PID 2133 ('systemctl') (unit waagent.service)... Jul 7 00:01:08.838496 systemd[1]: Reloading... Jul 7 00:01:08.886166 zram_generator::config[2171]: No configuration found. Jul 7 00:01:08.957690 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:01:09.037916 systemd[1]: Reloading finished in 199 ms. 
Jul 7 00:01:09.051637 waagent[2116]: 2025-07-07T00:01:09.050396Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 7 00:01:09.051637 waagent[2116]: 2025-07-07T00:01:09.050520Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 7 00:01:09.536469 waagent[2116]: 2025-07-07T00:01:09.536350Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 7 00:01:09.536671 waagent[2116]: 2025-07-07T00:01:09.536642Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jul 7 00:01:09.537285 waagent[2116]: 2025-07-07T00:01:09.537246Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 7 00:01:09.537566 waagent[2116]: 2025-07-07T00:01:09.537533Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 7 00:01:09.537761 waagent[2116]: 2025-07-07T00:01:09.537713Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 7 00:01:09.537892 waagent[2116]: 2025-07-07T00:01:09.537841Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 7 00:01:09.538163 waagent[2116]: 2025-07-07T00:01:09.538115Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 7 00:01:09.538312 waagent[2116]: 2025-07-07T00:01:09.538259Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jul 7 00:01:09.538820 waagent[2116]: 2025-07-07T00:01:09.538786Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 7 00:01:09.540557 waagent[2116]: 2025-07-07T00:01:09.540532Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 7 00:01:09.540783 waagent[2116]: 2025-07-07T00:01:09.540760Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 7 00:01:09.540845 waagent[2116]: 2025-07-07T00:01:09.540822Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 7 00:01:09.541001 waagent[2116]: 2025-07-07T00:01:09.540979Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jul 7 00:01:09.541442 waagent[2116]: 2025-07-07T00:01:09.541403Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 7 00:01:09.541558 waagent[2116]: 2025-07-07T00:01:09.541507Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 7 00:01:09.541558 waagent[2116]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 7 00:01:09.541558 waagent[2116]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jul 7 00:01:09.541558 waagent[2116]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 7 00:01:09.541558 waagent[2116]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 7 00:01:09.541558 waagent[2116]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 7 00:01:09.541558 waagent[2116]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 7 00:01:09.541828 waagent[2116]: 2025-07-07T00:01:09.541581Z INFO EnvHandler ExtHandler Configure routes Jul 7 00:01:09.542002 waagent[2116]: 2025-07-07T00:01:09.541976Z INFO EnvHandler ExtHandler Gateway:None Jul 7 00:01:09.542034 waagent[2116]: 2025-07-07T00:01:09.542025Z INFO EnvHandler ExtHandler Routes:None Jul 7 00:01:09.545571 waagent[2116]: 2025-07-07T00:01:09.545539Z INFO ExtHandler ExtHandler Jul 7 00:01:09.545617 waagent[2116]: 
2025-07-07T00:01:09.545599Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 90d1ed2b-3392-427d-ab61-5dc63d7f2d67 correlation e410f356-cd15-4bd8-832e-ecef2bc6c853 created: 2025-07-07T00:00:11.058151Z] Jul 7 00:01:09.545925 waagent[2116]: 2025-07-07T00:01:09.545895Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jul 7 00:01:09.546330 waagent[2116]: 2025-07-07T00:01:09.546304Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jul 7 00:01:09.573873 waagent[2116]: 2025-07-07T00:01:09.573823Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jul 7 00:01:09.573873 waagent[2116]: Try `iptables -h' or 'iptables --help' for more information.) Jul 7 00:01:09.574140 waagent[2116]: 2025-07-07T00:01:09.574108Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 9A398A5A-5C0E-4BE7-8903-4847292D958A;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jul 7 00:01:09.586366 waagent[2116]: 2025-07-07T00:01:09.586324Z INFO MonitorHandler ExtHandler Network interfaces: Jul 7 00:01:09.586366 waagent[2116]: Executing ['ip', '-a', '-o', 'link']: Jul 7 00:01:09.586366 waagent[2116]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 7 00:01:09.586366 waagent[2116]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b5:bc:21 brd ff:ff:ff:ff:ff:ff Jul 7 00:01:09.586366 waagent[2116]: 3: enP4485s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b5:bc:21 brd ff:ff:ff:ff:ff:ff\ altname enP4485p0s2 Jul 7 
00:01:09.586366 waagent[2116]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 7 00:01:09.586366 waagent[2116]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 7 00:01:09.586366 waagent[2116]: 2: eth0 inet 10.200.20.4/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 7 00:01:09.586366 waagent[2116]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 7 00:01:09.586366 waagent[2116]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jul 7 00:01:09.586366 waagent[2116]: 2: eth0 inet6 fe80::222:48ff:feb5:bc21/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 7 00:01:09.586366 waagent[2116]: 3: enP4485s1 inet6 fe80::222:48ff:feb5:bc21/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 7 00:01:09.649216 waagent[2116]: 2025-07-07T00:01:09.648850Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jul 7 00:01:09.649216 waagent[2116]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 7 00:01:09.649216 waagent[2116]: pkts bytes target prot opt in out source destination Jul 7 00:01:09.649216 waagent[2116]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 7 00:01:09.649216 waagent[2116]: pkts bytes target prot opt in out source destination Jul 7 00:01:09.649216 waagent[2116]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 7 00:01:09.649216 waagent[2116]: pkts bytes target prot opt in out source destination Jul 7 00:01:09.649216 waagent[2116]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 7 00:01:09.649216 waagent[2116]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 7 00:01:09.649216 waagent[2116]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 7 00:01:09.651295 waagent[2116]: 2025-07-07T00:01:09.651255Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 7 00:01:09.651295 waagent[2116]: Chain INPUT (policy ACCEPT 0 
packets, 0 bytes) Jul 7 00:01:09.651295 waagent[2116]: pkts bytes target prot opt in out source destination Jul 7 00:01:09.651295 waagent[2116]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 7 00:01:09.651295 waagent[2116]: pkts bytes target prot opt in out source destination Jul 7 00:01:09.651295 waagent[2116]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 7 00:01:09.651295 waagent[2116]: pkts bytes target prot opt in out source destination Jul 7 00:01:09.651295 waagent[2116]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 7 00:01:09.651295 waagent[2116]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 7 00:01:09.651295 waagent[2116]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 7 00:01:09.651476 waagent[2116]: 2025-07-07T00:01:09.651450Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 7 00:01:16.231088 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 00:01:16.232836 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:01:16.328305 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:01:16.339370 (kubelet)[2266]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:01:16.439423 kubelet[2266]: E0707 00:01:16.439359 2266 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:01:16.442102 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:01:16.442345 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
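The routing table that MonitorHandler dumps from `/proc/net/route` earlier in this log encodes addresses as little-endian hex, which is why the gateway appears as `0114C80A` rather than `10.200.20.4`'s gateway `10.200.20.1`. A small decoder (a sketch; the byte order follows the standard `/proc/net/route` layout):

```python
import socket
import struct

def decode_route_addr(hex_addr: str) -> str:
    """Convert a little-endian hex field from /proc/net/route to dotted quad."""
    return socket.inet_ntoa(struct.pack("<L", int(hex_addr, 16)))

# Examples drawn from the table in this log:
#   gateway 0114C80A -> 10.200.20.1 (matches the DHCP lease logged earlier)
#   netmask 00FFFFFF -> 255.255.255.0 (the /24 on eth0)
```

The same decoding applies to the WireServer route (`10813FA8` is 168.63.129.16, the Azure fabric endpoint the firewall rules below are scoped to).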
Jul 7 00:01:16.442806 systemd[1]: kubelet.service: Consumed 106ms CPU time, 108.2M memory peak. Jul 7 00:01:26.647751 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 7 00:01:26.649874 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:01:26.736030 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:01:26.738593 (kubelet)[2281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:01:26.861839 kubelet[2281]: E0707 00:01:26.861801 2281 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:01:26.863692 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:01:26.863798 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:01:26.864168 systemd[1]: kubelet.service: Consumed 194ms CPU time, 104.1M memory peak. Jul 7 00:01:27.639212 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 00:01:27.640323 systemd[1]: Started sshd@0-10.200.20.4:22-10.200.16.10:33608.service - OpenSSH per-connection server daemon (10.200.16.10:33608). Jul 7 00:01:28.238847 sshd[2289]: Accepted publickey for core from 10.200.16.10 port 33608 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:01:28.239852 sshd-session[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:28.243568 systemd-logind[1880]: New session 3 of user core. Jul 7 00:01:28.248290 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jul 7 00:01:28.662577 systemd[1]: Started sshd@1-10.200.20.4:22-10.200.16.10:33624.service - OpenSSH per-connection server daemon (10.200.16.10:33624). Jul 7 00:01:28.748854 chronyd[1868]: Selected source PHC0 Jul 7 00:01:29.152472 sshd[2294]: Accepted publickey for core from 10.200.16.10 port 33624 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:01:29.153522 sshd-session[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:29.156966 systemd-logind[1880]: New session 4 of user core. Jul 7 00:01:29.165300 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 00:01:29.496413 sshd[2296]: Connection closed by 10.200.16.10 port 33624 Jul 7 00:01:29.498865 sshd-session[2294]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:29.501724 systemd[1]: sshd@1-10.200.20.4:22-10.200.16.10:33624.service: Deactivated successfully. Jul 7 00:01:29.503237 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 00:01:29.503975 systemd-logind[1880]: Session 4 logged out. Waiting for processes to exit. Jul 7 00:01:29.505454 systemd-logind[1880]: Removed session 4. Jul 7 00:01:29.582374 systemd[1]: Started sshd@2-10.200.20.4:22-10.200.16.10:33640.service - OpenSSH per-connection server daemon (10.200.16.10:33640). Jul 7 00:01:30.060351 sshd[2302]: Accepted publickey for core from 10.200.16.10 port 33640 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:01:30.061335 sshd-session[2302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:30.064654 systemd-logind[1880]: New session 5 of user core. Jul 7 00:01:30.078291 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 7 00:01:30.400572 sshd[2304]: Connection closed by 10.200.16.10 port 33640 Jul 7 00:01:30.401041 sshd-session[2302]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:30.403798 systemd[1]: sshd@2-10.200.20.4:22-10.200.16.10:33640.service: Deactivated successfully. Jul 7 00:01:30.404992 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 00:01:30.405871 systemd-logind[1880]: Session 5 logged out. Waiting for processes to exit. Jul 7 00:01:30.406872 systemd-logind[1880]: Removed session 5. Jul 7 00:01:30.485325 systemd[1]: Started sshd@3-10.200.20.4:22-10.200.16.10:42732.service - OpenSSH per-connection server daemon (10.200.16.10:42732). Jul 7 00:01:30.963800 sshd[2310]: Accepted publickey for core from 10.200.16.10 port 42732 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:01:30.964873 sshd-session[2310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:30.969748 systemd-logind[1880]: New session 6 of user core. Jul 7 00:01:30.980303 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 00:01:31.306669 sshd[2312]: Connection closed by 10.200.16.10 port 42732 Jul 7 00:01:31.307124 sshd-session[2310]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:31.310200 systemd[1]: sshd@3-10.200.20.4:22-10.200.16.10:42732.service: Deactivated successfully. Jul 7 00:01:31.311500 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 00:01:31.312031 systemd-logind[1880]: Session 6 logged out. Waiting for processes to exit. Jul 7 00:01:31.313289 systemd-logind[1880]: Removed session 6. Jul 7 00:01:31.391252 systemd[1]: Started sshd@4-10.200.20.4:22-10.200.16.10:42748.service - OpenSSH per-connection server daemon (10.200.16.10:42748). 
Jul 7 00:01:31.866305 sshd[2318]: Accepted publickey for core from 10.200.16.10 port 42748 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk
Jul 7 00:01:31.867322 sshd-session[2318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:01:31.871074 systemd-logind[1880]: New session 7 of user core.
Jul 7 00:01:31.880291 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 7 00:01:32.193421 sudo[2321]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 7 00:01:32.193616 sudo[2321]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 00:01:32.218881 sudo[2321]: pam_unix(sudo:session): session closed for user root
Jul 7 00:01:32.299876 sshd[2320]: Connection closed by 10.200.16.10 port 42748
Jul 7 00:01:32.300467 sshd-session[2318]: pam_unix(sshd:session): session closed for user core
Jul 7 00:01:32.303535 systemd[1]: sshd@4-10.200.20.4:22-10.200.16.10:42748.service: Deactivated successfully.
Jul 7 00:01:32.304795 systemd[1]: session-7.scope: Deactivated successfully.
Jul 7 00:01:32.305386 systemd-logind[1880]: Session 7 logged out. Waiting for processes to exit.
Jul 7 00:01:32.306669 systemd-logind[1880]: Removed session 7.
Jul 7 00:01:32.388587 systemd[1]: Started sshd@5-10.200.20.4:22-10.200.16.10:42764.service - OpenSSH per-connection server daemon (10.200.16.10:42764).
Jul 7 00:01:32.871921 sshd[2327]: Accepted publickey for core from 10.200.16.10 port 42764 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk
Jul 7 00:01:32.872991 sshd-session[2327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:01:32.876718 systemd-logind[1880]: New session 8 of user core.
Jul 7 00:01:32.885288 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 7 00:01:33.140096 sudo[2331]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 7 00:01:33.140427 sudo[2331]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 00:01:33.146926 sudo[2331]: pam_unix(sudo:session): session closed for user root
Jul 7 00:01:33.150216 sudo[2330]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 7 00:01:33.150403 sudo[2330]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 00:01:33.156927 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 7 00:01:33.184624 augenrules[2353]: No rules
Jul 7 00:01:33.185608 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 7 00:01:33.186259 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 7 00:01:33.187411 sudo[2330]: pam_unix(sudo:session): session closed for user root
Jul 7 00:01:33.269724 sshd[2329]: Connection closed by 10.200.16.10 port 42764
Jul 7 00:01:33.269934 sshd-session[2327]: pam_unix(sshd:session): session closed for user core
Jul 7 00:01:33.272802 systemd[1]: sshd@5-10.200.20.4:22-10.200.16.10:42764.service: Deactivated successfully.
Jul 7 00:01:33.273926 systemd[1]: session-8.scope: Deactivated successfully.
Jul 7 00:01:33.275246 systemd-logind[1880]: Session 8 logged out. Waiting for processes to exit.
Jul 7 00:01:33.276550 systemd-logind[1880]: Removed session 8.
Jul 7 00:01:33.359372 systemd[1]: Started sshd@6-10.200.20.4:22-10.200.16.10:42778.service - OpenSSH per-connection server daemon (10.200.16.10:42778).
Jul 7 00:01:33.837892 sshd[2362]: Accepted publickey for core from 10.200.16.10 port 42778 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk
Jul 7 00:01:33.838895 sshd-session[2362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:01:33.842302 systemd-logind[1880]: New session 9 of user core.
Jul 7 00:01:33.849290 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 7 00:01:34.105938 sudo[2365]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 7 00:01:34.106154 sudo[2365]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 00:01:34.926738 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 7 00:01:34.939424 (dockerd)[2384]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 7 00:01:35.402335 dockerd[2384]: time="2025-07-07T00:01:35.402293555Z" level=info msg="Starting up"
Jul 7 00:01:35.404944 dockerd[2384]: time="2025-07-07T00:01:35.404920707Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 7 00:01:35.440046 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3138470754-merged.mount: Deactivated successfully.
Jul 7 00:01:35.490258 dockerd[2384]: time="2025-07-07T00:01:35.490218979Z" level=info msg="Loading containers: start."
Jul 7 00:01:35.528216 kernel: Initializing XFRM netlink socket
Jul 7 00:01:35.803107 systemd-networkd[1572]: docker0: Link UP
Jul 7 00:01:35.820225 dockerd[2384]: time="2025-07-07T00:01:35.820013635Z" level=info msg="Loading containers: done."
Jul 7 00:01:35.842191 dockerd[2384]: time="2025-07-07T00:01:35.842132875Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 7 00:01:35.842353 dockerd[2384]: time="2025-07-07T00:01:35.842247971Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 7 00:01:35.842381 dockerd[2384]: time="2025-07-07T00:01:35.842372267Z" level=info msg="Initializing buildkit"
Jul 7 00:01:35.894724 dockerd[2384]: time="2025-07-07T00:01:35.894680379Z" level=info msg="Completed buildkit initialization"
Jul 7 00:01:35.899269 dockerd[2384]: time="2025-07-07T00:01:35.899232443Z" level=info msg="Daemon has completed initialization"
Jul 7 00:01:35.899897 dockerd[2384]: time="2025-07-07T00:01:35.899771003Z" level=info msg="API listen on /run/docker.sock"
Jul 7 00:01:35.899983 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 7 00:01:36.437906 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1311124154-merged.mount: Deactivated successfully.
Jul 7 00:01:36.760955 containerd[1907]: time="2025-07-07T00:01:36.760844330Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 7 00:01:36.896971 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 7 00:01:36.898240 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:01:37.001040 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:01:37.003246 (kubelet)[2590]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 00:01:37.027888 kubelet[2590]: E0707 00:01:37.027774 2590 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 00:01:37.029763 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 00:01:37.029960 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 00:01:37.030426 systemd[1]: kubelet.service: Consumed 99ms CPU time, 105.2M memory peak.
Jul 7 00:01:37.993874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount31523560.mount: Deactivated successfully.
Jul 7 00:01:39.175225 containerd[1907]: time="2025-07-07T00:01:39.175080897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:01:39.179215 containerd[1907]: time="2025-07-07T00:01:39.179190626Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651793"
Jul 7 00:01:39.185858 containerd[1907]: time="2025-07-07T00:01:39.185817673Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:01:39.191427 containerd[1907]: time="2025-07-07T00:01:39.191378514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:01:39.191995 containerd[1907]: time="2025-07-07T00:01:39.191859415Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 2.430981136s"
Jul 7 00:01:39.191995 containerd[1907]: time="2025-07-07T00:01:39.191889400Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\""
Jul 7 00:01:39.194867 containerd[1907]: time="2025-07-07T00:01:39.194850385Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jul 7 00:01:40.421223 containerd[1907]: time="2025-07-07T00:01:40.420919915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:01:40.425687 containerd[1907]: time="2025-07-07T00:01:40.425650109Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459677"
Jul 7 00:01:40.431517 containerd[1907]: time="2025-07-07T00:01:40.431472445Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:01:40.437832 containerd[1907]: time="2025-07-07T00:01:40.437771491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:01:40.438470 containerd[1907]: time="2025-07-07T00:01:40.438204879Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.24324469s"
Jul 7 00:01:40.438470 containerd[1907]: time="2025-07-07T00:01:40.438231567Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\""
Jul 7 00:01:40.438762 containerd[1907]: time="2025-07-07T00:01:40.438741101Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jul 7 00:01:41.509147 containerd[1907]: time="2025-07-07T00:01:41.509087532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:01:41.512206 containerd[1907]: time="2025-07-07T00:01:41.512181081Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125066"
Jul 7 00:01:41.517099 containerd[1907]: time="2025-07-07T00:01:41.517062304Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:01:41.523287 containerd[1907]: time="2025-07-07T00:01:41.523255418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:01:41.523776 containerd[1907]: time="2025-07-07T00:01:41.523665982Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.084902368s"
Jul 7 00:01:41.523776 containerd[1907]: time="2025-07-07T00:01:41.523692303Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\""
Jul 7 00:01:41.524054 containerd[1907]: time="2025-07-07T00:01:41.524034152Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jul 7 00:01:42.986844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount425577369.mount: Deactivated successfully.
Jul 7 00:01:43.287666 containerd[1907]: time="2025-07-07T00:01:43.287544267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:01:43.291866 containerd[1907]: time="2025-07-07T00:01:43.291750071Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915957"
Jul 7 00:01:43.295022 containerd[1907]: time="2025-07-07T00:01:43.294996096Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:01:43.299049 containerd[1907]: time="2025-07-07T00:01:43.298992774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:01:43.299408 containerd[1907]: time="2025-07-07T00:01:43.299247829Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.775191757s"
Jul 7 00:01:43.299408 containerd[1907]: time="2025-07-07T00:01:43.299276270Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\""
Jul 7 00:01:43.300032 containerd[1907]: time="2025-07-07T00:01:43.300012250Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 7 00:01:43.973506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2132869672.mount: Deactivated successfully.
Jul 7 00:01:44.970224 containerd[1907]: time="2025-07-07T00:01:44.970003857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:01:44.973076 containerd[1907]: time="2025-07-07T00:01:44.972872955Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
Jul 7 00:01:44.977442 containerd[1907]: time="2025-07-07T00:01:44.977417317Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:01:44.981730 containerd[1907]: time="2025-07-07T00:01:44.981698599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:01:44.982369 containerd[1907]: time="2025-07-07T00:01:44.982342017Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.682305222s"
Jul 7 00:01:44.982456 containerd[1907]: time="2025-07-07T00:01:44.982441476Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jul 7 00:01:44.982921 containerd[1907]: time="2025-07-07T00:01:44.982892857Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 7 00:01:45.561372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount935922198.mount: Deactivated successfully.
Jul 7 00:01:45.590272 containerd[1907]: time="2025-07-07T00:01:45.590234868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 00:01:45.593595 containerd[1907]: time="2025-07-07T00:01:45.593566171Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Jul 7 00:01:45.600371 containerd[1907]: time="2025-07-07T00:01:45.600347500Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 00:01:45.605887 containerd[1907]: time="2025-07-07T00:01:45.605862217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 00:01:45.606488 containerd[1907]: time="2025-07-07T00:01:45.606467787Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 623.545386ms"
Jul 7 00:01:45.606515 containerd[1907]: time="2025-07-07T00:01:45.606493195Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 7 00:01:45.606984 containerd[1907]: time="2025-07-07T00:01:45.606958145Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 7 00:01:46.318760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount48086614.mount: Deactivated successfully.
Jul 7 00:01:47.146992 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 7 00:01:47.148329 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:01:47.447818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:01:47.450349 (kubelet)[2775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 00:01:47.473020 kubelet[2775]: E0707 00:01:47.472966 2775 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 00:01:47.474478 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 00:01:47.474577 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 00:01:47.474924 systemd[1]: kubelet.service: Consumed 94ms CPU time, 104.8M memory peak.
Jul 7 00:01:49.226488 containerd[1907]: time="2025-07-07T00:01:49.226431485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:01:49.233030 containerd[1907]: time="2025-07-07T00:01:49.233001832Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465"
Jul 7 00:01:49.962401 containerd[1907]: time="2025-07-07T00:01:49.962306730Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:01:49.967989 containerd[1907]: time="2025-07-07T00:01:49.967939339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:01:49.968957 containerd[1907]: time="2025-07-07T00:01:49.968488642Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 4.361505121s"
Jul 7 00:01:49.968957 containerd[1907]: time="2025-07-07T00:01:49.968522155Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Jul 7 00:01:50.309558 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Jul 7 00:01:50.606291 update_engine[1888]: I20250707 00:01:50.606220 1888 update_attempter.cc:509] Updating boot flags...
Jul 7 00:01:52.803199 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:01:52.803378 systemd[1]: kubelet.service: Consumed 94ms CPU time, 104.8M memory peak.
Jul 7 00:01:52.805405 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:01:52.823173 systemd[1]: Reload requested from client PID 2883 ('systemctl') (unit session-9.scope)...
Jul 7 00:01:52.823195 systemd[1]: Reloading...
Jul 7 00:01:52.920213 zram_generator::config[2932]: No configuration found.
Jul 7 00:01:52.987406 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 00:01:53.069449 systemd[1]: Reloading finished in 245 ms.
Jul 7 00:01:53.111518 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 7 00:01:53.111574 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 7 00:01:53.113219 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:01:53.113260 systemd[1]: kubelet.service: Consumed 68ms CPU time, 95M memory peak.
Jul 7 00:01:53.114335 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:01:53.403161 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:01:53.406567 (kubelet)[2996]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 7 00:01:53.430848 kubelet[2996]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 00:01:53.430848 kubelet[2996]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 7 00:01:53.430848 kubelet[2996]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 00:01:53.430848 kubelet[2996]: I0707 00:01:53.430820 2996 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 7 00:01:54.007599 kubelet[2996]: I0707 00:01:54.007564 2996 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 7 00:01:54.007770 kubelet[2996]: I0707 00:01:54.007761 2996 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 7 00:01:54.008028 kubelet[2996]: I0707 00:01:54.008014 2996 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 7 00:01:54.019755 kubelet[2996]: E0707 00:01:54.019708 2996 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError"
Jul 7 00:01:54.021776 kubelet[2996]: I0707 00:01:54.021670 2996 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 7 00:01:54.026710 kubelet[2996]: I0707 00:01:54.026695 2996 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 7 00:01:54.030796 kubelet[2996]: I0707 00:01:54.030620 2996 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 7 00:01:54.031089 kubelet[2996]: I0707 00:01:54.031073 2996 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 7 00:01:54.031275 kubelet[2996]: I0707 00:01:54.031249 2996 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 7 00:01:54.031478 kubelet[2996]: I0707 00:01:54.031332 2996 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.1-a-609ca7abb9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 7 00:01:54.031606 kubelet[2996]: I0707 00:01:54.031595 2996 topology_manager.go:138] "Creating topology manager with none policy"
Jul 7 00:01:54.031648 kubelet[2996]: I0707 00:01:54.031642 2996 container_manager_linux.go:300] "Creating device plugin manager"
Jul 7 00:01:54.031789 kubelet[2996]: I0707 00:01:54.031780 2996 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 00:01:54.032946 kubelet[2996]: I0707 00:01:54.032930 2996 kubelet.go:408] "Attempting to sync node with API server"
Jul 7 00:01:54.033223 kubelet[2996]: I0707 00:01:54.033210 2996 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 7 00:01:54.033302 kubelet[2996]: I0707 00:01:54.033295 2996 kubelet.go:314] "Adding apiserver pod source"
Jul 7 00:01:54.033366 kubelet[2996]: I0707 00:01:54.033357 2996 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 7 00:01:54.037561 kubelet[2996]: W0707 00:01:54.037490 2996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.1-a-609ca7abb9&limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused
Jul 7 00:01:54.038290 kubelet[2996]: E0707 00:01:54.037714 2996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.1-a-609ca7abb9&limit=500&resourceVersion=0\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError"
Jul 7 00:01:54.038290 kubelet[2996]: I0707 00:01:54.037789 2996 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 7 00:01:54.038290 kubelet[2996]: I0707 00:01:54.038085 2996 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 7 00:01:54.038290 kubelet[2996]: W0707 00:01:54.038126 2996 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 7 00:01:54.038566 kubelet[2996]: I0707 00:01:54.038547 2996 server.go:1274] "Started kubelet"
Jul 7 00:01:54.041590 kubelet[2996]: I0707 00:01:54.041561 2996 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 7 00:01:54.042048 kubelet[2996]: W0707 00:01:54.042015 2996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused
Jul 7 00:01:54.042158 kubelet[2996]: E0707 00:01:54.042135 2996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError"
Jul 7 00:01:54.043222 kubelet[2996]: E0707 00:01:54.042250 2996 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.4:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.4:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372.0.1-a-609ca7abb9.184fcf1f7245ddea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.0.1-a-609ca7abb9,UID:ci-4372.0.1-a-609ca7abb9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.0.1-a-609ca7abb9,},FirstTimestamp:2025-07-07 00:01:54.038529514 +0000 UTC m=+0.629279031,LastTimestamp:2025-07-07 00:01:54.038529514 +0000 UTC m=+0.629279031,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.0.1-a-609ca7abb9,}"
Jul 7 00:01:54.043222 kubelet[2996]: I0707 00:01:54.043177 2996 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 7 00:01:54.043585 kubelet[2996]: I0707 00:01:54.043562 2996 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 7 00:01:54.044254 kubelet[2996]: I0707 00:01:54.044239 2996 server.go:449] "Adding debug handlers to kubelet server"
Jul 7 00:01:54.044940 kubelet[2996]: I0707 00:01:54.044904 2996 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 7 00:01:54.045168 kubelet[2996]: I0707 00:01:54.045155 2996 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 7 00:01:54.046296 kubelet[2996]: I0707 00:01:54.046282 2996 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 7 00:01:54.046663 kubelet[2996]: I0707 00:01:54.046649 2996 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 7 00:01:54.046917 kubelet[2996]: E0707 00:01:54.046281 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.0.1-a-609ca7abb9\" not found"
Jul 7 00:01:54.047028 kubelet[2996]: I0707 00:01:54.047019 2996 reconciler.go:26] "Reconciler: start to sync state"
Jul 7 00:01:54.047325 kubelet[2996]: W0707 00:01:54.047298 2996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused
Jul 7 00:01:54.047421 kubelet[2996]: E0707 00:01:54.047406 2996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError"
Jul 7 00:01:54.047525 kubelet[2996]: E0707 00:01:54.047511 2996 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 7 00:01:54.047637 kubelet[2996]: E0707 00:01:54.047621 2996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-a-609ca7abb9?timeout=10s\": dial tcp 10.200.20.4:6443: connect: connection refused" interval="200ms"
Jul 7 00:01:54.047799 kubelet[2996]: I0707 00:01:54.047786 2996 factory.go:221] Registration of the systemd container factory successfully
Jul 7 00:01:54.047922 kubelet[2996]: I0707 00:01:54.047910 2996 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 7 00:01:54.049247 kubelet[2996]: I0707 00:01:54.049231 2996 factory.go:221] Registration of the containerd container factory successfully
Jul 7 00:01:54.065946 kubelet[2996]: I0707 00:01:54.065922 2996 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 7 00:01:54.066060 kubelet[2996]: I0707 00:01:54.066050 2996 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 7 00:01:54.066137 kubelet[2996]: I0707 00:01:54.066129 2996 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 00:01:54.073091 kubelet[2996]: I0707 00:01:54.073060 2996 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 7 00:01:54.074203 kubelet[2996]: I0707 00:01:54.074084 2996 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 7 00:01:54.074203 kubelet[2996]: I0707 00:01:54.074109 2996 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 7 00:01:54.074203 kubelet[2996]: I0707 00:01:54.074124 2996 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 7 00:01:54.075013 kubelet[2996]: E0707 00:01:54.074156 2996 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 7 00:01:54.075013 kubelet[2996]: W0707 00:01:54.074773 2996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused
Jul 7 00:01:54.075013 kubelet[2996]: E0707 00:01:54.074828 2996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError"
Jul 7 00:01:54.076656 kubelet[2996]: I0707 00:01:54.076638 2996 policy_none.go:49] "None policy: Start"
Jul 7 00:01:54.078079 kubelet[2996]: I0707 00:01:54.078064 2996 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 7 00:01:54.078264 kubelet[2996]: I0707 00:01:54.078254 2996 state_mem.go:35] "Initializing new in-memory state store"
Jul 7 00:01:54.087010 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 7 00:01:54.094340 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 7 00:01:54.097091 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 7 00:01:54.108213 kubelet[2996]: I0707 00:01:54.107794 2996 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 00:01:54.108213 kubelet[2996]: I0707 00:01:54.108099 2996 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:01:54.108213 kubelet[2996]: I0707 00:01:54.108109 2996 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:01:54.108834 kubelet[2996]: I0707 00:01:54.108816 2996 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:01:54.109857 kubelet[2996]: E0707 00:01:54.109844 2996 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372.0.1-a-609ca7abb9\" not found" Jul 7 00:01:54.183245 systemd[1]: Created slice kubepods-burstable-podb30d57b790bd68f0f78f12dd24714b86.slice - libcontainer container kubepods-burstable-podb30d57b790bd68f0f78f12dd24714b86.slice. Jul 7 00:01:54.197263 systemd[1]: Created slice kubepods-burstable-pod98928de52fd30637475d2a2f5cd90fa6.slice - libcontainer container kubepods-burstable-pod98928de52fd30637475d2a2f5cd90fa6.slice. Jul 7 00:01:54.200994 systemd[1]: Created slice kubepods-burstable-pod6249933e0117939fa29737adfcb13ed1.slice - libcontainer container kubepods-burstable-pod6249933e0117939fa29737adfcb13ed1.slice. 
Jul 7 00:01:54.209354 kubelet[2996]: I0707 00:01:54.209321 2996 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.0.1-a-609ca7abb9" Jul 7 00:01:54.209685 kubelet[2996]: E0707 00:01:54.209663 2996 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.4:6443/api/v1/nodes\": dial tcp 10.200.20.4:6443: connect: connection refused" node="ci-4372.0.1-a-609ca7abb9" Jul 7 00:01:54.248203 kubelet[2996]: I0707 00:01:54.248042 2996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b30d57b790bd68f0f78f12dd24714b86-ca-certs\") pod \"kube-apiserver-ci-4372.0.1-a-609ca7abb9\" (UID: \"b30d57b790bd68f0f78f12dd24714b86\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-609ca7abb9" Jul 7 00:01:54.248203 kubelet[2996]: I0707 00:01:54.248073 2996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98928de52fd30637475d2a2f5cd90fa6-ca-certs\") pod \"kube-controller-manager-ci-4372.0.1-a-609ca7abb9\" (UID: \"98928de52fd30637475d2a2f5cd90fa6\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-609ca7abb9" Jul 7 00:01:54.248203 kubelet[2996]: I0707 00:01:54.248088 2996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98928de52fd30637475d2a2f5cd90fa6-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.1-a-609ca7abb9\" (UID: \"98928de52fd30637475d2a2f5cd90fa6\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-609ca7abb9" Jul 7 00:01:54.248203 kubelet[2996]: I0707 00:01:54.248100 2996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98928de52fd30637475d2a2f5cd90fa6-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4372.0.1-a-609ca7abb9\" (UID: \"98928de52fd30637475d2a2f5cd90fa6\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-609ca7abb9" Jul 7 00:01:54.248203 kubelet[2996]: I0707 00:01:54.248112 2996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b30d57b790bd68f0f78f12dd24714b86-k8s-certs\") pod \"kube-apiserver-ci-4372.0.1-a-609ca7abb9\" (UID: \"b30d57b790bd68f0f78f12dd24714b86\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-609ca7abb9" Jul 7 00:01:54.248380 kubelet[2996]: I0707 00:01:54.248122 2996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b30d57b790bd68f0f78f12dd24714b86-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.1-a-609ca7abb9\" (UID: \"b30d57b790bd68f0f78f12dd24714b86\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-609ca7abb9" Jul 7 00:01:54.248380 kubelet[2996]: E0707 00:01:54.248115 2996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-a-609ca7abb9?timeout=10s\": dial tcp 10.200.20.4:6443: connect: connection refused" interval="400ms" Jul 7 00:01:54.248380 kubelet[2996]: I0707 00:01:54.248131 2996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98928de52fd30637475d2a2f5cd90fa6-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.1-a-609ca7abb9\" (UID: \"98928de52fd30637475d2a2f5cd90fa6\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-609ca7abb9" Jul 7 00:01:54.248380 kubelet[2996]: I0707 00:01:54.248161 2996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/98928de52fd30637475d2a2f5cd90fa6-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.1-a-609ca7abb9\" (UID: \"98928de52fd30637475d2a2f5cd90fa6\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-609ca7abb9" Jul 7 00:01:54.248380 kubelet[2996]: I0707 00:01:54.248195 2996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6249933e0117939fa29737adfcb13ed1-kubeconfig\") pod \"kube-scheduler-ci-4372.0.1-a-609ca7abb9\" (UID: \"6249933e0117939fa29737adfcb13ed1\") " pod="kube-system/kube-scheduler-ci-4372.0.1-a-609ca7abb9" Jul 7 00:01:54.411829 kubelet[2996]: I0707 00:01:54.411801 2996 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.0.1-a-609ca7abb9" Jul 7 00:01:54.412142 kubelet[2996]: E0707 00:01:54.412121 2996 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.4:6443/api/v1/nodes\": dial tcp 10.200.20.4:6443: connect: connection refused" node="ci-4372.0.1-a-609ca7abb9" Jul 7 00:01:54.495335 containerd[1907]: time="2025-07-07T00:01:54.495284847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.1-a-609ca7abb9,Uid:b30d57b790bd68f0f78f12dd24714b86,Namespace:kube-system,Attempt:0,}" Jul 7 00:01:54.499764 containerd[1907]: time="2025-07-07T00:01:54.499736213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.1-a-609ca7abb9,Uid:98928de52fd30637475d2a2f5cd90fa6,Namespace:kube-system,Attempt:0,}" Jul 7 00:01:54.503381 containerd[1907]: time="2025-07-07T00:01:54.503355051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.1-a-609ca7abb9,Uid:6249933e0117939fa29737adfcb13ed1,Namespace:kube-system,Attempt:0,}" Jul 7 00:01:54.613719 containerd[1907]: time="2025-07-07T00:01:54.613457225Z" level=info msg="connecting to shim 
a6ffc6539380fd7becae4f48c9ed4c854a3cdd19b349e3a848907c7262379d43" address="unix:///run/containerd/s/e73b15f49f473cfae18c071328c9f04c77c759dda8b12a59a6f055d7688e6fce" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:01:54.617589 containerd[1907]: time="2025-07-07T00:01:54.617517772Z" level=info msg="connecting to shim 3c16192fc0db02d7e03420bb64145d40d41d52cf9848d7dfe93851ce1deed631" address="unix:///run/containerd/s/7acd55b64d36be8876d6f75cd1956b553680a428d587fb922ed00836a4871dfa" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:01:54.641396 systemd[1]: Started cri-containerd-3c16192fc0db02d7e03420bb64145d40d41d52cf9848d7dfe93851ce1deed631.scope - libcontainer container 3c16192fc0db02d7e03420bb64145d40d41d52cf9848d7dfe93851ce1deed631. Jul 7 00:01:54.642213 systemd[1]: Started cri-containerd-a6ffc6539380fd7becae4f48c9ed4c854a3cdd19b349e3a848907c7262379d43.scope - libcontainer container a6ffc6539380fd7becae4f48c9ed4c854a3cdd19b349e3a848907c7262379d43. Jul 7 00:01:54.648833 containerd[1907]: time="2025-07-07T00:01:54.648800495Z" level=info msg="connecting to shim 9cc9d5d7c25c56b9d62e80c6e2a5ef576af191915ee681a1b70494912bf87a41" address="unix:///run/containerd/s/bc4a24f9ee7c31f7895dcff2eeaf467a320d9978909e9dd80113b7bbe57959b8" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:01:54.649395 kubelet[2996]: E0707 00:01:54.649353 2996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-a-609ca7abb9?timeout=10s\": dial tcp 10.200.20.4:6443: connect: connection refused" interval="800ms" Jul 7 00:01:54.671669 systemd[1]: Started cri-containerd-9cc9d5d7c25c56b9d62e80c6e2a5ef576af191915ee681a1b70494912bf87a41.scope - libcontainer container 9cc9d5d7c25c56b9d62e80c6e2a5ef576af191915ee681a1b70494912bf87a41. 
Jul 7 00:01:54.813815 kubelet[2996]: I0707 00:01:54.813787 2996 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.0.1-a-609ca7abb9" Jul 7 00:01:54.814200 kubelet[2996]: E0707 00:01:54.814163 2996 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.4:6443/api/v1/nodes\": dial tcp 10.200.20.4:6443: connect: connection refused" node="ci-4372.0.1-a-609ca7abb9" Jul 7 00:01:54.908162 kubelet[2996]: W0707 00:01:54.908056 2996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused Jul 7 00:01:54.908162 kubelet[2996]: E0707 00:01:54.908128 2996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:01:54.922941 kubelet[2996]: W0707 00:01:54.922678 2996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused Jul 7 00:01:54.922941 kubelet[2996]: E0707 00:01:54.922736 2996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:01:55.316087 containerd[1907]: time="2025-07-07T00:01:55.315984627Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.1-a-609ca7abb9,Uid:b30d57b790bd68f0f78f12dd24714b86,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6ffc6539380fd7becae4f48c9ed4c854a3cdd19b349e3a848907c7262379d43\"" Jul 7 00:01:55.319026 containerd[1907]: time="2025-07-07T00:01:55.318998024Z" level=info msg="CreateContainer within sandbox \"a6ffc6539380fd7becae4f48c9ed4c854a3cdd19b349e3a848907c7262379d43\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 00:01:55.345744 kubelet[2996]: W0707 00:01:55.345689 2996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.1-a-609ca7abb9&limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused Jul 7 00:01:55.345807 kubelet[2996]: E0707 00:01:55.345754 2996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.1-a-609ca7abb9&limit=500&resourceVersion=0\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:01:55.367968 containerd[1907]: time="2025-07-07T00:01:55.367934941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.1-a-609ca7abb9,Uid:6249933e0117939fa29737adfcb13ed1,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c16192fc0db02d7e03420bb64145d40d41d52cf9848d7dfe93851ce1deed631\"" Jul 7 00:01:55.370456 containerd[1907]: time="2025-07-07T00:01:55.370429996Z" level=info msg="CreateContainer within sandbox \"3c16192fc0db02d7e03420bb64145d40d41d52cf9848d7dfe93851ce1deed631\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 00:01:55.372717 containerd[1907]: time="2025-07-07T00:01:55.372691876Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.1-a-609ca7abb9,Uid:98928de52fd30637475d2a2f5cd90fa6,Namespace:kube-system,Attempt:0,} returns sandbox id \"9cc9d5d7c25c56b9d62e80c6e2a5ef576af191915ee681a1b70494912bf87a41\"" Jul 7 00:01:55.374516 containerd[1907]: time="2025-07-07T00:01:55.374487630Z" level=info msg="CreateContainer within sandbox \"9cc9d5d7c25c56b9d62e80c6e2a5ef576af191915ee681a1b70494912bf87a41\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 00:01:55.416981 containerd[1907]: time="2025-07-07T00:01:55.416288962Z" level=info msg="Container ef3fb5d6335203e07c3b8769d0cf8782bf7c6911ec053f1dbe27c900349245ad: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:01:55.424709 containerd[1907]: time="2025-07-07T00:01:55.424674583Z" level=info msg="Container 782287edb812a92fb3c5c03c659b27728636105f92230cb9b6db0b38b8a06645: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:01:55.432219 containerd[1907]: time="2025-07-07T00:01:55.432050103Z" level=info msg="Container 543f92666eafbce5a2555169151739b72954cc4856ad454e8608a6c1b963fd9c: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:01:55.446566 containerd[1907]: time="2025-07-07T00:01:55.446531752Z" level=info msg="CreateContainer within sandbox \"3c16192fc0db02d7e03420bb64145d40d41d52cf9848d7dfe93851ce1deed631\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ef3fb5d6335203e07c3b8769d0cf8782bf7c6911ec053f1dbe27c900349245ad\"" Jul 7 00:01:55.448639 containerd[1907]: time="2025-07-07T00:01:55.448555745Z" level=info msg="StartContainer for \"ef3fb5d6335203e07c3b8769d0cf8782bf7c6911ec053f1dbe27c900349245ad\"" Jul 7 00:01:55.450130 containerd[1907]: time="2025-07-07T00:01:55.450100245Z" level=info msg="connecting to shim ef3fb5d6335203e07c3b8769d0cf8782bf7c6911ec053f1dbe27c900349245ad" address="unix:///run/containerd/s/7acd55b64d36be8876d6f75cd1956b553680a428d587fb922ed00836a4871dfa" protocol=ttrpc version=3 Jul 7 00:01:55.450611 
kubelet[2996]: E0707 00:01:55.450537 2996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-a-609ca7abb9?timeout=10s\": dial tcp 10.200.20.4:6443: connect: connection refused" interval="1.6s" Jul 7 00:01:55.464939 containerd[1907]: time="2025-07-07T00:01:55.464911263Z" level=info msg="CreateContainer within sandbox \"a6ffc6539380fd7becae4f48c9ed4c854a3cdd19b349e3a848907c7262379d43\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"543f92666eafbce5a2555169151739b72954cc4856ad454e8608a6c1b963fd9c\"" Jul 7 00:01:55.465441 containerd[1907]: time="2025-07-07T00:01:55.465368788Z" level=info msg="StartContainer for \"543f92666eafbce5a2555169151739b72954cc4856ad454e8608a6c1b963fd9c\"" Jul 7 00:01:55.466299 systemd[1]: Started cri-containerd-ef3fb5d6335203e07c3b8769d0cf8782bf7c6911ec053f1dbe27c900349245ad.scope - libcontainer container ef3fb5d6335203e07c3b8769d0cf8782bf7c6911ec053f1dbe27c900349245ad. 
Jul 7 00:01:55.466880 containerd[1907]: time="2025-07-07T00:01:55.466745827Z" level=info msg="connecting to shim 543f92666eafbce5a2555169151739b72954cc4856ad454e8608a6c1b963fd9c" address="unix:///run/containerd/s/e73b15f49f473cfae18c071328c9f04c77c759dda8b12a59a6f055d7688e6fce" protocol=ttrpc version=3 Jul 7 00:01:55.471648 containerd[1907]: time="2025-07-07T00:01:55.471122262Z" level=info msg="CreateContainer within sandbox \"9cc9d5d7c25c56b9d62e80c6e2a5ef576af191915ee681a1b70494912bf87a41\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"782287edb812a92fb3c5c03c659b27728636105f92230cb9b6db0b38b8a06645\"" Jul 7 00:01:55.473972 containerd[1907]: time="2025-07-07T00:01:55.472476004Z" level=info msg="StartContainer for \"782287edb812a92fb3c5c03c659b27728636105f92230cb9b6db0b38b8a06645\"" Jul 7 00:01:55.474709 containerd[1907]: time="2025-07-07T00:01:55.474684011Z" level=info msg="connecting to shim 782287edb812a92fb3c5c03c659b27728636105f92230cb9b6db0b38b8a06645" address="unix:///run/containerd/s/bc4a24f9ee7c31f7895dcff2eeaf467a320d9978909e9dd80113b7bbe57959b8" protocol=ttrpc version=3 Jul 7 00:01:55.489388 systemd[1]: Started cri-containerd-543f92666eafbce5a2555169151739b72954cc4856ad454e8608a6c1b963fd9c.scope - libcontainer container 543f92666eafbce5a2555169151739b72954cc4856ad454e8608a6c1b963fd9c. Jul 7 00:01:55.496315 systemd[1]: Started cri-containerd-782287edb812a92fb3c5c03c659b27728636105f92230cb9b6db0b38b8a06645.scope - libcontainer container 782287edb812a92fb3c5c03c659b27728636105f92230cb9b6db0b38b8a06645. 
Jul 7 00:01:55.507352 containerd[1907]: time="2025-07-07T00:01:55.506545542Z" level=info msg="StartContainer for \"ef3fb5d6335203e07c3b8769d0cf8782bf7c6911ec053f1dbe27c900349245ad\" returns successfully" Jul 7 00:01:55.558092 containerd[1907]: time="2025-07-07T00:01:55.557978786Z" level=info msg="StartContainer for \"543f92666eafbce5a2555169151739b72954cc4856ad454e8608a6c1b963fd9c\" returns successfully" Jul 7 00:01:55.558092 containerd[1907]: time="2025-07-07T00:01:55.558058333Z" level=info msg="StartContainer for \"782287edb812a92fb3c5c03c659b27728636105f92230cb9b6db0b38b8a06645\" returns successfully" Jul 7 00:01:55.616928 kubelet[2996]: I0707 00:01:55.616824 2996 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.0.1-a-609ca7abb9" Jul 7 00:01:56.792882 kubelet[2996]: I0707 00:01:56.792843 2996 kubelet_node_status.go:75] "Successfully registered node" node="ci-4372.0.1-a-609ca7abb9" Jul 7 00:01:56.792882 kubelet[2996]: E0707 00:01:56.792878 2996 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4372.0.1-a-609ca7abb9\": node \"ci-4372.0.1-a-609ca7abb9\" not found" Jul 7 00:01:56.811650 kubelet[2996]: E0707 00:01:56.811602 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.0.1-a-609ca7abb9\" not found" Jul 7 00:01:56.912526 kubelet[2996]: E0707 00:01:56.912490 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.0.1-a-609ca7abb9\" not found" Jul 7 00:01:57.013347 kubelet[2996]: E0707 00:01:57.013315 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.0.1-a-609ca7abb9\" not found" Jul 7 00:01:57.113515 kubelet[2996]: E0707 00:01:57.113379 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.0.1-a-609ca7abb9\" not found" Jul 7 00:01:57.214441 kubelet[2996]: E0707 00:01:57.214391 2996 kubelet_node_status.go:453] "Error 
getting the current node from lister" err="node \"ci-4372.0.1-a-609ca7abb9\" not found" Jul 7 00:01:57.314867 kubelet[2996]: E0707 00:01:57.314828 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.0.1-a-609ca7abb9\" not found" Jul 7 00:01:57.415283 kubelet[2996]: E0707 00:01:57.415252 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.0.1-a-609ca7abb9\" not found" Jul 7 00:01:57.515811 kubelet[2996]: E0707 00:01:57.515766 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.0.1-a-609ca7abb9\" not found" Jul 7 00:01:57.616684 kubelet[2996]: E0707 00:01:57.616641 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.0.1-a-609ca7abb9\" not found" Jul 7 00:01:57.717174 kubelet[2996]: E0707 00:01:57.717060 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.0.1-a-609ca7abb9\" not found" Jul 7 00:01:57.817669 kubelet[2996]: E0707 00:01:57.817624 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.0.1-a-609ca7abb9\" not found" Jul 7 00:01:57.918178 kubelet[2996]: E0707 00:01:57.918137 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.0.1-a-609ca7abb9\" not found" Jul 7 00:01:58.018696 kubelet[2996]: E0707 00:01:58.018580 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.0.1-a-609ca7abb9\" not found" Jul 7 00:01:58.119119 kubelet[2996]: E0707 00:01:58.119083 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.0.1-a-609ca7abb9\" not found" Jul 7 00:01:58.219665 kubelet[2996]: E0707 00:01:58.219619 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.0.1-a-609ca7abb9\" not found" Jul 7 00:01:58.938903 systemd[1]: Reload 
requested from client PID 3274 ('systemctl') (unit session-9.scope)... Jul 7 00:01:58.938916 systemd[1]: Reloading... Jul 7 00:01:58.999252 zram_generator::config[3320]: No configuration found. Jul 7 00:01:59.043261 kubelet[2996]: I0707 00:01:59.043229 2996 apiserver.go:52] "Watching apiserver" Jul 7 00:01:59.047145 kubelet[2996]: I0707 00:01:59.047115 2996 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 00:01:59.074066 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:01:59.168140 systemd[1]: Reloading finished in 228 ms. Jul 7 00:01:59.195901 kubelet[2996]: I0707 00:01:59.195796 2996 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:01:59.196308 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:01:59.209979 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 00:01:59.210205 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:01:59.210259 systemd[1]: kubelet.service: Consumed 788ms CPU time, 127M memory peak. Jul 7 00:01:59.212225 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:01:59.338340 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:01:59.343391 (kubelet)[3384]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:01:59.432819 kubelet[3384]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 7 00:01:59.432819 kubelet[3384]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 7 00:01:59.432819 kubelet[3384]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:01:59.433158 kubelet[3384]: I0707 00:01:59.432894 3384 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:01:59.443954 kubelet[3384]: I0707 00:01:59.443228 3384 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 00:01:59.443954 kubelet[3384]: I0707 00:01:59.443251 3384 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:01:59.443954 kubelet[3384]: I0707 00:01:59.443407 3384 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 00:01:59.444561 kubelet[3384]: I0707 00:01:59.444545 3384 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 7 00:01:59.448682 kubelet[3384]: I0707 00:01:59.447793 3384 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:01:59.452665 kubelet[3384]: I0707 00:01:59.451749 3384 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 00:01:59.455327 kubelet[3384]: I0707 00:01:59.455311 3384 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 00:01:59.455866 kubelet[3384]: I0707 00:01:59.455843 3384 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 00:01:59.456304 kubelet[3384]: I0707 00:01:59.456241 3384 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:01:59.456495 kubelet[3384]: I0707 00:01:59.456379 3384 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.1-a-609ca7abb9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyMa
nagerPolicyOptions":null,"CgroupVersion":2} Jul 7 00:01:59.456606 kubelet[3384]: I0707 00:01:59.456595 3384 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 00:01:59.456651 kubelet[3384]: I0707 00:01:59.456643 3384 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 00:01:59.456749 kubelet[3384]: I0707 00:01:59.456740 3384 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:01:59.456880 kubelet[3384]: I0707 00:01:59.456871 3384 kubelet.go:408] "Attempting to sync node with API server" Jul 7 00:01:59.456959 kubelet[3384]: I0707 00:01:59.456949 3384 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:01:59.457017 kubelet[3384]: I0707 00:01:59.457009 3384 kubelet.go:314] "Adding apiserver pod source" Jul 7 00:01:59.457067 kubelet[3384]: I0707 00:01:59.457059 3384 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:01:59.457714 kubelet[3384]: I0707 00:01:59.457694 3384 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 00:01:59.460478 kubelet[3384]: I0707 00:01:59.460460 3384 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 00:01:59.461234 kubelet[3384]: I0707 00:01:59.461180 3384 server.go:1274] "Started kubelet" Jul 7 00:01:59.463309 kubelet[3384]: I0707 00:01:59.463292 3384 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:01:59.464107 kubelet[3384]: I0707 00:01:59.463768 3384 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:01:59.466821 kubelet[3384]: I0707 00:01:59.466806 3384 server.go:449] "Adding debug handlers to kubelet server" Jul 7 00:01:59.474290 kubelet[3384]: I0707 00:01:59.466947 3384 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:01:59.476663 kubelet[3384]: I0707 00:01:59.467341 3384 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:01:59.476843 kubelet[3384]: I0707 00:01:59.469200 3384 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 00:01:59.478872 kubelet[3384]: I0707 00:01:59.469211 3384 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 00:01:59.479071 kubelet[3384]: E0707 00:01:59.469302 3384 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.0.1-a-609ca7abb9\" not found" Jul 7 00:01:59.479071 kubelet[3384]: I0707 00:01:59.473605 3384 factory.go:221] Registration of the systemd container factory successfully Jul 7 00:01:59.479148 kubelet[3384]: I0707 00:01:59.479126 3384 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:01:59.479586 kubelet[3384]: I0707 00:01:59.478342 3384 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:01:59.479630 kubelet[3384]: I0707 00:01:59.479037 3384 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:01:59.486742 kubelet[3384]: E0707 00:01:59.486560 3384 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 00:01:59.486926 kubelet[3384]: I0707 00:01:59.486905 3384 factory.go:221] Registration of the containerd container factory successfully Jul 7 00:01:59.496741 kubelet[3384]: I0707 00:01:59.496705 3384 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 00:01:59.501206 kubelet[3384]: I0707 00:01:59.501122 3384 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 7 00:01:59.501206 kubelet[3384]: I0707 00:01:59.501149 3384 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 00:01:59.501206 kubelet[3384]: I0707 00:01:59.501162 3384 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 00:01:59.501307 kubelet[3384]: E0707 00:01:59.501216 3384 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:01:59.533890 kubelet[3384]: I0707 00:01:59.533712 3384 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 00:01:59.533890 kubelet[3384]: I0707 00:01:59.533731 3384 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 00:01:59.533890 kubelet[3384]: I0707 00:01:59.533749 3384 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:01:59.533890 kubelet[3384]: I0707 00:01:59.533861 3384 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 00:01:59.533890 kubelet[3384]: I0707 00:01:59.533869 3384 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 00:01:59.533890 kubelet[3384]: I0707 00:01:59.533883 3384 policy_none.go:49] "None policy: Start" Jul 7 00:01:59.535420 kubelet[3384]: I0707 00:01:59.534986 3384 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 00:01:59.535420 kubelet[3384]: I0707 00:01:59.535006 3384 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:01:59.535420 kubelet[3384]: I0707 00:01:59.535108 3384 state_mem.go:75] "Updated machine memory state" Jul 7 00:01:59.543117 kubelet[3384]: I0707 00:01:59.541532 3384 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 00:01:59.543117 kubelet[3384]: I0707 00:01:59.542018 3384 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:01:59.543117 kubelet[3384]: I0707 00:01:59.542030 3384 container_log_manager.go:189] "Initializing container 
log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:01:59.543117 kubelet[3384]: I0707 00:01:59.543043 3384 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:01:59.612675 kubelet[3384]: W0707 00:01:59.612620 3384 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 00:01:59.616144 kubelet[3384]: W0707 00:01:59.616120 3384 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 00:01:59.616285 kubelet[3384]: W0707 00:01:59.616256 3384 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 00:01:59.644616 kubelet[3384]: I0707 00:01:59.644588 3384 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.0.1-a-609ca7abb9" Jul 7 00:01:59.662164 kubelet[3384]: I0707 00:01:59.662138 3384 kubelet_node_status.go:111] "Node was previously registered" node="ci-4372.0.1-a-609ca7abb9" Jul 7 00:01:59.662254 kubelet[3384]: I0707 00:01:59.662226 3384 kubelet_node_status.go:75] "Successfully registered node" node="ci-4372.0.1-a-609ca7abb9" Jul 7 00:01:59.681467 kubelet[3384]: I0707 00:01:59.681391 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6249933e0117939fa29737adfcb13ed1-kubeconfig\") pod \"kube-scheduler-ci-4372.0.1-a-609ca7abb9\" (UID: \"6249933e0117939fa29737adfcb13ed1\") " pod="kube-system/kube-scheduler-ci-4372.0.1-a-609ca7abb9" Jul 7 00:01:59.681467 kubelet[3384]: I0707 00:01:59.681420 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b30d57b790bd68f0f78f12dd24714b86-ca-certs\") pod 
\"kube-apiserver-ci-4372.0.1-a-609ca7abb9\" (UID: \"b30d57b790bd68f0f78f12dd24714b86\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-609ca7abb9" Jul 7 00:01:59.681467 kubelet[3384]: I0707 00:01:59.681432 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b30d57b790bd68f0f78f12dd24714b86-k8s-certs\") pod \"kube-apiserver-ci-4372.0.1-a-609ca7abb9\" (UID: \"b30d57b790bd68f0f78f12dd24714b86\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-609ca7abb9" Jul 7 00:01:59.681467 kubelet[3384]: I0707 00:01:59.681446 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b30d57b790bd68f0f78f12dd24714b86-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.1-a-609ca7abb9\" (UID: \"b30d57b790bd68f0f78f12dd24714b86\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-609ca7abb9" Jul 7 00:01:59.681467 kubelet[3384]: I0707 00:01:59.681460 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98928de52fd30637475d2a2f5cd90fa6-ca-certs\") pod \"kube-controller-manager-ci-4372.0.1-a-609ca7abb9\" (UID: \"98928de52fd30637475d2a2f5cd90fa6\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-609ca7abb9" Jul 7 00:01:59.681681 kubelet[3384]: I0707 00:01:59.681471 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98928de52fd30637475d2a2f5cd90fa6-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.1-a-609ca7abb9\" (UID: \"98928de52fd30637475d2a2f5cd90fa6\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-609ca7abb9" Jul 7 00:01:59.681681 kubelet[3384]: I0707 00:01:59.681481 3384 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98928de52fd30637475d2a2f5cd90fa6-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.1-a-609ca7abb9\" (UID: \"98928de52fd30637475d2a2f5cd90fa6\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-609ca7abb9" Jul 7 00:01:59.681681 kubelet[3384]: I0707 00:01:59.681492 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98928de52fd30637475d2a2f5cd90fa6-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.1-a-609ca7abb9\" (UID: \"98928de52fd30637475d2a2f5cd90fa6\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-609ca7abb9" Jul 7 00:01:59.681681 kubelet[3384]: I0707 00:01:59.681503 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98928de52fd30637475d2a2f5cd90fa6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.1-a-609ca7abb9\" (UID: \"98928de52fd30637475d2a2f5cd90fa6\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-609ca7abb9" Jul 7 00:01:59.951496 sudo[3418]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 7 00:01:59.951702 sudo[3418]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 7 00:02:00.302959 sudo[3418]: pam_unix(sudo:session): session closed for user root Jul 7 00:02:00.457746 kubelet[3384]: I0707 00:02:00.457695 3384 apiserver.go:52] "Watching apiserver" Jul 7 00:02:00.479861 kubelet[3384]: I0707 00:02:00.479810 3384 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 00:02:00.542412 kubelet[3384]: I0707 00:02:00.541982 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ci-4372.0.1-a-609ca7abb9" podStartSLOduration=1.541969436 podStartE2EDuration="1.541969436s" podCreationTimestamp="2025-07-07 00:01:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:02:00.540808731 +0000 UTC m=+1.194864854" watchObservedRunningTime="2025-07-07 00:02:00.541969436 +0000 UTC m=+1.196025567" Jul 7 00:02:00.564856 kubelet[3384]: I0707 00:02:00.564240 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372.0.1-a-609ca7abb9" podStartSLOduration=1.564228432 podStartE2EDuration="1.564228432s" podCreationTimestamp="2025-07-07 00:01:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:02:00.563552405 +0000 UTC m=+1.217608528" watchObservedRunningTime="2025-07-07 00:02:00.564228432 +0000 UTC m=+1.218284555" Jul 7 00:02:00.564856 kubelet[3384]: I0707 00:02:00.564801 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372.0.1-a-609ca7abb9" podStartSLOduration=1.564791016 podStartE2EDuration="1.564791016s" podCreationTimestamp="2025-07-07 00:01:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:02:00.553973247 +0000 UTC m=+1.208029378" watchObservedRunningTime="2025-07-07 00:02:00.564791016 +0000 UTC m=+1.218847155" Jul 7 00:02:01.468696 sudo[2365]: pam_unix(sudo:session): session closed for user root Jul 7 00:02:01.550048 sshd[2364]: Connection closed by 10.200.16.10 port 42778 Jul 7 00:02:01.550545 sshd-session[2362]: pam_unix(sshd:session): session closed for user core Jul 7 00:02:01.553785 systemd-logind[1880]: Session 9 logged out. Waiting for processes to exit. 
Jul 7 00:02:01.554017 systemd[1]: sshd@6-10.200.20.4:22-10.200.16.10:42778.service: Deactivated successfully. Jul 7 00:02:01.556438 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 00:02:01.556673 systemd[1]: session-9.scope: Consumed 3.680s CPU time, 266.6M memory peak. Jul 7 00:02:01.559133 systemd-logind[1880]: Removed session 9. Jul 7 00:02:03.990522 kubelet[3384]: I0707 00:02:03.990488 3384 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 00:02:03.991271 containerd[1907]: time="2025-07-07T00:02:03.991213881Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 7 00:02:03.992020 kubelet[3384]: I0707 00:02:03.991587 3384 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 00:02:04.930395 systemd[1]: Created slice kubepods-besteffort-pod5b4c6054_4a1f_4ee0_bd48_261941f5aee8.slice - libcontainer container kubepods-besteffort-pod5b4c6054_4a1f_4ee0_bd48_261941f5aee8.slice. Jul 7 00:02:04.942077 systemd[1]: Created slice kubepods-burstable-podda2d577c_1a96_4387_ad21_a7ad1db235e8.slice - libcontainer container kubepods-burstable-podda2d577c_1a96_4387_ad21_a7ad1db235e8.slice. 
Jul 7 00:02:05.012415 kubelet[3384]: I0707 00:02:05.012375 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b4c6054-4a1f-4ee0-bd48-261941f5aee8-xtables-lock\") pod \"kube-proxy-cbptq\" (UID: \"5b4c6054-4a1f-4ee0-bd48-261941f5aee8\") " pod="kube-system/kube-proxy-cbptq" Jul 7 00:02:05.012415 kubelet[3384]: I0707 00:02:05.012412 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq9k5\" (UniqueName: \"kubernetes.io/projected/5b4c6054-4a1f-4ee0-bd48-261941f5aee8-kube-api-access-gq9k5\") pod \"kube-proxy-cbptq\" (UID: \"5b4c6054-4a1f-4ee0-bd48-261941f5aee8\") " pod="kube-system/kube-proxy-cbptq" Jul 7 00:02:05.012793 kubelet[3384]: I0707 00:02:05.012455 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/da2d577c-1a96-4387-ad21-a7ad1db235e8-hubble-tls\") pod \"cilium-p8kdl\" (UID: \"da2d577c-1a96-4387-ad21-a7ad1db235e8\") " pod="kube-system/cilium-p8kdl" Jul 7 00:02:05.012793 kubelet[3384]: I0707 00:02:05.012468 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5b4c6054-4a1f-4ee0-bd48-261941f5aee8-kube-proxy\") pod \"kube-proxy-cbptq\" (UID: \"5b4c6054-4a1f-4ee0-bd48-261941f5aee8\") " pod="kube-system/kube-proxy-cbptq" Jul 7 00:02:05.012793 kubelet[3384]: I0707 00:02:05.012479 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b4c6054-4a1f-4ee0-bd48-261941f5aee8-lib-modules\") pod \"kube-proxy-cbptq\" (UID: \"5b4c6054-4a1f-4ee0-bd48-261941f5aee8\") " pod="kube-system/kube-proxy-cbptq" Jul 7 00:02:05.012793 kubelet[3384]: I0707 00:02:05.012488 3384 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-etc-cni-netd\") pod \"cilium-p8kdl\" (UID: \"da2d577c-1a96-4387-ad21-a7ad1db235e8\") " pod="kube-system/cilium-p8kdl" Jul 7 00:02:05.012793 kubelet[3384]: I0707 00:02:05.012496 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r578h\" (UniqueName: \"kubernetes.io/projected/da2d577c-1a96-4387-ad21-a7ad1db235e8-kube-api-access-r578h\") pod \"cilium-p8kdl\" (UID: \"da2d577c-1a96-4387-ad21-a7ad1db235e8\") " pod="kube-system/cilium-p8kdl" Jul 7 00:02:05.012793 kubelet[3384]: I0707 00:02:05.012565 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-xtables-lock\") pod \"cilium-p8kdl\" (UID: \"da2d577c-1a96-4387-ad21-a7ad1db235e8\") " pod="kube-system/cilium-p8kdl" Jul 7 00:02:05.012889 kubelet[3384]: I0707 00:02:05.012577 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-cilium-cgroup\") pod \"cilium-p8kdl\" (UID: \"da2d577c-1a96-4387-ad21-a7ad1db235e8\") " pod="kube-system/cilium-p8kdl" Jul 7 00:02:05.012889 kubelet[3384]: I0707 00:02:05.012587 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/da2d577c-1a96-4387-ad21-a7ad1db235e8-cilium-config-path\") pod \"cilium-p8kdl\" (UID: \"da2d577c-1a96-4387-ad21-a7ad1db235e8\") " pod="kube-system/cilium-p8kdl" Jul 7 00:02:05.012889 kubelet[3384]: I0707 00:02:05.012598 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-lib-modules\") pod \"cilium-p8kdl\" (UID: \"da2d577c-1a96-4387-ad21-a7ad1db235e8\") " pod="kube-system/cilium-p8kdl" Jul 7 00:02:05.012889 kubelet[3384]: I0707 00:02:05.012629 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-cilium-run\") pod \"cilium-p8kdl\" (UID: \"da2d577c-1a96-4387-ad21-a7ad1db235e8\") " pod="kube-system/cilium-p8kdl" Jul 7 00:02:05.012889 kubelet[3384]: I0707 00:02:05.012640 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-hostproc\") pod \"cilium-p8kdl\" (UID: \"da2d577c-1a96-4387-ad21-a7ad1db235e8\") " pod="kube-system/cilium-p8kdl" Jul 7 00:02:05.012889 kubelet[3384]: I0707 00:02:05.012653 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/da2d577c-1a96-4387-ad21-a7ad1db235e8-clustermesh-secrets\") pod \"cilium-p8kdl\" (UID: \"da2d577c-1a96-4387-ad21-a7ad1db235e8\") " pod="kube-system/cilium-p8kdl" Jul 7 00:02:05.012975 kubelet[3384]: I0707 00:02:05.012661 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-cni-path\") pod \"cilium-p8kdl\" (UID: \"da2d577c-1a96-4387-ad21-a7ad1db235e8\") " pod="kube-system/cilium-p8kdl" Jul 7 00:02:05.012975 kubelet[3384]: I0707 00:02:05.012671 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-bpf-maps\") pod \"cilium-p8kdl\" (UID: \"da2d577c-1a96-4387-ad21-a7ad1db235e8\") " 
pod="kube-system/cilium-p8kdl" Jul 7 00:02:05.012975 kubelet[3384]: I0707 00:02:05.012702 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-host-proc-sys-net\") pod \"cilium-p8kdl\" (UID: \"da2d577c-1a96-4387-ad21-a7ad1db235e8\") " pod="kube-system/cilium-p8kdl" Jul 7 00:02:05.012975 kubelet[3384]: I0707 00:02:05.012711 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-host-proc-sys-kernel\") pod \"cilium-p8kdl\" (UID: \"da2d577c-1a96-4387-ad21-a7ad1db235e8\") " pod="kube-system/cilium-p8kdl" Jul 7 00:02:05.134302 systemd[1]: Created slice kubepods-besteffort-podc0950446_913f_4dbe_b2ec_9aa6a61b8c9b.slice - libcontainer container kubepods-besteffort-podc0950446_913f_4dbe_b2ec_9aa6a61b8c9b.slice. 
Jul 7 00:02:05.214063 kubelet[3384]: I0707 00:02:05.213946 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlmg6\" (UniqueName: \"kubernetes.io/projected/c0950446-913f-4dbe-b2ec-9aa6a61b8c9b-kube-api-access-dlmg6\") pod \"cilium-operator-5d85765b45-jnjgh\" (UID: \"c0950446-913f-4dbe-b2ec-9aa6a61b8c9b\") " pod="kube-system/cilium-operator-5d85765b45-jnjgh" Jul 7 00:02:05.214236 kubelet[3384]: I0707 00:02:05.214222 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0950446-913f-4dbe-b2ec-9aa6a61b8c9b-cilium-config-path\") pod \"cilium-operator-5d85765b45-jnjgh\" (UID: \"c0950446-913f-4dbe-b2ec-9aa6a61b8c9b\") " pod="kube-system/cilium-operator-5d85765b45-jnjgh" Jul 7 00:02:05.237615 containerd[1907]: time="2025-07-07T00:02:05.237586215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cbptq,Uid:5b4c6054-4a1f-4ee0-bd48-261941f5aee8,Namespace:kube-system,Attempt:0,}" Jul 7 00:02:05.245250 containerd[1907]: time="2025-07-07T00:02:05.245224889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p8kdl,Uid:da2d577c-1a96-4387-ad21-a7ad1db235e8,Namespace:kube-system,Attempt:0,}" Jul 7 00:02:05.436595 containerd[1907]: time="2025-07-07T00:02:05.436521083Z" level=info msg="connecting to shim 301600ffdb7db421f5eafef433b4683ab857535e9c6f9789a99ca441ce8a0bdb" address="unix:///run/containerd/s/fad4c0831972b01efa6ee524726f0696611de076aebb8ad8ee96c0e4d7b27505" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:02:05.439807 containerd[1907]: time="2025-07-07T00:02:05.439711685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-jnjgh,Uid:c0950446-913f-4dbe-b2ec-9aa6a61b8c9b,Namespace:kube-system,Attempt:0,}" Jul 7 00:02:05.453317 systemd[1]: Started 
cri-containerd-301600ffdb7db421f5eafef433b4683ab857535e9c6f9789a99ca441ce8a0bdb.scope - libcontainer container 301600ffdb7db421f5eafef433b4683ab857535e9c6f9789a99ca441ce8a0bdb. Jul 7 00:02:05.483849 containerd[1907]: time="2025-07-07T00:02:05.483608840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cbptq,Uid:5b4c6054-4a1f-4ee0-bd48-261941f5aee8,Namespace:kube-system,Attempt:0,} returns sandbox id \"301600ffdb7db421f5eafef433b4683ab857535e9c6f9789a99ca441ce8a0bdb\"" Jul 7 00:02:05.486034 containerd[1907]: time="2025-07-07T00:02:05.485856544Z" level=info msg="CreateContainer within sandbox \"301600ffdb7db421f5eafef433b4683ab857535e9c6f9789a99ca441ce8a0bdb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 00:02:05.513516 containerd[1907]: time="2025-07-07T00:02:05.513473194Z" level=info msg="connecting to shim 6ffdd812b9923dacb2050ac181c3d58f34a0e9107534e9eb84fb1123c781d227" address="unix:///run/containerd/s/7f856c14a880cb7cb898bfbe868f962583522dd207708921cf852bc66bd5607f" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:02:05.529417 systemd[1]: Started cri-containerd-6ffdd812b9923dacb2050ac181c3d58f34a0e9107534e9eb84fb1123c781d227.scope - libcontainer container 6ffdd812b9923dacb2050ac181c3d58f34a0e9107534e9eb84fb1123c781d227. 
Jul 7 00:02:05.551206 containerd[1907]: time="2025-07-07T00:02:05.551044769Z" level=info msg="Container 5b28669a66c643c15d07d7e9e99b753740d373fbaa35ec4df33ac001974604f8: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:02:05.564528 containerd[1907]: time="2025-07-07T00:02:05.564496912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p8kdl,Uid:da2d577c-1a96-4387-ad21-a7ad1db235e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ffdd812b9923dacb2050ac181c3d58f34a0e9107534e9eb84fb1123c781d227\"" Jul 7 00:02:05.565665 containerd[1907]: time="2025-07-07T00:02:05.565565206Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 7 00:02:05.590728 containerd[1907]: time="2025-07-07T00:02:05.590697402Z" level=info msg="connecting to shim 685724da528e03f817f36bdbc4a2ec8b7b3d24309adac534448aa7d380941cd7" address="unix:///run/containerd/s/14e32957b567e2be566d6fbec66e469c6d3de0dedbda155f0021cafd6f29d9bd" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:02:05.601783 containerd[1907]: time="2025-07-07T00:02:05.601716436Z" level=info msg="CreateContainer within sandbox \"301600ffdb7db421f5eafef433b4683ab857535e9c6f9789a99ca441ce8a0bdb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5b28669a66c643c15d07d7e9e99b753740d373fbaa35ec4df33ac001974604f8\"" Jul 7 00:02:05.602516 containerd[1907]: time="2025-07-07T00:02:05.602489546Z" level=info msg="StartContainer for \"5b28669a66c643c15d07d7e9e99b753740d373fbaa35ec4df33ac001974604f8\"" Jul 7 00:02:05.604313 systemd[1]: Started cri-containerd-685724da528e03f817f36bdbc4a2ec8b7b3d24309adac534448aa7d380941cd7.scope - libcontainer container 685724da528e03f817f36bdbc4a2ec8b7b3d24309adac534448aa7d380941cd7. 
Jul 7 00:02:05.604858 containerd[1907]: time="2025-07-07T00:02:05.604832021Z" level=info msg="connecting to shim 5b28669a66c643c15d07d7e9e99b753740d373fbaa35ec4df33ac001974604f8" address="unix:///run/containerd/s/fad4c0831972b01efa6ee524726f0696611de076aebb8ad8ee96c0e4d7b27505" protocol=ttrpc version=3 Jul 7 00:02:05.621295 systemd[1]: Started cri-containerd-5b28669a66c643c15d07d7e9e99b753740d373fbaa35ec4df33ac001974604f8.scope - libcontainer container 5b28669a66c643c15d07d7e9e99b753740d373fbaa35ec4df33ac001974604f8. Jul 7 00:02:05.642244 containerd[1907]: time="2025-07-07T00:02:05.642209814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-jnjgh,Uid:c0950446-913f-4dbe-b2ec-9aa6a61b8c9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"685724da528e03f817f36bdbc4a2ec8b7b3d24309adac534448aa7d380941cd7\"" Jul 7 00:02:05.667222 containerd[1907]: time="2025-07-07T00:02:05.667159324Z" level=info msg="StartContainer for \"5b28669a66c643c15d07d7e9e99b753740d373fbaa35ec4df33ac001974604f8\" returns successfully" Jul 7 00:02:06.550573 kubelet[3384]: I0707 00:02:06.550511 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cbptq" podStartSLOduration=2.550496654 podStartE2EDuration="2.550496654s" podCreationTimestamp="2025-07-07 00:02:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:02:06.549671911 +0000 UTC m=+7.203728042" watchObservedRunningTime="2025-07-07 00:02:06.550496654 +0000 UTC m=+7.204552785" Jul 7 00:02:10.231326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2490459425.mount: Deactivated successfully. 
Jul 7 00:02:12.472149 containerd[1907]: time="2025-07-07T00:02:12.472095368Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:12.476306 containerd[1907]: time="2025-07-07T00:02:12.476276981Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 7 00:02:12.482252 containerd[1907]: time="2025-07-07T00:02:12.482214788Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:12.483177 containerd[1907]: time="2025-07-07T00:02:12.483056755Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.917470436s" Jul 7 00:02:12.483177 containerd[1907]: time="2025-07-07T00:02:12.483082692Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 7 00:02:12.484581 containerd[1907]: time="2025-07-07T00:02:12.484412105Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 7 00:02:12.485854 containerd[1907]: time="2025-07-07T00:02:12.485830977Z" level=info msg="CreateContainer within sandbox \"6ffdd812b9923dacb2050ac181c3d58f34a0e9107534e9eb84fb1123c781d227\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 7 00:02:12.553331 containerd[1907]: time="2025-07-07T00:02:12.553302811Z" level=info msg="Container cf12ba96509980251c28580d7fbb482c8fd164a3143d5710ec9000deb9eaead1: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:02:12.554380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount883771869.mount: Deactivated successfully. Jul 7 00:02:12.577008 containerd[1907]: time="2025-07-07T00:02:12.576980842Z" level=info msg="CreateContainer within sandbox \"6ffdd812b9923dacb2050ac181c3d58f34a0e9107534e9eb84fb1123c781d227\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cf12ba96509980251c28580d7fbb482c8fd164a3143d5710ec9000deb9eaead1\"" Jul 7 00:02:12.578079 containerd[1907]: time="2025-07-07T00:02:12.577355804Z" level=info msg="StartContainer for \"cf12ba96509980251c28580d7fbb482c8fd164a3143d5710ec9000deb9eaead1\"" Jul 7 00:02:12.578079 containerd[1907]: time="2025-07-07T00:02:12.577883379Z" level=info msg="connecting to shim cf12ba96509980251c28580d7fbb482c8fd164a3143d5710ec9000deb9eaead1" address="unix:///run/containerd/s/7f856c14a880cb7cb898bfbe868f962583522dd207708921cf852bc66bd5607f" protocol=ttrpc version=3 Jul 7 00:02:12.593316 systemd[1]: Started cri-containerd-cf12ba96509980251c28580d7fbb482c8fd164a3143d5710ec9000deb9eaead1.scope - libcontainer container cf12ba96509980251c28580d7fbb482c8fd164a3143d5710ec9000deb9eaead1. Jul 7 00:02:12.616910 containerd[1907]: time="2025-07-07T00:02:12.616885007Z" level=info msg="StartContainer for \"cf12ba96509980251c28580d7fbb482c8fd164a3143d5710ec9000deb9eaead1\" returns successfully" Jul 7 00:02:12.622848 systemd[1]: cri-containerd-cf12ba96509980251c28580d7fbb482c8fd164a3143d5710ec9000deb9eaead1.scope: Deactivated successfully. 
Jul 7 00:02:12.625587 containerd[1907]: time="2025-07-07T00:02:12.625412798Z" level=info msg="received exit event container_id:\"cf12ba96509980251c28580d7fbb482c8fd164a3143d5710ec9000deb9eaead1\" id:\"cf12ba96509980251c28580d7fbb482c8fd164a3143d5710ec9000deb9eaead1\" pid:3802 exited_at:{seconds:1751846532 nanos:625144494}" Jul 7 00:02:12.625587 containerd[1907]: time="2025-07-07T00:02:12.625565594Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cf12ba96509980251c28580d7fbb482c8fd164a3143d5710ec9000deb9eaead1\" id:\"cf12ba96509980251c28580d7fbb482c8fd164a3143d5710ec9000deb9eaead1\" pid:3802 exited_at:{seconds:1751846532 nanos:625144494}" Jul 7 00:02:12.637792 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf12ba96509980251c28580d7fbb482c8fd164a3143d5710ec9000deb9eaead1-rootfs.mount: Deactivated successfully. Jul 7 00:02:14.553656 containerd[1907]: time="2025-07-07T00:02:14.553591020Z" level=info msg="CreateContainer within sandbox \"6ffdd812b9923dacb2050ac181c3d58f34a0e9107534e9eb84fb1123c781d227\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 7 00:02:14.579729 containerd[1907]: time="2025-07-07T00:02:14.579348358Z" level=info msg="Container 2e1595d091a6bf9628487013764d99c9df3145e4d52c62e46ae33304d979ad0e: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:02:14.582321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3258438637.mount: Deactivated successfully. 
Jul 7 00:02:14.599617 containerd[1907]: time="2025-07-07T00:02:14.599588133Z" level=info msg="CreateContainer within sandbox \"6ffdd812b9923dacb2050ac181c3d58f34a0e9107534e9eb84fb1123c781d227\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2e1595d091a6bf9628487013764d99c9df3145e4d52c62e46ae33304d979ad0e\"" Jul 7 00:02:14.600041 containerd[1907]: time="2025-07-07T00:02:14.599973319Z" level=info msg="StartContainer for \"2e1595d091a6bf9628487013764d99c9df3145e4d52c62e46ae33304d979ad0e\"" Jul 7 00:02:14.600757 containerd[1907]: time="2025-07-07T00:02:14.600705844Z" level=info msg="connecting to shim 2e1595d091a6bf9628487013764d99c9df3145e4d52c62e46ae33304d979ad0e" address="unix:///run/containerd/s/7f856c14a880cb7cb898bfbe868f962583522dd207708921cf852bc66bd5607f" protocol=ttrpc version=3 Jul 7 00:02:14.615293 systemd[1]: Started cri-containerd-2e1595d091a6bf9628487013764d99c9df3145e4d52c62e46ae33304d979ad0e.scope - libcontainer container 2e1595d091a6bf9628487013764d99c9df3145e4d52c62e46ae33304d979ad0e. Jul 7 00:02:14.636866 containerd[1907]: time="2025-07-07T00:02:14.636827576Z" level=info msg="StartContainer for \"2e1595d091a6bf9628487013764d99c9df3145e4d52c62e46ae33304d979ad0e\" returns successfully" Jul 7 00:02:14.644660 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 00:02:14.645065 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:02:14.645206 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:02:14.647166 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:02:14.648168 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Jul 7 00:02:14.649756 containerd[1907]: time="2025-07-07T00:02:14.649592541Z" level=info msg="received exit event container_id:\"2e1595d091a6bf9628487013764d99c9df3145e4d52c62e46ae33304d979ad0e\" id:\"2e1595d091a6bf9628487013764d99c9df3145e4d52c62e46ae33304d979ad0e\" pid:3846 exited_at:{seconds:1751846534 nanos:649389623}" Jul 7 00:02:14.649756 containerd[1907]: time="2025-07-07T00:02:14.649736137Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2e1595d091a6bf9628487013764d99c9df3145e4d52c62e46ae33304d979ad0e\" id:\"2e1595d091a6bf9628487013764d99c9df3145e4d52c62e46ae33304d979ad0e\" pid:3846 exited_at:{seconds:1751846534 nanos:649389623}" Jul 7 00:02:14.649910 systemd[1]: cri-containerd-2e1595d091a6bf9628487013764d99c9df3145e4d52c62e46ae33304d979ad0e.scope: Deactivated successfully. Jul 7 00:02:14.667026 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:02:15.554327 containerd[1907]: time="2025-07-07T00:02:15.554164530Z" level=info msg="CreateContainer within sandbox \"6ffdd812b9923dacb2050ac181c3d58f34a0e9107534e9eb84fb1123c781d227\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 7 00:02:15.559906 containerd[1907]: time="2025-07-07T00:02:15.559860465Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:15.567460 containerd[1907]: time="2025-07-07T00:02:15.567433925Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 7 00:02:15.578407 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e1595d091a6bf9628487013764d99c9df3145e4d52c62e46ae33304d979ad0e-rootfs.mount: Deactivated successfully. 
Jul 7 00:02:15.579315 containerd[1907]: time="2025-07-07T00:02:15.579288177Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:15.580210 containerd[1907]: time="2025-07-07T00:02:15.579977469Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.095544058s" Jul 7 00:02:15.580210 containerd[1907]: time="2025-07-07T00:02:15.580003325Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 7 00:02:15.582476 containerd[1907]: time="2025-07-07T00:02:15.582435202Z" level=info msg="CreateContainer within sandbox \"685724da528e03f817f36bdbc4a2ec8b7b3d24309adac534448aa7d380941cd7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 7 00:02:15.591257 containerd[1907]: time="2025-07-07T00:02:15.589290130Z" level=info msg="Container ea4424882ae770d287026abbcb422cc6487951189334fa5ef1f58fbf17ee950c: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:02:15.617905 containerd[1907]: time="2025-07-07T00:02:15.617808944Z" level=info msg="CreateContainer within sandbox \"6ffdd812b9923dacb2050ac181c3d58f34a0e9107534e9eb84fb1123c781d227\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ea4424882ae770d287026abbcb422cc6487951189334fa5ef1f58fbf17ee950c\"" Jul 7 00:02:15.618711 containerd[1907]: time="2025-07-07T00:02:15.618653936Z" level=info msg="StartContainer for 
\"ea4424882ae770d287026abbcb422cc6487951189334fa5ef1f58fbf17ee950c\"" Jul 7 00:02:15.619640 containerd[1907]: time="2025-07-07T00:02:15.619617003Z" level=info msg="connecting to shim ea4424882ae770d287026abbcb422cc6487951189334fa5ef1f58fbf17ee950c" address="unix:///run/containerd/s/7f856c14a880cb7cb898bfbe868f962583522dd207708921cf852bc66bd5607f" protocol=ttrpc version=3 Jul 7 00:02:15.625608 containerd[1907]: time="2025-07-07T00:02:15.625580730Z" level=info msg="Container 0363899fe23b69d1a1a229df0bbe2ff0517ccbc14f34fee018bb41e7e423cf00: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:02:15.634423 systemd[1]: Started cri-containerd-ea4424882ae770d287026abbcb422cc6487951189334fa5ef1f58fbf17ee950c.scope - libcontainer container ea4424882ae770d287026abbcb422cc6487951189334fa5ef1f58fbf17ee950c. Jul 7 00:02:15.641168 containerd[1907]: time="2025-07-07T00:02:15.641146886Z" level=info msg="CreateContainer within sandbox \"685724da528e03f817f36bdbc4a2ec8b7b3d24309adac534448aa7d380941cd7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0363899fe23b69d1a1a229df0bbe2ff0517ccbc14f34fee018bb41e7e423cf00\"" Jul 7 00:02:15.642099 containerd[1907]: time="2025-07-07T00:02:15.642081320Z" level=info msg="StartContainer for \"0363899fe23b69d1a1a229df0bbe2ff0517ccbc14f34fee018bb41e7e423cf00\"" Jul 7 00:02:15.643365 containerd[1907]: time="2025-07-07T00:02:15.643252521Z" level=info msg="connecting to shim 0363899fe23b69d1a1a229df0bbe2ff0517ccbc14f34fee018bb41e7e423cf00" address="unix:///run/containerd/s/14e32957b567e2be566d6fbec66e469c6d3de0dedbda155f0021cafd6f29d9bd" protocol=ttrpc version=3 Jul 7 00:02:15.657344 systemd[1]: Started cri-containerd-0363899fe23b69d1a1a229df0bbe2ff0517ccbc14f34fee018bb41e7e423cf00.scope - libcontainer container 0363899fe23b69d1a1a229df0bbe2ff0517ccbc14f34fee018bb41e7e423cf00. 
Jul 7 00:02:15.663894 systemd[1]: cri-containerd-ea4424882ae770d287026abbcb422cc6487951189334fa5ef1f58fbf17ee950c.scope: Deactivated successfully. Jul 7 00:02:15.666672 containerd[1907]: time="2025-07-07T00:02:15.666609175Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ea4424882ae770d287026abbcb422cc6487951189334fa5ef1f58fbf17ee950c\" id:\"ea4424882ae770d287026abbcb422cc6487951189334fa5ef1f58fbf17ee950c\" pid:3905 exited_at:{seconds:1751846535 nanos:666386225}" Jul 7 00:02:15.670608 containerd[1907]: time="2025-07-07T00:02:15.670553197Z" level=info msg="received exit event container_id:\"ea4424882ae770d287026abbcb422cc6487951189334fa5ef1f58fbf17ee950c\" id:\"ea4424882ae770d287026abbcb422cc6487951189334fa5ef1f58fbf17ee950c\" pid:3905 exited_at:{seconds:1751846535 nanos:666386225}" Jul 7 00:02:15.678537 containerd[1907]: time="2025-07-07T00:02:15.678431370Z" level=info msg="StartContainer for \"ea4424882ae770d287026abbcb422cc6487951189334fa5ef1f58fbf17ee950c\" returns successfully" Jul 7 00:02:15.695109 containerd[1907]: time="2025-07-07T00:02:15.694971817Z" level=info msg="StartContainer for \"0363899fe23b69d1a1a229df0bbe2ff0517ccbc14f34fee018bb41e7e423cf00\" returns successfully" Jul 7 00:02:16.561794 containerd[1907]: time="2025-07-07T00:02:16.561756596Z" level=info msg="CreateContainer within sandbox \"6ffdd812b9923dacb2050ac181c3d58f34a0e9107534e9eb84fb1123c781d227\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 7 00:02:16.578971 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea4424882ae770d287026abbcb422cc6487951189334fa5ef1f58fbf17ee950c-rootfs.mount: Deactivated successfully. 
Jul 7 00:02:16.598293 containerd[1907]: time="2025-07-07T00:02:16.598258938Z" level=info msg="Container 60763619110869d2b29eac6d6f994c5f9022c23fd9ea9ef2820b76ed9dd6a3db: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:02:16.601049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1912509494.mount: Deactivated successfully. Jul 7 00:02:16.622569 containerd[1907]: time="2025-07-07T00:02:16.622471424Z" level=info msg="CreateContainer within sandbox \"6ffdd812b9923dacb2050ac181c3d58f34a0e9107534e9eb84fb1123c781d227\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"60763619110869d2b29eac6d6f994c5f9022c23fd9ea9ef2820b76ed9dd6a3db\"" Jul 7 00:02:16.622835 containerd[1907]: time="2025-07-07T00:02:16.622818578Z" level=info msg="StartContainer for \"60763619110869d2b29eac6d6f994c5f9022c23fd9ea9ef2820b76ed9dd6a3db\"" Jul 7 00:02:16.623646 containerd[1907]: time="2025-07-07T00:02:16.623595488Z" level=info msg="connecting to shim 60763619110869d2b29eac6d6f994c5f9022c23fd9ea9ef2820b76ed9dd6a3db" address="unix:///run/containerd/s/7f856c14a880cb7cb898bfbe868f962583522dd207708921cf852bc66bd5607f" protocol=ttrpc version=3 Jul 7 00:02:16.646314 systemd[1]: Started cri-containerd-60763619110869d2b29eac6d6f994c5f9022c23fd9ea9ef2820b76ed9dd6a3db.scope - libcontainer container 60763619110869d2b29eac6d6f994c5f9022c23fd9ea9ef2820b76ed9dd6a3db. Jul 7 00:02:16.693256 systemd[1]: cri-containerd-60763619110869d2b29eac6d6f994c5f9022c23fd9ea9ef2820b76ed9dd6a3db.scope: Deactivated successfully. 
Jul 7 00:02:16.694711 containerd[1907]: time="2025-07-07T00:02:16.694679382Z" level=info msg="TaskExit event in podsandbox handler container_id:\"60763619110869d2b29eac6d6f994c5f9022c23fd9ea9ef2820b76ed9dd6a3db\" id:\"60763619110869d2b29eac6d6f994c5f9022c23fd9ea9ef2820b76ed9dd6a3db\" pid:3977 exited_at:{seconds:1751846536 nanos:694406599}" Jul 7 00:02:16.704598 containerd[1907]: time="2025-07-07T00:02:16.704491497Z" level=info msg="received exit event container_id:\"60763619110869d2b29eac6d6f994c5f9022c23fd9ea9ef2820b76ed9dd6a3db\" id:\"60763619110869d2b29eac6d6f994c5f9022c23fd9ea9ef2820b76ed9dd6a3db\" pid:3977 exited_at:{seconds:1751846536 nanos:694406599}" Jul 7 00:02:16.709456 containerd[1907]: time="2025-07-07T00:02:16.709422187Z" level=info msg="StartContainer for \"60763619110869d2b29eac6d6f994c5f9022c23fd9ea9ef2820b76ed9dd6a3db\" returns successfully" Jul 7 00:02:16.717616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60763619110869d2b29eac6d6f994c5f9022c23fd9ea9ef2820b76ed9dd6a3db-rootfs.mount: Deactivated successfully. 
Jul 7 00:02:17.567409 containerd[1907]: time="2025-07-07T00:02:17.567365306Z" level=info msg="CreateContainer within sandbox \"6ffdd812b9923dacb2050ac181c3d58f34a0e9107534e9eb84fb1123c781d227\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 7 00:02:17.581284 kubelet[3384]: I0707 00:02:17.581228 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-jnjgh" podStartSLOduration=2.644532925 podStartE2EDuration="12.581216141s" podCreationTimestamp="2025-07-07 00:02:05 +0000 UTC" firstStartedPulling="2025-07-07 00:02:05.643770234 +0000 UTC m=+6.297826357" lastFinishedPulling="2025-07-07 00:02:15.58045345 +0000 UTC m=+16.234509573" observedRunningTime="2025-07-07 00:02:16.616803906 +0000 UTC m=+17.270860029" watchObservedRunningTime="2025-07-07 00:02:17.581216141 +0000 UTC m=+18.235272264" Jul 7 00:02:17.591946 containerd[1907]: time="2025-07-07T00:02:17.591598823Z" level=info msg="Container e4c8e4b9bede0c1b57af18bb65928bf687f03df4049a3bdd6d5a221c67bfdcc5: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:02:17.609599 containerd[1907]: time="2025-07-07T00:02:17.609572673Z" level=info msg="CreateContainer within sandbox \"6ffdd812b9923dacb2050ac181c3d58f34a0e9107534e9eb84fb1123c781d227\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e4c8e4b9bede0c1b57af18bb65928bf687f03df4049a3bdd6d5a221c67bfdcc5\"" Jul 7 00:02:17.610196 containerd[1907]: time="2025-07-07T00:02:17.610139123Z" level=info msg="StartContainer for \"e4c8e4b9bede0c1b57af18bb65928bf687f03df4049a3bdd6d5a221c67bfdcc5\"" Jul 7 00:02:17.610958 containerd[1907]: time="2025-07-07T00:02:17.610934581Z" level=info msg="connecting to shim e4c8e4b9bede0c1b57af18bb65928bf687f03df4049a3bdd6d5a221c67bfdcc5" address="unix:///run/containerd/s/7f856c14a880cb7cb898bfbe868f962583522dd207708921cf852bc66bd5607f" protocol=ttrpc version=3 Jul 7 00:02:17.629292 systemd[1]: Started 
cri-containerd-e4c8e4b9bede0c1b57af18bb65928bf687f03df4049a3bdd6d5a221c67bfdcc5.scope - libcontainer container e4c8e4b9bede0c1b57af18bb65928bf687f03df4049a3bdd6d5a221c67bfdcc5. Jul 7 00:02:17.657088 containerd[1907]: time="2025-07-07T00:02:17.657061643Z" level=info msg="StartContainer for \"e4c8e4b9bede0c1b57af18bb65928bf687f03df4049a3bdd6d5a221c67bfdcc5\" returns successfully" Jul 7 00:02:17.717451 containerd[1907]: time="2025-07-07T00:02:17.717419001Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e4c8e4b9bede0c1b57af18bb65928bf687f03df4049a3bdd6d5a221c67bfdcc5\" id:\"920ce5a8a927abfbfa5692a277d8667a946f16e1727ee7818eb53dc9373e9658\" pid:4048 exited_at:{seconds:1751846537 nanos:717198314}" Jul 7 00:02:17.751137 kubelet[3384]: I0707 00:02:17.751115 3384 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 7 00:02:17.790614 systemd[1]: Created slice kubepods-burstable-poda9157fc7_5db7_4102_ac2c_7bad919fce79.slice - libcontainer container kubepods-burstable-poda9157fc7_5db7_4102_ac2c_7bad919fce79.slice. Jul 7 00:02:17.800393 systemd[1]: Created slice kubepods-burstable-pod831ac116_9050_4adc_b179_5ae753a50a78.slice - libcontainer container kubepods-burstable-pod831ac116_9050_4adc_b179_5ae753a50a78.slice. 
Jul 7 00:02:17.887773 kubelet[3384]: I0707 00:02:17.887691 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/831ac116-9050-4adc-b179-5ae753a50a78-config-volume\") pod \"coredns-7c65d6cfc9-bl7qz\" (UID: \"831ac116-9050-4adc-b179-5ae753a50a78\") " pod="kube-system/coredns-7c65d6cfc9-bl7qz" Jul 7 00:02:17.888337 kubelet[3384]: I0707 00:02:17.888095 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qd6s\" (UniqueName: \"kubernetes.io/projected/831ac116-9050-4adc-b179-5ae753a50a78-kube-api-access-7qd6s\") pod \"coredns-7c65d6cfc9-bl7qz\" (UID: \"831ac116-9050-4adc-b179-5ae753a50a78\") " pod="kube-system/coredns-7c65d6cfc9-bl7qz" Jul 7 00:02:17.888337 kubelet[3384]: I0707 00:02:17.888286 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9157fc7-5db7-4102-ac2c-7bad919fce79-config-volume\") pod \"coredns-7c65d6cfc9-8n5ww\" (UID: \"a9157fc7-5db7-4102-ac2c-7bad919fce79\") " pod="kube-system/coredns-7c65d6cfc9-8n5ww" Jul 7 00:02:17.888337 kubelet[3384]: I0707 00:02:17.888306 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srm2n\" (UniqueName: \"kubernetes.io/projected/a9157fc7-5db7-4102-ac2c-7bad919fce79-kube-api-access-srm2n\") pod \"coredns-7c65d6cfc9-8n5ww\" (UID: \"a9157fc7-5db7-4102-ac2c-7bad919fce79\") " pod="kube-system/coredns-7c65d6cfc9-8n5ww" Jul 7 00:02:18.095715 containerd[1907]: time="2025-07-07T00:02:18.095672879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8n5ww,Uid:a9157fc7-5db7-4102-ac2c-7bad919fce79,Namespace:kube-system,Attempt:0,}" Jul 7 00:02:18.122422 containerd[1907]: time="2025-07-07T00:02:18.122397158Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-bl7qz,Uid:831ac116-9050-4adc-b179-5ae753a50a78,Namespace:kube-system,Attempt:0,}" Jul 7 00:02:18.592211 kubelet[3384]: I0707 00:02:18.590502 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-p8kdl" podStartSLOduration=7.672022929 podStartE2EDuration="14.590488098s" podCreationTimestamp="2025-07-07 00:02:04 +0000 UTC" firstStartedPulling="2025-07-07 00:02:05.565286774 +0000 UTC m=+6.219342897" lastFinishedPulling="2025-07-07 00:02:12.483751943 +0000 UTC m=+13.137808066" observedRunningTime="2025-07-07 00:02:18.58845572 +0000 UTC m=+19.242511843" watchObservedRunningTime="2025-07-07 00:02:18.590488098 +0000 UTC m=+19.244544229" Jul 7 00:02:19.741636 systemd-networkd[1572]: cilium_host: Link UP Jul 7 00:02:19.741731 systemd-networkd[1572]: cilium_net: Link UP Jul 7 00:02:19.741819 systemd-networkd[1572]: cilium_host: Gained carrier Jul 7 00:02:19.741895 systemd-networkd[1572]: cilium_net: Gained carrier Jul 7 00:02:19.871127 systemd-networkd[1572]: cilium_vxlan: Link UP Jul 7 00:02:19.871132 systemd-networkd[1572]: cilium_vxlan: Gained carrier Jul 7 00:02:20.067220 kernel: NET: Registered PF_ALG protocol family Jul 7 00:02:20.406327 systemd-networkd[1572]: cilium_net: Gained IPv6LL Jul 7 00:02:20.498693 systemd-networkd[1572]: lxc_health: Link UP Jul 7 00:02:20.507748 systemd-networkd[1572]: lxc_health: Gained carrier Jul 7 00:02:20.599292 systemd-networkd[1572]: cilium_host: Gained IPv6LL Jul 7 00:02:20.634766 systemd-networkd[1572]: lxc419c39adc2b1: Link UP Jul 7 00:02:20.639210 kernel: eth0: renamed from tmpb672f Jul 7 00:02:20.641061 systemd-networkd[1572]: lxc419c39adc2b1: Gained carrier Jul 7 00:02:20.658237 systemd-networkd[1572]: lxcfc9b81cec9cd: Link UP Jul 7 00:02:20.665211 kernel: eth0: renamed from tmp5e9ed Jul 7 00:02:20.668038 systemd-networkd[1572]: lxcfc9b81cec9cd: Gained carrier Jul 7 00:02:21.750375 systemd-networkd[1572]: cilium_vxlan: Gained IPv6LL Jul 7 
00:02:21.814295 systemd-networkd[1572]: lxc419c39adc2b1: Gained IPv6LL Jul 7 00:02:22.070338 systemd-networkd[1572]: lxcfc9b81cec9cd: Gained IPv6LL Jul 7 00:02:22.198329 systemd-networkd[1572]: lxc_health: Gained IPv6LL Jul 7 00:02:23.154245 containerd[1907]: time="2025-07-07T00:02:23.154169965Z" level=info msg="connecting to shim b672f1095bd8ac328f1bfcdd092a317b7d1f7ad253fd429f9a34e5337d32efed" address="unix:///run/containerd/s/742992821eac42e842c767b6b172618089598363a82711ee488b8d5d49adb266" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:02:23.154981 containerd[1907]: time="2025-07-07T00:02:23.154952603Z" level=info msg="connecting to shim 5e9edad725632c72892fa4dd1b03ebc43338fb0c3406037438c6f8b637bab10c" address="unix:///run/containerd/s/4f8abbd179f21376cc12e98c6072f800a494ea61d5fd7eba8e6138d6feff45f0" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:02:23.189295 systemd[1]: Started cri-containerd-5e9edad725632c72892fa4dd1b03ebc43338fb0c3406037438c6f8b637bab10c.scope - libcontainer container 5e9edad725632c72892fa4dd1b03ebc43338fb0c3406037438c6f8b637bab10c. Jul 7 00:02:23.190042 systemd[1]: Started cri-containerd-b672f1095bd8ac328f1bfcdd092a317b7d1f7ad253fd429f9a34e5337d32efed.scope - libcontainer container b672f1095bd8ac328f1bfcdd092a317b7d1f7ad253fd429f9a34e5337d32efed. 
Jul 7 00:02:23.227100 containerd[1907]: time="2025-07-07T00:02:23.227072525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8n5ww,Uid:a9157fc7-5db7-4102-ac2c-7bad919fce79,Namespace:kube-system,Attempt:0,} returns sandbox id \"b672f1095bd8ac328f1bfcdd092a317b7d1f7ad253fd429f9a34e5337d32efed\"" Jul 7 00:02:23.230880 containerd[1907]: time="2025-07-07T00:02:23.230857448Z" level=info msg="CreateContainer within sandbox \"b672f1095bd8ac328f1bfcdd092a317b7d1f7ad253fd429f9a34e5337d32efed\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:02:23.235930 containerd[1907]: time="2025-07-07T00:02:23.235911015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bl7qz,Uid:831ac116-9050-4adc-b179-5ae753a50a78,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e9edad725632c72892fa4dd1b03ebc43338fb0c3406037438c6f8b637bab10c\"" Jul 7 00:02:23.237600 containerd[1907]: time="2025-07-07T00:02:23.237554998Z" level=info msg="CreateContainer within sandbox \"5e9edad725632c72892fa4dd1b03ebc43338fb0c3406037438c6f8b637bab10c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:02:23.278117 containerd[1907]: time="2025-07-07T00:02:23.278080017Z" level=info msg="Container af1a34299b9e1f1a14d577822464e716bf1a113b8829d8a4e5fbdcfdca569e43: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:02:23.285634 containerd[1907]: time="2025-07-07T00:02:23.285267853Z" level=info msg="Container bcc1cc81bca6ea34c9ee22c7497017a3c89ecc86e8e76bd015681bc1e2f96c90: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:02:23.305549 containerd[1907]: time="2025-07-07T00:02:23.305519330Z" level=info msg="CreateContainer within sandbox \"b672f1095bd8ac328f1bfcdd092a317b7d1f7ad253fd429f9a34e5337d32efed\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"af1a34299b9e1f1a14d577822464e716bf1a113b8829d8a4e5fbdcfdca569e43\"" Jul 7 00:02:23.306593 containerd[1907]: time="2025-07-07T00:02:23.305872412Z" 
level=info msg="StartContainer for \"af1a34299b9e1f1a14d577822464e716bf1a113b8829d8a4e5fbdcfdca569e43\"" Jul 7 00:02:23.306593 containerd[1907]: time="2025-07-07T00:02:23.306537951Z" level=info msg="connecting to shim af1a34299b9e1f1a14d577822464e716bf1a113b8829d8a4e5fbdcfdca569e43" address="unix:///run/containerd/s/742992821eac42e842c767b6b172618089598363a82711ee488b8d5d49adb266" protocol=ttrpc version=3 Jul 7 00:02:23.311462 containerd[1907]: time="2025-07-07T00:02:23.311408625Z" level=info msg="CreateContainer within sandbox \"5e9edad725632c72892fa4dd1b03ebc43338fb0c3406037438c6f8b637bab10c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bcc1cc81bca6ea34c9ee22c7497017a3c89ecc86e8e76bd015681bc1e2f96c90\"" Jul 7 00:02:23.311983 containerd[1907]: time="2025-07-07T00:02:23.311897439Z" level=info msg="StartContainer for \"bcc1cc81bca6ea34c9ee22c7497017a3c89ecc86e8e76bd015681bc1e2f96c90\"" Jul 7 00:02:23.312424 containerd[1907]: time="2025-07-07T00:02:23.312401061Z" level=info msg="connecting to shim bcc1cc81bca6ea34c9ee22c7497017a3c89ecc86e8e76bd015681bc1e2f96c90" address="unix:///run/containerd/s/4f8abbd179f21376cc12e98c6072f800a494ea61d5fd7eba8e6138d6feff45f0" protocol=ttrpc version=3 Jul 7 00:02:23.326287 systemd[1]: Started cri-containerd-af1a34299b9e1f1a14d577822464e716bf1a113b8829d8a4e5fbdcfdca569e43.scope - libcontainer container af1a34299b9e1f1a14d577822464e716bf1a113b8829d8a4e5fbdcfdca569e43. Jul 7 00:02:23.334301 systemd[1]: Started cri-containerd-bcc1cc81bca6ea34c9ee22c7497017a3c89ecc86e8e76bd015681bc1e2f96c90.scope - libcontainer container bcc1cc81bca6ea34c9ee22c7497017a3c89ecc86e8e76bd015681bc1e2f96c90. 
Jul 7 00:02:23.365312 containerd[1907]: time="2025-07-07T00:02:23.365286711Z" level=info msg="StartContainer for \"bcc1cc81bca6ea34c9ee22c7497017a3c89ecc86e8e76bd015681bc1e2f96c90\" returns successfully" Jul 7 00:02:23.366504 containerd[1907]: time="2025-07-07T00:02:23.366484937Z" level=info msg="StartContainer for \"af1a34299b9e1f1a14d577822464e716bf1a113b8829d8a4e5fbdcfdca569e43\" returns successfully" Jul 7 00:02:23.594888 kubelet[3384]: I0707 00:02:23.594434 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-8n5ww" podStartSLOduration=18.594421223 podStartE2EDuration="18.594421223s" podCreationTimestamp="2025-07-07 00:02:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:02:23.593190372 +0000 UTC m=+24.247246495" watchObservedRunningTime="2025-07-07 00:02:23.594421223 +0000 UTC m=+24.248477346" Jul 7 00:02:23.629265 kubelet[3384]: I0707 00:02:23.629222 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-bl7qz" podStartSLOduration=18.629210024 podStartE2EDuration="18.629210024s" podCreationTimestamp="2025-07-07 00:02:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:02:23.62836968 +0000 UTC m=+24.282425803" watchObservedRunningTime="2025-07-07 00:02:23.629210024 +0000 UTC m=+24.283266147" Jul 7 00:02:34.816727 kubelet[3384]: I0707 00:02:34.816620 3384 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:03:34.275102 systemd[1]: Started sshd@7-10.200.20.4:22-10.200.16.10:46988.service - OpenSSH per-connection server daemon (10.200.16.10:46988). 
Jul 7 00:03:34.754013 sshd[4702]: Accepted publickey for core from 10.200.16.10 port 46988 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:03:34.755065 sshd-session[4702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:03:34.758760 systemd-logind[1880]: New session 10 of user core. Jul 7 00:03:34.765486 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 00:03:35.157808 sshd[4704]: Connection closed by 10.200.16.10 port 46988 Jul 7 00:03:35.158437 sshd-session[4702]: pam_unix(sshd:session): session closed for user core Jul 7 00:03:35.160802 systemd[1]: sshd@7-10.200.20.4:22-10.200.16.10:46988.service: Deactivated successfully. Jul 7 00:03:35.162425 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 00:03:35.164056 systemd-logind[1880]: Session 10 logged out. Waiting for processes to exit. Jul 7 00:03:35.165428 systemd-logind[1880]: Removed session 10. Jul 7 00:03:40.244749 systemd[1]: Started sshd@8-10.200.20.4:22-10.200.16.10:33372.service - OpenSSH per-connection server daemon (10.200.16.10:33372). Jul 7 00:03:40.724536 sshd[4720]: Accepted publickey for core from 10.200.16.10 port 33372 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:03:40.724938 sshd-session[4720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:03:40.728168 systemd-logind[1880]: New session 11 of user core. Jul 7 00:03:40.733298 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 00:03:41.114409 sshd[4722]: Connection closed by 10.200.16.10 port 33372 Jul 7 00:03:41.114939 sshd-session[4720]: pam_unix(sshd:session): session closed for user core Jul 7 00:03:41.117648 systemd[1]: sshd@8-10.200.20.4:22-10.200.16.10:33372.service: Deactivated successfully. Jul 7 00:03:41.119279 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 00:03:41.119891 systemd-logind[1880]: Session 11 logged out. Waiting for processes to exit. 
Jul 7 00:03:41.121279 systemd-logind[1880]: Removed session 11. Jul 7 00:03:46.213529 systemd[1]: Started sshd@9-10.200.20.4:22-10.200.16.10:33386.service - OpenSSH per-connection server daemon (10.200.16.10:33386). Jul 7 00:03:46.686826 sshd[4735]: Accepted publickey for core from 10.200.16.10 port 33386 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:03:46.687766 sshd-session[4735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:03:46.691076 systemd-logind[1880]: New session 12 of user core. Jul 7 00:03:46.696293 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 00:03:47.075508 sshd[4737]: Connection closed by 10.200.16.10 port 33386 Jul 7 00:03:47.076048 sshd-session[4735]: pam_unix(sshd:session): session closed for user core Jul 7 00:03:47.079002 systemd[1]: sshd@9-10.200.20.4:22-10.200.16.10:33386.service: Deactivated successfully. Jul 7 00:03:47.081026 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 00:03:47.081818 systemd-logind[1880]: Session 12 logged out. Waiting for processes to exit. Jul 7 00:03:47.083482 systemd-logind[1880]: Removed session 12. Jul 7 00:03:52.162082 systemd[1]: Started sshd@10-10.200.20.4:22-10.200.16.10:50638.service - OpenSSH per-connection server daemon (10.200.16.10:50638). Jul 7 00:03:52.647154 sshd[4749]: Accepted publickey for core from 10.200.16.10 port 50638 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:03:52.648203 sshd-session[4749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:03:52.651606 systemd-logind[1880]: New session 13 of user core. Jul 7 00:03:52.667385 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jul 7 00:03:53.043665 sshd[4751]: Connection closed by 10.200.16.10 port 50638 Jul 7 00:03:53.044345 sshd-session[4749]: pam_unix(sshd:session): session closed for user core Jul 7 00:03:53.047122 systemd[1]: sshd@10-10.200.20.4:22-10.200.16.10:50638.service: Deactivated successfully. Jul 7 00:03:53.048486 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 00:03:53.049243 systemd-logind[1880]: Session 13 logged out. Waiting for processes to exit. Jul 7 00:03:53.050370 systemd-logind[1880]: Removed session 13. Jul 7 00:03:53.130655 systemd[1]: Started sshd@11-10.200.20.4:22-10.200.16.10:50640.service - OpenSSH per-connection server daemon (10.200.16.10:50640). Jul 7 00:03:53.615371 sshd[4764]: Accepted publickey for core from 10.200.16.10 port 50640 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:03:53.616368 sshd-session[4764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:03:53.620255 systemd-logind[1880]: New session 14 of user core. Jul 7 00:03:53.624290 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 00:03:54.024213 sshd[4766]: Connection closed by 10.200.16.10 port 50640 Jul 7 00:03:54.024651 sshd-session[4764]: pam_unix(sshd:session): session closed for user core Jul 7 00:03:54.027291 systemd[1]: sshd@11-10.200.20.4:22-10.200.16.10:50640.service: Deactivated successfully. Jul 7 00:03:54.028929 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 00:03:54.029556 systemd-logind[1880]: Session 14 logged out. Waiting for processes to exit. Jul 7 00:03:54.030564 systemd-logind[1880]: Removed session 14. Jul 7 00:03:54.117831 systemd[1]: Started sshd@12-10.200.20.4:22-10.200.16.10:50650.service - OpenSSH per-connection server daemon (10.200.16.10:50650). 
Jul 7 00:03:54.615216 sshd[4776]: Accepted publickey for core from 10.200.16.10 port 50650 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:03:54.616223 sshd-session[4776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:03:54.622077 systemd-logind[1880]: New session 15 of user core. Jul 7 00:03:54.626312 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 00:03:55.007737 sshd[4778]: Connection closed by 10.200.16.10 port 50650 Jul 7 00:03:55.008150 sshd-session[4776]: pam_unix(sshd:session): session closed for user core Jul 7 00:03:55.010773 systemd[1]: sshd@12-10.200.20.4:22-10.200.16.10:50650.service: Deactivated successfully. Jul 7 00:03:55.012578 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 00:03:55.013386 systemd-logind[1880]: Session 15 logged out. Waiting for processes to exit. Jul 7 00:03:55.014680 systemd-logind[1880]: Removed session 15. Jul 7 00:04:00.098804 systemd[1]: Started sshd@13-10.200.20.4:22-10.200.16.10:53986.service - OpenSSH per-connection server daemon (10.200.16.10:53986). Jul 7 00:04:00.595286 sshd[4792]: Accepted publickey for core from 10.200.16.10 port 53986 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:04:00.596325 sshd-session[4792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:04:00.600406 systemd-logind[1880]: New session 16 of user core. Jul 7 00:04:00.604294 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 00:04:00.983777 sshd[4794]: Connection closed by 10.200.16.10 port 53986 Jul 7 00:04:00.985262 sshd-session[4792]: pam_unix(sshd:session): session closed for user core Jul 7 00:04:00.987940 systemd[1]: sshd@13-10.200.20.4:22-10.200.16.10:53986.service: Deactivated successfully. Jul 7 00:04:00.989975 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 00:04:00.990755 systemd-logind[1880]: Session 16 logged out. Waiting for processes to exit. 
Jul 7 00:04:00.992105 systemd-logind[1880]: Removed session 16. Jul 7 00:04:06.112206 systemd[1]: Started sshd@14-10.200.20.4:22-10.200.16.10:53992.service - OpenSSH per-connection server daemon (10.200.16.10:53992). Jul 7 00:04:06.591243 sshd[4808]: Accepted publickey for core from 10.200.16.10 port 53992 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:04:06.592583 sshd-session[4808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:04:06.596098 systemd-logind[1880]: New session 17 of user core. Jul 7 00:04:06.602296 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 00:04:06.975199 sshd[4810]: Connection closed by 10.200.16.10 port 53992 Jul 7 00:04:06.975692 sshd-session[4808]: pam_unix(sshd:session): session closed for user core Jul 7 00:04:06.978401 systemd-logind[1880]: Session 17 logged out. Waiting for processes to exit. Jul 7 00:04:06.978940 systemd[1]: sshd@14-10.200.20.4:22-10.200.16.10:53992.service: Deactivated successfully. Jul 7 00:04:06.980325 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 00:04:06.981406 systemd-logind[1880]: Removed session 17. Jul 7 00:04:07.059359 systemd[1]: Started sshd@15-10.200.20.4:22-10.200.16.10:54006.service - OpenSSH per-connection server daemon (10.200.16.10:54006). Jul 7 00:04:07.535483 sshd[4822]: Accepted publickey for core from 10.200.16.10 port 54006 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:04:07.536466 sshd-session[4822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:04:07.540149 systemd-logind[1880]: New session 18 of user core. Jul 7 00:04:07.547478 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jul 7 00:04:07.962212 sshd[4824]: Connection closed by 10.200.16.10 port 54006 Jul 7 00:04:07.962569 sshd-session[4822]: pam_unix(sshd:session): session closed for user core Jul 7 00:04:07.965176 systemd[1]: sshd@15-10.200.20.4:22-10.200.16.10:54006.service: Deactivated successfully. Jul 7 00:04:07.966563 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 00:04:07.967140 systemd-logind[1880]: Session 18 logged out. Waiting for processes to exit. Jul 7 00:04:07.968311 systemd-logind[1880]: Removed session 18. Jul 7 00:04:08.050808 systemd[1]: Started sshd@16-10.200.20.4:22-10.200.16.10:54018.service - OpenSSH per-connection server daemon (10.200.16.10:54018). Jul 7 00:04:08.526629 sshd[4833]: Accepted publickey for core from 10.200.16.10 port 54018 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:04:08.527659 sshd-session[4833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:04:08.532198 systemd-logind[1880]: New session 19 of user core. Jul 7 00:04:08.542341 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 7 00:04:10.003708 sshd[4835]: Connection closed by 10.200.16.10 port 54018 Jul 7 00:04:10.004727 sshd-session[4833]: pam_unix(sshd:session): session closed for user core Jul 7 00:04:10.008441 systemd[1]: sshd@16-10.200.20.4:22-10.200.16.10:54018.service: Deactivated successfully. Jul 7 00:04:10.011161 systemd[1]: session-19.scope: Deactivated successfully. Jul 7 00:04:10.012091 systemd-logind[1880]: Session 19 logged out. Waiting for processes to exit. Jul 7 00:04:10.013304 systemd-logind[1880]: Removed session 19. Jul 7 00:04:10.097757 systemd[1]: Started sshd@17-10.200.20.4:22-10.200.16.10:53376.service - OpenSSH per-connection server daemon (10.200.16.10:53376). 
Jul 7 00:04:10.592846 sshd[4852]: Accepted publickey for core from 10.200.16.10 port 53376 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:04:10.593896 sshd-session[4852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:04:10.597650 systemd-logind[1880]: New session 20 of user core. Jul 7 00:04:10.601305 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 7 00:04:11.055220 sshd[4854]: Connection closed by 10.200.16.10 port 53376 Jul 7 00:04:11.055518 sshd-session[4852]: pam_unix(sshd:session): session closed for user core Jul 7 00:04:11.058181 systemd[1]: sshd@17-10.200.20.4:22-10.200.16.10:53376.service: Deactivated successfully. Jul 7 00:04:11.060576 systemd[1]: session-20.scope: Deactivated successfully. Jul 7 00:04:11.061568 systemd-logind[1880]: Session 20 logged out. Waiting for processes to exit. Jul 7 00:04:11.062945 systemd-logind[1880]: Removed session 20. Jul 7 00:04:11.145337 systemd[1]: Started sshd@18-10.200.20.4:22-10.200.16.10:53388.service - OpenSSH per-connection server daemon (10.200.16.10:53388). Jul 7 00:04:11.639276 sshd[4863]: Accepted publickey for core from 10.200.16.10 port 53388 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:04:11.640315 sshd-session[4863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:04:11.643810 systemd-logind[1880]: New session 21 of user core. Jul 7 00:04:11.658312 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 7 00:04:12.031154 sshd[4865]: Connection closed by 10.200.16.10 port 53388 Jul 7 00:04:12.031538 sshd-session[4863]: pam_unix(sshd:session): session closed for user core Jul 7 00:04:12.033792 systemd-logind[1880]: Session 21 logged out. Waiting for processes to exit. Jul 7 00:04:12.034486 systemd[1]: sshd@18-10.200.20.4:22-10.200.16.10:53388.service: Deactivated successfully. Jul 7 00:04:12.035941 systemd[1]: session-21.scope: Deactivated successfully. 
Jul 7 00:04:12.037391 systemd-logind[1880]: Removed session 21. Jul 7 00:04:17.127790 systemd[1]: Started sshd@19-10.200.20.4:22-10.200.16.10:53396.service - OpenSSH per-connection server daemon (10.200.16.10:53396). Jul 7 00:04:17.606628 sshd[4880]: Accepted publickey for core from 10.200.16.10 port 53396 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:04:17.607656 sshd-session[4880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:04:17.611512 systemd-logind[1880]: New session 22 of user core. Jul 7 00:04:17.615314 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 7 00:04:17.996699 sshd[4882]: Connection closed by 10.200.16.10 port 53396 Jul 7 00:04:17.997089 sshd-session[4880]: pam_unix(sshd:session): session closed for user core Jul 7 00:04:17.999973 systemd-logind[1880]: Session 22 logged out. Waiting for processes to exit. Jul 7 00:04:18.000527 systemd[1]: sshd@19-10.200.20.4:22-10.200.16.10:53396.service: Deactivated successfully. Jul 7 00:04:18.002430 systemd[1]: session-22.scope: Deactivated successfully. Jul 7 00:04:18.003723 systemd-logind[1880]: Removed session 22. Jul 7 00:04:23.087217 systemd[1]: Started sshd@20-10.200.20.4:22-10.200.16.10:39944.service - OpenSSH per-connection server daemon (10.200.16.10:39944). Jul 7 00:04:23.565874 sshd[4893]: Accepted publickey for core from 10.200.16.10 port 39944 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:04:23.566861 sshd-session[4893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:04:23.570249 systemd-logind[1880]: New session 23 of user core. Jul 7 00:04:23.576323 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 7 00:04:23.945097 sshd[4895]: Connection closed by 10.200.16.10 port 39944 Jul 7 00:04:23.945571 sshd-session[4893]: pam_unix(sshd:session): session closed for user core Jul 7 00:04:23.948210 systemd-logind[1880]: Session 23 logged out. 
Waiting for processes to exit. Jul 7 00:04:23.948304 systemd[1]: sshd@20-10.200.20.4:22-10.200.16.10:39944.service: Deactivated successfully. Jul 7 00:04:23.949622 systemd[1]: session-23.scope: Deactivated successfully. Jul 7 00:04:23.952160 systemd-logind[1880]: Removed session 23. Jul 7 00:04:29.041101 systemd[1]: Started sshd@21-10.200.20.4:22-10.200.16.10:39954.service - OpenSSH per-connection server daemon (10.200.16.10:39954). Jul 7 00:04:29.514655 sshd[4907]: Accepted publickey for core from 10.200.16.10 port 39954 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:04:29.515595 sshd-session[4907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:04:29.519135 systemd-logind[1880]: New session 24 of user core. Jul 7 00:04:29.528283 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 7 00:04:29.900642 sshd[4909]: Connection closed by 10.200.16.10 port 39954 Jul 7 00:04:29.901075 sshd-session[4907]: pam_unix(sshd:session): session closed for user core Jul 7 00:04:29.904275 systemd[1]: sshd@21-10.200.20.4:22-10.200.16.10:39954.service: Deactivated successfully. Jul 7 00:04:29.906386 systemd[1]: session-24.scope: Deactivated successfully. Jul 7 00:04:29.908594 systemd-logind[1880]: Session 24 logged out. Waiting for processes to exit. Jul 7 00:04:29.909547 systemd-logind[1880]: Removed session 24. Jul 7 00:04:29.987350 systemd[1]: Started sshd@22-10.200.20.4:22-10.200.16.10:54238.service - OpenSSH per-connection server daemon (10.200.16.10:54238). Jul 7 00:04:30.468056 sshd[4921]: Accepted publickey for core from 10.200.16.10 port 54238 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:04:30.469050 sshd-session[4921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:04:30.472523 systemd-logind[1880]: New session 25 of user core. Jul 7 00:04:30.480450 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jul 7 00:04:32.017332 containerd[1907]: time="2025-07-07T00:04:32.017285346Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 00:04:32.022290 containerd[1907]: time="2025-07-07T00:04:32.022261996Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e4c8e4b9bede0c1b57af18bb65928bf687f03df4049a3bdd6d5a221c67bfdcc5\" id:\"3920221d22f4221bbafca335948bf9e6532f008c7af1ef7615bad89ba73511b3\" pid:4940 exited_at:{seconds:1751846672 nanos:21495762}" Jul 7 00:04:32.024568 containerd[1907]: time="2025-07-07T00:04:32.024542306Z" level=info msg="StopContainer for \"e4c8e4b9bede0c1b57af18bb65928bf687f03df4049a3bdd6d5a221c67bfdcc5\" with timeout 2 (s)" Jul 7 00:04:32.024768 containerd[1907]: time="2025-07-07T00:04:32.024750625Z" level=info msg="Stop container \"e4c8e4b9bede0c1b57af18bb65928bf687f03df4049a3bdd6d5a221c67bfdcc5\" with signal terminated" Jul 7 00:04:32.029901 systemd-networkd[1572]: lxc_health: Link DOWN Jul 7 00:04:32.029906 systemd-networkd[1572]: lxc_health: Lost carrier Jul 7 00:04:32.033592 containerd[1907]: time="2025-07-07T00:04:32.033568343Z" level=info msg="StopContainer for \"0363899fe23b69d1a1a229df0bbe2ff0517ccbc14f34fee018bb41e7e423cf00\" with timeout 30 (s)" Jul 7 00:04:32.034125 containerd[1907]: time="2025-07-07T00:04:32.034104721Z" level=info msg="Stop container \"0363899fe23b69d1a1a229df0bbe2ff0517ccbc14f34fee018bb41e7e423cf00\" with signal terminated" Jul 7 00:04:32.045828 systemd[1]: cri-containerd-e4c8e4b9bede0c1b57af18bb65928bf687f03df4049a3bdd6d5a221c67bfdcc5.scope: Deactivated successfully. Jul 7 00:04:32.046063 systemd[1]: cri-containerd-e4c8e4b9bede0c1b57af18bb65928bf687f03df4049a3bdd6d5a221c67bfdcc5.scope: Consumed 4.273s CPU time, 124.2M memory peak, 152K read from disk, 12.9M written to disk. 
Jul 7 00:04:32.047293 containerd[1907]: time="2025-07-07T00:04:32.047251843Z" level=info msg="received exit event container_id:\"e4c8e4b9bede0c1b57af18bb65928bf687f03df4049a3bdd6d5a221c67bfdcc5\" id:\"e4c8e4b9bede0c1b57af18bb65928bf687f03df4049a3bdd6d5a221c67bfdcc5\" pid:4015 exited_at:{seconds:1751846672 nanos:47083773}" Jul 7 00:04:32.047466 containerd[1907]: time="2025-07-07T00:04:32.047324285Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e4c8e4b9bede0c1b57af18bb65928bf687f03df4049a3bdd6d5a221c67bfdcc5\" id:\"e4c8e4b9bede0c1b57af18bb65928bf687f03df4049a3bdd6d5a221c67bfdcc5\" pid:4015 exited_at:{seconds:1751846672 nanos:47083773}" Jul 7 00:04:32.048849 systemd[1]: cri-containerd-0363899fe23b69d1a1a229df0bbe2ff0517ccbc14f34fee018bb41e7e423cf00.scope: Deactivated successfully. Jul 7 00:04:32.051668 containerd[1907]: time="2025-07-07T00:04:32.051595575Z" level=info msg="received exit event container_id:\"0363899fe23b69d1a1a229df0bbe2ff0517ccbc14f34fee018bb41e7e423cf00\" id:\"0363899fe23b69d1a1a229df0bbe2ff0517ccbc14f34fee018bb41e7e423cf00\" pid:3929 exited_at:{seconds:1751846672 nanos:50592301}" Jul 7 00:04:32.051862 containerd[1907]: time="2025-07-07T00:04:32.051842480Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0363899fe23b69d1a1a229df0bbe2ff0517ccbc14f34fee018bb41e7e423cf00\" id:\"0363899fe23b69d1a1a229df0bbe2ff0517ccbc14f34fee018bb41e7e423cf00\" pid:3929 exited_at:{seconds:1751846672 nanos:50592301}" Jul 7 00:04:32.065488 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4c8e4b9bede0c1b57af18bb65928bf687f03df4049a3bdd6d5a221c67bfdcc5-rootfs.mount: Deactivated successfully. Jul 7 00:04:32.069412 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0363899fe23b69d1a1a229df0bbe2ff0517ccbc14f34fee018bb41e7e423cf00-rootfs.mount: Deactivated successfully. 
Jul 7 00:04:32.198148 containerd[1907]: time="2025-07-07T00:04:32.198120740Z" level=info msg="StopContainer for \"0363899fe23b69d1a1a229df0bbe2ff0517ccbc14f34fee018bb41e7e423cf00\" returns successfully" Jul 7 00:04:32.198569 containerd[1907]: time="2025-07-07T00:04:32.198547235Z" level=info msg="StopPodSandbox for \"685724da528e03f817f36bdbc4a2ec8b7b3d24309adac534448aa7d380941cd7\"" Jul 7 00:04:32.198623 containerd[1907]: time="2025-07-07T00:04:32.198589508Z" level=info msg="Container to stop \"0363899fe23b69d1a1a229df0bbe2ff0517ccbc14f34fee018bb41e7e423cf00\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:04:32.202912 containerd[1907]: time="2025-07-07T00:04:32.202890335Z" level=info msg="StopContainer for \"e4c8e4b9bede0c1b57af18bb65928bf687f03df4049a3bdd6d5a221c67bfdcc5\" returns successfully" Jul 7 00:04:32.203386 containerd[1907]: time="2025-07-07T00:04:32.203304989Z" level=info msg="StopPodSandbox for \"6ffdd812b9923dacb2050ac181c3d58f34a0e9107534e9eb84fb1123c781d227\"" Jul 7 00:04:32.204136 containerd[1907]: time="2025-07-07T00:04:32.203678362Z" level=info msg="Container to stop \"2e1595d091a6bf9628487013764d99c9df3145e4d52c62e46ae33304d979ad0e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:04:32.204237 containerd[1907]: time="2025-07-07T00:04:32.204221333Z" level=info msg="Container to stop \"ea4424882ae770d287026abbcb422cc6487951189334fa5ef1f58fbf17ee950c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:04:32.204253 systemd[1]: cri-containerd-685724da528e03f817f36bdbc4a2ec8b7b3d24309adac534448aa7d380941cd7.scope: Deactivated successfully. 
Jul 7 00:04:32.204363 containerd[1907]: time="2025-07-07T00:04:32.204350833Z" level=info msg="Container to stop \"cf12ba96509980251c28580d7fbb482c8fd164a3143d5710ec9000deb9eaead1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:04:32.204443 containerd[1907]: time="2025-07-07T00:04:32.204428756Z" level=info msg="Container to stop \"60763619110869d2b29eac6d6f994c5f9022c23fd9ea9ef2820b76ed9dd6a3db\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:04:32.204519 containerd[1907]: time="2025-07-07T00:04:32.204505550Z" level=info msg="Container to stop \"e4c8e4b9bede0c1b57af18bb65928bf687f03df4049a3bdd6d5a221c67bfdcc5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:04:32.206834 containerd[1907]: time="2025-07-07T00:04:32.206794741Z" level=info msg="TaskExit event in podsandbox handler container_id:\"685724da528e03f817f36bdbc4a2ec8b7b3d24309adac534448aa7d380941cd7\" id:\"685724da528e03f817f36bdbc4a2ec8b7b3d24309adac534448aa7d380941cd7\" pid:3585 exit_status:137 exited_at:{seconds:1751846672 nanos:206481330}" Jul 7 00:04:32.210333 systemd[1]: cri-containerd-6ffdd812b9923dacb2050ac181c3d58f34a0e9107534e9eb84fb1123c781d227.scope: Deactivated successfully. Jul 7 00:04:32.225130 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ffdd812b9923dacb2050ac181c3d58f34a0e9107534e9eb84fb1123c781d227-rootfs.mount: Deactivated successfully. Jul 7 00:04:32.230978 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-685724da528e03f817f36bdbc4a2ec8b7b3d24309adac534448aa7d380941cd7-rootfs.mount: Deactivated successfully. 
Jul 7 00:04:32.244324 containerd[1907]: time="2025-07-07T00:04:32.244284375Z" level=info msg="received exit event sandbox_id:\"685724da528e03f817f36bdbc4a2ec8b7b3d24309adac534448aa7d380941cd7\" exit_status:137 exited_at:{seconds:1751846672 nanos:206481330}" Jul 7 00:04:32.244598 containerd[1907]: time="2025-07-07T00:04:32.244494878Z" level=info msg="shim disconnected" id=6ffdd812b9923dacb2050ac181c3d58f34a0e9107534e9eb84fb1123c781d227 namespace=k8s.io Jul 7 00:04:32.244598 containerd[1907]: time="2025-07-07T00:04:32.244517911Z" level=warning msg="cleaning up after shim disconnected" id=6ffdd812b9923dacb2050ac181c3d58f34a0e9107534e9eb84fb1123c781d227 namespace=k8s.io Jul 7 00:04:32.244598 containerd[1907]: time="2025-07-07T00:04:32.244536920Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:04:32.246239 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-685724da528e03f817f36bdbc4a2ec8b7b3d24309adac534448aa7d380941cd7-shm.mount: Deactivated successfully. Jul 7 00:04:32.246636 containerd[1907]: time="2025-07-07T00:04:32.246579078Z" level=info msg="TearDown network for sandbox \"685724da528e03f817f36bdbc4a2ec8b7b3d24309adac534448aa7d380941cd7\" successfully" Jul 7 00:04:32.246636 containerd[1907]: time="2025-07-07T00:04:32.246598350Z" level=info msg="StopPodSandbox for \"685724da528e03f817f36bdbc4a2ec8b7b3d24309adac534448aa7d380941cd7\" returns successfully" Jul 7 00:04:32.246982 containerd[1907]: time="2025-07-07T00:04:32.246899281Z" level=info msg="shim disconnected" id=685724da528e03f817f36bdbc4a2ec8b7b3d24309adac534448aa7d380941cd7 namespace=k8s.io Jul 7 00:04:32.247338 containerd[1907]: time="2025-07-07T00:04:32.247293870Z" level=warning msg="cleaning up after shim disconnected" id=685724da528e03f817f36bdbc4a2ec8b7b3d24309adac534448aa7d380941cd7 namespace=k8s.io Jul 7 00:04:32.247402 containerd[1907]: time="2025-07-07T00:04:32.247321895Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:04:32.255029 kubelet[3384]: I0707 
00:04:32.254962 3384 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlmg6\" (UniqueName: \"kubernetes.io/projected/c0950446-913f-4dbe-b2ec-9aa6a61b8c9b-kube-api-access-dlmg6\") pod \"c0950446-913f-4dbe-b2ec-9aa6a61b8c9b\" (UID: \"c0950446-913f-4dbe-b2ec-9aa6a61b8c9b\") " Jul 7 00:04:32.255967 kubelet[3384]: I0707 00:04:32.255321 3384 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0950446-913f-4dbe-b2ec-9aa6a61b8c9b-cilium-config-path\") pod \"c0950446-913f-4dbe-b2ec-9aa6a61b8c9b\" (UID: \"c0950446-913f-4dbe-b2ec-9aa6a61b8c9b\") " Jul 7 00:04:32.260021 kubelet[3384]: I0707 00:04:32.257475 3384 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0950446-913f-4dbe-b2ec-9aa6a61b8c9b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c0950446-913f-4dbe-b2ec-9aa6a61b8c9b" (UID: "c0950446-913f-4dbe-b2ec-9aa6a61b8c9b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 7 00:04:32.261161 kubelet[3384]: I0707 00:04:32.261135 3384 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0950446-913f-4dbe-b2ec-9aa6a61b8c9b-kube-api-access-dlmg6" (OuterVolumeSpecName: "kube-api-access-dlmg6") pod "c0950446-913f-4dbe-b2ec-9aa6a61b8c9b" (UID: "c0950446-913f-4dbe-b2ec-9aa6a61b8c9b"). InnerVolumeSpecName "kube-api-access-dlmg6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 00:04:32.262781 containerd[1907]: time="2025-07-07T00:04:32.262750871Z" level=info msg="received exit event sandbox_id:\"6ffdd812b9923dacb2050ac181c3d58f34a0e9107534e9eb84fb1123c781d227\" exit_status:137 exited_at:{seconds:1751846672 nanos:210992508}" Jul 7 00:04:32.263313 containerd[1907]: time="2025-07-07T00:04:32.263291641Z" level=info msg="TearDown network for sandbox \"6ffdd812b9923dacb2050ac181c3d58f34a0e9107534e9eb84fb1123c781d227\" successfully" Jul 7 00:04:32.263313 containerd[1907]: time="2025-07-07T00:04:32.263309186Z" level=info msg="StopPodSandbox for \"6ffdd812b9923dacb2050ac181c3d58f34a0e9107534e9eb84fb1123c781d227\" returns successfully" Jul 7 00:04:32.263554 containerd[1907]: time="2025-07-07T00:04:32.263530370Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6ffdd812b9923dacb2050ac181c3d58f34a0e9107534e9eb84fb1123c781d227\" id:\"6ffdd812b9923dacb2050ac181c3d58f34a0e9107534e9eb84fb1123c781d227\" pid:3538 exit_status:137 exited_at:{seconds:1751846672 nanos:210992508}" Jul 7 00:04:32.356037 kubelet[3384]: I0707 00:04:32.355892 3384 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlmg6\" (UniqueName: \"kubernetes.io/projected/c0950446-913f-4dbe-b2ec-9aa6a61b8c9b-kube-api-access-dlmg6\") on node \"ci-4372.0.1-a-609ca7abb9\" DevicePath \"\"" Jul 7 00:04:32.356037 kubelet[3384]: I0707 00:04:32.355937 3384 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0950446-913f-4dbe-b2ec-9aa6a61b8c9b-cilium-config-path\") on node \"ci-4372.0.1-a-609ca7abb9\" DevicePath \"\"" Jul 7 00:04:32.456330 kubelet[3384]: I0707 00:04:32.456301 3384 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-etc-cni-netd\") pod \"da2d577c-1a96-4387-ad21-a7ad1db235e8\" (UID: 
\"da2d577c-1a96-4387-ad21-a7ad1db235e8\") " Jul 7 00:04:32.456330 kubelet[3384]: I0707 00:04:32.456330 3384 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-host-proc-sys-kernel\") pod \"da2d577c-1a96-4387-ad21-a7ad1db235e8\" (UID: \"da2d577c-1a96-4387-ad21-a7ad1db235e8\") " Jul 7 00:04:32.456330 kubelet[3384]: I0707 00:04:32.456341 3384 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-bpf-maps\") pod \"da2d577c-1a96-4387-ad21-a7ad1db235e8\" (UID: \"da2d577c-1a96-4387-ad21-a7ad1db235e8\") " Jul 7 00:04:32.456330 kubelet[3384]: I0707 00:04:32.456357 3384 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/da2d577c-1a96-4387-ad21-a7ad1db235e8-hubble-tls\") pod \"da2d577c-1a96-4387-ad21-a7ad1db235e8\" (UID: \"da2d577c-1a96-4387-ad21-a7ad1db235e8\") " Jul 7 00:04:32.456330 kubelet[3384]: I0707 00:04:32.456366 3384 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-cilium-cgroup\") pod \"da2d577c-1a96-4387-ad21-a7ad1db235e8\" (UID: \"da2d577c-1a96-4387-ad21-a7ad1db235e8\") " Jul 7 00:04:32.456330 kubelet[3384]: I0707 00:04:32.456375 3384 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-xtables-lock\") pod \"da2d577c-1a96-4387-ad21-a7ad1db235e8\" (UID: \"da2d577c-1a96-4387-ad21-a7ad1db235e8\") " Jul 7 00:04:32.456593 kubelet[3384]: I0707 00:04:32.456389 3384 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/da2d577c-1a96-4387-ad21-a7ad1db235e8-clustermesh-secrets\") pod \"da2d577c-1a96-4387-ad21-a7ad1db235e8\" (UID: \"da2d577c-1a96-4387-ad21-a7ad1db235e8\") " Jul 7 00:04:32.456593 kubelet[3384]: I0707 00:04:32.456388 3384 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "da2d577c-1a96-4387-ad21-a7ad1db235e8" (UID: "da2d577c-1a96-4387-ad21-a7ad1db235e8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 00:04:32.456593 kubelet[3384]: I0707 00:04:32.456416 3384 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "da2d577c-1a96-4387-ad21-a7ad1db235e8" (UID: "da2d577c-1a96-4387-ad21-a7ad1db235e8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 00:04:32.456593 kubelet[3384]: I0707 00:04:32.456437 3384 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "da2d577c-1a96-4387-ad21-a7ad1db235e8" (UID: "da2d577c-1a96-4387-ad21-a7ad1db235e8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 00:04:32.456593 kubelet[3384]: I0707 00:04:32.456449 3384 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "da2d577c-1a96-4387-ad21-a7ad1db235e8" (UID: "da2d577c-1a96-4387-ad21-a7ad1db235e8"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 00:04:32.456908 kubelet[3384]: I0707 00:04:32.456398 3384 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-host-proc-sys-net\") pod \"da2d577c-1a96-4387-ad21-a7ad1db235e8\" (UID: \"da2d577c-1a96-4387-ad21-a7ad1db235e8\") " Jul 7 00:04:32.456908 kubelet[3384]: I0707 00:04:32.456716 3384 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/da2d577c-1a96-4387-ad21-a7ad1db235e8-cilium-config-path\") pod \"da2d577c-1a96-4387-ad21-a7ad1db235e8\" (UID: \"da2d577c-1a96-4387-ad21-a7ad1db235e8\") " Jul 7 00:04:32.456908 kubelet[3384]: I0707 00:04:32.456758 3384 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-lib-modules\") pod \"da2d577c-1a96-4387-ad21-a7ad1db235e8\" (UID: \"da2d577c-1a96-4387-ad21-a7ad1db235e8\") " Jul 7 00:04:32.456908 kubelet[3384]: I0707 00:04:32.456771 3384 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-cilium-run\") pod \"da2d577c-1a96-4387-ad21-a7ad1db235e8\" (UID: \"da2d577c-1a96-4387-ad21-a7ad1db235e8\") " Jul 7 00:04:32.456908 kubelet[3384]: I0707 00:04:32.456780 3384 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-cni-path\") pod \"da2d577c-1a96-4387-ad21-a7ad1db235e8\" (UID: \"da2d577c-1a96-4387-ad21-a7ad1db235e8\") " Jul 7 00:04:32.456908 kubelet[3384]: I0707 00:04:32.456792 3384 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r578h\" (UniqueName: 
\"kubernetes.io/projected/da2d577c-1a96-4387-ad21-a7ad1db235e8-kube-api-access-r578h\") pod \"da2d577c-1a96-4387-ad21-a7ad1db235e8\" (UID: \"da2d577c-1a96-4387-ad21-a7ad1db235e8\") " Jul 7 00:04:32.457026 kubelet[3384]: I0707 00:04:32.456802 3384 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-hostproc\") pod \"da2d577c-1a96-4387-ad21-a7ad1db235e8\" (UID: \"da2d577c-1a96-4387-ad21-a7ad1db235e8\") " Jul 7 00:04:32.457026 kubelet[3384]: I0707 00:04:32.456824 3384 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-etc-cni-netd\") on node \"ci-4372.0.1-a-609ca7abb9\" DevicePath \"\"" Jul 7 00:04:32.457026 kubelet[3384]: I0707 00:04:32.456837 3384 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-host-proc-sys-kernel\") on node \"ci-4372.0.1-a-609ca7abb9\" DevicePath \"\"" Jul 7 00:04:32.457026 kubelet[3384]: I0707 00:04:32.456844 3384 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-bpf-maps\") on node \"ci-4372.0.1-a-609ca7abb9\" DevicePath \"\"" Jul 7 00:04:32.457026 kubelet[3384]: I0707 00:04:32.456850 3384 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-host-proc-sys-net\") on node \"ci-4372.0.1-a-609ca7abb9\" DevicePath \"\"" Jul 7 00:04:32.457026 kubelet[3384]: I0707 00:04:32.456867 3384 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-hostproc" (OuterVolumeSpecName: "hostproc") pod "da2d577c-1a96-4387-ad21-a7ad1db235e8" (UID: "da2d577c-1a96-4387-ad21-a7ad1db235e8"). 
InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 00:04:32.457285 kubelet[3384]: I0707 00:04:32.457243 3384 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "da2d577c-1a96-4387-ad21-a7ad1db235e8" (UID: "da2d577c-1a96-4387-ad21-a7ad1db235e8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 00:04:32.457285 kubelet[3384]: I0707 00:04:32.457271 3384 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "da2d577c-1a96-4387-ad21-a7ad1db235e8" (UID: "da2d577c-1a96-4387-ad21-a7ad1db235e8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 00:04:32.458460 kubelet[3384]: I0707 00:04:32.458422 3384 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da2d577c-1a96-4387-ad21-a7ad1db235e8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "da2d577c-1a96-4387-ad21-a7ad1db235e8" (UID: "da2d577c-1a96-4387-ad21-a7ad1db235e8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 7 00:04:32.458536 kubelet[3384]: I0707 00:04:32.458524 3384 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "da2d577c-1a96-4387-ad21-a7ad1db235e8" (UID: "da2d577c-1a96-4387-ad21-a7ad1db235e8"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 00:04:32.458671 kubelet[3384]: I0707 00:04:32.458600 3384 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "da2d577c-1a96-4387-ad21-a7ad1db235e8" (UID: "da2d577c-1a96-4387-ad21-a7ad1db235e8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 00:04:32.458671 kubelet[3384]: I0707 00:04:32.458616 3384 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-cni-path" (OuterVolumeSpecName: "cni-path") pod "da2d577c-1a96-4387-ad21-a7ad1db235e8" (UID: "da2d577c-1a96-4387-ad21-a7ad1db235e8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 00:04:32.459738 kubelet[3384]: I0707 00:04:32.459706 3384 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da2d577c-1a96-4387-ad21-a7ad1db235e8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "da2d577c-1a96-4387-ad21-a7ad1db235e8" (UID: "da2d577c-1a96-4387-ad21-a7ad1db235e8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 00:04:32.459804 kubelet[3384]: I0707 00:04:32.459775 3384 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da2d577c-1a96-4387-ad21-a7ad1db235e8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "da2d577c-1a96-4387-ad21-a7ad1db235e8" (UID: "da2d577c-1a96-4387-ad21-a7ad1db235e8"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 7 00:04:32.460386 kubelet[3384]: I0707 00:04:32.460366 3384 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da2d577c-1a96-4387-ad21-a7ad1db235e8-kube-api-access-r578h" (OuterVolumeSpecName: "kube-api-access-r578h") pod "da2d577c-1a96-4387-ad21-a7ad1db235e8" (UID: "da2d577c-1a96-4387-ad21-a7ad1db235e8"). InnerVolumeSpecName "kube-api-access-r578h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 00:04:32.557107 kubelet[3384]: I0707 00:04:32.557035 3384 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/da2d577c-1a96-4387-ad21-a7ad1db235e8-clustermesh-secrets\") on node \"ci-4372.0.1-a-609ca7abb9\" DevicePath \"\"" Jul 7 00:04:32.557107 kubelet[3384]: I0707 00:04:32.557059 3384 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-xtables-lock\") on node \"ci-4372.0.1-a-609ca7abb9\" DevicePath \"\"" Jul 7 00:04:32.557107 kubelet[3384]: I0707 00:04:32.557068 3384 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-lib-modules\") on node \"ci-4372.0.1-a-609ca7abb9\" DevicePath \"\"" Jul 7 00:04:32.557107 kubelet[3384]: I0707 00:04:32.557074 3384 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-cilium-run\") on node \"ci-4372.0.1-a-609ca7abb9\" DevicePath \"\"" Jul 7 00:04:32.557107 kubelet[3384]: I0707 00:04:32.557080 3384 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-cni-path\") on node \"ci-4372.0.1-a-609ca7abb9\" DevicePath \"\"" Jul 7 00:04:32.557107 kubelet[3384]: I0707 00:04:32.557087 3384 
reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/da2d577c-1a96-4387-ad21-a7ad1db235e8-cilium-config-path\") on node \"ci-4372.0.1-a-609ca7abb9\" DevicePath \"\"" Jul 7 00:04:32.557107 kubelet[3384]: I0707 00:04:32.557092 3384 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-hostproc\") on node \"ci-4372.0.1-a-609ca7abb9\" DevicePath \"\"" Jul 7 00:04:32.557107 kubelet[3384]: I0707 00:04:32.557098 3384 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r578h\" (UniqueName: \"kubernetes.io/projected/da2d577c-1a96-4387-ad21-a7ad1db235e8-kube-api-access-r578h\") on node \"ci-4372.0.1-a-609ca7abb9\" DevicePath \"\"" Jul 7 00:04:32.557716 kubelet[3384]: I0707 00:04:32.557104 3384 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/da2d577c-1a96-4387-ad21-a7ad1db235e8-hubble-tls\") on node \"ci-4372.0.1-a-609ca7abb9\" DevicePath \"\"" Jul 7 00:04:32.557716 kubelet[3384]: I0707 00:04:32.557112 3384 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/da2d577c-1a96-4387-ad21-a7ad1db235e8-cilium-cgroup\") on node \"ci-4372.0.1-a-609ca7abb9\" DevicePath \"\"" Jul 7 00:04:32.783411 kubelet[3384]: I0707 00:04:32.783264 3384 scope.go:117] "RemoveContainer" containerID="0363899fe23b69d1a1a229df0bbe2ff0517ccbc14f34fee018bb41e7e423cf00" Jul 7 00:04:32.785051 systemd[1]: Removed slice kubepods-besteffort-podc0950446_913f_4dbe_b2ec_9aa6a61b8c9b.slice - libcontainer container kubepods-besteffort-podc0950446_913f_4dbe_b2ec_9aa6a61b8c9b.slice. 
Jul 7 00:04:32.786936 containerd[1907]: time="2025-07-07T00:04:32.786859200Z" level=info msg="RemoveContainer for \"0363899fe23b69d1a1a229df0bbe2ff0517ccbc14f34fee018bb41e7e423cf00\"" Jul 7 00:04:32.793598 systemd[1]: Removed slice kubepods-burstable-podda2d577c_1a96_4387_ad21_a7ad1db235e8.slice - libcontainer container kubepods-burstable-podda2d577c_1a96_4387_ad21_a7ad1db235e8.slice. Jul 7 00:04:32.793671 systemd[1]: kubepods-burstable-podda2d577c_1a96_4387_ad21_a7ad1db235e8.slice: Consumed 4.327s CPU time, 124.6M memory peak, 152K read from disk, 12.9M written to disk. Jul 7 00:04:32.796941 containerd[1907]: time="2025-07-07T00:04:32.796918992Z" level=info msg="RemoveContainer for \"0363899fe23b69d1a1a229df0bbe2ff0517ccbc14f34fee018bb41e7e423cf00\" returns successfully" Jul 7 00:04:32.797107 kubelet[3384]: I0707 00:04:32.797090 3384 scope.go:117] "RemoveContainer" containerID="0363899fe23b69d1a1a229df0bbe2ff0517ccbc14f34fee018bb41e7e423cf00" Jul 7 00:04:32.797353 containerd[1907]: time="2025-07-07T00:04:32.797309053Z" level=error msg="ContainerStatus for \"0363899fe23b69d1a1a229df0bbe2ff0517ccbc14f34fee018bb41e7e423cf00\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0363899fe23b69d1a1a229df0bbe2ff0517ccbc14f34fee018bb41e7e423cf00\": not found" Jul 7 00:04:32.797524 kubelet[3384]: E0707 00:04:32.797502 3384 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0363899fe23b69d1a1a229df0bbe2ff0517ccbc14f34fee018bb41e7e423cf00\": not found" containerID="0363899fe23b69d1a1a229df0bbe2ff0517ccbc14f34fee018bb41e7e423cf00" Jul 7 00:04:32.797590 kubelet[3384]: I0707 00:04:32.797527 3384 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0363899fe23b69d1a1a229df0bbe2ff0517ccbc14f34fee018bb41e7e423cf00"} err="failed to get container status 
\"0363899fe23b69d1a1a229df0bbe2ff0517ccbc14f34fee018bb41e7e423cf00\": rpc error: code = NotFound desc = an error occurred when try to find container \"0363899fe23b69d1a1a229df0bbe2ff0517ccbc14f34fee018bb41e7e423cf00\": not found" Jul 7 00:04:32.797631 kubelet[3384]: I0707 00:04:32.797591 3384 scope.go:117] "RemoveContainer" containerID="e4c8e4b9bede0c1b57af18bb65928bf687f03df4049a3bdd6d5a221c67bfdcc5" Jul 7 00:04:32.799672 containerd[1907]: time="2025-07-07T00:04:32.799649829Z" level=info msg="RemoveContainer for \"e4c8e4b9bede0c1b57af18bb65928bf687f03df4049a3bdd6d5a221c67bfdcc5\"" Jul 7 00:04:32.813252 containerd[1907]: time="2025-07-07T00:04:32.813224734Z" level=info msg="RemoveContainer for \"e4c8e4b9bede0c1b57af18bb65928bf687f03df4049a3bdd6d5a221c67bfdcc5\" returns successfully" Jul 7 00:04:32.813772 kubelet[3384]: I0707 00:04:32.813615 3384 scope.go:117] "RemoveContainer" containerID="60763619110869d2b29eac6d6f994c5f9022c23fd9ea9ef2820b76ed9dd6a3db" Jul 7 00:04:32.814940 containerd[1907]: time="2025-07-07T00:04:32.814913871Z" level=info msg="RemoveContainer for \"60763619110869d2b29eac6d6f994c5f9022c23fd9ea9ef2820b76ed9dd6a3db\"" Jul 7 00:04:32.824217 containerd[1907]: time="2025-07-07T00:04:32.824170636Z" level=info msg="RemoveContainer for \"60763619110869d2b29eac6d6f994c5f9022c23fd9ea9ef2820b76ed9dd6a3db\" returns successfully" Jul 7 00:04:32.824344 kubelet[3384]: I0707 00:04:32.824326 3384 scope.go:117] "RemoveContainer" containerID="ea4424882ae770d287026abbcb422cc6487951189334fa5ef1f58fbf17ee950c" Jul 7 00:04:32.825880 containerd[1907]: time="2025-07-07T00:04:32.825861054Z" level=info msg="RemoveContainer for \"ea4424882ae770d287026abbcb422cc6487951189334fa5ef1f58fbf17ee950c\"" Jul 7 00:04:32.838352 containerd[1907]: time="2025-07-07T00:04:32.838298063Z" level=info msg="RemoveContainer for \"ea4424882ae770d287026abbcb422cc6487951189334fa5ef1f58fbf17ee950c\" returns successfully" Jul 7 00:04:32.838480 kubelet[3384]: I0707 00:04:32.838423 3384 scope.go:117] 
"RemoveContainer" containerID="2e1595d091a6bf9628487013764d99c9df3145e4d52c62e46ae33304d979ad0e" Jul 7 00:04:32.839474 containerd[1907]: time="2025-07-07T00:04:32.839418118Z" level=info msg="RemoveContainer for \"2e1595d091a6bf9628487013764d99c9df3145e4d52c62e46ae33304d979ad0e\"" Jul 7 00:04:32.849536 containerd[1907]: time="2025-07-07T00:04:32.849506839Z" level=info msg="RemoveContainer for \"2e1595d091a6bf9628487013764d99c9df3145e4d52c62e46ae33304d979ad0e\" returns successfully" Jul 7 00:04:32.849689 kubelet[3384]: I0707 00:04:32.849637 3384 scope.go:117] "RemoveContainer" containerID="cf12ba96509980251c28580d7fbb482c8fd164a3143d5710ec9000deb9eaead1" Jul 7 00:04:32.850723 containerd[1907]: time="2025-07-07T00:04:32.850702696Z" level=info msg="RemoveContainer for \"cf12ba96509980251c28580d7fbb482c8fd164a3143d5710ec9000deb9eaead1\"" Jul 7 00:04:32.861315 containerd[1907]: time="2025-07-07T00:04:32.861293970Z" level=info msg="RemoveContainer for \"cf12ba96509980251c28580d7fbb482c8fd164a3143d5710ec9000deb9eaead1\" returns successfully" Jul 7 00:04:32.861436 kubelet[3384]: I0707 00:04:32.861421 3384 scope.go:117] "RemoveContainer" containerID="e4c8e4b9bede0c1b57af18bb65928bf687f03df4049a3bdd6d5a221c67bfdcc5" Jul 7 00:04:32.861743 kubelet[3384]: E0707 00:04:32.861681 3384 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e4c8e4b9bede0c1b57af18bb65928bf687f03df4049a3bdd6d5a221c67bfdcc5\": not found" containerID="e4c8e4b9bede0c1b57af18bb65928bf687f03df4049a3bdd6d5a221c67bfdcc5" Jul 7 00:04:32.861777 containerd[1907]: time="2025-07-07T00:04:32.861573428Z" level=error msg="ContainerStatus for \"e4c8e4b9bede0c1b57af18bb65928bf687f03df4049a3bdd6d5a221c67bfdcc5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e4c8e4b9bede0c1b57af18bb65928bf687f03df4049a3bdd6d5a221c67bfdcc5\": not found" Jul 7 00:04:32.861846 kubelet[3384]: I0707 
00:04:32.861830 3384 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e4c8e4b9bede0c1b57af18bb65928bf687f03df4049a3bdd6d5a221c67bfdcc5"} err="failed to get container status \"e4c8e4b9bede0c1b57af18bb65928bf687f03df4049a3bdd6d5a221c67bfdcc5\": rpc error: code = NotFound desc = an error occurred when try to find container \"e4c8e4b9bede0c1b57af18bb65928bf687f03df4049a3bdd6d5a221c67bfdcc5\": not found" Jul 7 00:04:32.861968 kubelet[3384]: I0707 00:04:32.861901 3384 scope.go:117] "RemoveContainer" containerID="60763619110869d2b29eac6d6f994c5f9022c23fd9ea9ef2820b76ed9dd6a3db" Jul 7 00:04:32.862109 containerd[1907]: time="2025-07-07T00:04:32.862026067Z" level=error msg="ContainerStatus for \"60763619110869d2b29eac6d6f994c5f9022c23fd9ea9ef2820b76ed9dd6a3db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"60763619110869d2b29eac6d6f994c5f9022c23fd9ea9ef2820b76ed9dd6a3db\": not found" Jul 7 00:04:32.862150 kubelet[3384]: E0707 00:04:32.862094 3384 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"60763619110869d2b29eac6d6f994c5f9022c23fd9ea9ef2820b76ed9dd6a3db\": not found" containerID="60763619110869d2b29eac6d6f994c5f9022c23fd9ea9ef2820b76ed9dd6a3db" Jul 7 00:04:32.862270 kubelet[3384]: I0707 00:04:32.862225 3384 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"60763619110869d2b29eac6d6f994c5f9022c23fd9ea9ef2820b76ed9dd6a3db"} err="failed to get container status \"60763619110869d2b29eac6d6f994c5f9022c23fd9ea9ef2820b76ed9dd6a3db\": rpc error: code = NotFound desc = an error occurred when try to find container \"60763619110869d2b29eac6d6f994c5f9022c23fd9ea9ef2820b76ed9dd6a3db\": not found" Jul 7 00:04:32.862270 kubelet[3384]: I0707 00:04:32.862241 3384 scope.go:117] "RemoveContainer" 
containerID="ea4424882ae770d287026abbcb422cc6487951189334fa5ef1f58fbf17ee950c" Jul 7 00:04:32.862530 containerd[1907]: time="2025-07-07T00:04:32.862484851Z" level=error msg="ContainerStatus for \"ea4424882ae770d287026abbcb422cc6487951189334fa5ef1f58fbf17ee950c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ea4424882ae770d287026abbcb422cc6487951189334fa5ef1f58fbf17ee950c\": not found" Jul 7 00:04:32.862749 kubelet[3384]: E0707 00:04:32.862708 3384 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ea4424882ae770d287026abbcb422cc6487951189334fa5ef1f58fbf17ee950c\": not found" containerID="ea4424882ae770d287026abbcb422cc6487951189334fa5ef1f58fbf17ee950c" Jul 7 00:04:32.862749 kubelet[3384]: I0707 00:04:32.862725 3384 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ea4424882ae770d287026abbcb422cc6487951189334fa5ef1f58fbf17ee950c"} err="failed to get container status \"ea4424882ae770d287026abbcb422cc6487951189334fa5ef1f58fbf17ee950c\": rpc error: code = NotFound desc = an error occurred when try to find container \"ea4424882ae770d287026abbcb422cc6487951189334fa5ef1f58fbf17ee950c\": not found" Jul 7 00:04:32.862879 kubelet[3384]: I0707 00:04:32.862739 3384 scope.go:117] "RemoveContainer" containerID="2e1595d091a6bf9628487013764d99c9df3145e4d52c62e46ae33304d979ad0e" Jul 7 00:04:32.862977 containerd[1907]: time="2025-07-07T00:04:32.862949203Z" level=error msg="ContainerStatus for \"2e1595d091a6bf9628487013764d99c9df3145e4d52c62e46ae33304d979ad0e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2e1595d091a6bf9628487013764d99c9df3145e4d52c62e46ae33304d979ad0e\": not found" Jul 7 00:04:32.863094 kubelet[3384]: E0707 00:04:32.863077 3384 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"2e1595d091a6bf9628487013764d99c9df3145e4d52c62e46ae33304d979ad0e\": not found" containerID="2e1595d091a6bf9628487013764d99c9df3145e4d52c62e46ae33304d979ad0e" Jul 7 00:04:32.863241 kubelet[3384]: I0707 00:04:32.863163 3384 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2e1595d091a6bf9628487013764d99c9df3145e4d52c62e46ae33304d979ad0e"} err="failed to get container status \"2e1595d091a6bf9628487013764d99c9df3145e4d52c62e46ae33304d979ad0e\": rpc error: code = NotFound desc = an error occurred when try to find container \"2e1595d091a6bf9628487013764d99c9df3145e4d52c62e46ae33304d979ad0e\": not found" Jul 7 00:04:32.863241 kubelet[3384]: I0707 00:04:32.863178 3384 scope.go:117] "RemoveContainer" containerID="cf12ba96509980251c28580d7fbb482c8fd164a3143d5710ec9000deb9eaead1" Jul 7 00:04:32.863383 containerd[1907]: time="2025-07-07T00:04:32.863304159Z" level=error msg="ContainerStatus for \"cf12ba96509980251c28580d7fbb482c8fd164a3143d5710ec9000deb9eaead1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cf12ba96509980251c28580d7fbb482c8fd164a3143d5710ec9000deb9eaead1\": not found" Jul 7 00:04:32.863517 kubelet[3384]: E0707 00:04:32.863449 3384 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cf12ba96509980251c28580d7fbb482c8fd164a3143d5710ec9000deb9eaead1\": not found" containerID="cf12ba96509980251c28580d7fbb482c8fd164a3143d5710ec9000deb9eaead1" Jul 7 00:04:32.863517 kubelet[3384]: I0707 00:04:32.863465 3384 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cf12ba96509980251c28580d7fbb482c8fd164a3143d5710ec9000deb9eaead1"} err="failed to get container status \"cf12ba96509980251c28580d7fbb482c8fd164a3143d5710ec9000deb9eaead1\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"cf12ba96509980251c28580d7fbb482c8fd164a3143d5710ec9000deb9eaead1\": not found" Jul 7 00:04:33.065398 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6ffdd812b9923dacb2050ac181c3d58f34a0e9107534e9eb84fb1123c781d227-shm.mount: Deactivated successfully. Jul 7 00:04:33.065478 systemd[1]: var-lib-kubelet-pods-c0950446\x2d913f\x2d4dbe\x2db2ec\x2d9aa6a61b8c9b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddlmg6.mount: Deactivated successfully. Jul 7 00:04:33.065522 systemd[1]: var-lib-kubelet-pods-da2d577c\x2d1a96\x2d4387\x2dad21\x2da7ad1db235e8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr578h.mount: Deactivated successfully. Jul 7 00:04:33.065559 systemd[1]: var-lib-kubelet-pods-da2d577c\x2d1a96\x2d4387\x2dad21\x2da7ad1db235e8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 7 00:04:33.065593 systemd[1]: var-lib-kubelet-pods-da2d577c\x2d1a96\x2d4387\x2dad21\x2da7ad1db235e8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 7 00:04:33.503893 kubelet[3384]: I0707 00:04:33.503381 3384 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0950446-913f-4dbe-b2ec-9aa6a61b8c9b" path="/var/lib/kubelet/pods/c0950446-913f-4dbe-b2ec-9aa6a61b8c9b/volumes" Jul 7 00:04:33.503893 kubelet[3384]: I0707 00:04:33.503633 3384 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da2d577c-1a96-4387-ad21-a7ad1db235e8" path="/var/lib/kubelet/pods/da2d577c-1a96-4387-ad21-a7ad1db235e8/volumes" Jul 7 00:04:34.035261 sshd[4923]: Connection closed by 10.200.16.10 port 54238 Jul 7 00:04:34.035781 sshd-session[4921]: pam_unix(sshd:session): session closed for user core Jul 7 00:04:34.038817 systemd[1]: sshd@22-10.200.20.4:22-10.200.16.10:54238.service: Deactivated successfully. Jul 7 00:04:34.040347 systemd[1]: session-25.scope: Deactivated successfully. Jul 7 00:04:34.041000 systemd-logind[1880]: Session 25 logged out. 
Waiting for processes to exit. Jul 7 00:04:34.042507 systemd-logind[1880]: Removed session 25. Jul 7 00:04:34.121245 systemd[1]: Started sshd@23-10.200.20.4:22-10.200.16.10:54244.service - OpenSSH per-connection server daemon (10.200.16.10:54244). Jul 7 00:04:34.574415 kubelet[3384]: E0707 00:04:34.574345 3384 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 7 00:04:34.597929 sshd[5071]: Accepted publickey for core from 10.200.16.10 port 54244 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:04:34.598980 sshd-session[5071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:04:34.602652 systemd-logind[1880]: New session 26 of user core. Jul 7 00:04:34.613389 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 7 00:04:35.486674 kubelet[3384]: E0707 00:04:35.485940 3384 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="da2d577c-1a96-4387-ad21-a7ad1db235e8" containerName="apply-sysctl-overwrites" Jul 7 00:04:35.486674 kubelet[3384]: E0707 00:04:35.485969 3384 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="da2d577c-1a96-4387-ad21-a7ad1db235e8" containerName="clean-cilium-state" Jul 7 00:04:35.486674 kubelet[3384]: E0707 00:04:35.485976 3384 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="da2d577c-1a96-4387-ad21-a7ad1db235e8" containerName="mount-cgroup" Jul 7 00:04:35.486674 kubelet[3384]: E0707 00:04:35.485980 3384 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="da2d577c-1a96-4387-ad21-a7ad1db235e8" containerName="mount-bpf-fs" Jul 7 00:04:35.486674 kubelet[3384]: E0707 00:04:35.485984 3384 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c0950446-913f-4dbe-b2ec-9aa6a61b8c9b" containerName="cilium-operator" Jul 7 00:04:35.486674 kubelet[3384]: E0707 00:04:35.485987 3384 
cpu_manager.go:395] "RemoveStaleState: removing container" podUID="da2d577c-1a96-4387-ad21-a7ad1db235e8" containerName="cilium-agent" Jul 7 00:04:35.486674 kubelet[3384]: I0707 00:04:35.486006 3384 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0950446-913f-4dbe-b2ec-9aa6a61b8c9b" containerName="cilium-operator" Jul 7 00:04:35.486674 kubelet[3384]: I0707 00:04:35.486011 3384 memory_manager.go:354] "RemoveStaleState removing state" podUID="da2d577c-1a96-4387-ad21-a7ad1db235e8" containerName="cilium-agent" Jul 7 00:04:35.494401 systemd[1]: Created slice kubepods-burstable-pode892296e_d291_4c1f_ae48_b7de1a582d77.slice - libcontainer container kubepods-burstable-pode892296e_d291_4c1f_ae48_b7de1a582d77.slice. Jul 7 00:04:35.544247 sshd[5073]: Connection closed by 10.200.16.10 port 54244 Jul 7 00:04:35.544938 sshd-session[5071]: pam_unix(sshd:session): session closed for user core Jul 7 00:04:35.548502 systemd[1]: sshd@23-10.200.20.4:22-10.200.16.10:54244.service: Deactivated successfully. Jul 7 00:04:35.550783 systemd[1]: session-26.scope: Deactivated successfully. Jul 7 00:04:35.552698 systemd-logind[1880]: Session 26 logged out. Waiting for processes to exit. Jul 7 00:04:35.553991 systemd-logind[1880]: Removed session 26. Jul 7 00:04:35.646343 systemd[1]: Started sshd@24-10.200.20.4:22-10.200.16.10:54258.service - OpenSSH per-connection server daemon (10.200.16.10:54258). 
Jul 7 00:04:35.670414 kubelet[3384]: I0707 00:04:35.670376 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e892296e-d291-4c1f-ae48-b7de1a582d77-xtables-lock\") pod \"cilium-hts26\" (UID: \"e892296e-d291-4c1f-ae48-b7de1a582d77\") " pod="kube-system/cilium-hts26" Jul 7 00:04:35.670414 kubelet[3384]: I0707 00:04:35.670409 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e892296e-d291-4c1f-ae48-b7de1a582d77-cilium-config-path\") pod \"cilium-hts26\" (UID: \"e892296e-d291-4c1f-ae48-b7de1a582d77\") " pod="kube-system/cilium-hts26" Jul 7 00:04:35.670658 kubelet[3384]: I0707 00:04:35.670423 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdj5p\" (UniqueName: \"kubernetes.io/projected/e892296e-d291-4c1f-ae48-b7de1a582d77-kube-api-access-wdj5p\") pod \"cilium-hts26\" (UID: \"e892296e-d291-4c1f-ae48-b7de1a582d77\") " pod="kube-system/cilium-hts26" Jul 7 00:04:35.670658 kubelet[3384]: I0707 00:04:35.670440 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e892296e-d291-4c1f-ae48-b7de1a582d77-host-proc-sys-kernel\") pod \"cilium-hts26\" (UID: \"e892296e-d291-4c1f-ae48-b7de1a582d77\") " pod="kube-system/cilium-hts26" Jul 7 00:04:35.670658 kubelet[3384]: I0707 00:04:35.670452 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e892296e-d291-4c1f-ae48-b7de1a582d77-bpf-maps\") pod \"cilium-hts26\" (UID: \"e892296e-d291-4c1f-ae48-b7de1a582d77\") " pod="kube-system/cilium-hts26" Jul 7 00:04:35.670658 kubelet[3384]: I0707 00:04:35.670462 3384 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e892296e-d291-4c1f-ae48-b7de1a582d77-host-proc-sys-net\") pod \"cilium-hts26\" (UID: \"e892296e-d291-4c1f-ae48-b7de1a582d77\") " pod="kube-system/cilium-hts26" Jul 7 00:04:35.670658 kubelet[3384]: I0707 00:04:35.670471 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e892296e-d291-4c1f-ae48-b7de1a582d77-hubble-tls\") pod \"cilium-hts26\" (UID: \"e892296e-d291-4c1f-ae48-b7de1a582d77\") " pod="kube-system/cilium-hts26" Jul 7 00:04:35.670658 kubelet[3384]: I0707 00:04:35.670480 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e892296e-d291-4c1f-ae48-b7de1a582d77-hostproc\") pod \"cilium-hts26\" (UID: \"e892296e-d291-4c1f-ae48-b7de1a582d77\") " pod="kube-system/cilium-hts26" Jul 7 00:04:35.670754 kubelet[3384]: I0707 00:04:35.670489 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e892296e-d291-4c1f-ae48-b7de1a582d77-cilium-ipsec-secrets\") pod \"cilium-hts26\" (UID: \"e892296e-d291-4c1f-ae48-b7de1a582d77\") " pod="kube-system/cilium-hts26" Jul 7 00:04:35.670754 kubelet[3384]: I0707 00:04:35.670506 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e892296e-d291-4c1f-ae48-b7de1a582d77-etc-cni-netd\") pod \"cilium-hts26\" (UID: \"e892296e-d291-4c1f-ae48-b7de1a582d77\") " pod="kube-system/cilium-hts26" Jul 7 00:04:35.670754 kubelet[3384]: I0707 00:04:35.670516 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/e892296e-d291-4c1f-ae48-b7de1a582d77-lib-modules\") pod \"cilium-hts26\" (UID: \"e892296e-d291-4c1f-ae48-b7de1a582d77\") " pod="kube-system/cilium-hts26" Jul 7 00:04:35.670754 kubelet[3384]: I0707 00:04:35.670524 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e892296e-d291-4c1f-ae48-b7de1a582d77-clustermesh-secrets\") pod \"cilium-hts26\" (UID: \"e892296e-d291-4c1f-ae48-b7de1a582d77\") " pod="kube-system/cilium-hts26" Jul 7 00:04:35.670754 kubelet[3384]: I0707 00:04:35.670541 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e892296e-d291-4c1f-ae48-b7de1a582d77-cilium-run\") pod \"cilium-hts26\" (UID: \"e892296e-d291-4c1f-ae48-b7de1a582d77\") " pod="kube-system/cilium-hts26" Jul 7 00:04:35.670754 kubelet[3384]: I0707 00:04:35.670550 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e892296e-d291-4c1f-ae48-b7de1a582d77-cilium-cgroup\") pod \"cilium-hts26\" (UID: \"e892296e-d291-4c1f-ae48-b7de1a582d77\") " pod="kube-system/cilium-hts26" Jul 7 00:04:35.670844 kubelet[3384]: I0707 00:04:35.670558 3384 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e892296e-d291-4c1f-ae48-b7de1a582d77-cni-path\") pod \"cilium-hts26\" (UID: \"e892296e-d291-4c1f-ae48-b7de1a582d77\") " pod="kube-system/cilium-hts26" Jul 7 00:04:35.797291 containerd[1907]: time="2025-07-07T00:04:35.797212924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hts26,Uid:e892296e-d291-4c1f-ae48-b7de1a582d77,Namespace:kube-system,Attempt:0,}" Jul 7 00:04:35.848817 containerd[1907]: time="2025-07-07T00:04:35.848766183Z" level=info msg="connecting to shim 
12943be1bd064c62292938a8b50f928e354dc9737149e69eb9c4d7e39a73fdd5" address="unix:///run/containerd/s/4400100be6f384f8182b69362667d9cde3ae09c5c02c5273ac22f96d2ebd5c16" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:04:35.873324 systemd[1]: Started cri-containerd-12943be1bd064c62292938a8b50f928e354dc9737149e69eb9c4d7e39a73fdd5.scope - libcontainer container 12943be1bd064c62292938a8b50f928e354dc9737149e69eb9c4d7e39a73fdd5. Jul 7 00:04:35.895460 containerd[1907]: time="2025-07-07T00:04:35.895435388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hts26,Uid:e892296e-d291-4c1f-ae48-b7de1a582d77,Namespace:kube-system,Attempt:0,} returns sandbox id \"12943be1bd064c62292938a8b50f928e354dc9737149e69eb9c4d7e39a73fdd5\"" Jul 7 00:04:35.898125 containerd[1907]: time="2025-07-07T00:04:35.898052465Z" level=info msg="CreateContainer within sandbox \"12943be1bd064c62292938a8b50f928e354dc9737149e69eb9c4d7e39a73fdd5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 7 00:04:35.922587 containerd[1907]: time="2025-07-07T00:04:35.922564034Z" level=info msg="Container cb9192f6ddcf8c833d4c73f8938484f171020d954c526e907b1b029512d10417: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:04:35.939556 containerd[1907]: time="2025-07-07T00:04:35.939529550Z" level=info msg="CreateContainer within sandbox \"12943be1bd064c62292938a8b50f928e354dc9737149e69eb9c4d7e39a73fdd5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cb9192f6ddcf8c833d4c73f8938484f171020d954c526e907b1b029512d10417\"" Jul 7 00:04:35.939902 containerd[1907]: time="2025-07-07T00:04:35.939879010Z" level=info msg="StartContainer for \"cb9192f6ddcf8c833d4c73f8938484f171020d954c526e907b1b029512d10417\"" Jul 7 00:04:35.941229 containerd[1907]: time="2025-07-07T00:04:35.941165992Z" level=info msg="connecting to shim cb9192f6ddcf8c833d4c73f8938484f171020d954c526e907b1b029512d10417" 
address="unix:///run/containerd/s/4400100be6f384f8182b69362667d9cde3ae09c5c02c5273ac22f96d2ebd5c16" protocol=ttrpc version=3 Jul 7 00:04:35.956316 systemd[1]: Started cri-containerd-cb9192f6ddcf8c833d4c73f8938484f171020d954c526e907b1b029512d10417.scope - libcontainer container cb9192f6ddcf8c833d4c73f8938484f171020d954c526e907b1b029512d10417. Jul 7 00:04:35.981716 containerd[1907]: time="2025-07-07T00:04:35.981675706Z" level=info msg="StartContainer for \"cb9192f6ddcf8c833d4c73f8938484f171020d954c526e907b1b029512d10417\" returns successfully" Jul 7 00:04:35.985485 systemd[1]: cri-containerd-cb9192f6ddcf8c833d4c73f8938484f171020d954c526e907b1b029512d10417.scope: Deactivated successfully. Jul 7 00:04:35.987125 containerd[1907]: time="2025-07-07T00:04:35.987092875Z" level=info msg="received exit event container_id:\"cb9192f6ddcf8c833d4c73f8938484f171020d954c526e907b1b029512d10417\" id:\"cb9192f6ddcf8c833d4c73f8938484f171020d954c526e907b1b029512d10417\" pid:5148 exited_at:{seconds:1751846675 nanos:986767304}" Jul 7 00:04:35.987226 containerd[1907]: time="2025-07-07T00:04:35.987072378Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cb9192f6ddcf8c833d4c73f8938484f171020d954c526e907b1b029512d10417\" id:\"cb9192f6ddcf8c833d4c73f8938484f171020d954c526e907b1b029512d10417\" pid:5148 exited_at:{seconds:1751846675 nanos:986767304}" Jul 7 00:04:36.132735 sshd[5083]: Accepted publickey for core from 10.200.16.10 port 54258 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:04:36.133755 sshd-session[5083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:04:36.137318 systemd-logind[1880]: New session 27 of user core. Jul 7 00:04:36.144276 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jul 7 00:04:36.478507 sshd[5181]: Connection closed by 10.200.16.10 port 54258 Jul 7 00:04:36.478995 sshd-session[5083]: pam_unix(sshd:session): session closed for user core Jul 7 00:04:36.481895 systemd[1]: sshd@24-10.200.20.4:22-10.200.16.10:54258.service: Deactivated successfully. Jul 7 00:04:36.483747 systemd[1]: session-27.scope: Deactivated successfully. Jul 7 00:04:36.484406 systemd-logind[1880]: Session 27 logged out. Waiting for processes to exit. Jul 7 00:04:36.485497 systemd-logind[1880]: Removed session 27. Jul 7 00:04:36.567402 systemd[1]: Started sshd@25-10.200.20.4:22-10.200.16.10:54262.service - OpenSSH per-connection server daemon (10.200.16.10:54262). Jul 7 00:04:36.805503 containerd[1907]: time="2025-07-07T00:04:36.804975215Z" level=info msg="CreateContainer within sandbox \"12943be1bd064c62292938a8b50f928e354dc9737149e69eb9c4d7e39a73fdd5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 7 00:04:36.836616 containerd[1907]: time="2025-07-07T00:04:36.836589036Z" level=info msg="Container 06dd33768625082e1f0be831b5d89dd22f0791ebb13493799001a89effab1ae0: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:04:36.855981 containerd[1907]: time="2025-07-07T00:04:36.855953990Z" level=info msg="CreateContainer within sandbox \"12943be1bd064c62292938a8b50f928e354dc9737149e69eb9c4d7e39a73fdd5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"06dd33768625082e1f0be831b5d89dd22f0791ebb13493799001a89effab1ae0\"" Jul 7 00:04:36.857034 containerd[1907]: time="2025-07-07T00:04:36.856532810Z" level=info msg="StartContainer for \"06dd33768625082e1f0be831b5d89dd22f0791ebb13493799001a89effab1ae0\"" Jul 7 00:04:36.857158 containerd[1907]: time="2025-07-07T00:04:36.857139712Z" level=info msg="connecting to shim 06dd33768625082e1f0be831b5d89dd22f0791ebb13493799001a89effab1ae0" address="unix:///run/containerd/s/4400100be6f384f8182b69362667d9cde3ae09c5c02c5273ac22f96d2ebd5c16" protocol=ttrpc version=3 
Jul 7 00:04:36.874299 systemd[1]: Started cri-containerd-06dd33768625082e1f0be831b5d89dd22f0791ebb13493799001a89effab1ae0.scope - libcontainer container 06dd33768625082e1f0be831b5d89dd22f0791ebb13493799001a89effab1ae0. Jul 7 00:04:36.900206 systemd[1]: cri-containerd-06dd33768625082e1f0be831b5d89dd22f0791ebb13493799001a89effab1ae0.scope: Deactivated successfully. Jul 7 00:04:36.901127 containerd[1907]: time="2025-07-07T00:04:36.901105941Z" level=info msg="received exit event container_id:\"06dd33768625082e1f0be831b5d89dd22f0791ebb13493799001a89effab1ae0\" id:\"06dd33768625082e1f0be831b5d89dd22f0791ebb13493799001a89effab1ae0\" pid:5202 exited_at:{seconds:1751846676 nanos:900848580}" Jul 7 00:04:36.901311 containerd[1907]: time="2025-07-07T00:04:36.901282563Z" level=info msg="TaskExit event in podsandbox handler container_id:\"06dd33768625082e1f0be831b5d89dd22f0791ebb13493799001a89effab1ae0\" id:\"06dd33768625082e1f0be831b5d89dd22f0791ebb13493799001a89effab1ae0\" pid:5202 exited_at:{seconds:1751846676 nanos:900848580}" Jul 7 00:04:36.901417 containerd[1907]: time="2025-07-07T00:04:36.901402400Z" level=info msg="StartContainer for \"06dd33768625082e1f0be831b5d89dd22f0791ebb13493799001a89effab1ae0\" returns successfully" Jul 7 00:04:37.053234 sshd[5188]: Accepted publickey for core from 10.200.16.10 port 54262 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:04:37.054582 sshd-session[5188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:04:37.057919 systemd-logind[1880]: New session 28 of user core. Jul 7 00:04:37.063296 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 7 00:04:37.778508 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06dd33768625082e1f0be831b5d89dd22f0791ebb13493799001a89effab1ae0-rootfs.mount: Deactivated successfully. 
Jul 7 00:04:37.808141 containerd[1907]: time="2025-07-07T00:04:37.808096341Z" level=info msg="CreateContainer within sandbox \"12943be1bd064c62292938a8b50f928e354dc9737149e69eb9c4d7e39a73fdd5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 7 00:04:37.836677 containerd[1907]: time="2025-07-07T00:04:37.836651230Z" level=info msg="Container 22c685eab4970efd9b842f31d6e60ff222387e28bfbc0bd65ecdcfb529aa41b0: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:04:37.855630 containerd[1907]: time="2025-07-07T00:04:37.855601808Z" level=info msg="CreateContainer within sandbox \"12943be1bd064c62292938a8b50f928e354dc9737149e69eb9c4d7e39a73fdd5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"22c685eab4970efd9b842f31d6e60ff222387e28bfbc0bd65ecdcfb529aa41b0\""
Jul 7 00:04:37.856194 containerd[1907]: time="2025-07-07T00:04:37.856049448Z" level=info msg="StartContainer for \"22c685eab4970efd9b842f31d6e60ff222387e28bfbc0bd65ecdcfb529aa41b0\""
Jul 7 00:04:37.857236 containerd[1907]: time="2025-07-07T00:04:37.857160000Z" level=info msg="connecting to shim 22c685eab4970efd9b842f31d6e60ff222387e28bfbc0bd65ecdcfb529aa41b0" address="unix:///run/containerd/s/4400100be6f384f8182b69362667d9cde3ae09c5c02c5273ac22f96d2ebd5c16" protocol=ttrpc version=3
Jul 7 00:04:37.875295 systemd[1]: Started cri-containerd-22c685eab4970efd9b842f31d6e60ff222387e28bfbc0bd65ecdcfb529aa41b0.scope - libcontainer container 22c685eab4970efd9b842f31d6e60ff222387e28bfbc0bd65ecdcfb529aa41b0.
Jul 7 00:04:37.898874 systemd[1]: cri-containerd-22c685eab4970efd9b842f31d6e60ff222387e28bfbc0bd65ecdcfb529aa41b0.scope: Deactivated successfully.
Jul 7 00:04:37.902275 containerd[1907]: time="2025-07-07T00:04:37.900134298Z" level=info msg="TaskExit event in podsandbox handler container_id:\"22c685eab4970efd9b842f31d6e60ff222387e28bfbc0bd65ecdcfb529aa41b0\" id:\"22c685eab4970efd9b842f31d6e60ff222387e28bfbc0bd65ecdcfb529aa41b0\" pid:5253 exited_at:{seconds:1751846677 nanos:899848376}"
Jul 7 00:04:37.905406 containerd[1907]: time="2025-07-07T00:04:37.905230615Z" level=info msg="received exit event container_id:\"22c685eab4970efd9b842f31d6e60ff222387e28bfbc0bd65ecdcfb529aa41b0\" id:\"22c685eab4970efd9b842f31d6e60ff222387e28bfbc0bd65ecdcfb529aa41b0\" pid:5253 exited_at:{seconds:1751846677 nanos:899848376}"
Jul 7 00:04:37.907200 containerd[1907]: time="2025-07-07T00:04:37.907167188Z" level=info msg="StartContainer for \"22c685eab4970efd9b842f31d6e60ff222387e28bfbc0bd65ecdcfb529aa41b0\" returns successfully"
Jul 7 00:04:38.778801 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22c685eab4970efd9b842f31d6e60ff222387e28bfbc0bd65ecdcfb529aa41b0-rootfs.mount: Deactivated successfully.
Jul 7 00:04:38.812886 containerd[1907]: time="2025-07-07T00:04:38.812853542Z" level=info msg="CreateContainer within sandbox \"12943be1bd064c62292938a8b50f928e354dc9737149e69eb9c4d7e39a73fdd5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 7 00:04:38.835987 containerd[1907]: time="2025-07-07T00:04:38.835959965Z" level=info msg="Container 12d78c46327a61b2ced4ae471dc26c3bcb4f6663423d06f4d01672dad40a277d: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:04:38.854313 containerd[1907]: time="2025-07-07T00:04:38.854285201Z" level=info msg="CreateContainer within sandbox \"12943be1bd064c62292938a8b50f928e354dc9737149e69eb9c4d7e39a73fdd5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"12d78c46327a61b2ced4ae471dc26c3bcb4f6663423d06f4d01672dad40a277d\""
Jul 7 00:04:38.854783 containerd[1907]: time="2025-07-07T00:04:38.854641118Z" level=info msg="StartContainer for \"12d78c46327a61b2ced4ae471dc26c3bcb4f6663423d06f4d01672dad40a277d\""
Jul 7 00:04:38.855340 containerd[1907]: time="2025-07-07T00:04:38.855322558Z" level=info msg="connecting to shim 12d78c46327a61b2ced4ae471dc26c3bcb4f6663423d06f4d01672dad40a277d" address="unix:///run/containerd/s/4400100be6f384f8182b69362667d9cde3ae09c5c02c5273ac22f96d2ebd5c16" protocol=ttrpc version=3
Jul 7 00:04:38.869290 systemd[1]: Started cri-containerd-12d78c46327a61b2ced4ae471dc26c3bcb4f6663423d06f4d01672dad40a277d.scope - libcontainer container 12d78c46327a61b2ced4ae471dc26c3bcb4f6663423d06f4d01672dad40a277d.
Jul 7 00:04:38.885931 systemd[1]: cri-containerd-12d78c46327a61b2ced4ae471dc26c3bcb4f6663423d06f4d01672dad40a277d.scope: Deactivated successfully.
Jul 7 00:04:38.886565 containerd[1907]: time="2025-07-07T00:04:38.886517717Z" level=info msg="TaskExit event in podsandbox handler container_id:\"12d78c46327a61b2ced4ae471dc26c3bcb4f6663423d06f4d01672dad40a277d\" id:\"12d78c46327a61b2ced4ae471dc26c3bcb4f6663423d06f4d01672dad40a277d\" pid:5297 exited_at:{seconds:1751846678 nanos:885992354}"
Jul 7 00:04:38.890600 containerd[1907]: time="2025-07-07T00:04:38.890463281Z" level=info msg="received exit event container_id:\"12d78c46327a61b2ced4ae471dc26c3bcb4f6663423d06f4d01672dad40a277d\" id:\"12d78c46327a61b2ced4ae471dc26c3bcb4f6663423d06f4d01672dad40a277d\" pid:5297 exited_at:{seconds:1751846678 nanos:885992354}"
Jul 7 00:04:38.894836 containerd[1907]: time="2025-07-07T00:04:38.894813260Z" level=info msg="StartContainer for \"12d78c46327a61b2ced4ae471dc26c3bcb4f6663423d06f4d01672dad40a277d\" returns successfully"
Jul 7 00:04:38.904312 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12d78c46327a61b2ced4ae471dc26c3bcb4f6663423d06f4d01672dad40a277d-rootfs.mount: Deactivated successfully.
Jul 7 00:04:39.575445 kubelet[3384]: E0707 00:04:39.575412 3384 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 7 00:04:39.817411 containerd[1907]: time="2025-07-07T00:04:39.817330445Z" level=info msg="CreateContainer within sandbox \"12943be1bd064c62292938a8b50f928e354dc9737149e69eb9c4d7e39a73fdd5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 7 00:04:39.856044 containerd[1907]: time="2025-07-07T00:04:39.855527076Z" level=info msg="Container 2a3d20b35247b946d8ac654319771c1cb5c14f095ef9de44feae519d77840724: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:04:39.873770 containerd[1907]: time="2025-07-07T00:04:39.873734877Z" level=info msg="CreateContainer within sandbox \"12943be1bd064c62292938a8b50f928e354dc9737149e69eb9c4d7e39a73fdd5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2a3d20b35247b946d8ac654319771c1cb5c14f095ef9de44feae519d77840724\""
Jul 7 00:04:39.874635 containerd[1907]: time="2025-07-07T00:04:39.874316537Z" level=info msg="StartContainer for \"2a3d20b35247b946d8ac654319771c1cb5c14f095ef9de44feae519d77840724\""
Jul 7 00:04:39.875081 containerd[1907]: time="2025-07-07T00:04:39.875024723Z" level=info msg="connecting to shim 2a3d20b35247b946d8ac654319771c1cb5c14f095ef9de44feae519d77840724" address="unix:///run/containerd/s/4400100be6f384f8182b69362667d9cde3ae09c5c02c5273ac22f96d2ebd5c16" protocol=ttrpc version=3
Jul 7 00:04:39.894297 systemd[1]: Started cri-containerd-2a3d20b35247b946d8ac654319771c1cb5c14f095ef9de44feae519d77840724.scope - libcontainer container 2a3d20b35247b946d8ac654319771c1cb5c14f095ef9de44feae519d77840724.
Jul 7 00:04:39.925319 containerd[1907]: time="2025-07-07T00:04:39.925296184Z" level=info msg="StartContainer for \"2a3d20b35247b946d8ac654319771c1cb5c14f095ef9de44feae519d77840724\" returns successfully"
Jul 7 00:04:39.979584 containerd[1907]: time="2025-07-07T00:04:39.979545739Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a3d20b35247b946d8ac654319771c1cb5c14f095ef9de44feae519d77840724\" id:\"6a651a4d809839ec2ed72674c4cea7e035452d008b6e15d550dfc4fa0a27f275\" pid:5369 exited_at:{seconds:1751846679 nanos:979250313}"
Jul 7 00:04:40.272310 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 7 00:04:40.846790 kubelet[3384]: I0707 00:04:40.846740 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hts26" podStartSLOduration=5.846726866 podStartE2EDuration="5.846726866s" podCreationTimestamp="2025-07-07 00:04:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:04:40.846599797 +0000 UTC m=+161.500655920" watchObservedRunningTime="2025-07-07 00:04:40.846726866 +0000 UTC m=+161.500782997"
Jul 7 00:04:41.453253 containerd[1907]: time="2025-07-07T00:04:41.453157507Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a3d20b35247b946d8ac654319771c1cb5c14f095ef9de44feae519d77840724\" id:\"01a2f6cdcf70be83eb9a60cd534c6a15c2abbe5f98288142f0beb2f6fab3ca30\" pid:5441 exit_status:1 exited_at:{seconds:1751846681 nanos:452965996}"
Jul 7 00:04:42.537399 kubelet[3384]: I0707 00:04:42.537318 3384 setters.go:600] "Node became not ready" node="ci-4372.0.1-a-609ca7abb9" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-07T00:04:42Z","lastTransitionTime":"2025-07-07T00:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 7 00:04:42.622657 systemd-networkd[1572]: lxc_health: Link UP
Jul 7 00:04:42.624040 systemd-networkd[1572]: lxc_health: Gained carrier
Jul 7 00:04:43.541653 containerd[1907]: time="2025-07-07T00:04:43.541487008Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a3d20b35247b946d8ac654319771c1cb5c14f095ef9de44feae519d77840724\" id:\"e6088fe7d99a496adea20a5d1e5554f64245f2714c4a3478a0788295cf47ebd0\" pid:5901 exited_at:{seconds:1751846683 nanos:541159365}"
Jul 7 00:04:43.830312 systemd-networkd[1572]: lxc_health: Gained IPv6LL
Jul 7 00:04:45.626691 containerd[1907]: time="2025-07-07T00:04:45.626631110Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a3d20b35247b946d8ac654319771c1cb5c14f095ef9de44feae519d77840724\" id:\"1a86d2c53527e79cb771e28bdf75affd15c7ed9dbb036f1b5144decbab7346f2\" pid:5935 exited_at:{seconds:1751846685 nanos:626391982}"
Jul 7 00:04:47.693123 containerd[1907]: time="2025-07-07T00:04:47.692964136Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a3d20b35247b946d8ac654319771c1cb5c14f095ef9de44feae519d77840724\" id:\"f28ea15ab08ee911efd993f8fac160dd908e1eab5576b94f3bbf46f5eec340de\" pid:5957 exited_at:{seconds:1751846687 nanos:692550545}"
Jul 7 00:04:47.788715 sshd[5235]: Connection closed by 10.200.16.10 port 54262
Jul 7 00:04:47.789205 sshd-session[5188]: pam_unix(sshd:session): session closed for user core
Jul 7 00:04:47.792080 systemd[1]: sshd@25-10.200.20.4:22-10.200.16.10:54262.service: Deactivated successfully.
Jul 7 00:04:47.794571 systemd[1]: session-28.scope: Deactivated successfully.
Jul 7 00:04:47.796241 systemd-logind[1880]: Session 28 logged out. Waiting for processes to exit.
Jul 7 00:04:47.797770 systemd-logind[1880]: Removed session 28.
Jul 7 00:04:52.656079 kubelet[3384]: E0707 00:04:52.655712 3384 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: EOF"