Jul 7 00:00:48.068842 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Jul 7 00:00:48.068858 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Sun Jul 6 21:57:11 -00 2025
Jul 7 00:00:48.068865 kernel: KASLR enabled
Jul 7 00:00:48.068869 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jul 7 00:00:48.068873 kernel: printk: legacy bootconsole [pl11] enabled
Jul 7 00:00:48.068877 kernel: efi: EFI v2.7 by EDK II
Jul 7 00:00:48.068882 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e018 RNG=0x3fd5f998 MEMRESERVE=0x3e471598
Jul 7 00:00:48.068886 kernel: random: crng init done
Jul 7 00:00:48.068890 kernel: secureboot: Secure boot disabled
Jul 7 00:00:48.068894 kernel: ACPI: Early table checksum verification disabled
Jul 7 00:00:48.068898 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jul 7 00:00:48.068902 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 00:00:48.068906 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 00:00:48.068911 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jul 7 00:00:48.068928 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 00:00:48.068932 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 00:00:48.068937 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 00:00:48.068942 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 00:00:48.068947 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 00:00:48.068951 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 00:00:48.068955 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jul 7 00:00:48.068959 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 00:00:48.068963 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jul 7 00:00:48.068967 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 7 00:00:48.068971 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jul 7 00:00:48.068976 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Jul 7 00:00:48.068980 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Jul 7 00:00:48.068984 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jul 7 00:00:48.068988 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jul 7 00:00:48.068993 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jul 7 00:00:48.068998 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jul 7 00:00:48.069002 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jul 7 00:00:48.069006 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jul 7 00:00:48.069010 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jul 7 00:00:48.069014 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jul 7 00:00:48.069018 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jul 7 00:00:48.069023 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Jul 7 00:00:48.069027 kernel: NODE_DATA(0) allocated [mem 0x1bf7fda00-0x1bf804fff]
Jul 7 00:00:48.069031 kernel: Zone ranges:
Jul 7 00:00:48.069035 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jul 7 00:00:48.069042 kernel: DMA32 empty
Jul 7 00:00:48.069046 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jul 7 00:00:48.069051 kernel: Device empty
Jul 7 00:00:48.069055 kernel: Movable zone start for each node
Jul 7 00:00:48.069060 kernel: Early memory node ranges
Jul 7 00:00:48.069065 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jul 7 00:00:48.069069 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Jul 7 00:00:48.069073 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Jul 7 00:00:48.069078 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Jul 7 00:00:48.069082 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jul 7 00:00:48.069086 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jul 7 00:00:48.069090 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jul 7 00:00:48.069095 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jul 7 00:00:48.069099 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jul 7 00:00:48.069103 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jul 7 00:00:48.069108 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jul 7 00:00:48.069112 kernel: cma: Reserved 16 MiB at 0x000000003d400000 on node -1
Jul 7 00:00:48.069117 kernel: psci: probing for conduit method from ACPI.
Jul 7 00:00:48.069121 kernel: psci: PSCIv1.1 detected in firmware.
Jul 7 00:00:48.069126 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 7 00:00:48.069130 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jul 7 00:00:48.069134 kernel: psci: SMC Calling Convention v1.4
Jul 7 00:00:48.069139 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jul 7 00:00:48.069143 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jul 7 00:00:48.069147 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 7 00:00:48.069152 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 7 00:00:48.069156 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 7 00:00:48.069160 kernel: Detected PIPT I-cache on CPU0
Jul 7 00:00:48.069165 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Jul 7 00:00:48.069170 kernel: CPU features: detected: GIC system register CPU interface
Jul 7 00:00:48.069174 kernel: CPU features: detected: Spectre-v4
Jul 7 00:00:48.069178 kernel: CPU features: detected: Spectre-BHB
Jul 7 00:00:48.069183 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 7 00:00:48.069187 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 7 00:00:48.069191 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Jul 7 00:00:48.069196 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 7 00:00:48.069200 kernel: alternatives: applying boot alternatives
Jul 7 00:00:48.069205 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=d1bbaf8ae8f23de11dc703e14022523825f85f007c0c35003d7559228cbdda22
Jul 7 00:00:48.069210 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 00:00:48.069215 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 7 00:00:48.069220 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 00:00:48.069224 kernel: Fallback order for Node 0: 0
Jul 7 00:00:48.069228 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Jul 7 00:00:48.069232 kernel: Policy zone: Normal
Jul 7 00:00:48.069237 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 00:00:48.069241 kernel: software IO TLB: area num 2.
Jul 7 00:00:48.069245 kernel: software IO TLB: mapped [mem 0x0000000036200000-0x000000003a200000] (64MB)
Jul 7 00:00:48.069250 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 7 00:00:48.069254 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 00:00:48.069259 kernel: rcu: RCU event tracing is enabled.
Jul 7 00:00:48.069264 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 7 00:00:48.069268 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 00:00:48.069273 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 00:00:48.069277 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 00:00:48.069282 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 7 00:00:48.069286 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 00:00:48.069290 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 00:00:48.069295 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 7 00:00:48.069299 kernel: GICv3: 960 SPIs implemented
Jul 7 00:00:48.069303 kernel: GICv3: 0 Extended SPIs implemented
Jul 7 00:00:48.069308 kernel: Root IRQ handler: gic_handle_irq
Jul 7 00:00:48.069312 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Jul 7 00:00:48.069317 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Jul 7 00:00:48.069321 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jul 7 00:00:48.069325 kernel: ITS: No ITS available, not enabling LPIs
Jul 7 00:00:48.069330 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 00:00:48.069334 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Jul 7 00:00:48.069339 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 7 00:00:48.069343 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Jul 7 00:00:48.069347 kernel: Console: colour dummy device 80x25
Jul 7 00:00:48.069352 kernel: printk: legacy console [tty1] enabled
Jul 7 00:00:48.069357 kernel: ACPI: Core revision 20240827
Jul 7 00:00:48.069361 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Jul 7 00:00:48.069367 kernel: pid_max: default: 32768 minimum: 301
Jul 7 00:00:48.069371 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 7 00:00:48.069375 kernel: landlock: Up and running.
Jul 7 00:00:48.069380 kernel: SELinux: Initializing.
Jul 7 00:00:48.069385 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 00:00:48.069392 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 00:00:48.069398 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x1a0000e, misc 0x31e1
Jul 7 00:00:48.069402 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0
Jul 7 00:00:48.069407 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 7 00:00:48.069412 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 00:00:48.069416 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 00:00:48.069422 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 7 00:00:48.069427 kernel: Remapping and enabling EFI services.
Jul 7 00:00:48.069431 kernel: smp: Bringing up secondary CPUs ...
Jul 7 00:00:48.069436 kernel: Detected PIPT I-cache on CPU1
Jul 7 00:00:48.069441 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jul 7 00:00:48.069446 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Jul 7 00:00:48.069451 kernel: smp: Brought up 1 node, 2 CPUs
Jul 7 00:00:48.069455 kernel: SMP: Total of 2 processors activated.
Jul 7 00:00:48.069460 kernel: CPU: All CPU(s) started at EL1
Jul 7 00:00:48.069465 kernel: CPU features: detected: 32-bit EL0 Support
Jul 7 00:00:48.069470 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jul 7 00:00:48.069474 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 7 00:00:48.069479 kernel: CPU features: detected: Common not Private translations
Jul 7 00:00:48.069484 kernel: CPU features: detected: CRC32 instructions
Jul 7 00:00:48.069489 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Jul 7 00:00:48.069494 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 7 00:00:48.069499 kernel: CPU features: detected: LSE atomic instructions
Jul 7 00:00:48.069503 kernel: CPU features: detected: Privileged Access Never
Jul 7 00:00:48.069508 kernel: CPU features: detected: Speculation barrier (SB)
Jul 7 00:00:48.069513 kernel: CPU features: detected: TLB range maintenance instructions
Jul 7 00:00:48.069517 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 7 00:00:48.069522 kernel: CPU features: detected: Scalable Vector Extension
Jul 7 00:00:48.069527 kernel: alternatives: applying system-wide alternatives
Jul 7 00:00:48.069532 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Jul 7 00:00:48.069537 kernel: SVE: maximum available vector length 16 bytes per vector
Jul 7 00:00:48.069542 kernel: SVE: default vector length 16 bytes per vector
Jul 7 00:00:48.069547 kernel: Memory: 3959092K/4194160K available (11136K kernel code, 2436K rwdata, 9076K rodata, 39488K init, 1038K bss, 213880K reserved, 16384K cma-reserved)
Jul 7 00:00:48.069551 kernel: devtmpfs: initialized
Jul 7 00:00:48.069556 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 00:00:48.069561 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 7 00:00:48.069565 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 7 00:00:48.069570 kernel: 0 pages in range for non-PLT usage
Jul 7 00:00:48.069575 kernel: 508432 pages in range for PLT usage
Jul 7 00:00:48.069580 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 00:00:48.069585 kernel: SMBIOS 3.1.0 present.
Jul 7 00:00:48.069590 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jul 7 00:00:48.069594 kernel: DMI: Memory slots populated: 2/2
Jul 7 00:00:48.069599 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 00:00:48.069604 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 7 00:00:48.069608 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 7 00:00:48.069613 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 7 00:00:48.069618 kernel: audit: initializing netlink subsys (disabled)
Jul 7 00:00:48.069623 kernel: audit: type=2000 audit(0.061:1): state=initialized audit_enabled=0 res=1
Jul 7 00:00:48.069628 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 00:00:48.069633 kernel: cpuidle: using governor menu
Jul 7 00:00:48.069637 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 7 00:00:48.069642 kernel: ASID allocator initialised with 32768 entries
Jul 7 00:00:48.069647 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 00:00:48.069651 kernel: Serial: AMBA PL011 UART driver
Jul 7 00:00:48.069656 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 7 00:00:48.069661 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 7 00:00:48.069666 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 7 00:00:48.069671 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 7 00:00:48.069675 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 00:00:48.069680 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 00:00:48.069685 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 7 00:00:48.069689 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 7 00:00:48.069694 kernel: ACPI: Added _OSI(Module Device)
Jul 7 00:00:48.069698 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 00:00:48.069704 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 00:00:48.069709 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 7 00:00:48.069713 kernel: ACPI: Interpreter enabled
Jul 7 00:00:48.069718 kernel: ACPI: Using GIC for interrupt routing
Jul 7 00:00:48.069723 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jul 7 00:00:48.069728 kernel: printk: legacy console [ttyAMA0] enabled
Jul 7 00:00:48.069732 kernel: printk: legacy bootconsole [pl11] disabled
Jul 7 00:00:48.069737 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jul 7 00:00:48.069742 kernel: ACPI: CPU0 has been hot-added
Jul 7 00:00:48.069747 kernel: ACPI: CPU1 has been hot-added
Jul 7 00:00:48.069752 kernel: iommu: Default domain type: Translated
Jul 7 00:00:48.069757 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 7 00:00:48.069761 kernel: efivars: Registered efivars operations
Jul 7 00:00:48.069766 kernel: vgaarb: loaded
Jul 7 00:00:48.069771 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 7 00:00:48.069775 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 00:00:48.069780 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 00:00:48.069785 kernel: pnp: PnP ACPI init
Jul 7 00:00:48.069790 kernel: pnp: PnP ACPI: found 0 devices
Jul 7 00:00:48.069795 kernel: NET: Registered PF_INET protocol family
Jul 7 00:00:48.069799 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 7 00:00:48.069804 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 7 00:00:48.069809 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 00:00:48.069814 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 7 00:00:48.069818 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 7 00:00:48.069823 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 7 00:00:48.069828 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 00:00:48.069833 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 00:00:48.069838 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 00:00:48.069843 kernel: PCI: CLS 0 bytes, default 64
Jul 7 00:00:48.069847 kernel: kvm [1]: HYP mode not available
Jul 7 00:00:48.069852 kernel: Initialise system trusted keyrings
Jul 7 00:00:48.069857 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 7 00:00:48.069861 kernel: Key type asymmetric registered
Jul 7 00:00:48.069866 kernel: Asymmetric key parser 'x509' registered
Jul 7 00:00:48.069870 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 7 00:00:48.069876 kernel: io scheduler mq-deadline registered
Jul 7 00:00:48.069881 kernel: io scheduler kyber registered
Jul 7 00:00:48.069886 kernel: io scheduler bfq registered
Jul 7 00:00:48.069890 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 00:00:48.069895 kernel: thunder_xcv, ver 1.0
Jul 7 00:00:48.069900 kernel: thunder_bgx, ver 1.0
Jul 7 00:00:48.069904 kernel: nicpf, ver 1.0
Jul 7 00:00:48.069909 kernel: nicvf, ver 1.0
Jul 7 00:00:48.070014 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 7 00:00:48.070066 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-07T00:00:47 UTC (1751846447)
Jul 7 00:00:48.070072 kernel: efifb: probing for efifb
Jul 7 00:00:48.070077 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 7 00:00:48.070082 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 7 00:00:48.070087 kernel: efifb: scrolling: redraw
Jul 7 00:00:48.070092 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 7 00:00:48.070096 kernel: Console: switching to colour frame buffer device 128x48
Jul 7 00:00:48.070101 kernel: fb0: EFI VGA frame buffer device
Jul 7 00:00:48.070107 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jul 7 00:00:48.070111 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 7 00:00:48.070116 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 7 00:00:48.070121 kernel: watchdog: NMI not fully supported
Jul 7 00:00:48.070126 kernel: watchdog: Hard watchdog permanently disabled
Jul 7 00:00:48.070130 kernel: NET: Registered PF_INET6 protocol family
Jul 7 00:00:48.070135 kernel: Segment Routing with IPv6
Jul 7 00:00:48.070140 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 00:00:48.070144 kernel: NET: Registered PF_PACKET protocol family
Jul 7 00:00:48.070150 kernel: Key type dns_resolver registered
Jul 7 00:00:48.070154 kernel: registered taskstats version 1
Jul 7 00:00:48.070159 kernel: Loading compiled-in X.509 certificates
Jul 7 00:00:48.070164 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: f8c1d02496b1c3f2ac4a0c4b5b2a55d3dc0ca718'
Jul 7 00:00:48.070168 kernel: Demotion targets for Node 0: null
Jul 7 00:00:48.070173 kernel: Key type .fscrypt registered
Jul 7 00:00:48.070177 kernel: Key type fscrypt-provisioning registered
Jul 7 00:00:48.070182 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 7 00:00:48.070187 kernel: ima: Allocated hash algorithm: sha1
Jul 7 00:00:48.070192 kernel: ima: No architecture policies found
Jul 7 00:00:48.070197 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 7 00:00:48.070202 kernel: clk: Disabling unused clocks
Jul 7 00:00:48.070206 kernel: PM: genpd: Disabling unused power domains
Jul 7 00:00:48.070211 kernel: Warning: unable to open an initial console.
Jul 7 00:00:48.070216 kernel: Freeing unused kernel memory: 39488K
Jul 7 00:00:48.070220 kernel: Run /init as init process
Jul 7 00:00:48.070225 kernel: with arguments:
Jul 7 00:00:48.070230 kernel: /init
Jul 7 00:00:48.070235 kernel: with environment:
Jul 7 00:00:48.070239 kernel: HOME=/
Jul 7 00:00:48.070244 kernel: TERM=linux
Jul 7 00:00:48.070249 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 00:00:48.070254 systemd[1]: Successfully made /usr/ read-only.
Jul 7 00:00:48.070261 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 00:00:48.070266 systemd[1]: Detected virtualization microsoft.
Jul 7 00:00:48.070272 systemd[1]: Detected architecture arm64.
Jul 7 00:00:48.070277 systemd[1]: Running in initrd.
Jul 7 00:00:48.070282 systemd[1]: No hostname configured, using default hostname.
Jul 7 00:00:48.070287 systemd[1]: Hostname set to .
Jul 7 00:00:48.070292 systemd[1]: Initializing machine ID from random generator.
Jul 7 00:00:48.070297 systemd[1]: Queued start job for default target initrd.target.
Jul 7 00:00:48.070302 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 00:00:48.070307 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 00:00:48.070313 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 00:00:48.070319 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 00:00:48.070324 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 00:00:48.070330 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 00:00:48.070335 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 00:00:48.070341 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 00:00:48.070346 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 00:00:48.070352 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 00:00:48.070357 systemd[1]: Reached target paths.target - Path Units.
Jul 7 00:00:48.070362 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 00:00:48.070367 systemd[1]: Reached target swap.target - Swaps.
Jul 7 00:00:48.070372 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 00:00:48.070377 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 00:00:48.070382 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 00:00:48.070387 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 00:00:48.070393 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 7 00:00:48.070398 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 00:00:48.070404 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 00:00:48.070409 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 00:00:48.070414 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 00:00:48.070419 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 00:00:48.070424 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 00:00:48.070429 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 00:00:48.070435 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 7 00:00:48.070441 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 00:00:48.070446 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 00:00:48.070451 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 00:00:48.070466 systemd-journald[224]: Collecting audit messages is disabled.
Jul 7 00:00:48.070481 systemd-journald[224]: Journal started
Jul 7 00:00:48.070494 systemd-journald[224]: Runtime Journal (/run/log/journal/cfd677f2cb2d41da938bf3c1aa09c129) is 8M, max 78.5M, 70.5M free.
Jul 7 00:00:48.083549 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:00:48.078464 systemd-modules-load[226]: Inserted module 'overlay'
Jul 7 00:00:48.095670 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 00:00:48.099180 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 00:00:48.105778 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 00:00:48.139196 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 00:00:48.139214 kernel: Bridge firewalling registered
Jul 7 00:00:48.122711 systemd-modules-load[226]: Inserted module 'br_netfilter'
Jul 7 00:00:48.125509 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 00:00:48.129482 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 00:00:48.134467 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:00:48.145437 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 00:00:48.157666 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 00:00:48.168177 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 00:00:48.187368 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 00:00:48.209322 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 00:00:48.225615 systemd-tmpfiles[248]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 7 00:00:48.228003 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 00:00:48.247945 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 00:00:48.253693 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 00:00:48.267330 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 00:00:48.292042 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 00:00:48.299100 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 00:00:48.316204 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=d1bbaf8ae8f23de11dc703e14022523825f85f007c0c35003d7559228cbdda22
Jul 7 00:00:48.354453 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 00:00:48.374251 systemd-resolved[264]: Positive Trust Anchors:
Jul 7 00:00:48.374263 systemd-resolved[264]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 00:00:48.374283 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 00:00:48.375976 systemd-resolved[264]: Defaulting to hostname 'linux'.
Jul 7 00:00:48.377248 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 00:00:48.388437 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 00:00:48.441931 kernel: SCSI subsystem initialized
Jul 7 00:00:48.447929 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 00:00:48.454929 kernel: iscsi: registered transport (tcp)
Jul 7 00:00:48.468176 kernel: iscsi: registered transport (qla4xxx)
Jul 7 00:00:48.468214 kernel: QLogic iSCSI HBA Driver
Jul 7 00:00:48.481471 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 00:00:48.501032 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 00:00:48.513232 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 00:00:48.555109 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 00:00:48.562126 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 00:00:48.625936 kernel: raid6: neonx8 gen() 18533 MB/s
Jul 7 00:00:48.644927 kernel: raid6: neonx4 gen() 18571 MB/s
Jul 7 00:00:48.663923 kernel: raid6: neonx2 gen() 17085 MB/s
Jul 7 00:00:48.684021 kernel: raid6: neonx1 gen() 15017 MB/s
Jul 7 00:00:48.702942 kernel: raid6: int64x8 gen() 10545 MB/s
Jul 7 00:00:48.722017 kernel: raid6: int64x4 gen() 10615 MB/s
Jul 7 00:00:48.741926 kernel: raid6: int64x2 gen() 8979 MB/s
Jul 7 00:00:48.763650 kernel: raid6: int64x1 gen() 7015 MB/s
Jul 7 00:00:48.763657 kernel: raid6: using algorithm neonx4 gen() 18571 MB/s
Jul 7 00:00:48.786225 kernel: raid6: .... xor() 15139 MB/s, rmw enabled
Jul 7 00:00:48.786267 kernel: raid6: using neon recovery algorithm
Jul 7 00:00:48.794803 kernel: xor: measuring software checksum speed
Jul 7 00:00:48.794816 kernel: 8regs : 28631 MB/sec
Jul 7 00:00:48.797666 kernel: 32regs : 28793 MB/sec
Jul 7 00:00:48.806258 kernel: arm64_neon : 34127 MB/sec
Jul 7 00:00:48.806293 kernel: xor: using function: arm64_neon (34127 MB/sec)
Jul 7 00:00:48.844014 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 00:00:48.849250 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 00:00:48.859764 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 00:00:48.880791 systemd-udevd[475]: Using default interface naming scheme 'v255'.
Jul 7 00:00:48.889008 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 00:00:48.897649 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 00:00:48.931474 dracut-pre-trigger[490]: rd.md=0: removing MD RAID activation
Jul 7 00:00:48.954522 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 00:00:48.962534 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 00:00:49.007876 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 00:00:49.022670 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 00:00:49.081190 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 00:00:49.081884 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:00:49.097714 kernel: hv_vmbus: Vmbus version:5.3
Jul 7 00:00:49.097377 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:00:49.106810 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:00:49.133999 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 7 00:00:49.134017 kernel: hv_vmbus: registering driver hid_hyperv
Jul 7 00:00:49.134023 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 7 00:00:49.134030 kernel: hv_vmbus: registering driver hv_netvsc
Jul 7 00:00:49.134036 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jul 7 00:00:49.130671 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 7 00:00:49.147942 kernel: hv_vmbus: registering driver hyperv_keyboard
Jul 7 00:00:49.138796 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 00:00:49.180573 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jul 7 00:00:49.180693 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jul 7 00:00:49.180706 kernel: PTP clock support registered
Jul 7 00:00:49.180713 kernel: hv_utils: Registering HyperV Utility Driver
Jul 7 00:00:49.138861 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:00:49.274747 kernel: hv_vmbus: registering driver hv_utils
Jul 7 00:00:49.274767 kernel: hv_utils: Heartbeat IC version 3.0
Jul 7 00:00:49.274775 kernel: hv_utils: Shutdown IC version 3.2
Jul 7 00:00:49.274782 kernel: hv_utils: TimeSync IC version 4.0
Jul 7 00:00:49.274788 kernel: hv_vmbus: registering driver hv_storvsc
Jul 7 00:00:49.274794 kernel: scsi host0: storvsc_host_t
Jul 7 00:00:49.274931 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jul 7 00:00:49.274952 kernel: scsi host1: storvsc_host_t
Jul 7 00:00:49.275033 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jul 7 00:00:49.172793 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:00:49.238943 systemd-resolved[264]: Clock change detected. Flushing caches.
Jul 7 00:00:49.279540 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:00:49.314968 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jul 7 00:00:49.315116 kernel: hv_netvsc 002248bb-a855-0022-48bb-a855002248bb eth0: VF slot 1 added
Jul 7 00:00:49.315188 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jul 7 00:00:49.315252 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 7 00:00:49.320707 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jul 7 00:00:49.320830 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jul 7 00:00:49.327385 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 7 00:00:49.334491 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#141 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 7 00:00:49.346375 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 00:00:49.346402 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 7 00:00:49.349491 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jul 7 00:00:49.353504 kernel: hv_vmbus: registering driver hv_pci
Jul 7 00:00:49.353540 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 7 00:00:49.362607 kernel: hv_pci 5bfb688e-c7a8-4000-8db1-40a5e80d8ee8: PCI VMBus probing: Using version 0x10004
Jul 7 00:00:49.372780 kernel: hv_pci 5bfb688e-c7a8-4000-8db1-40a5e80d8ee8: PCI host bridge to bus c7a8:00
Jul 7 00:00:49.372904 kernel: pci_bus c7a8:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jul 7 00:00:49.372984 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jul 7 00:00:49.373054 kernel: pci_bus c7a8:00: No busn resource found for root bus, will use [bus 00-ff]
Jul 7 00:00:49.383772 kernel: pci c7a8:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Jul 7 00:00:49.389518 kernel: pci c7a8:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 7 00:00:49.393511 kernel: pci c7a8:00:02.0: enabling Extended Tags
Jul 7 00:00:49.408504 kernel: pci c7a8:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at c7a8:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Jul 7 00:00:49.418961 kernel: pci_bus c7a8:00: busn_res: [bus 00-ff] end is updated to 00
Jul 7 00:00:49.419090 kernel: pci c7a8:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Jul 7 00:00:49.436519 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jul 7 00:00:49.456576 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#166 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jul 7 00:00:49.481350 kernel: mlx5_core c7a8:00:02.0: enabling device (0000 -> 0002)
Jul 7 00:00:49.488677 kernel: mlx5_core c7a8:00:02.0: PTM is not supported by PCIe
Jul 7 00:00:49.488766 kernel: mlx5_core c7a8:00:02.0: firmware version: 16.30.5006
Jul 7 00:00:49.658024 kernel: hv_netvsc 002248bb-a855-0022-48bb-a855002248bb eth0: VF registering: eth1
Jul 7 00:00:49.658183 kernel: mlx5_core c7a8:00:02.0 eth1: joined to eth0
Jul 7 00:00:49.664572 kernel: mlx5_core c7a8:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jul 7 00:00:49.674512 kernel: mlx5_core c7a8:00:02.0 enP51112s1: renamed from eth1
Jul 7 00:00:49.848055 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jul 7 00:00:49.896818 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 7 00:00:49.924157 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jul 7 00:00:49.975939 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jul 7 00:00:49.982548 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jul 7 00:00:49.995111 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 00:00:50.015510 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 00:00:50.020823 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 00:00:50.048653 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 7 00:00:50.026211 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 00:00:50.047574 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 00:00:50.053614 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 00:00:50.075499 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 00:00:50.084507 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#148 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 7 00:00:50.088203 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 00:00:50.097505 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 00:00:51.109021 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#169 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 7 00:00:51.120499 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 00:00:51.122429 disk-uuid[659]: The operation has completed successfully.
Jul 7 00:00:51.176566 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 00:00:51.176651 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 00:00:51.207731 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 00:00:51.224521 sh[825]: Success
Jul 7 00:00:51.260266 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 00:00:51.260310 kernel: device-mapper: uevent: version 1.0.3
Jul 7 00:00:51.266496 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 7 00:00:51.278627 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jul 7 00:00:51.469816 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 00:00:51.479909 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 00:00:51.486634 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 00:00:51.510667 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 7 00:00:51.510701 kernel: BTRFS: device fsid 2cfafe0a-eb24-4e1d-b9c9-dec7de7e4c4d devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (843)
Jul 7 00:00:51.516266 kernel: BTRFS info (device dm-0): first mount of filesystem 2cfafe0a-eb24-4e1d-b9c9-dec7de7e4c4d
Jul 7 00:00:51.520693 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 7 00:00:51.523846 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 7 00:00:51.715765 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 00:00:51.720152 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 00:00:51.728723 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 00:00:51.729562 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 00:00:51.756043 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 00:00:51.785520 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (878)
Jul 7 00:00:51.797409 kernel: BTRFS info (device sda6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 7 00:00:51.797444 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 00:00:51.800843 kernel: BTRFS info (device sda6): using free-space-tree
Jul 7 00:00:51.823517 kernel: BTRFS info (device sda6): last unmount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 7 00:00:51.824772 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 00:00:51.832244 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 00:00:51.883371 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 00:00:51.895365 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 00:00:51.923315 systemd-networkd[1012]: lo: Link UP
Jul 7 00:00:51.923326 systemd-networkd[1012]: lo: Gained carrier
Jul 7 00:00:51.924561 systemd-networkd[1012]: Enumeration completed
Jul 7 00:00:51.926252 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 00:00:51.929749 systemd-networkd[1012]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:00:51.929752 systemd-networkd[1012]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 00:00:51.934200 systemd[1]: Reached target network.target - Network.
Jul 7 00:00:51.986280 kernel: mlx5_core c7a8:00:02.0 enP51112s1: Link up
Jul 7 00:00:52.018534 kernel: hv_netvsc 002248bb-a855-0022-48bb-a855002248bb eth0: Data path switched to VF: enP51112s1
Jul 7 00:00:52.018748 systemd-networkd[1012]: enP51112s1: Link UP
Jul 7 00:00:52.018809 systemd-networkd[1012]: eth0: Link UP
Jul 7 00:00:52.018893 systemd-networkd[1012]: eth0: Gained carrier
Jul 7 00:00:52.018900 systemd-networkd[1012]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:00:52.028466 systemd-networkd[1012]: enP51112s1: Gained carrier
Jul 7 00:00:52.045527 systemd-networkd[1012]: eth0: DHCPv4 address 10.200.20.36/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 7 00:00:52.523785 ignition[953]: Ignition 2.21.0
Jul 7 00:00:52.523798 ignition[953]: Stage: fetch-offline
Jul 7 00:00:52.523865 ignition[953]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:00:52.523871 ignition[953]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 00:00:52.533509 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 00:00:52.523963 ignition[953]: parsed url from cmdline: ""
Jul 7 00:00:52.542685 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 7 00:00:52.523965 ignition[953]: no config URL provided
Jul 7 00:00:52.523968 ignition[953]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 00:00:52.523975 ignition[953]: no config at "/usr/lib/ignition/user.ign"
Jul 7 00:00:52.523978 ignition[953]: failed to fetch config: resource requires networking
Jul 7 00:00:52.524176 ignition[953]: Ignition finished successfully
Jul 7 00:00:52.567399 ignition[1022]: Ignition 2.21.0
Jul 7 00:00:52.567403 ignition[1022]: Stage: fetch
Jul 7 00:00:52.567585 ignition[1022]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:00:52.567592 ignition[1022]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 00:00:52.567663 ignition[1022]: parsed url from cmdline: ""
Jul 7 00:00:52.567665 ignition[1022]: no config URL provided
Jul 7 00:00:52.567668 ignition[1022]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 00:00:52.567673 ignition[1022]: no config at "/usr/lib/ignition/user.ign"
Jul 7 00:00:52.567708 ignition[1022]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jul 7 00:00:52.647408 ignition[1022]: GET result: OK
Jul 7 00:00:52.647493 ignition[1022]: config has been read from IMDS userdata
Jul 7 00:00:52.647513 ignition[1022]: parsing config with SHA512: 7523b85c93c68ae8ca6fd17f159070866cb50eda4f72f336f02f5a90f53832ab70d14d20f0afce5418cb71af019958ed32d999fde872863efc3211f9d7460dab
Jul 7 00:00:52.650037 unknown[1022]: fetched base config from "system"
Jul 7 00:00:52.650341 ignition[1022]: fetch: fetch complete
Jul 7 00:00:52.650041 unknown[1022]: fetched base config from "system"
Jul 7 00:00:52.650347 ignition[1022]: fetch: fetch passed
Jul 7 00:00:52.650044 unknown[1022]: fetched user config from "azure"
Jul 7 00:00:52.650384 ignition[1022]: Ignition finished successfully
Jul 7 00:00:52.654906 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 7 00:00:52.660281 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 00:00:52.696545 ignition[1029]: Ignition 2.21.0
Jul 7 00:00:52.696558 ignition[1029]: Stage: kargs
Jul 7 00:00:52.696723 ignition[1029]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:00:52.696731 ignition[1029]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 00:00:52.705579 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 00:00:52.699168 ignition[1029]: kargs: kargs passed
Jul 7 00:00:52.719136 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 00:00:52.702054 ignition[1029]: Ignition finished successfully
Jul 7 00:00:52.746527 ignition[1035]: Ignition 2.21.0
Jul 7 00:00:52.746536 ignition[1035]: Stage: disks
Jul 7 00:00:52.746697 ignition[1035]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:00:52.753017 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 00:00:52.746704 ignition[1035]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 00:00:52.762567 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 00:00:52.750288 ignition[1035]: disks: disks passed
Jul 7 00:00:52.771150 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 00:00:52.750333 ignition[1035]: Ignition finished successfully
Jul 7 00:00:52.784413 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 00:00:52.793619 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 00:00:52.800617 systemd[1]: Reached target basic.target - Basic System.
Jul 7 00:00:52.810160 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 00:00:52.865517 systemd-fsck[1043]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Jul 7 00:00:52.872529 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 00:00:52.885320 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 00:00:53.047502 kernel: EXT4-fs (sda9): mounted filesystem 8d88df29-f94d-4ab8-8fb6-af875603e6d4 r/w with ordered data mode. Quota mode: none.
Jul 7 00:00:53.048412 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 00:00:53.055602 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 00:00:53.074271 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 00:00:53.088573 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 00:00:53.097258 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 7 00:00:53.101808 systemd-networkd[1012]: eth0: Gained IPv6LL
Jul 7 00:00:53.114077 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 00:00:53.144094 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1057)
Jul 7 00:00:53.144114 kernel: BTRFS info (device sda6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 7 00:00:53.144121 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 00:00:53.144128 kernel: BTRFS info (device sda6): using free-space-tree
Jul 7 00:00:53.114112 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 00:00:53.128327 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 00:00:53.148642 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 00:00:53.167814 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 00:00:53.481581 systemd-networkd[1012]: enP51112s1: Gained IPv6LL
Jul 7 00:00:53.505739 coreos-metadata[1059]: Jul 07 00:00:53.505 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 7 00:00:53.513034 coreos-metadata[1059]: Jul 07 00:00:53.513 INFO Fetch successful
Jul 7 00:00:53.517071 coreos-metadata[1059]: Jul 07 00:00:53.517 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jul 7 00:00:53.533006 coreos-metadata[1059]: Jul 07 00:00:53.532 INFO Fetch successful
Jul 7 00:00:53.544590 coreos-metadata[1059]: Jul 07 00:00:53.544 INFO wrote hostname ci-4372.0.1-a-7cca70db3c to /sysroot/etc/hostname
Jul 7 00:00:53.552369 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 7 00:00:53.698133 initrd-setup-root[1089]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 00:00:53.741108 initrd-setup-root[1096]: cut: /sysroot/etc/group: No such file or directory
Jul 7 00:00:53.757074 initrd-setup-root[1103]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 00:00:53.762213 initrd-setup-root[1110]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 00:00:54.508325 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 7 00:00:54.514328 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 00:00:54.531008 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 00:00:54.540686 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 00:00:54.550439 kernel: BTRFS info (device sda6): last unmount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 7 00:00:54.567534 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 7 00:00:54.577182 ignition[1178]: INFO : Ignition 2.21.0
Jul 7 00:00:54.577182 ignition[1178]: INFO : Stage: mount
Jul 7 00:00:54.577182 ignition[1178]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 00:00:54.577182 ignition[1178]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 00:00:54.601883 ignition[1178]: INFO : mount: mount passed
Jul 7 00:00:54.601883 ignition[1178]: INFO : Ignition finished successfully
Jul 7 00:00:54.583219 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 7 00:00:54.590073 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 7 00:00:54.616647 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 00:00:54.645835 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1190)
Jul 7 00:00:54.645868 kernel: BTRFS info (device sda6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d
Jul 7 00:00:54.649911 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 00:00:54.653130 kernel: BTRFS info (device sda6): using free-space-tree
Jul 7 00:00:54.655393 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 00:00:54.679614 ignition[1207]: INFO : Ignition 2.21.0 Jul 7 00:00:54.679614 ignition[1207]: INFO : Stage: files Jul 7 00:00:54.686896 ignition[1207]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:00:54.686896 ignition[1207]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 7 00:00:54.686896 ignition[1207]: DEBUG : files: compiled without relabeling support, skipping Jul 7 00:00:54.701850 ignition[1207]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 7 00:00:54.701850 ignition[1207]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 7 00:00:54.731167 ignition[1207]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 7 00:00:54.736774 ignition[1207]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 7 00:00:54.736774 ignition[1207]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 7 00:00:54.733844 unknown[1207]: wrote ssh authorized keys file for user: core Jul 7 00:00:54.756879 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 7 00:00:54.764945 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jul 7 00:00:54.797173 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 7 00:00:54.982410 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 7 00:00:54.982410 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 7 00:00:54.997636 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jul 7 00:00:55.420866 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 7 00:00:55.493667 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 7 00:00:55.493667 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 7 00:00:55.508167 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 7 00:00:55.508167 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 7 00:00:55.508167 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 7 00:00:55.508167 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 00:00:55.508167 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 00:00:55.508167 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 00:00:55.508167 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 00:00:55.508167 
ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 00:00:55.508167 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 00:00:55.508167 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 7 00:00:55.582413 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 7 00:00:55.582413 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 7 00:00:55.582413 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jul 7 00:00:56.230764 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 7 00:00:56.454426 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 7 00:00:56.454426 ignition[1207]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 7 00:00:56.489142 ignition[1207]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 00:00:56.522963 ignition[1207]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 00:00:56.522963 ignition[1207]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 7 00:00:56.545576 ignition[1207]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jul 7 00:00:56.545576 ignition[1207]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jul 7 00:00:56.545576 ignition[1207]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 7 00:00:56.545576 ignition[1207]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 7 00:00:56.545576 ignition[1207]: INFO : files: files passed Jul 7 00:00:56.545576 ignition[1207]: INFO : Ignition finished successfully Jul 7 00:00:56.531742 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 7 00:00:56.537164 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 7 00:00:56.563577 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 7 00:00:56.578924 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 7 00:00:56.606819 initrd-setup-root-after-ignition[1237]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:00:56.606819 initrd-setup-root-after-ignition[1237]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:00:56.579038 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
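Editor's note: the files stage above fetches archives over HTTPS, writes a symlink under /etc/extensions, and enables prepare-helm.service via a preset. The sketch below is a hypothetical Ignition (spec 3.x) fragment that would produce operations of this shape, expressed as a Python dict; the paths and URLs echo the log, but the actual config delivered to this node is not shown in the journal and the unit contents are elided.

```python
# Hypothetical Ignition v3 config fragment matching the kinds of operations logged above.
# Paths/URLs are copied from the journal; this is illustrative, not the node's real config.
import json

config = {
    "ignition": {"version": "3.4.0"},
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.17.0-linux-arm64.tar.gz",
                "mode": 420,  # 0644
                "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz"},
            }
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw",
            }
        ],
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service", "enabled": True, "contents": "[Unit]\n..."}
        ]
    },
}

print(json.dumps(config, indent=2))
```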
Jul 7 00:00:56.640146 initrd-setup-root-after-ignition[1241]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:00:56.603578 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 00:00:56.612352 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 7 00:00:56.623179 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 7 00:00:56.669770 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 7 00:00:56.669874 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 7 00:00:56.679755 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 7 00:00:56.688633 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 7 00:00:56.696662 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 7 00:00:56.697284 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 7 00:00:56.734326 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 00:00:56.740969 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 7 00:00:56.764149 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:00:56.769206 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 00:00:56.779421 systemd[1]: Stopped target timers.target - Timer Units. Jul 7 00:00:56.787695 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 7 00:00:56.787788 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 00:00:56.799392 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 7 00:00:56.804513 systemd[1]: Stopped target basic.target - Basic System. Jul 7 00:00:56.813239 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 7 00:00:56.821572 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 00:00:56.829999 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 7 00:00:56.838789 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 7 00:00:56.847566 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 7 00:00:56.855664 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 00:00:56.864944 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 7 00:00:56.872721 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 7 00:00:56.881230 systemd[1]: Stopped target swap.target - Swaps. Jul 7 00:00:56.888351 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 7 00:00:56.888445 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 7 00:00:56.899346 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 7 00:00:56.903797 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 00:00:56.913028 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 7 00:00:56.913091 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 00:00:56.921578 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 7 00:00:56.921664 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Jul 7 00:00:56.935070 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 7 00:00:56.935155 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 00:00:56.940304 systemd[1]: ignition-files.service: Deactivated successfully. Jul 7 00:00:56.940381 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 7 00:00:56.947869 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 7 00:00:56.947941 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 7 00:00:56.959022 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 7 00:00:57.027579 ignition[1261]: INFO : Ignition 2.21.0 Jul 7 00:00:57.027579 ignition[1261]: INFO : Stage: umount Jul 7 00:00:57.027579 ignition[1261]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:00:57.027579 ignition[1261]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 7 00:00:57.027579 ignition[1261]: INFO : umount: umount passed Jul 7 00:00:57.027579 ignition[1261]: INFO : Ignition finished successfully Jul 7 00:00:56.972676 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 7 00:00:56.972787 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 00:00:56.991193 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 7 00:00:56.997702 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 7 00:00:56.997832 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 00:00:57.010972 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 7 00:00:57.011067 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 00:00:57.017348 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 7 00:00:57.023325 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 7 00:00:57.023624 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 7 00:00:57.032410 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 7 00:00:57.032487 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 7 00:00:57.039863 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 7 00:00:57.039944 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 7 00:00:57.052012 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 7 00:00:57.052064 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 7 00:00:57.060994 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 7 00:00:57.061028 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 7 00:00:57.069147 systemd[1]: Stopped target network.target - Network. Jul 7 00:00:57.076173 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 7 00:00:57.076215 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 00:00:57.085004 systemd[1]: Stopped target paths.target - Path Units. Jul 7 00:00:57.093644 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 7 00:00:57.097498 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 00:00:57.103781 systemd[1]: Stopped target slices.target - Slice Units. Jul 7 00:00:57.111300 systemd[1]: Stopped target sockets.target - Socket Units. Jul 7 00:00:57.119409 systemd[1]: iscsid.socket: Deactivated successfully. 
Jul 7 00:00:57.119451 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 00:00:57.126875 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 7 00:00:57.126902 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 00:00:57.135187 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 7 00:00:57.135230 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 00:00:57.142140 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 7 00:00:57.142165 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 00:00:57.152625 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 00:00:57.160826 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 00:00:57.169084 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 00:00:57.169168 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 7 00:00:57.179612 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 7 00:00:57.179707 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 00:00:57.377887 kernel: hv_netvsc 002248bb-a855-0022-48bb-a855002248bb eth0: Data path switched from VF: enP51112s1 Jul 7 00:00:57.186129 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 00:00:57.186265 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 00:00:57.202381 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 7 00:00:57.202551 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 7 00:00:57.202638 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 7 00:00:57.214656 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 7 00:00:57.215259 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 7 00:00:57.221605 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 00:00:57.221639 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 7 00:00:57.231304 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 7 00:00:57.244532 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 00:00:57.244602 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 00:00:57.254048 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 00:00:57.254103 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:00:57.263984 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 00:00:57.264021 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 00:00:57.268401 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 00:00:57.268443 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 00:00:57.279938 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:00:57.288087 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 7 00:00:57.288148 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 7 00:00:57.307331 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 00:00:57.307472 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jul 7 00:00:57.316387 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 7 00:00:57.316428 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 00:00:57.324949 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 7 00:00:57.324972 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:00:57.333238 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 00:00:57.333275 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 00:00:57.345960 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 7 00:00:57.345994 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 00:00:57.357576 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 00:00:57.560899 systemd-journald[224]: Received SIGTERM from PID 1 (systemd). Jul 7 00:00:57.357606 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 00:00:57.382418 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 00:00:57.399720 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 7 00:00:57.399790 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 00:00:57.408557 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 00:00:57.408598 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 00:00:57.418801 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 00:00:57.418843 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:00:57.429820 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 7 00:00:57.429867 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 7 00:00:57.429893 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 7 00:00:57.430152 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 00:00:57.432292 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 00:00:57.464572 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 00:00:57.464863 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 7 00:00:57.472292 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 00:00:57.485622 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 00:00:57.502082 systemd[1]: Switching root. 
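Editor's note: the teardown above stops credential mounts with names like run-credentials-systemd\x2dresolved.service.mount. Those names use systemd's unit-name escaping, where path separators become "-" and bytes outside [a-zA-Z0-9:_.] (including "-" itself) become \xNN. The Python below is a rough approximation of that escaping (cf. systemd-escape --path) for reading such names; it is not systemd's code and glosses over corner cases.

```python
# Rough approximation of systemd unit-name (path) escaping, to explain names such as
# run-credentials-systemd\x2dresolved.service.mount in the log above.
ALLOWED = set("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789:_.")

def escape_path(path: str) -> str:
    path = path.strip("/") or "-"          # the root path itself becomes "-"
    out = []
    for i, b in enumerate(path.encode()):
        c = chr(b)
        if c == "/":
            out.append("-")                # path separators turn into dashes
        elif c in ALLOWED and not (i == 0 and c == "."):
            out.append(c)
        else:
            out.append(f"\\x{b:02x}")      # everything else (incl. "-") is hex-escaped
    return "".join(out)

# "/run/credentials/systemd-resolved.service" -> "run-credentials-systemd\x2dresolved.service"
print(escape_path("/run/credentials/systemd-resolved.service"))
```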
Jul 7 00:00:57.642954 systemd-journald[224]: Journal stopped Jul 7 00:01:01.626361 kernel: SELinux: policy capability network_peer_controls=1 Jul 7 00:01:01.626380 kernel: SELinux: policy capability open_perms=1 Jul 7 00:01:01.626387 kernel: SELinux: policy capability extended_socket_class=1 Jul 7 00:01:01.626394 kernel: SELinux: policy capability always_check_network=0 Jul 7 00:01:01.626400 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 7 00:01:01.626405 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 7 00:01:01.626411 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 7 00:01:01.626416 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 7 00:01:01.626422 kernel: SELinux: policy capability userspace_initial_context=0 Jul 7 00:01:01.626427 kernel: audit: type=1403 audit(1751846458.877:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 7 00:01:01.626433 systemd[1]: Successfully loaded SELinux policy in 128.215ms. Jul 7 00:01:01.626441 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.775ms. Jul 7 00:01:01.626448 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 7 00:01:01.626454 systemd[1]: Detected virtualization microsoft. Jul 7 00:01:01.626460 systemd[1]: Detected architecture arm64. Jul 7 00:01:01.626467 systemd[1]: Detected first boot. Jul 7 00:01:01.626473 systemd[1]: Hostname set to . Jul 7 00:01:01.626487 systemd[1]: Initializing machine ID from random generator. Jul 7 00:01:01.626493 zram_generator::config[1303]: No configuration found. Jul 7 00:01:01.626499 kernel: NET: Registered PF_VSOCK protocol family Jul 7 00:01:01.626505 systemd[1]: Populated /etc with preset unit settings. Jul 7 00:01:01.626511 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 7 00:01:01.626518 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 7 00:01:01.626524 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 7 00:01:01.626531 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 7 00:01:01.626537 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 7 00:01:01.626543 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 7 00:01:01.626549 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 7 00:01:01.626555 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 7 00:01:01.626562 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 7 00:01:01.626568 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 7 00:01:01.626574 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 7 00:01:01.626580 systemd[1]: Created slice user.slice - User and Session Slice. Jul 7 00:01:01.626586 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 00:01:01.626592 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jul 7 00:01:01.626598 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 7 00:01:01.626605 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 7 00:01:01.626611 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 7 00:01:01.626617 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 00:01:01.626624 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 7 00:01:01.626631 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 00:01:01.626637 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 00:01:01.626643 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 7 00:01:01.626649 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 7 00:01:01.626655 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 7 00:01:01.626663 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 7 00:01:01.626669 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 00:01:01.626675 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 00:01:01.626681 systemd[1]: Reached target slices.target - Slice Units. Jul 7 00:01:01.626687 systemd[1]: Reached target swap.target - Swaps. Jul 7 00:01:01.626693 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 7 00:01:01.626699 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 7 00:01:01.626706 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 7 00:01:01.626712 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 00:01:01.626718 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 00:01:01.626724 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:01:01.626731 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 7 00:01:01.626737 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 7 00:01:01.626744 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 7 00:01:01.626750 systemd[1]: Mounting media.mount - External Media Directory... Jul 7 00:01:01.626756 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 7 00:01:01.626762 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 7 00:01:01.626769 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 7 00:01:01.626775 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 7 00:01:01.626781 systemd[1]: Reached target machines.target - Containers. Jul 7 00:01:01.626787 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 7 00:01:01.626795 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:01:01.626801 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 00:01:01.626807 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Jul 7 00:01:01.626814 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:01:01.626820 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 00:01:01.626826 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:01:01.626832 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 7 00:01:01.626846 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:01:01.626852 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 7 00:01:01.626859 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 7 00:01:01.626865 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 7 00:01:01.626871 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 7 00:01:01.626877 systemd[1]: Stopped systemd-fsck-usr.service. Jul 7 00:01:01.626884 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:01:01.626891 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 00:01:01.626897 kernel: fuse: init (API version 7.41) Jul 7 00:01:01.626902 kernel: loop: module loaded Jul 7 00:01:01.626908 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 00:01:01.626915 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 00:01:01.626921 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 7 00:01:01.626927 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 7 00:01:01.626934 kernel: ACPI: bus type drm_connector registered Jul 7 00:01:01.626940 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 00:01:01.626946 systemd[1]: verity-setup.service: Deactivated successfully. Jul 7 00:01:01.626952 systemd[1]: Stopped verity-setup.service. Jul 7 00:01:01.626970 systemd-journald[1407]: Collecting audit messages is disabled. Jul 7 00:01:01.626983 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 7 00:01:01.626990 systemd-journald[1407]: Journal started Jul 7 00:01:01.627004 systemd-journald[1407]: Runtime Journal (/run/log/journal/a09929da309144fe9cd9479393604f7f) is 8M, max 78.5M, 70.5M free. Jul 7 00:01:00.903932 systemd[1]: Queued start job for default target multi-user.target. Jul 7 00:01:00.910864 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 7 00:01:00.911206 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 7 00:01:00.911449 systemd[1]: systemd-journald.service: Consumed 2.457s CPU time. Jul 7 00:01:01.645596 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 00:01:01.647355 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 7 00:01:01.651889 systemd[1]: Mounted media.mount - External Media Directory. Jul 7 00:01:01.655764 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 7 00:01:01.659914 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 7 00:01:01.664971 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Jul 7 00:01:01.670504 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 7 00:01:01.675217 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 00:01:01.680615 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 7 00:01:01.680736 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 7 00:01:01.685728 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:01:01.685863 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:01:01.690818 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 00:01:01.690948 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 00:01:01.695593 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:01:01.695705 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:01:01.701012 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 7 00:01:01.701127 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 7 00:01:01.706829 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:01:01.706965 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:01:01.711733 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 00:01:01.716326 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 00:01:01.721679 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 7 00:01:01.727352 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 7 00:01:01.732607 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 00:01:01.745469 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 00:01:01.751050 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 7 00:01:01.761562 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 7 00:01:01.766172 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 7 00:01:01.766254 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 00:01:01.771520 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 7 00:01:01.777191 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 7 00:01:01.781092 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:01:01.792809 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 7 00:01:01.797986 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 7 00:01:01.803049 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 00:01:01.803928 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 7 00:01:01.808581 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 00:01:01.809415 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jul 7 00:01:01.815637 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 7 00:01:01.821583 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 7 00:01:01.828307 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 7 00:01:01.833534 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 7 00:01:01.840207 systemd-journald[1407]: Time spent on flushing to /var/log/journal/a09929da309144fe9cd9479393604f7f is 9.218ms for 942 entries. Jul 7 00:01:01.840207 systemd-journald[1407]: System Journal (/var/log/journal/a09929da309144fe9cd9479393604f7f) is 8M, max 2.6G, 2.6G free. Jul 7 00:01:01.894251 systemd-journald[1407]: Received client request to flush runtime journal. Jul 7 00:01:01.894292 kernel: loop0: detected capacity change from 0 to 107312 Jul 7 00:01:01.845760 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 7 00:01:01.852060 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 7 00:01:01.860466 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 7 00:01:01.895803 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 7 00:01:01.903687 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:01:01.936173 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 7 00:01:01.937340 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 7 00:01:02.165087 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 7 00:01:02.170638 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 00:01:02.196508 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 7 00:01:02.238503 kernel: loop1: detected capacity change from 0 to 138376 Jul 7 00:01:02.273320 systemd-tmpfiles[1458]: ACLs are not supported, ignoring. Jul 7 00:01:02.273331 systemd-tmpfiles[1458]: ACLs are not supported, ignoring. Jul 7 00:01:02.289551 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 00:01:03.038508 kernel: loop2: detected capacity change from 0 to 28936 Jul 7 00:01:04.757180 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 7 00:01:04.763761 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:01:04.773502 kernel: loop3: detected capacity change from 0 to 207008 Jul 7 00:01:04.791503 kernel: loop4: detected capacity change from 0 to 107312 Jul 7 00:01:04.792291 systemd-udevd[1464]: Using default interface naming scheme 'v255'. Jul 7 00:01:04.797523 kernel: loop5: detected capacity change from 0 to 138376 Jul 7 00:01:04.805499 kernel: loop6: detected capacity change from 0 to 28936 Jul 7 00:01:04.811494 kernel: loop7: detected capacity change from 0 to 207008 Jul 7 00:01:04.813777 (sd-merge)[1466]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jul 7 00:01:04.814127 (sd-merge)[1466]: Merged extensions into '/usr'. Jul 7 00:01:04.816420 systemd[1]: Reload requested from client PID 1442 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 00:01:04.816432 systemd[1]: Reloading... Jul 7 00:01:04.862496 zram_generator::config[1491]: No configuration found. 
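Editor's note: the (sd-merge) lines above show systemd-sysext merging the 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-azure' extension images into /usr. For an extension to be merged it has to carry an extension-release file whose ID matches the host's os-release (or is "_any"). The sketch below lays out that minimal structure for a hypothetical extension named "example"; the directory path and the ID/SYSEXT_LEVEL values are assumptions, not taken from this log.

```python
# Sketch of the minimal layout a systemd-sysext extension needs before it can be merged,
# using a hypothetical extension called "example" (the extension names in the log are real images).
from pathlib import Path

name = "example"
root = Path(f"/var/tmp/{name}-sysext")                 # arbitrary staging directory
release_dir = root / "usr/lib/extension-release.d"
release_dir.mkdir(parents=True, exist_ok=True)

# ID must match the host os-release ID (e.g. "flatcar") or be "_any";
# SYSEXT_LEVEL/VERSION_ID matching is the usual compatibility gate.
(release_dir / f"extension-release.{name}").write_text("ID=flatcar\nSYSEXT_LEVEL=1.0\n")

# Payload lives under usr/ (and optionally opt/); systemd-sysext overlays it onto /usr.
(root / "usr/bin").mkdir(parents=True, exist_ok=True)
print(f"populate {root}/usr, then place the image under /etc/extensions to have it merged")
```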
Jul 7 00:01:05.117652 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:01:05.191675 systemd[1]: Reloading finished in 374 ms. Jul 7 00:01:05.210375 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 00:01:05.219388 systemd[1]: Starting ensure-sysext.service... Jul 7 00:01:05.224606 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 00:01:05.484855 systemd-tmpfiles[1548]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 7 00:01:05.485259 systemd-tmpfiles[1548]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 7 00:01:05.485583 systemd-tmpfiles[1548]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 7 00:01:05.485833 systemd-tmpfiles[1548]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 00:01:05.488010 systemd-tmpfiles[1548]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 00:01:05.488151 systemd-tmpfiles[1548]: ACLs are not supported, ignoring. Jul 7 00:01:05.488179 systemd-tmpfiles[1548]: ACLs are not supported, ignoring. Jul 7 00:01:05.491449 systemd[1]: Reload requested from client PID 1547 ('systemctl') (unit ensure-sysext.service)... Jul 7 00:01:05.491462 systemd[1]: Reloading... Jul 7 00:01:05.531151 systemd-tmpfiles[1548]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 00:01:05.531159 systemd-tmpfiles[1548]: Skipping /boot Jul 7 00:01:05.541553 systemd-tmpfiles[1548]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 00:01:05.541632 systemd-tmpfiles[1548]: Skipping /boot Jul 7 00:01:05.548526 zram_generator::config[1576]: No configuration found. Jul 7 00:01:05.612567 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:01:05.672715 systemd[1]: Reloading finished in 181 ms. Jul 7 00:01:05.836422 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:01:05.838651 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:01:05.848997 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:01:05.855333 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:01:05.860034 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:01:05.860247 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:01:05.861382 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:01:05.863508 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:01:05.868935 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jul 7 00:01:05.869139 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:01:05.874477 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:01:05.874686 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:01:05.884398 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 00:01:05.890728 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 00:01:05.912240 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv... Jul 7 00:01:05.920565 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 00:01:05.974495 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#169 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 7 00:01:06.089708 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 00:01:06.094848 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:01:06.100075 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:01:06.105641 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 00:01:06.111862 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:01:06.117849 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:01:06.121977 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:01:06.122065 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:01:06.123466 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 00:01:06.132555 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 00:01:06.138341 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 00:01:06.142657 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 00:01:06.147258 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 00:01:06.152750 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:01:06.158579 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:01:06.163289 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 00:01:06.163403 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 00:01:06.167661 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:01:06.167774 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:01:06.173067 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:01:06.173170 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:01:06.181426 systemd[1]: Finished ensure-sysext.service. Jul 7 00:01:06.187446 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 7 00:01:06.189107 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 7 00:01:06.189154 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 00:01:06.240971 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped. Jul 7 00:01:06.253367 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 00:01:06.269323 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 00:01:06.281268 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:01:06.329521 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 00:01:06.397710 kernel: hv_vmbus: registering driver hv_balloon Jul 7 00:01:06.397794 kernel: mousedev: PS/2 mouse device common for all mice Jul 7 00:01:06.401086 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jul 7 00:01:06.404110 kernel: hv_balloon: Memory hot add disabled on ARM64 Jul 7 00:01:06.534953 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 00:01:06.654311 kernel: hv_vmbus: registering driver hyperv_fb Jul 7 00:01:06.654404 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jul 7 00:01:06.659559 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jul 7 00:01:06.662979 kernel: Console: switching to colour dummy device 80x25 Jul 7 00:01:06.665505 kernel: Console: switching to colour frame buffer device 128x48 Jul 7 00:01:06.791048 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 00:01:06.791262 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:01:06.797408 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 7 00:01:06.798749 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:01:07.252219 systemd-resolved[1690]: Positive Trust Anchors: Jul 7 00:01:07.252236 systemd-resolved[1690]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 00:01:07.252257 systemd-resolved[1690]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 00:01:07.295621 systemd-networkd[1689]: lo: Link UP Jul 7 00:01:07.295629 systemd-networkd[1689]: lo: Gained carrier Jul 7 00:01:07.297125 systemd-networkd[1689]: Enumeration completed Jul 7 00:01:07.297235 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 00:01:07.301796 systemd-networkd[1689]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:01:07.301804 systemd-networkd[1689]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 00:01:07.302863 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 7 00:01:07.308895 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jul 7 00:01:07.351497 kernel: mlx5_core c7a8:00:02.0 enP51112s1: Link up Jul 7 00:01:07.375609 kernel: hv_netvsc 002248bb-a855-0022-48bb-a855002248bb eth0: Data path switched to VF: enP51112s1 Jul 7 00:01:07.375417 systemd-networkd[1689]: enP51112s1: Link UP Jul 7 00:01:07.375497 systemd-networkd[1689]: eth0: Link UP Jul 7 00:01:07.375500 systemd-networkd[1689]: eth0: Gained carrier Jul 7 00:01:07.375515 systemd-networkd[1689]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:01:07.383678 systemd-networkd[1689]: enP51112s1: Gained carrier Jul 7 00:01:07.393510 systemd-networkd[1689]: eth0: DHCPv4 address 10.200.20.36/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 7 00:01:07.484835 systemd-resolved[1690]: Using system hostname 'ci-4372.0.1-a-7cca70db3c'. Jul 7 00:01:07.545135 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 7 00:01:07.551867 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 00:01:07.734031 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 00:01:07.739268 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 7 00:01:07.744914 systemd[1]: Reached target network.target - Network. Jul 7 00:01:07.746460 augenrules[1812]: No rules Jul 7 00:01:07.748328 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:01:07.753293 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 00:01:07.753474 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 00:01:07.759217 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 00:01:07.991544 kernel: MACsec IEEE 802.1AE Jul 7 00:01:08.292088 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:01:08.521731 systemd-networkd[1689]: enP51112s1: Gained IPv6LL Jul 7 00:01:08.844044 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 00:01:08.849866 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 00:01:09.353593 systemd-networkd[1689]: eth0: Gained IPv6LL Jul 7 00:01:09.355542 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 00:01:09.361671 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 00:01:18.764957 ldconfig[1437]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 00:01:18.775287 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 00:01:18.782048 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 00:01:18.800113 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 00:01:18.805330 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 00:01:18.810069 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 00:01:18.815574 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
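Editor's note: above, eth0 is matched by the stock /usr/lib/systemd/network/zz-default.network and acquires 10.200.20.36/24 over DHCP from 168.63.129.16. As an illustration only, the snippet below writes a minimal match-anything DHCP .network unit of the same shape; it is not a copy of Flatcar's zz-default.network, and the file name is hypothetical.

```python
# Illustrative: write a minimal match-anything DHCP .network unit of the same shape as the
# zz-default.network referenced in the log (not a copy of Flatcar's actual file).
from pathlib import Path

unit = """\
[Match]
Name=*

[Network]
DHCP=yes
"""

path = Path("/etc/systemd/network/90-dhcp-everything.network")  # hypothetical file name
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(unit)
# systemd-networkd picks the unit up after "networkctl reload" or a daemon restart.
```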
Jul 7 00:01:18.821828 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 00:01:18.826410 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 00:01:18.831591 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 00:01:18.836928 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 00:01:18.836949 systemd[1]: Reached target paths.target - Path Units. Jul 7 00:01:18.840622 systemd[1]: Reached target timers.target - Timer Units. Jul 7 00:01:18.859247 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 00:01:18.865003 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 00:01:18.870691 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 7 00:01:18.875864 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 7 00:01:18.881053 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 7 00:01:18.887112 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 00:01:18.891682 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 7 00:01:18.897132 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 00:01:18.902404 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 00:01:18.906604 systemd[1]: Reached target basic.target - Basic System. Jul 7 00:01:18.910541 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 00:01:18.910559 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 00:01:18.912298 systemd[1]: Starting chronyd.service - NTP client/server... Jul 7 00:01:18.924571 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 00:01:18.931384 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 7 00:01:18.938411 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 00:01:18.947613 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 00:01:18.954695 (chronyd)[1834]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jul 7 00:01:18.962323 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 00:01:18.967697 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 00:01:18.972262 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 00:01:18.974599 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jul 7 00:01:18.980260 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jul 7 00:01:18.982577 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 7 00:01:18.983698 jq[1842]: false Jul 7 00:01:18.986029 chronyd[1850]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jul 7 00:01:18.993078 KVP[1844]: KVP starting; pid is:1844 Jul 7 00:01:18.991882 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 00:01:18.999333 KVP[1844]: KVP LIC Version: 3.1 Jul 7 00:01:18.999491 kernel: hv_utils: KVP IC version 4.0 Jul 7 00:01:19.000618 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 00:01:19.006773 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 00:01:19.014337 extend-filesystems[1843]: Found /dev/sda6 Jul 7 00:01:19.018427 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 00:01:19.024856 chronyd[1850]: Timezone right/UTC failed leap second check, ignoring Jul 7 00:01:19.025006 chronyd[1850]: Loaded seccomp filter (level 2) Jul 7 00:01:19.025749 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 00:01:19.033635 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 00:01:19.039402 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 00:01:19.039774 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 00:01:19.040919 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 00:01:19.047825 extend-filesystems[1843]: Found /dev/sda9 Jul 7 00:01:19.053180 extend-filesystems[1843]: Checking size of /dev/sda9 Jul 7 00:01:19.059632 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 00:01:19.069761 systemd[1]: Started chronyd.service - NTP client/server. Jul 7 00:01:19.078897 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 7 00:01:19.086851 jq[1872]: true Jul 7 00:01:19.088024 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 00:01:19.088190 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 00:01:19.090825 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 00:01:19.090972 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 00:01:19.099211 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 00:01:19.107580 update_engine[1863]: I20250707 00:01:19.107510 1863 main.cc:92] Flatcar Update Engine starting Jul 7 00:01:19.108335 extend-filesystems[1843]: Old size kept for /dev/sda9 Jul 7 00:01:19.112820 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 00:01:19.112970 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 00:01:19.122431 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 00:01:19.122612 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 00:01:19.145227 (ntainerd)[1887]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 00:01:19.150905 jq[1886]: true Jul 7 00:01:19.154120 systemd-logind[1860]: New seat seat0. 
Jul 7 00:01:19.161121 tar[1882]: linux-arm64/LICENSE Jul 7 00:01:19.161121 tar[1882]: linux-arm64/helm Jul 7 00:01:19.162667 systemd-logind[1860]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jul 7 00:01:19.162829 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 00:01:19.228199 dbus-daemon[1837]: [system] SELinux support is enabled Jul 7 00:01:19.228354 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 00:01:19.237099 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 00:01:19.237122 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 00:01:19.246664 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 00:01:19.246683 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 00:01:19.258737 update_engine[1863]: I20250707 00:01:19.258585 1863 update_check_scheduler.cc:74] Next update check in 6m20s Jul 7 00:01:19.260051 dbus-daemon[1837]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 7 00:01:19.260212 systemd[1]: Started update-engine.service - Update Engine. Jul 7 00:01:19.272915 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 00:01:19.284774 bash[1932]: Updated "/home/core/.ssh/authorized_keys" Jul 7 00:01:19.286947 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 00:01:19.300343 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 7 00:01:19.330466 coreos-metadata[1836]: Jul 07 00:01:19.330 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 7 00:01:19.339195 coreos-metadata[1836]: Jul 07 00:01:19.338 INFO Fetch successful Jul 7 00:01:19.339195 coreos-metadata[1836]: Jul 07 00:01:19.338 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jul 7 00:01:19.346193 coreos-metadata[1836]: Jul 07 00:01:19.346 INFO Fetch successful Jul 7 00:01:19.346462 coreos-metadata[1836]: Jul 07 00:01:19.346 INFO Fetching http://168.63.129.16/machine/cfa498a0-401e-4ee3-94b1-6b1c25604ec5/b76f27a0%2De852%2D49ff%2Da3f5%2D2c4161e785d2.%5Fci%2D4372.0.1%2Da%2D7cca70db3c?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jul 7 00:01:19.348906 coreos-metadata[1836]: Jul 07 00:01:19.348 INFO Fetch successful Jul 7 00:01:19.348906 coreos-metadata[1836]: Jul 07 00:01:19.348 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jul 7 00:01:19.359492 coreos-metadata[1836]: Jul 07 00:01:19.357 INFO Fetch successful Jul 7 00:01:19.419002 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 7 00:01:19.428131 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jul 7 00:01:19.560592 locksmithd[1949]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 00:01:19.615921 sshd_keygen[1881]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 00:01:19.645271 containerd[1887]: time="2025-07-07T00:01:19Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 7 00:01:19.649758 containerd[1887]: time="2025-07-07T00:01:19.648374548Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 7 00:01:19.654097 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 00:01:19.663341 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 00:01:19.663752 containerd[1887]: time="2025-07-07T00:01:19.663724052Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.072µs" Jul 7 00:01:19.664123 containerd[1887]: time="2025-07-07T00:01:19.664100764Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 7 00:01:19.665068 containerd[1887]: time="2025-07-07T00:01:19.665025636Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 7 00:01:19.665294 containerd[1887]: time="2025-07-07T00:01:19.665273892Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 7 00:01:19.666650 containerd[1887]: time="2025-07-07T00:01:19.666091028Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 7 00:01:19.666650 containerd[1887]: time="2025-07-07T00:01:19.666147676Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 00:01:19.666650 containerd[1887]: time="2025-07-07T00:01:19.666214788Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 00:01:19.666650 containerd[1887]: time="2025-07-07T00:01:19.666226476Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 00:01:19.667660 containerd[1887]: time="2025-07-07T00:01:19.667637260Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 00:01:19.668749 containerd[1887]: time="2025-07-07T00:01:19.668203620Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 00:01:19.668749 containerd[1887]: time="2025-07-07T00:01:19.668236636Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 00:01:19.668749 containerd[1887]: time="2025-07-07T00:01:19.668246780Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 7 00:01:19.671301 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... 
Jul 7 00:01:19.672368 containerd[1887]: time="2025-07-07T00:01:19.671741164Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 7 00:01:19.672368 containerd[1887]: time="2025-07-07T00:01:19.671924132Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 00:01:19.672368 containerd[1887]: time="2025-07-07T00:01:19.671950996Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 00:01:19.672368 containerd[1887]: time="2025-07-07T00:01:19.671958588Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 7 00:01:19.672368 containerd[1887]: time="2025-07-07T00:01:19.671977188Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 7 00:01:19.672368 containerd[1887]: time="2025-07-07T00:01:19.672134372Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 7 00:01:19.672368 containerd[1887]: time="2025-07-07T00:01:19.672194364Z" level=info msg="metadata content store policy set" policy=shared Jul 7 00:01:19.692081 containerd[1887]: time="2025-07-07T00:01:19.692050260Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 7 00:01:19.692548 containerd[1887]: time="2025-07-07T00:01:19.692203100Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 7 00:01:19.692548 containerd[1887]: time="2025-07-07T00:01:19.692223284Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 7 00:01:19.692548 containerd[1887]: time="2025-07-07T00:01:19.692242068Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 7 00:01:19.692548 containerd[1887]: time="2025-07-07T00:01:19.692252524Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 7 00:01:19.692548 containerd[1887]: time="2025-07-07T00:01:19.692271212Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 7 00:01:19.692548 containerd[1887]: time="2025-07-07T00:01:19.692279524Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 7 00:01:19.692548 containerd[1887]: time="2025-07-07T00:01:19.692290524Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 7 00:01:19.692548 containerd[1887]: time="2025-07-07T00:01:19.692297956Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 7 00:01:19.692548 containerd[1887]: time="2025-07-07T00:01:19.692304748Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 7 00:01:19.692548 containerd[1887]: time="2025-07-07T00:01:19.692317020Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 7 00:01:19.692548 containerd[1887]: time="2025-07-07T00:01:19.692326332Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 7 00:01:19.692548 containerd[1887]: 
time="2025-07-07T00:01:19.692445612Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 7 00:01:19.692548 containerd[1887]: time="2025-07-07T00:01:19.692471588Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 7 00:01:19.692548 containerd[1887]: time="2025-07-07T00:01:19.692501276Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 7 00:01:19.692776 containerd[1887]: time="2025-07-07T00:01:19.692508804Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 7 00:01:19.692776 containerd[1887]: time="2025-07-07T00:01:19.692515484Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 7 00:01:19.692776 containerd[1887]: time="2025-07-07T00:01:19.692522700Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 7 00:01:19.692776 containerd[1887]: time="2025-07-07T00:01:19.692530772Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 7 00:01:19.693136 containerd[1887]: time="2025-07-07T00:01:19.692538284Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 7 00:01:19.693136 containerd[1887]: time="2025-07-07T00:01:19.692868516Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 7 00:01:19.693136 containerd[1887]: time="2025-07-07T00:01:19.692879492Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 7 00:01:19.693136 containerd[1887]: time="2025-07-07T00:01:19.692887628Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 7 00:01:19.693136 containerd[1887]: time="2025-07-07T00:01:19.692953508Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 7 00:01:19.693136 containerd[1887]: time="2025-07-07T00:01:19.692964956Z" level=info msg="Start snapshots syncer" Jul 7 00:01:19.693136 containerd[1887]: time="2025-07-07T00:01:19.692992156Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 7 00:01:19.693361 containerd[1887]: time="2025-07-07T00:01:19.693321044Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 7 00:01:19.694748 containerd[1887]: time="2025-07-07T00:01:19.693855396Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 7 00:01:19.694748 containerd[1887]: time="2025-07-07T00:01:19.694578052Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 7 00:01:19.694748 containerd[1887]: time="2025-07-07T00:01:19.694715180Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 7 00:01:19.694865 containerd[1887]: time="2025-07-07T00:01:19.694734892Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 7 00:01:19.694912 containerd[1887]: time="2025-07-07T00:01:19.694899764Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 7 00:01:19.694985 containerd[1887]: time="2025-07-07T00:01:19.694970412Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 7 00:01:19.695042 containerd[1887]: time="2025-07-07T00:01:19.695021772Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 7 00:01:19.695138 containerd[1887]: time="2025-07-07T00:01:19.695081748Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 7 00:01:19.695198 containerd[1887]: time="2025-07-07T00:01:19.695182828Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 7 00:01:19.695267 containerd[1887]: time="2025-07-07T00:01:19.695256012Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 7 00:01:19.695322 containerd[1887]: 
time="2025-07-07T00:01:19.695312076Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 7 00:01:19.695760 containerd[1887]: time="2025-07-07T00:01:19.695355796Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 7 00:01:19.695939 containerd[1887]: time="2025-07-07T00:01:19.695905548Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 00:01:19.696250 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 00:01:19.696495 containerd[1887]: time="2025-07-07T00:01:19.696347908Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 00:01:19.696495 containerd[1887]: time="2025-07-07T00:01:19.696367724Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 00:01:19.696495 containerd[1887]: time="2025-07-07T00:01:19.696375620Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 00:01:19.696495 containerd[1887]: time="2025-07-07T00:01:19.696380692Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 7 00:01:19.696495 containerd[1887]: time="2025-07-07T00:01:19.696388476Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 7 00:01:19.696495 containerd[1887]: time="2025-07-07T00:01:19.696407588Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 7 00:01:19.696495 containerd[1887]: time="2025-07-07T00:01:19.696423708Z" level=info msg="runtime interface created" Jul 7 00:01:19.696495 containerd[1887]: time="2025-07-07T00:01:19.696427084Z" level=info msg="created NRI interface" Jul 7 00:01:19.696495 containerd[1887]: time="2025-07-07T00:01:19.696453676Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 7 00:01:19.696495 containerd[1887]: time="2025-07-07T00:01:19.696463444Z" level=info msg="Connect containerd service" Jul 7 00:01:19.696819 containerd[1887]: time="2025-07-07T00:01:19.696690700Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 00:01:19.698185 containerd[1887]: time="2025-07-07T00:01:19.698016196Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 00:01:19.698650 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 00:01:19.708586 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 00:01:19.715125 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jul 7 00:01:19.738023 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 00:01:19.746691 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 00:01:19.754080 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 7 00:01:19.767178 systemd[1]: Reached target getty.target - Login Prompts. 
Jul 7 00:01:19.865102 tar[1882]: linux-arm64/README.md Jul 7 00:01:19.878107 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 00:01:19.930501 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:01:19.996232 (kubelet)[2037]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:01:20.210049 containerd[1887]: time="2025-07-07T00:01:20.209766124Z" level=info msg="Start subscribing containerd event" Jul 7 00:01:20.210049 containerd[1887]: time="2025-07-07T00:01:20.210059100Z" level=info msg="Start recovering state" Jul 7 00:01:20.210187 containerd[1887]: time="2025-07-07T00:01:20.210141604Z" level=info msg="Start event monitor" Jul 7 00:01:20.210187 containerd[1887]: time="2025-07-07T00:01:20.210152068Z" level=info msg="Start cni network conf syncer for default" Jul 7 00:01:20.210187 containerd[1887]: time="2025-07-07T00:01:20.210158508Z" level=info msg="Start streaming server" Jul 7 00:01:20.210187 containerd[1887]: time="2025-07-07T00:01:20.210164164Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 7 00:01:20.210187 containerd[1887]: time="2025-07-07T00:01:20.210168956Z" level=info msg="runtime interface starting up..." Jul 7 00:01:20.210187 containerd[1887]: time="2025-07-07T00:01:20.210172572Z" level=info msg="starting plugins..." Jul 7 00:01:20.210187 containerd[1887]: time="2025-07-07T00:01:20.210185100Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 7 00:01:20.210270 containerd[1887]: time="2025-07-07T00:01:20.209892852Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 00:01:20.211512 containerd[1887]: time="2025-07-07T00:01:20.210294812Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 00:01:20.210447 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 00:01:20.217081 containerd[1887]: time="2025-07-07T00:01:20.217052956Z" level=info msg="containerd successfully booted in 0.572711s" Jul 7 00:01:20.219072 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 00:01:20.226342 systemd[1]: Startup finished in 1.647s (kernel) + 11.076s (initrd) + 21.475s (userspace) = 34.199s. Jul 7 00:01:20.269516 kubelet[2037]: E0707 00:01:20.265594 2037 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:01:20.272011 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:01:20.272111 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:01:20.272369 systemd[1]: kubelet.service: Consumed 541ms CPU time, 253.3M memory peak. Jul 7 00:01:20.456726 login[2024]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jul 7 00:01:20.457145 login[2023]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:20.466344 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 00:01:20.468064 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 00:01:20.471948 systemd-logind[1860]: New session 1 of user core. 
Jul 7 00:01:20.481929 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 00:01:20.484184 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 00:01:20.507759 (systemd)[2056]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 00:01:20.509836 systemd-logind[1860]: New session c1 of user core. Jul 7 00:01:20.642879 systemd[2056]: Queued start job for default target default.target. Jul 7 00:01:20.646140 systemd[2056]: Created slice app.slice - User Application Slice. Jul 7 00:01:20.646161 systemd[2056]: Reached target paths.target - Paths. Jul 7 00:01:20.646268 systemd[2056]: Reached target timers.target - Timers. Jul 7 00:01:20.647259 systemd[2056]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 00:01:20.653568 systemd[2056]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 00:01:20.653612 systemd[2056]: Reached target sockets.target - Sockets. Jul 7 00:01:20.653641 systemd[2056]: Reached target basic.target - Basic System. Jul 7 00:01:20.653661 systemd[2056]: Reached target default.target - Main User Target. Jul 7 00:01:20.653678 systemd[2056]: Startup finished in 139ms. Jul 7 00:01:20.653851 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 00:01:20.656433 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 00:01:21.136715 waagent[2020]: 2025-07-07T00:01:21.132243Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jul 7 00:01:21.137018 waagent[2020]: 2025-07-07T00:01:21.136902Z INFO Daemon Daemon OS: flatcar 4372.0.1 Jul 7 00:01:21.140601 waagent[2020]: 2025-07-07T00:01:21.140566Z INFO Daemon Daemon Python: 3.11.12 Jul 7 00:01:21.144092 waagent[2020]: 2025-07-07T00:01:21.144035Z INFO Daemon Daemon Run daemon Jul 7 00:01:21.147451 waagent[2020]: 2025-07-07T00:01:21.147421Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4372.0.1' Jul 7 00:01:21.154799 waagent[2020]: 2025-07-07T00:01:21.154768Z INFO Daemon Daemon Using waagent for provisioning Jul 7 00:01:21.159300 waagent[2020]: 2025-07-07T00:01:21.159271Z INFO Daemon Daemon Activate resource disk Jul 7 00:01:21.163431 waagent[2020]: 2025-07-07T00:01:21.163406Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 7 00:01:21.172110 waagent[2020]: 2025-07-07T00:01:21.172075Z INFO Daemon Daemon Found device: None Jul 7 00:01:21.175742 waagent[2020]: 2025-07-07T00:01:21.175713Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 7 00:01:21.182824 waagent[2020]: 2025-07-07T00:01:21.182797Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 7 00:01:21.191914 waagent[2020]: 2025-07-07T00:01:21.191876Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 7 00:01:21.196651 waagent[2020]: 2025-07-07T00:01:21.196623Z INFO Daemon Daemon Running default provisioning handler Jul 7 00:01:21.205375 waagent[2020]: 2025-07-07T00:01:21.205331Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Jul 7 00:01:21.218015 waagent[2020]: 2025-07-07T00:01:21.217978Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 7 00:01:21.225822 waagent[2020]: 2025-07-07T00:01:21.225794Z INFO Daemon Daemon cloud-init is enabled: False Jul 7 00:01:21.229997 waagent[2020]: 2025-07-07T00:01:21.229973Z INFO Daemon Daemon Copying ovf-env.xml Jul 7 00:01:21.305772 waagent[2020]: 2025-07-07T00:01:21.305556Z INFO Daemon Daemon Successfully mounted dvd Jul 7 00:01:21.327751 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 7 00:01:21.329191 waagent[2020]: 2025-07-07T00:01:21.329144Z INFO Daemon Daemon Detect protocol endpoint Jul 7 00:01:21.333271 waagent[2020]: 2025-07-07T00:01:21.333242Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 7 00:01:21.337914 waagent[2020]: 2025-07-07T00:01:21.337888Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jul 7 00:01:21.343074 waagent[2020]: 2025-07-07T00:01:21.343053Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 7 00:01:21.347312 waagent[2020]: 2025-07-07T00:01:21.347288Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 7 00:01:21.351897 waagent[2020]: 2025-07-07T00:01:21.351873Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 7 00:01:21.388723 waagent[2020]: 2025-07-07T00:01:21.388650Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 7 00:01:21.393509 waagent[2020]: 2025-07-07T00:01:21.393476Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 7 00:01:21.397436 waagent[2020]: 2025-07-07T00:01:21.397413Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 7 00:01:21.458138 login[2024]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:21.461837 systemd-logind[1860]: New session 2 of user core. Jul 7 00:01:21.465597 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 00:01:21.523883 waagent[2020]: 2025-07-07T00:01:21.523819Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 7 00:01:21.528789 waagent[2020]: 2025-07-07T00:01:21.528757Z INFO Daemon Daemon Forcing an update of the goal state. Jul 7 00:01:21.536277 waagent[2020]: 2025-07-07T00:01:21.536237Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 7 00:01:21.552593 waagent[2020]: 2025-07-07T00:01:21.552561Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jul 7 00:01:21.557030 waagent[2020]: 2025-07-07T00:01:21.556997Z INFO Daemon Jul 7 00:01:21.559139 waagent[2020]: 2025-07-07T00:01:21.559113Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 15b17d3f-77ee-41bf-94a3-7a19aa404ae7 eTag: 1097711476504450997 source: Fabric] Jul 7 00:01:21.567263 waagent[2020]: 2025-07-07T00:01:21.567233Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Jul 7 00:01:21.571659 waagent[2020]: 2025-07-07T00:01:21.571631Z INFO Daemon Jul 7 00:01:21.574129 waagent[2020]: 2025-07-07T00:01:21.574104Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 7 00:01:21.582214 waagent[2020]: 2025-07-07T00:01:21.582187Z INFO Daemon Daemon Downloading artifacts profile blob Jul 7 00:01:21.643162 waagent[2020]: 2025-07-07T00:01:21.643081Z INFO Daemon Downloaded certificate {'thumbprint': '85564A95BE8DACFB0F4B750E33B39EA6D3CF6752', 'hasPrivateKey': True} Jul 7 00:01:21.650305 waagent[2020]: 2025-07-07T00:01:21.650274Z INFO Daemon Downloaded certificate {'thumbprint': '0A844D0B5BD4A88D3917BB80BDBD82DD89CE9163', 'hasPrivateKey': False} Jul 7 00:01:21.657770 waagent[2020]: 2025-07-07T00:01:21.657738Z INFO Daemon Fetch goal state completed Jul 7 00:01:21.666839 waagent[2020]: 2025-07-07T00:01:21.666812Z INFO Daemon Daemon Starting provisioning Jul 7 00:01:21.670965 waagent[2020]: 2025-07-07T00:01:21.670933Z INFO Daemon Daemon Handle ovf-env.xml. Jul 7 00:01:21.674677 waagent[2020]: 2025-07-07T00:01:21.674655Z INFO Daemon Daemon Set hostname [ci-4372.0.1-a-7cca70db3c] Jul 7 00:01:21.680645 waagent[2020]: 2025-07-07T00:01:21.680611Z INFO Daemon Daemon Publish hostname [ci-4372.0.1-a-7cca70db3c] Jul 7 00:01:21.686509 waagent[2020]: 2025-07-07T00:01:21.686458Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 7 00:01:21.691563 waagent[2020]: 2025-07-07T00:01:21.691534Z INFO Daemon Daemon Primary interface is [eth0] Jul 7 00:01:21.701167 systemd-networkd[1689]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:01:21.701174 systemd-networkd[1689]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 00:01:21.701218 systemd-networkd[1689]: eth0: DHCP lease lost Jul 7 00:01:21.702574 waagent[2020]: 2025-07-07T00:01:21.701663Z INFO Daemon Daemon Create user account if not exists Jul 7 00:01:21.705863 waagent[2020]: 2025-07-07T00:01:21.705830Z INFO Daemon Daemon User core already exists, skip useradd Jul 7 00:01:21.709826 waagent[2020]: 2025-07-07T00:01:21.709801Z INFO Daemon Daemon Configure sudoer Jul 7 00:01:21.718353 waagent[2020]: 2025-07-07T00:01:21.718311Z INFO Daemon Daemon Configure sshd Jul 7 00:01:21.726168 waagent[2020]: 2025-07-07T00:01:21.726129Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 7 00:01:21.735282 waagent[2020]: 2025-07-07T00:01:21.735253Z INFO Daemon Daemon Deploy ssh public key. Jul 7 00:01:21.739535 systemd-networkd[1689]: eth0: DHCPv4 address 10.200.20.36/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 7 00:01:22.824869 waagent[2020]: 2025-07-07T00:01:22.824822Z INFO Daemon Daemon Provisioning complete Jul 7 00:01:22.837929 waagent[2020]: 2025-07-07T00:01:22.837899Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 7 00:01:22.842731 waagent[2020]: 2025-07-07T00:01:22.842695Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jul 7 00:01:22.850188 waagent[2020]: 2025-07-07T00:01:22.850161Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jul 7 00:01:22.944844 waagent[2110]: 2025-07-07T00:01:22.944783Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jul 7 00:01:22.945067 waagent[2110]: 2025-07-07T00:01:22.944888Z INFO ExtHandler ExtHandler OS: flatcar 4372.0.1 Jul 7 00:01:22.945067 waagent[2110]: 2025-07-07T00:01:22.944924Z INFO ExtHandler ExtHandler Python: 3.11.12 Jul 7 00:01:22.945067 waagent[2110]: 2025-07-07T00:01:22.944956Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jul 7 00:01:22.974511 waagent[2110]: 2025-07-07T00:01:22.974031Z INFO ExtHandler ExtHandler Distro: flatcar-4372.0.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jul 7 00:01:22.974511 waagent[2110]: 2025-07-07T00:01:22.974167Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 7 00:01:22.974511 waagent[2110]: 2025-07-07T00:01:22.974207Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 7 00:01:22.979476 waagent[2110]: 2025-07-07T00:01:22.979431Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 7 00:01:22.984430 waagent[2110]: 2025-07-07T00:01:22.984401Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 7 00:01:22.984788 waagent[2110]: 2025-07-07T00:01:22.984756Z INFO ExtHandler Jul 7 00:01:22.984837 waagent[2110]: 2025-07-07T00:01:22.984819Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 2365d54f-3eca-4e96-80f8-5da304de45ac eTag: 1097711476504450997 source: Fabric] Jul 7 00:01:22.985090 waagent[2110]: 2025-07-07T00:01:22.985061Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jul 7 00:01:22.985476 waagent[2110]: 2025-07-07T00:01:22.985449Z INFO ExtHandler Jul 7 00:01:22.985587 waagent[2110]: 2025-07-07T00:01:22.985543Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 7 00:01:22.988950 waagent[2110]: 2025-07-07T00:01:22.988926Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 7 00:01:23.042494 waagent[2110]: 2025-07-07T00:01:23.042421Z INFO ExtHandler Downloaded certificate {'thumbprint': '85564A95BE8DACFB0F4B750E33B39EA6D3CF6752', 'hasPrivateKey': True} Jul 7 00:01:23.042785 waagent[2110]: 2025-07-07T00:01:23.042756Z INFO ExtHandler Downloaded certificate {'thumbprint': '0A844D0B5BD4A88D3917BB80BDBD82DD89CE9163', 'hasPrivateKey': False} Jul 7 00:01:23.043071 waagent[2110]: 2025-07-07T00:01:23.043045Z INFO ExtHandler Fetch goal state completed Jul 7 00:01:23.054263 waagent[2110]: 2025-07-07T00:01:23.054220Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Jul 7 00:01:23.057453 waagent[2110]: 2025-07-07T00:01:23.057411Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2110 Jul 7 00:01:23.057574 waagent[2110]: 2025-07-07T00:01:23.057548Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 7 00:01:23.057802 waagent[2110]: 2025-07-07T00:01:23.057775Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jul 7 00:01:23.058861 waagent[2110]: 2025-07-07T00:01:23.058828Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4372.0.1', '', 'Flatcar Container Linux by Kinvolk'] Jul 7 00:01:23.059164 waagent[2110]: 2025-07-07T00:01:23.059135Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4372.0.1', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jul 7 00:01:23.059271 waagent[2110]: 2025-07-07T00:01:23.059251Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jul 7 00:01:23.059712 waagent[2110]: 2025-07-07T00:01:23.059686Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 7 00:01:23.098884 waagent[2110]: 2025-07-07T00:01:23.098804Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 7 00:01:23.098986 waagent[2110]: 2025-07-07T00:01:23.098961Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 7 00:01:23.103166 waagent[2110]: 2025-07-07T00:01:23.103140Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 7 00:01:23.107994 systemd[1]: Reload requested from client PID 2127 ('systemctl') (unit waagent.service)... Jul 7 00:01:23.108006 systemd[1]: Reloading... Jul 7 00:01:23.167505 zram_generator::config[2161]: No configuration found. Jul 7 00:01:23.242586 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:01:23.325165 systemd[1]: Reloading finished in 216 ms. 
Jul 7 00:01:23.348510 waagent[2110]: 2025-07-07T00:01:23.348007Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 7 00:01:23.348510 waagent[2110]: 2025-07-07T00:01:23.348139Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 7 00:01:24.098920 waagent[2110]: 2025-07-07T00:01:24.098842Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 7 00:01:24.099217 waagent[2110]: 2025-07-07T00:01:24.099148Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jul 7 00:01:24.099789 waagent[2110]: 2025-07-07T00:01:24.099749Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 7 00:01:24.100059 waagent[2110]: 2025-07-07T00:01:24.100027Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 7 00:01:24.100249 waagent[2110]: 2025-07-07T00:01:24.100205Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 7 00:01:24.100371 waagent[2110]: 2025-07-07T00:01:24.100341Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 7 00:01:24.100607 waagent[2110]: 2025-07-07T00:01:24.100560Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 7 00:01:24.100787 waagent[2110]: 2025-07-07T00:01:24.100725Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jul 7 00:01:24.101153 waagent[2110]: 2025-07-07T00:01:24.101123Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 7 00:01:24.101430 waagent[2110]: 2025-07-07T00:01:24.101406Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 7 00:01:24.102096 waagent[2110]: 2025-07-07T00:01:24.101545Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 7 00:01:24.102096 waagent[2110]: 2025-07-07T00:01:24.101614Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 7 00:01:24.102096 waagent[2110]: 2025-07-07T00:01:24.101770Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Jul 7 00:01:24.102096 waagent[2110]: 2025-07-07T00:01:24.101892Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 7 00:01:24.102096 waagent[2110]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 7 00:01:24.102096 waagent[2110]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jul 7 00:01:24.102096 waagent[2110]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 7 00:01:24.102096 waagent[2110]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 7 00:01:24.102096 waagent[2110]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 7 00:01:24.102096 waagent[2110]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 7 00:01:24.102392 waagent[2110]: 2025-07-07T00:01:24.102359Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 7 00:01:24.102601 waagent[2110]: 2025-07-07T00:01:24.102572Z INFO EnvHandler ExtHandler Configure routes Jul 7 00:01:24.102714 waagent[2110]: 2025-07-07T00:01:24.102692Z INFO EnvHandler ExtHandler Gateway:None Jul 7 00:01:24.102816 waagent[2110]: 2025-07-07T00:01:24.102797Z INFO EnvHandler ExtHandler Routes:None Jul 7 00:01:24.106532 waagent[2110]: 2025-07-07T00:01:24.106498Z INFO ExtHandler ExtHandler Jul 7 00:01:24.106820 waagent[2110]: 2025-07-07T00:01:24.106798Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 1cb0eb5b-6a05-4877-b753-c2e4964c08e0 correlation e3a14711-1435-42ac-bfb3-4b104a0d63fe created: 2025-07-07T00:00:09.980072Z] Jul 7 00:01:24.107371 waagent[2110]: 2025-07-07T00:01:24.107337Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jul 7 00:01:24.108509 waagent[2110]: 2025-07-07T00:01:24.108454Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jul 7 00:01:24.132066 waagent[2110]: 2025-07-07T00:01:24.131728Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jul 7 00:01:24.132066 waagent[2110]: Try `iptables -h' or 'iptables --help' for more information.) 
Jul 7 00:01:24.132066 waagent[2110]: 2025-07-07T00:01:24.132007Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: E6A30812-C51B-4E94-89BB-E7DC75547257;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jul 7 00:01:24.152539 waagent[2110]: 2025-07-07T00:01:24.152492Z INFO MonitorHandler ExtHandler Network interfaces: Jul 7 00:01:24.152539 waagent[2110]: Executing ['ip', '-a', '-o', 'link']: Jul 7 00:01:24.152539 waagent[2110]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 7 00:01:24.152539 waagent[2110]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bb:a8:55 brd ff:ff:ff:ff:ff:ff Jul 7 00:01:24.152539 waagent[2110]: 3: enP51112s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bb:a8:55 brd ff:ff:ff:ff:ff:ff\ altname enP51112p0s2 Jul 7 00:01:24.152539 waagent[2110]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 7 00:01:24.152539 waagent[2110]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 7 00:01:24.152539 waagent[2110]: 2: eth0 inet 10.200.20.36/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 7 00:01:24.152539 waagent[2110]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 7 00:01:24.152539 waagent[2110]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jul 7 00:01:24.152539 waagent[2110]: 2: eth0 inet6 fe80::222:48ff:febb:a855/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 7 00:01:24.152539 waagent[2110]: 3: enP51112s1 inet6 fe80::222:48ff:febb:a855/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 7 00:01:24.189207 waagent[2110]: 2025-07-07T00:01:24.189158Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jul 7 00:01:24.189207 waagent[2110]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 7 00:01:24.189207 waagent[2110]: pkts bytes target prot opt in out source destination Jul 7 00:01:24.189207 waagent[2110]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 7 00:01:24.189207 waagent[2110]: pkts bytes target prot opt in out source destination Jul 7 00:01:24.189207 waagent[2110]: Chain OUTPUT (policy ACCEPT 2 packets, 112 bytes) Jul 7 00:01:24.189207 waagent[2110]: pkts bytes target prot opt in out source destination Jul 7 00:01:24.189207 waagent[2110]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 7 00:01:24.189207 waagent[2110]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 7 00:01:24.189207 waagent[2110]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 7 00:01:24.191381 waagent[2110]: 2025-07-07T00:01:24.191338Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 7 00:01:24.191381 waagent[2110]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 7 00:01:24.191381 waagent[2110]: pkts bytes target prot opt in out source destination Jul 7 00:01:24.191381 waagent[2110]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 7 00:01:24.191381 waagent[2110]: pkts bytes target prot opt in out source destination Jul 7 00:01:24.191381 waagent[2110]: Chain OUTPUT (policy ACCEPT 2 packets, 112 bytes) Jul 7 00:01:24.191381 waagent[2110]: pkts bytes target prot opt in out source destination Jul 7 00:01:24.191381 waagent[2110]: 0 
0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 7 00:01:24.191381 waagent[2110]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 7 00:01:24.191381 waagent[2110]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 7 00:01:24.191573 waagent[2110]: 2025-07-07T00:01:24.191555Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 7 00:01:26.795787 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 00:01:26.797133 systemd[1]: Started sshd@0-10.200.20.36:22-10.200.16.10:44274.service - OpenSSH per-connection server daemon (10.200.16.10:44274). Jul 7 00:01:27.415676 sshd[2253]: Accepted publickey for core from 10.200.16.10 port 44274 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:01:27.416712 sshd-session[2253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:27.420260 systemd-logind[1860]: New session 3 of user core. Jul 7 00:01:27.427737 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 00:01:27.862660 systemd[1]: Started sshd@1-10.200.20.36:22-10.200.16.10:44280.service - OpenSSH per-connection server daemon (10.200.16.10:44280). Jul 7 00:01:28.354115 sshd[2258]: Accepted publickey for core from 10.200.16.10 port 44280 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:01:28.355181 sshd-session[2258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:28.358741 systemd-logind[1860]: New session 4 of user core. Jul 7 00:01:28.373698 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 00:01:28.714039 sshd[2260]: Connection closed by 10.200.16.10 port 44280 Jul 7 00:01:28.714657 sshd-session[2258]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:28.717479 systemd[1]: sshd@1-10.200.20.36:22-10.200.16.10:44280.service: Deactivated successfully. Jul 7 00:01:28.718822 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 00:01:28.719355 systemd-logind[1860]: Session 4 logged out. Waiting for processes to exit. Jul 7 00:01:28.720490 systemd-logind[1860]: Removed session 4. Jul 7 00:01:28.802947 systemd[1]: Started sshd@2-10.200.20.36:22-10.200.16.10:44282.service - OpenSSH per-connection server daemon (10.200.16.10:44282). Jul 7 00:01:29.278816 sshd[2266]: Accepted publickey for core from 10.200.16.10 port 44282 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:01:29.279938 sshd-session[2266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:29.283444 systemd-logind[1860]: New session 5 of user core. Jul 7 00:01:29.290689 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 00:01:29.618510 sshd[2268]: Connection closed by 10.200.16.10 port 44282 Jul 7 00:01:29.619127 sshd-session[2266]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:29.622724 systemd[1]: sshd@2-10.200.20.36:22-10.200.16.10:44282.service: Deactivated successfully. Jul 7 00:01:29.624239 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 00:01:29.625014 systemd-logind[1860]: Session 5 logged out. Waiting for processes to exit. Jul 7 00:01:29.626255 systemd-logind[1860]: Removed session 5. Jul 7 00:01:29.706674 systemd[1]: Started sshd@3-10.200.20.36:22-10.200.16.10:53946.service - OpenSSH per-connection server daemon (10.200.16.10:53946). 
Jul 7 00:01:30.201150 sshd[2274]: Accepted publickey for core from 10.200.16.10 port 53946 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:01:30.202199 sshd-session[2274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:30.206663 systemd-logind[1860]: New session 6 of user core. Jul 7 00:01:30.211601 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 00:01:30.385942 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 00:01:30.387196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:01:30.493173 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:01:30.495372 (kubelet)[2286]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:01:30.561253 sshd[2276]: Connection closed by 10.200.16.10 port 53946 Jul 7 00:01:30.561769 sshd-session[2274]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:30.564621 systemd[1]: sshd@3-10.200.20.36:22-10.200.16.10:53946.service: Deactivated successfully. Jul 7 00:01:30.565856 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 00:01:30.566369 systemd-logind[1860]: Session 6 logged out. Waiting for processes to exit. Jul 7 00:01:30.567379 systemd-logind[1860]: Removed session 6. Jul 7 00:01:30.598783 kubelet[2286]: E0707 00:01:30.598735 2286 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:01:30.601451 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:01:30.601591 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:01:30.601840 systemd[1]: kubelet.service: Consumed 105ms CPU time, 106.3M memory peak. Jul 7 00:01:30.650035 systemd[1]: Started sshd@4-10.200.20.36:22-10.200.16.10:53960.service - OpenSSH per-connection server daemon (10.200.16.10:53960). Jul 7 00:01:31.132781 sshd[2296]: Accepted publickey for core from 10.200.16.10 port 53960 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:01:31.133866 sshd-session[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:31.137387 systemd-logind[1860]: New session 7 of user core. Jul 7 00:01:31.151595 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 00:01:34.596948 sudo[2299]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 00:01:34.597160 sudo[2299]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:01:34.656074 sudo[2299]: pam_unix(sudo:session): session closed for user root Jul 7 00:01:34.746260 sshd[2298]: Connection closed by 10.200.16.10 port 53960 Jul 7 00:01:34.745624 sshd-session[2296]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:34.749078 systemd-logind[1860]: Session 7 logged out. Waiting for processes to exit. Jul 7 00:01:34.749246 systemd[1]: sshd@4-10.200.20.36:22-10.200.16.10:53960.service: Deactivated successfully. Jul 7 00:01:34.751316 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 00:01:34.752521 systemd-logind[1860]: Removed session 7. 
Jul 7 00:01:34.833901 systemd[1]: Started sshd@5-10.200.20.36:22-10.200.16.10:53962.service - OpenSSH per-connection server daemon (10.200.16.10:53962). Jul 7 00:01:35.314756 sshd[2305]: Accepted publickey for core from 10.200.16.10 port 53962 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:01:35.315899 sshd-session[2305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:35.319331 systemd-logind[1860]: New session 8 of user core. Jul 7 00:01:35.327613 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 00:01:35.582941 sudo[2309]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 00:01:35.583551 sudo[2309]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:01:35.827638 sudo[2309]: pam_unix(sudo:session): session closed for user root Jul 7 00:01:35.831979 sudo[2308]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 7 00:01:35.832186 sudo[2308]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:01:35.839643 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 00:01:35.864701 augenrules[2331]: No rules Jul 7 00:01:35.865759 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 00:01:35.865935 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 00:01:35.867642 sudo[2308]: pam_unix(sudo:session): session closed for user root Jul 7 00:01:35.943170 sshd[2307]: Connection closed by 10.200.16.10 port 53962 Jul 7 00:01:35.943072 sshd-session[2305]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:35.945533 systemd[1]: sshd@5-10.200.20.36:22-10.200.16.10:53962.service: Deactivated successfully. Jul 7 00:01:35.946848 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 00:01:35.948037 systemd-logind[1860]: Session 8 logged out. Waiting for processes to exit. Jul 7 00:01:35.949430 systemd-logind[1860]: Removed session 8. Jul 7 00:01:36.034985 systemd[1]: Started sshd@6-10.200.20.36:22-10.200.16.10:53966.service - OpenSSH per-connection server daemon (10.200.16.10:53966). Jul 7 00:01:36.513909 sshd[2340]: Accepted publickey for core from 10.200.16.10 port 53966 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:01:36.514986 sshd-session[2340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:36.518457 systemd-logind[1860]: New session 9 of user core. Jul 7 00:01:36.525679 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 00:01:36.782746 sudo[2343]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 00:01:36.782948 sudo[2343]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:01:40.635867 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 7 00:01:40.637112 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:01:41.514876 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jul 7 00:01:41.529734 (dockerd)[2365]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 00:01:42.818612 chronyd[1850]: Selected source PHC0 Jul 7 00:01:45.976202 dockerd[2365]: time="2025-07-07T00:01:45.975962672Z" level=info msg="Starting up" Jul 7 00:01:45.977069 dockerd[2365]: time="2025-07-07T00:01:45.977046952Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 7 00:01:48.595359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:01:48.598117 (kubelet)[2390]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:01:48.624596 kubelet[2390]: E0707 00:01:48.624553 2390 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:01:48.626557 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:01:48.626666 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:01:48.626925 systemd[1]: kubelet.service: Consumed 103ms CPU time, 106.9M memory peak. Jul 7 00:01:49.022942 dockerd[2365]: time="2025-07-07T00:01:49.022905088Z" level=info msg="Loading containers: start." Jul 7 00:01:49.061657 kernel: Initializing XFRM netlink socket Jul 7 00:01:49.353522 systemd-networkd[1689]: docker0: Link UP Jul 7 00:01:49.373990 dockerd[2365]: time="2025-07-07T00:01:49.373952720Z" level=info msg="Loading containers: done." Jul 7 00:01:49.383239 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1829761798-merged.mount: Deactivated successfully. Jul 7 00:01:49.397169 dockerd[2365]: time="2025-07-07T00:01:49.397097024Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 00:01:49.397599 dockerd[2365]: time="2025-07-07T00:01:49.397305960Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 7 00:01:49.397599 dockerd[2365]: time="2025-07-07T00:01:49.397422008Z" level=info msg="Initializing buildkit" Jul 7 00:01:49.442841 dockerd[2365]: time="2025-07-07T00:01:49.442812752Z" level=info msg="Completed buildkit initialization" Jul 7 00:01:49.448112 dockerd[2365]: time="2025-07-07T00:01:49.448081312Z" level=info msg="Daemon has completed initialization" Jul 7 00:01:49.448240 dockerd[2365]: time="2025-07-07T00:01:49.448210384Z" level=info msg="API listen on /run/docker.sock" Jul 7 00:01:49.448715 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 00:01:50.277792 containerd[1887]: time="2025-07-07T00:01:50.277753200Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 7 00:01:51.273875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount182668106.mount: Deactivated successfully. 
Jul 7 00:01:52.460839 containerd[1887]: time="2025-07-07T00:01:52.460788952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:52.465519 containerd[1887]: time="2025-07-07T00:01:52.465484182Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328194" Jul 7 00:01:52.476263 containerd[1887]: time="2025-07-07T00:01:52.476220105Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:52.484808 containerd[1887]: time="2025-07-07T00:01:52.484764307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:52.485412 containerd[1887]: time="2025-07-07T00:01:52.485260093Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 2.207467763s" Jul 7 00:01:52.485412 containerd[1887]: time="2025-07-07T00:01:52.485287649Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\"" Jul 7 00:01:52.486113 containerd[1887]: time="2025-07-07T00:01:52.486083877Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 7 00:01:53.657470 containerd[1887]: time="2025-07-07T00:01:53.656906379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:53.664444 containerd[1887]: time="2025-07-07T00:01:53.664421103Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529228" Jul 7 00:01:53.671359 containerd[1887]: time="2025-07-07T00:01:53.671337770Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:53.682265 containerd[1887]: time="2025-07-07T00:01:53.682233901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:53.682816 containerd[1887]: time="2025-07-07T00:01:53.682794565Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.196597231s" Jul 7 00:01:53.682995 containerd[1887]: time="2025-07-07T00:01:53.682886872Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\"" Jul 7 00:01:53.683347 containerd[1887]: 
time="2025-07-07T00:01:53.683324916Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 7 00:01:54.515786 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jul 7 00:01:54.747586 containerd[1887]: time="2025-07-07T00:01:54.747535223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:54.750462 containerd[1887]: time="2025-07-07T00:01:54.750432881Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484141" Jul 7 00:01:54.755332 containerd[1887]: time="2025-07-07T00:01:54.755278418Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:54.761051 containerd[1887]: time="2025-07-07T00:01:54.761005900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:54.761663 containerd[1887]: time="2025-07-07T00:01:54.761529602Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.078169589s" Jul 7 00:01:54.761663 containerd[1887]: time="2025-07-07T00:01:54.761555395Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\"" Jul 7 00:01:54.762156 containerd[1887]: time="2025-07-07T00:01:54.762138420Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 7 00:01:55.849240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2176723177.mount: Deactivated successfully. 
Jul 7 00:01:56.123721 containerd[1887]: time="2025-07-07T00:01:56.123603007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:56.126950 containerd[1887]: time="2025-07-07T00:01:56.126816970Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378406" Jul 7 00:01:56.131381 containerd[1887]: time="2025-07-07T00:01:56.131350258Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:56.137784 containerd[1887]: time="2025-07-07T00:01:56.137748726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:56.138196 containerd[1887]: time="2025-07-07T00:01:56.138007333Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.375848169s" Jul 7 00:01:56.138196 containerd[1887]: time="2025-07-07T00:01:56.138030430Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jul 7 00:01:56.138397 containerd[1887]: time="2025-07-07T00:01:56.138379216Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 7 00:01:56.937677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1167588836.mount: Deactivated successfully. 
Jul 7 00:01:58.482955 containerd[1887]: time="2025-07-07T00:01:58.482897644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:58.485988 containerd[1887]: time="2025-07-07T00:01:58.485961818Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jul 7 00:01:58.489261 containerd[1887]: time="2025-07-07T00:01:58.489235735Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:58.495640 containerd[1887]: time="2025-07-07T00:01:58.495599755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:58.496254 containerd[1887]: time="2025-07-07T00:01:58.496123017Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.357722873s" Jul 7 00:01:58.496254 containerd[1887]: time="2025-07-07T00:01:58.496151850Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 7 00:01:58.496725 containerd[1887]: time="2025-07-07T00:01:58.496668793Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 00:01:58.635825 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 7 00:01:58.637761 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:01:58.729368 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:01:58.731926 (kubelet)[2705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:01:58.846593 kubelet[2705]: E0707 00:01:58.846450 2705 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:01:58.848478 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:01:58.848700 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:01:58.849167 systemd[1]: kubelet.service: Consumed 102ms CPU time, 105.3M memory peak. Jul 7 00:02:02.648307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1695429706.mount: Deactivated successfully. Jul 7 00:02:03.884804 containerd[1887]: time="2025-07-07T00:02:03.884729355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:02:04.041666 update_engine[1863]: I20250707 00:02:04.041576 1863 update_attempter.cc:509] Updating boot flags... 
Jul 7 00:02:04.123218 containerd[1887]: time="2025-07-07T00:02:04.122964578Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jul 7 00:02:05.137196 containerd[1887]: time="2025-07-07T00:02:05.136811199Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:02:06.875159 containerd[1887]: time="2025-07-07T00:02:06.875073913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:02:06.876039 containerd[1887]: time="2025-07-07T00:02:06.875666720Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 8.378848722s" Jul 7 00:02:06.876039 containerd[1887]: time="2025-07-07T00:02:06.875691353Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 7 00:02:06.876371 containerd[1887]: time="2025-07-07T00:02:06.876173922Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 7 00:02:08.885890 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 7 00:02:08.887245 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:02:09.349390 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:02:09.351914 (kubelet)[2892]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:02:09.494993 kubelet[2892]: E0707 00:02:09.494942 2892 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:02:09.497111 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:02:09.497224 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:02:09.497730 systemd[1]: kubelet.service: Consumed 99ms CPU time, 105M memory peak. Jul 7 00:02:19.541841 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jul 7 00:02:19.543640 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:02:19.551243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount895546752.mount: Deactivated successfully. Jul 7 00:02:19.747152 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 00:02:19.749462 (kubelet)[2911]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:02:19.774488 kubelet[2911]: E0707 00:02:19.774430 2911 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:02:19.776283 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:02:19.776467 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:02:19.776813 systemd[1]: kubelet.service: Consumed 100ms CPU time, 107.1M memory peak. Jul 7 00:02:21.723419 containerd[1887]: time="2025-07-07T00:02:21.723231217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:21.727136 containerd[1887]: time="2025-07-07T00:02:21.727107816Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469" Jul 7 00:02:21.733637 containerd[1887]: time="2025-07-07T00:02:21.733598126Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:21.740853 containerd[1887]: time="2025-07-07T00:02:21.740822581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:21.741417 containerd[1887]: time="2025-07-07T00:02:21.741310477Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 14.865014652s" Jul 7 00:02:21.741634 containerd[1887]: time="2025-07-07T00:02:21.741519828Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jul 7 00:02:24.243620 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:02:24.244032 systemd[1]: kubelet.service: Consumed 100ms CPU time, 107.1M memory peak. Jul 7 00:02:24.245757 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:02:24.268580 systemd[1]: Reload requested from client PID 2997 ('systemctl') (unit session-9.scope)... Jul 7 00:02:24.268592 systemd[1]: Reloading... Jul 7 00:02:24.349533 zram_generator::config[3046]: No configuration found. Jul 7 00:02:24.418714 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:02:24.502970 systemd[1]: Reloading finished in 234 ms. Jul 7 00:02:24.724793 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 00:02:24.724878 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 00:02:24.725311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 00:02:24.727705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:02:24.999426 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:02:25.002229 (kubelet)[3107]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:02:25.026543 kubelet[3107]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:02:25.026788 kubelet[3107]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 00:02:25.026823 kubelet[3107]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:02:25.026965 kubelet[3107]: I0707 00:02:25.026941 3107 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:02:25.763502 kubelet[3107]: I0707 00:02:25.763264 3107 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 7 00:02:25.763502 kubelet[3107]: I0707 00:02:25.763297 3107 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:02:25.763691 kubelet[3107]: I0707 00:02:25.763522 3107 server.go:954] "Client rotation is on, will bootstrap in background" Jul 7 00:02:25.780447 kubelet[3107]: E0707 00:02:25.780409 3107 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:02:25.781321 kubelet[3107]: I0707 00:02:25.781267 3107 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:02:25.785200 kubelet[3107]: I0707 00:02:25.785159 3107 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 00:02:25.787764 kubelet[3107]: I0707 00:02:25.787749 3107 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 00:02:25.788956 kubelet[3107]: I0707 00:02:25.788652 3107 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:02:25.788956 kubelet[3107]: I0707 00:02:25.788682 3107 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.1-a-7cca70db3c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 00:02:25.788956 kubelet[3107]: I0707 00:02:25.788802 3107 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 00:02:25.788956 kubelet[3107]: I0707 00:02:25.788810 3107 container_manager_linux.go:304] "Creating device plugin manager" Jul 7 00:02:25.789098 kubelet[3107]: I0707 00:02:25.788912 3107 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:02:25.791193 kubelet[3107]: I0707 00:02:25.791178 3107 kubelet.go:446] "Attempting to sync node with API server" Jul 7 00:02:25.791279 kubelet[3107]: I0707 00:02:25.791270 3107 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:02:25.791329 kubelet[3107]: I0707 00:02:25.791323 3107 kubelet.go:352] "Adding apiserver pod source" Jul 7 00:02:25.791378 kubelet[3107]: I0707 00:02:25.791371 3107 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:02:25.792377 kubelet[3107]: W0707 00:02:25.792343 3107 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.1-a-7cca70db3c&limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jul 7 00:02:25.792432 kubelet[3107]: E0707 00:02:25.792385 3107 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.1-a-7cca70db3c&limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:02:25.793155 kubelet[3107]: 
W0707 00:02:25.793088 3107 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jul 7 00:02:25.793155 kubelet[3107]: E0707 00:02:25.793123 3107 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:02:25.793236 kubelet[3107]: I0707 00:02:25.793182 3107 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 00:02:25.793462 kubelet[3107]: I0707 00:02:25.793447 3107 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 00:02:25.793513 kubelet[3107]: W0707 00:02:25.793505 3107 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 00:02:25.794090 kubelet[3107]: I0707 00:02:25.794068 3107 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 00:02:25.794133 kubelet[3107]: I0707 00:02:25.794097 3107 server.go:1287] "Started kubelet" Jul 7 00:02:25.796645 kubelet[3107]: I0707 00:02:25.796512 3107 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:02:25.800285 kubelet[3107]: I0707 00:02:25.800104 3107 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:02:25.800762 kubelet[3107]: I0707 00:02:25.800742 3107 server.go:479] "Adding debug handlers to kubelet server" Jul 7 00:02:25.801048 kubelet[3107]: I0707 00:02:25.801023 3107 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 00:02:25.801244 kubelet[3107]: E0707 00:02:25.801222 3107 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.1-a-7cca70db3c\" not found" Jul 7 00:02:25.802569 kubelet[3107]: I0707 00:02:25.802439 3107 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 00:02:25.802569 kubelet[3107]: I0707 00:02:25.802552 3107 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:02:25.804565 kubelet[3107]: I0707 00:02:25.803520 3107 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:02:25.804565 kubelet[3107]: I0707 00:02:25.803714 3107 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:02:25.804565 kubelet[3107]: I0707 00:02:25.804181 3107 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:02:25.804565 kubelet[3107]: I0707 00:02:25.804364 3107 factory.go:221] Registration of the systemd container factory successfully Jul 7 00:02:25.804565 kubelet[3107]: I0707 00:02:25.804436 3107 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:02:25.806427 kubelet[3107]: E0707 00:02:25.806336 3107 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.36:6443/api/v1/namespaces/default/events\": dial 
tcp 10.200.20.36:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372.0.1-a-7cca70db3c.184fcf26d70d29ad default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.0.1-a-7cca70db3c,UID:ci-4372.0.1-a-7cca70db3c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.0.1-a-7cca70db3c,},FirstTimestamp:2025-07-07 00:02:25.794083245 +0000 UTC m=+0.789305446,LastTimestamp:2025-07-07 00:02:25.794083245 +0000 UTC m=+0.789305446,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.0.1-a-7cca70db3c,}" Jul 7 00:02:25.807252 kubelet[3107]: E0707 00:02:25.807230 3107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-a-7cca70db3c?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="200ms" Jul 7 00:02:25.807466 kubelet[3107]: W0707 00:02:25.807443 3107 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jul 7 00:02:25.807585 kubelet[3107]: E0707 00:02:25.807568 3107 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:02:25.808255 kubelet[3107]: E0707 00:02:25.808240 3107 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 00:02:25.808500 kubelet[3107]: I0707 00:02:25.808461 3107 factory.go:221] Registration of the containerd container factory successfully Jul 7 00:02:25.831907 kubelet[3107]: I0707 00:02:25.831889 3107 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 00:02:25.831907 kubelet[3107]: I0707 00:02:25.831902 3107 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 00:02:25.831993 kubelet[3107]: I0707 00:02:25.831915 3107 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:02:25.832551 kubelet[3107]: I0707 00:02:25.832531 3107 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 00:02:25.833585 kubelet[3107]: I0707 00:02:25.833569 3107 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 00:02:25.833690 kubelet[3107]: I0707 00:02:25.833681 3107 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 7 00:02:25.833751 kubelet[3107]: I0707 00:02:25.833743 3107 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 7 00:02:25.833794 kubelet[3107]: I0707 00:02:25.833787 3107 kubelet.go:2382] "Starting kubelet main sync loop" Jul 7 00:02:25.833870 kubelet[3107]: E0707 00:02:25.833858 3107 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:02:25.834934 kubelet[3107]: W0707 00:02:25.834912 3107 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jul 7 00:02:25.838199 kubelet[3107]: E0707 00:02:25.835079 3107 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:02:25.840302 kubelet[3107]: I0707 00:02:25.840285 3107 policy_none.go:49] "None policy: Start" Jul 7 00:02:25.840302 kubelet[3107]: I0707 00:02:25.840303 3107 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 00:02:25.840368 kubelet[3107]: I0707 00:02:25.840312 3107 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:02:25.848729 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 00:02:25.859986 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 00:02:25.862690 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 00:02:25.871368 kubelet[3107]: I0707 00:02:25.871082 3107 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 00:02:25.871584 kubelet[3107]: I0707 00:02:25.871448 3107 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:02:25.871584 kubelet[3107]: I0707 00:02:25.871462 3107 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:02:25.871859 kubelet[3107]: I0707 00:02:25.871679 3107 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:02:25.873478 kubelet[3107]: E0707 00:02:25.873400 3107 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 7 00:02:25.873478 kubelet[3107]: E0707 00:02:25.873431 3107 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372.0.1-a-7cca70db3c\" not found" Jul 7 00:02:25.942518 systemd[1]: Created slice kubepods-burstable-podc4a86ba012e05e50e41b9f6077dd607d.slice - libcontainer container kubepods-burstable-podc4a86ba012e05e50e41b9f6077dd607d.slice. Jul 7 00:02:25.959455 kubelet[3107]: E0707 00:02:25.959413 3107 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-7cca70db3c\" not found" node="ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:25.961995 systemd[1]: Created slice kubepods-burstable-podfb649faa5097cacc1882437bfc723020.slice - libcontainer container kubepods-burstable-podfb649faa5097cacc1882437bfc723020.slice. 
Jul 7 00:02:25.966427 kubelet[3107]: E0707 00:02:25.966288 3107 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-7cca70db3c\" not found" node="ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:25.968475 systemd[1]: Created slice kubepods-burstable-pod9b1e87e3e1b10b42fc2e39ae0ae5a0ce.slice - libcontainer container kubepods-burstable-pod9b1e87e3e1b10b42fc2e39ae0ae5a0ce.slice. Jul 7 00:02:25.969795 kubelet[3107]: E0707 00:02:25.969683 3107 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-7cca70db3c\" not found" node="ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:25.972863 kubelet[3107]: I0707 00:02:25.972850 3107 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:25.973366 kubelet[3107]: E0707 00:02:25.973331 3107 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:26.003702 kubelet[3107]: I0707 00:02:26.003544 3107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c4a86ba012e05e50e41b9f6077dd607d-ca-certs\") pod \"kube-apiserver-ci-4372.0.1-a-7cca70db3c\" (UID: \"c4a86ba012e05e50e41b9f6077dd607d\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:26.003702 kubelet[3107]: I0707 00:02:26.003571 3107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c4a86ba012e05e50e41b9f6077dd607d-k8s-certs\") pod \"kube-apiserver-ci-4372.0.1-a-7cca70db3c\" (UID: \"c4a86ba012e05e50e41b9f6077dd607d\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:26.003702 kubelet[3107]: I0707 00:02:26.003582 3107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fb649faa5097cacc1882437bfc723020-ca-certs\") pod \"kube-controller-manager-ci-4372.0.1-a-7cca70db3c\" (UID: \"fb649faa5097cacc1882437bfc723020\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:26.003702 kubelet[3107]: I0707 00:02:26.003594 3107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fb649faa5097cacc1882437bfc723020-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.1-a-7cca70db3c\" (UID: \"fb649faa5097cacc1882437bfc723020\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:26.003702 kubelet[3107]: I0707 00:02:26.003605 3107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c4a86ba012e05e50e41b9f6077dd607d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.1-a-7cca70db3c\" (UID: \"c4a86ba012e05e50e41b9f6077dd607d\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:26.003840 kubelet[3107]: I0707 00:02:26.003614 3107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fb649faa5097cacc1882437bfc723020-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4372.0.1-a-7cca70db3c\" (UID: \"fb649faa5097cacc1882437bfc723020\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:26.003840 kubelet[3107]: I0707 00:02:26.003624 3107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fb649faa5097cacc1882437bfc723020-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.1-a-7cca70db3c\" (UID: \"fb649faa5097cacc1882437bfc723020\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:26.003840 kubelet[3107]: I0707 00:02:26.003633 3107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fb649faa5097cacc1882437bfc723020-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.1-a-7cca70db3c\" (UID: \"fb649faa5097cacc1882437bfc723020\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:26.003840 kubelet[3107]: I0707 00:02:26.003643 3107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b1e87e3e1b10b42fc2e39ae0ae5a0ce-kubeconfig\") pod \"kube-scheduler-ci-4372.0.1-a-7cca70db3c\" (UID: \"9b1e87e3e1b10b42fc2e39ae0ae5a0ce\") " pod="kube-system/kube-scheduler-ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:26.007844 kubelet[3107]: E0707 00:02:26.007815 3107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-a-7cca70db3c?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="400ms" Jul 7 00:02:26.175666 kubelet[3107]: I0707 00:02:26.175626 3107 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:26.176373 kubelet[3107]: E0707 00:02:26.176350 3107 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:26.261520 containerd[1887]: time="2025-07-07T00:02:26.261457233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.1-a-7cca70db3c,Uid:c4a86ba012e05e50e41b9f6077dd607d,Namespace:kube-system,Attempt:0,}" Jul 7 00:02:26.266938 containerd[1887]: time="2025-07-07T00:02:26.266884376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.1-a-7cca70db3c,Uid:fb649faa5097cacc1882437bfc723020,Namespace:kube-system,Attempt:0,}" Jul 7 00:02:26.270700 containerd[1887]: time="2025-07-07T00:02:26.270627409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.1-a-7cca70db3c,Uid:9b1e87e3e1b10b42fc2e39ae0ae5a0ce,Namespace:kube-system,Attempt:0,}" Jul 7 00:02:26.408577 kubelet[3107]: E0707 00:02:26.408535 3107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-a-7cca70db3c?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="800ms" Jul 7 00:02:26.444051 containerd[1887]: time="2025-07-07T00:02:26.443800863Z" level=info msg="connecting to shim 5aa9f67a298633bdd32943ac08247fa5cdf55e3a117ffa2fd8db233a44741e26" 
address="unix:///run/containerd/s/c3dd8c6ff1d1547cfef4caa6f3c5cabc0defa43453213aabd839724b5d754397" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:02:26.464081 containerd[1887]: time="2025-07-07T00:02:26.464007211Z" level=info msg="connecting to shim 95d8e6d0932ed99a964884734d0d515f25503884c0417a3118c960b1b7b797a3" address="unix:///run/containerd/s/5fe966680a1810ba19736615e4ed48a32d4b6f18b63c1085fea3b16e572bee80" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:02:26.465612 systemd[1]: Started cri-containerd-5aa9f67a298633bdd32943ac08247fa5cdf55e3a117ffa2fd8db233a44741e26.scope - libcontainer container 5aa9f67a298633bdd32943ac08247fa5cdf55e3a117ffa2fd8db233a44741e26. Jul 7 00:02:26.476384 containerd[1887]: time="2025-07-07T00:02:26.475648243Z" level=info msg="connecting to shim 37662a44293240669849ef76d0f1cf75d8935afacb577152d0d9caa149e927d0" address="unix:///run/containerd/s/9745cf1c2fdbb70de8926b44ae397cb322e4eafb6de0d966e8b5fa03dcba1f9d" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:02:26.493722 systemd[1]: Started cri-containerd-95d8e6d0932ed99a964884734d0d515f25503884c0417a3118c960b1b7b797a3.scope - libcontainer container 95d8e6d0932ed99a964884734d0d515f25503884c0417a3118c960b1b7b797a3. Jul 7 00:02:26.499221 systemd[1]: Started cri-containerd-37662a44293240669849ef76d0f1cf75d8935afacb577152d0d9caa149e927d0.scope - libcontainer container 37662a44293240669849ef76d0f1cf75d8935afacb577152d0d9caa149e927d0. Jul 7 00:02:26.526820 containerd[1887]: time="2025-07-07T00:02:26.526717443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.1-a-7cca70db3c,Uid:c4a86ba012e05e50e41b9f6077dd607d,Namespace:kube-system,Attempt:0,} returns sandbox id \"5aa9f67a298633bdd32943ac08247fa5cdf55e3a117ffa2fd8db233a44741e26\"" Jul 7 00:02:26.533008 containerd[1887]: time="2025-07-07T00:02:26.532982389Z" level=info msg="CreateContainer within sandbox \"5aa9f67a298633bdd32943ac08247fa5cdf55e3a117ffa2fd8db233a44741e26\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 00:02:26.551772 containerd[1887]: time="2025-07-07T00:02:26.551739691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.1-a-7cca70db3c,Uid:fb649faa5097cacc1882437bfc723020,Namespace:kube-system,Attempt:0,} returns sandbox id \"95d8e6d0932ed99a964884734d0d515f25503884c0417a3118c960b1b7b797a3\"" Jul 7 00:02:26.553844 containerd[1887]: time="2025-07-07T00:02:26.553818662Z" level=info msg="CreateContainer within sandbox \"95d8e6d0932ed99a964884734d0d515f25503884c0417a3118c960b1b7b797a3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 00:02:26.564775 containerd[1887]: time="2025-07-07T00:02:26.564751935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.1-a-7cca70db3c,Uid:9b1e87e3e1b10b42fc2e39ae0ae5a0ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"37662a44293240669849ef76d0f1cf75d8935afacb577152d0d9caa149e927d0\"" Jul 7 00:02:26.566821 containerd[1887]: time="2025-07-07T00:02:26.566366467Z" level=info msg="CreateContainer within sandbox \"37662a44293240669849ef76d0f1cf75d8935afacb577152d0d9caa149e927d0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 00:02:26.581812 kubelet[3107]: I0707 00:02:26.581771 3107 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:26.582084 kubelet[3107]: E0707 00:02:26.582062 3107 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:26.600366 containerd[1887]: time="2025-07-07T00:02:26.600328115Z" level=info msg="Container 1c882e7c5c746cea97b92ea0bb7c886612a86e13efd71f93ef3cb82a40de6f3e: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:02:26.613717 containerd[1887]: time="2025-07-07T00:02:26.613686826Z" level=info msg="Container 62485022591e8e6d1c83e6e9610b0b89112c5290b5b3a92f2e4c8297a6799ca9: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:02:26.633811 containerd[1887]: time="2025-07-07T00:02:26.633371485Z" level=info msg="Container f1fddc5a91b9b4487f3756e695afe14753a16731ccc0b3811de1f310ada2c80e: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:02:26.657814 containerd[1887]: time="2025-07-07T00:02:26.657777393Z" level=info msg="CreateContainer within sandbox \"95d8e6d0932ed99a964884734d0d515f25503884c0417a3118c960b1b7b797a3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1c882e7c5c746cea97b92ea0bb7c886612a86e13efd71f93ef3cb82a40de6f3e\"" Jul 7 00:02:26.658352 containerd[1887]: time="2025-07-07T00:02:26.658321803Z" level=info msg="StartContainer for \"1c882e7c5c746cea97b92ea0bb7c886612a86e13efd71f93ef3cb82a40de6f3e\"" Jul 7 00:02:26.659140 containerd[1887]: time="2025-07-07T00:02:26.659113812Z" level=info msg="connecting to shim 1c882e7c5c746cea97b92ea0bb7c886612a86e13efd71f93ef3cb82a40de6f3e" address="unix:///run/containerd/s/5fe966680a1810ba19736615e4ed48a32d4b6f18b63c1085fea3b16e572bee80" protocol=ttrpc version=3 Jul 7 00:02:26.673712 systemd[1]: Started cri-containerd-1c882e7c5c746cea97b92ea0bb7c886612a86e13efd71f93ef3cb82a40de6f3e.scope - libcontainer container 1c882e7c5c746cea97b92ea0bb7c886612a86e13efd71f93ef3cb82a40de6f3e. 
Jul 7 00:02:26.679752 containerd[1887]: time="2025-07-07T00:02:26.679709677Z" level=info msg="CreateContainer within sandbox \"37662a44293240669849ef76d0f1cf75d8935afacb577152d0d9caa149e927d0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f1fddc5a91b9b4487f3756e695afe14753a16731ccc0b3811de1f310ada2c80e\"" Jul 7 00:02:26.680289 containerd[1887]: time="2025-07-07T00:02:26.680272895Z" level=info msg="StartContainer for \"f1fddc5a91b9b4487f3756e695afe14753a16731ccc0b3811de1f310ada2c80e\"" Jul 7 00:02:26.681328 containerd[1887]: time="2025-07-07T00:02:26.681291744Z" level=info msg="connecting to shim f1fddc5a91b9b4487f3756e695afe14753a16731ccc0b3811de1f310ada2c80e" address="unix:///run/containerd/s/9745cf1c2fdbb70de8926b44ae397cb322e4eafb6de0d966e8b5fa03dcba1f9d" protocol=ttrpc version=3 Jul 7 00:02:26.690254 containerd[1887]: time="2025-07-07T00:02:26.690209000Z" level=info msg="CreateContainer within sandbox \"5aa9f67a298633bdd32943ac08247fa5cdf55e3a117ffa2fd8db233a44741e26\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"62485022591e8e6d1c83e6e9610b0b89112c5290b5b3a92f2e4c8297a6799ca9\"" Jul 7 00:02:26.691124 containerd[1887]: time="2025-07-07T00:02:26.690788698Z" level=info msg="StartContainer for \"62485022591e8e6d1c83e6e9610b0b89112c5290b5b3a92f2e4c8297a6799ca9\"" Jul 7 00:02:26.692915 containerd[1887]: time="2025-07-07T00:02:26.692895366Z" level=info msg="connecting to shim 62485022591e8e6d1c83e6e9610b0b89112c5290b5b3a92f2e4c8297a6799ca9" address="unix:///run/containerd/s/c3dd8c6ff1d1547cfef4caa6f3c5cabc0defa43453213aabd839724b5d754397" protocol=ttrpc version=3 Jul 7 00:02:26.700234 kubelet[3107]: W0707 00:02:26.700163 3107 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jul 7 00:02:26.700981 kubelet[3107]: E0707 00:02:26.700959 3107 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:02:26.706722 systemd[1]: Started cri-containerd-f1fddc5a91b9b4487f3756e695afe14753a16731ccc0b3811de1f310ada2c80e.scope - libcontainer container f1fddc5a91b9b4487f3756e695afe14753a16731ccc0b3811de1f310ada2c80e. Jul 7 00:02:26.714667 systemd[1]: Started cri-containerd-62485022591e8e6d1c83e6e9610b0b89112c5290b5b3a92f2e4c8297a6799ca9.scope - libcontainer container 62485022591e8e6d1c83e6e9610b0b89112c5290b5b3a92f2e4c8297a6799ca9. 
Jul 7 00:02:26.728134 containerd[1887]: time="2025-07-07T00:02:26.727985187Z" level=info msg="StartContainer for \"1c882e7c5c746cea97b92ea0bb7c886612a86e13efd71f93ef3cb82a40de6f3e\" returns successfully" Jul 7 00:02:26.774089 containerd[1887]: time="2025-07-07T00:02:26.773996520Z" level=info msg="StartContainer for \"62485022591e8e6d1c83e6e9610b0b89112c5290b5b3a92f2e4c8297a6799ca9\" returns successfully" Jul 7 00:02:26.775242 containerd[1887]: time="2025-07-07T00:02:26.775203911Z" level=info msg="StartContainer for \"f1fddc5a91b9b4487f3756e695afe14753a16731ccc0b3811de1f310ada2c80e\" returns successfully" Jul 7 00:02:26.842360 kubelet[3107]: E0707 00:02:26.842331 3107 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-7cca70db3c\" not found" node="ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:26.845222 kubelet[3107]: E0707 00:02:26.845105 3107 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-7cca70db3c\" not found" node="ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:26.846705 kubelet[3107]: E0707 00:02:26.846677 3107 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-7cca70db3c\" not found" node="ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:27.384446 kubelet[3107]: I0707 00:02:27.384189 3107 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:27.848354 kubelet[3107]: E0707 00:02:27.848318 3107 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-7cca70db3c\" not found" node="ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:27.848803 kubelet[3107]: E0707 00:02:27.848783 3107 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-7cca70db3c\" not found" node="ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:27.925456 kubelet[3107]: E0707 00:02:27.925417 3107 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372.0.1-a-7cca70db3c\" not found" node="ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:28.114199 kubelet[3107]: I0707 00:02:28.114089 3107 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:28.114199 kubelet[3107]: E0707 00:02:28.114121 3107 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4372.0.1-a-7cca70db3c\": node \"ci-4372.0.1-a-7cca70db3c\" not found" Jul 7 00:02:28.202580 kubelet[3107]: I0707 00:02:28.202540 3107 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:28.265445 kubelet[3107]: E0707 00:02:28.265181 3107 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.1-a-7cca70db3c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:28.265445 kubelet[3107]: I0707 00:02:28.265232 3107 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:28.268112 kubelet[3107]: E0707 00:02:28.267980 3107 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.1-a-7cca70db3c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.0.1-a-7cca70db3c" 
Jul 7 00:02:28.268112 kubelet[3107]: I0707 00:02:28.268002 3107 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:28.271097 kubelet[3107]: E0707 00:02:28.271070 3107 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372.0.1-a-7cca70db3c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:28.794523 kubelet[3107]: I0707 00:02:28.794491 3107 apiserver.go:52] "Watching apiserver" Jul 7 00:02:28.803267 kubelet[3107]: I0707 00:02:28.803242 3107 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 00:02:28.848882 kubelet[3107]: I0707 00:02:28.848794 3107 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:28.850396 kubelet[3107]: E0707 00:02:28.850372 3107 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.1-a-7cca70db3c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:29.929386 systemd[1]: Reload requested from client PID 3374 ('systemctl') (unit session-9.scope)... Jul 7 00:02:29.929398 systemd[1]: Reloading... Jul 7 00:02:30.010572 zram_generator::config[3423]: No configuration found. Jul 7 00:02:30.076105 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:02:30.166372 systemd[1]: Reloading finished in 236 ms. Jul 7 00:02:30.185825 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:02:30.198204 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 00:02:30.198478 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:02:30.198542 systemd[1]: kubelet.service: Consumed 1.037s CPU time, 127.3M memory peak. Jul 7 00:02:30.200124 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:02:31.097046 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:02:31.103936 (kubelet)[3484]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:02:31.134303 kubelet[3484]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:02:31.134303 kubelet[3484]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 00:02:31.134303 kubelet[3484]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 7 00:02:31.136273 kubelet[3484]: I0707 00:02:31.135978 3484 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:02:31.140536 kubelet[3484]: I0707 00:02:31.140479 3484 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 7 00:02:31.141066 kubelet[3484]: I0707 00:02:31.141022 3484 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:02:31.141358 kubelet[3484]: I0707 00:02:31.141295 3484 server.go:954] "Client rotation is on, will bootstrap in background" Jul 7 00:02:31.142455 kubelet[3484]: I0707 00:02:31.142431 3484 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 7 00:02:31.144724 kubelet[3484]: I0707 00:02:31.144698 3484 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:02:31.149154 kubelet[3484]: I0707 00:02:31.149131 3484 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 00:02:31.154387 kubelet[3484]: I0707 00:02:31.154362 3484 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 7 00:02:31.154615 kubelet[3484]: I0707 00:02:31.154592 3484 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:02:31.154730 kubelet[3484]: I0707 00:02:31.154615 3484 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.1-a-7cca70db3c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 00:02:31.154810 kubelet[3484]: I0707 00:02:31.154736 3484 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 00:02:31.154810 kubelet[3484]: I0707 00:02:31.154744 3484 container_manager_linux.go:304] "Creating device plugin manager" Jul 7 00:02:31.154810 kubelet[3484]: I0707 00:02:31.154775 3484 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:02:31.154879 kubelet[3484]: I0707 
00:02:31.154869 3484 kubelet.go:446] "Attempting to sync node with API server" Jul 7 00:02:31.154901 kubelet[3484]: I0707 00:02:31.154882 3484 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:02:31.154920 kubelet[3484]: I0707 00:02:31.154902 3484 kubelet.go:352] "Adding apiserver pod source" Jul 7 00:02:31.154920 kubelet[3484]: I0707 00:02:31.154912 3484 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:02:31.155876 kubelet[3484]: I0707 00:02:31.155712 3484 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 00:02:31.156968 kubelet[3484]: I0707 00:02:31.156448 3484 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 00:02:31.157467 kubelet[3484]: I0707 00:02:31.157453 3484 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 00:02:31.157637 kubelet[3484]: I0707 00:02:31.157626 3484 server.go:1287] "Started kubelet" Jul 7 00:02:31.159620 kubelet[3484]: I0707 00:02:31.159604 3484 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:02:31.163935 kubelet[3484]: I0707 00:02:31.163889 3484 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:02:31.164511 kubelet[3484]: I0707 00:02:31.164458 3484 server.go:479] "Adding debug handlers to kubelet server" Jul 7 00:02:31.165554 kubelet[3484]: I0707 00:02:31.165250 3484 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 00:02:31.165554 kubelet[3484]: E0707 00:02:31.165414 3484 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.1-a-7cca70db3c\" not found" Jul 7 00:02:31.166223 kubelet[3484]: I0707 00:02:31.166174 3484 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:02:31.166355 kubelet[3484]: I0707 00:02:31.166335 3484 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:02:31.166503 kubelet[3484]: I0707 00:02:31.166472 3484 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:02:31.167597 kubelet[3484]: I0707 00:02:31.166876 3484 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 00:02:31.167597 kubelet[3484]: I0707 00:02:31.167000 3484 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:02:31.168828 kubelet[3484]: I0707 00:02:31.168788 3484 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 00:02:31.169720 kubelet[3484]: I0707 00:02:31.169700 3484 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 00:02:31.169812 kubelet[3484]: I0707 00:02:31.169803 3484 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 7 00:02:31.169864 kubelet[3484]: I0707 00:02:31.169856 3484 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 7 00:02:31.169903 kubelet[3484]: I0707 00:02:31.169896 3484 kubelet.go:2382] "Starting kubelet main sync loop" Jul 7 00:02:31.170023 kubelet[3484]: E0707 00:02:31.170006 3484 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:02:31.172515 kubelet[3484]: I0707 00:02:31.172361 3484 factory.go:221] Registration of the systemd container factory successfully Jul 7 00:02:31.172515 kubelet[3484]: I0707 00:02:31.172441 3484 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:02:31.178531 kubelet[3484]: I0707 00:02:31.178291 3484 factory.go:221] Registration of the containerd container factory successfully Jul 7 00:02:31.195877 kubelet[3484]: E0707 00:02:31.195839 3484 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 00:02:31.222099 kubelet[3484]: I0707 00:02:31.222071 3484 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 00:02:31.222099 kubelet[3484]: I0707 00:02:31.222087 3484 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 00:02:31.222238 kubelet[3484]: I0707 00:02:31.222132 3484 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:02:31.222317 kubelet[3484]: I0707 00:02:31.222299 3484 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 00:02:31.222355 kubelet[3484]: I0707 00:02:31.222313 3484 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 00:02:31.222355 kubelet[3484]: I0707 00:02:31.222327 3484 policy_none.go:49] "None policy: Start" Jul 7 00:02:31.222355 kubelet[3484]: I0707 00:02:31.222346 3484 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 00:02:31.222355 kubelet[3484]: I0707 00:02:31.222354 3484 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:02:31.222444 kubelet[3484]: I0707 00:02:31.222432 3484 state_mem.go:75] "Updated machine memory state" Jul 7 00:02:31.225786 kubelet[3484]: I0707 00:02:31.225769 3484 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 00:02:31.225908 kubelet[3484]: I0707 00:02:31.225893 3484 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:02:31.225938 kubelet[3484]: I0707 00:02:31.225905 3484 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:02:31.227273 kubelet[3484]: I0707 00:02:31.227248 3484 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:02:31.227449 kubelet[3484]: E0707 00:02:31.227433 3484 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 7 00:02:31.240531 sudo[3515]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 7 00:02:31.241023 sudo[3515]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 7 00:02:31.271110 kubelet[3484]: I0707 00:02:31.270783 3484 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:31.271354 kubelet[3484]: I0707 00:02:31.271336 3484 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:31.271654 kubelet[3484]: I0707 00:02:31.271635 3484 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:31.278346 kubelet[3484]: W0707 00:02:31.278126 3484 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 00:02:31.283179 kubelet[3484]: W0707 00:02:31.283159 3484 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 00:02:31.283452 kubelet[3484]: W0707 00:02:31.283439 3484 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 00:02:31.330833 kubelet[3484]: I0707 00:02:31.330765 3484 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:31.342581 kubelet[3484]: I0707 00:02:31.342483 3484 kubelet_node_status.go:124] "Node was previously registered" node="ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:31.343228 kubelet[3484]: I0707 00:02:31.342773 3484 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:31.469323 kubelet[3484]: I0707 00:02:31.469235 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c4a86ba012e05e50e41b9f6077dd607d-ca-certs\") pod \"kube-apiserver-ci-4372.0.1-a-7cca70db3c\" (UID: \"c4a86ba012e05e50e41b9f6077dd607d\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:31.469323 kubelet[3484]: I0707 00:02:31.469262 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c4a86ba012e05e50e41b9f6077dd607d-k8s-certs\") pod \"kube-apiserver-ci-4372.0.1-a-7cca70db3c\" (UID: \"c4a86ba012e05e50e41b9f6077dd607d\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:31.469691 kubelet[3484]: I0707 00:02:31.469278 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c4a86ba012e05e50e41b9f6077dd607d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.1-a-7cca70db3c\" (UID: \"c4a86ba012e05e50e41b9f6077dd607d\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:31.469691 kubelet[3484]: I0707 00:02:31.469559 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fb649faa5097cacc1882437bfc723020-ca-certs\") pod \"kube-controller-manager-ci-4372.0.1-a-7cca70db3c\" (UID: 
\"fb649faa5097cacc1882437bfc723020\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:31.469691 kubelet[3484]: I0707 00:02:31.469577 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fb649faa5097cacc1882437bfc723020-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.1-a-7cca70db3c\" (UID: \"fb649faa5097cacc1882437bfc723020\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:31.469691 kubelet[3484]: I0707 00:02:31.469588 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b1e87e3e1b10b42fc2e39ae0ae5a0ce-kubeconfig\") pod \"kube-scheduler-ci-4372.0.1-a-7cca70db3c\" (UID: \"9b1e87e3e1b10b42fc2e39ae0ae5a0ce\") " pod="kube-system/kube-scheduler-ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:31.469691 kubelet[3484]: I0707 00:02:31.469598 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fb649faa5097cacc1882437bfc723020-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.1-a-7cca70db3c\" (UID: \"fb649faa5097cacc1882437bfc723020\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:31.469796 kubelet[3484]: I0707 00:02:31.469609 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fb649faa5097cacc1882437bfc723020-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.1-a-7cca70db3c\" (UID: \"fb649faa5097cacc1882437bfc723020\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:31.469916 kubelet[3484]: I0707 00:02:31.469886 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fb649faa5097cacc1882437bfc723020-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.1-a-7cca70db3c\" (UID: \"fb649faa5097cacc1882437bfc723020\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:31.591703 sudo[3515]: pam_unix(sudo:session): session closed for user root Jul 7 00:02:32.160099 kubelet[3484]: I0707 00:02:32.160052 3484 apiserver.go:52] "Watching apiserver" Jul 7 00:02:32.168306 kubelet[3484]: I0707 00:02:32.168280 3484 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 00:02:32.215569 kubelet[3484]: I0707 00:02:32.215535 3484 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:32.215955 kubelet[3484]: I0707 00:02:32.215916 3484 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:32.223957 kubelet[3484]: W0707 00:02:32.223928 3484 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 00:02:32.224031 kubelet[3484]: E0707 00:02:32.223979 3484 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.1-a-7cca70db3c\" already exists" pod="kube-system/kube-scheduler-ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:32.224694 kubelet[3484]: W0707 00:02:32.224655 3484 warnings.go:70] metadata.name: this is used in 
the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 00:02:32.224856 kubelet[3484]: E0707 00:02:32.224795 3484 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.1-a-7cca70db3c\" already exists" pod="kube-system/kube-apiserver-ci-4372.0.1-a-7cca70db3c" Jul 7 00:02:32.232021 kubelet[3484]: I0707 00:02:32.231892 3484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372.0.1-a-7cca70db3c" podStartSLOduration=1.231867832 podStartE2EDuration="1.231867832s" podCreationTimestamp="2025-07-07 00:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:02:32.23182711 +0000 UTC m=+1.125052485" watchObservedRunningTime="2025-07-07 00:02:32.231867832 +0000 UTC m=+1.125093207" Jul 7 00:02:32.253670 kubelet[3484]: I0707 00:02:32.253596 3484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372.0.1-a-7cca70db3c" podStartSLOduration=1.25358555 podStartE2EDuration="1.25358555s" podCreationTimestamp="2025-07-07 00:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:02:32.242416448 +0000 UTC m=+1.135641847" watchObservedRunningTime="2025-07-07 00:02:32.25358555 +0000 UTC m=+1.146810925" Jul 7 00:02:32.263045 kubelet[3484]: I0707 00:02:32.263008 3484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372.0.1-a-7cca70db3c" podStartSLOduration=1.262998848 podStartE2EDuration="1.262998848s" podCreationTimestamp="2025-07-07 00:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:02:32.253975219 +0000 UTC m=+1.147200594" watchObservedRunningTime="2025-07-07 00:02:32.262998848 +0000 UTC m=+1.156224231" Jul 7 00:02:32.750382 sudo[2343]: pam_unix(sudo:session): session closed for user root Jul 7 00:02:32.839199 sshd[2342]: Connection closed by 10.200.16.10 port 53966 Jul 7 00:02:32.839752 sshd-session[2340]: pam_unix(sshd:session): session closed for user core Jul 7 00:02:32.843022 systemd[1]: sshd@6-10.200.20.36:22-10.200.16.10:53966.service: Deactivated successfully. Jul 7 00:02:32.845417 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 00:02:32.845585 systemd[1]: session-9.scope: Consumed 3.108s CPU time, 266.9M memory peak. Jul 7 00:02:32.846735 systemd-logind[1860]: Session 9 logged out. Waiting for processes to exit. Jul 7 00:02:32.848317 systemd-logind[1860]: Removed session 9. Jul 7 00:02:36.596363 kubelet[3484]: I0707 00:02:36.596323 3484 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 00:02:36.597020 kubelet[3484]: I0707 00:02:36.596756 3484 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 00:02:36.597049 containerd[1887]: time="2025-07-07T00:02:36.596615959Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 7 00:02:37.607444 systemd[1]: Created slice kubepods-besteffort-podadf32000_4e71_484d_8e6a_cef4a488b5d8.slice - libcontainer container kubepods-besteffort-podadf32000_4e71_484d_8e6a_cef4a488b5d8.slice. 
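
The kuberuntime_manager and kubelet_network entries above show the node receiving PodCIDR 192.168.0.0/24 and pushing it to the CRI runtime. A small Go sketch of the basic containment check implied by that CIDR; the pod IP used here is hypothetical, since real addresses are assigned by the CNI plugin (Cilium in this log).

    // podcidr_sketch.go - checks a candidate pod IP against the PodCIDR above.
    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	// PodCIDR reported by the kubelet for this node.
    	cidr := netip.MustParsePrefix("192.168.0.0/24")

    	// Hypothetical pod IP for illustration only.
    	podIP := netip.MustParseAddr("192.168.0.17")

    	fmt.Printf("%s contains %s: %v\n", cidr, podIP, cidr.Contains(podIP))
    }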
Jul 7 00:02:37.622285 systemd[1]: Created slice kubepods-burstable-pod2461eeeb_d0ca_43f3_b55b_f005c3167746.slice - libcontainer container kubepods-burstable-pod2461eeeb_d0ca_43f3_b55b_f005c3167746.slice. Jul 7 00:02:37.702558 kubelet[3484]: I0707 00:02:37.702517 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njrzp\" (UniqueName: \"kubernetes.io/projected/adf32000-4e71-484d-8e6a-cef4a488b5d8-kube-api-access-njrzp\") pod \"kube-proxy-wddbh\" (UID: \"adf32000-4e71-484d-8e6a-cef4a488b5d8\") " pod="kube-system/kube-proxy-wddbh" Jul 7 00:02:37.702558 kubelet[3484]: I0707 00:02:37.702559 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-hostproc\") pod \"cilium-cqk4g\" (UID: \"2461eeeb-d0ca-43f3-b55b-f005c3167746\") " pod="kube-system/cilium-cqk4g" Jul 7 00:02:37.702558 kubelet[3484]: I0707 00:02:37.702570 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-cni-path\") pod \"cilium-cqk4g\" (UID: \"2461eeeb-d0ca-43f3-b55b-f005c3167746\") " pod="kube-system/cilium-cqk4g" Jul 7 00:02:37.702558 kubelet[3484]: I0707 00:02:37.702581 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2461eeeb-d0ca-43f3-b55b-f005c3167746-cilium-config-path\") pod \"cilium-cqk4g\" (UID: \"2461eeeb-d0ca-43f3-b55b-f005c3167746\") " pod="kube-system/cilium-cqk4g" Jul 7 00:02:37.702558 kubelet[3484]: I0707 00:02:37.702590 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-cilium-cgroup\") pod \"cilium-cqk4g\" (UID: \"2461eeeb-d0ca-43f3-b55b-f005c3167746\") " pod="kube-system/cilium-cqk4g" Jul 7 00:02:37.702558 kubelet[3484]: I0707 00:02:37.702600 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2461eeeb-d0ca-43f3-b55b-f005c3167746-clustermesh-secrets\") pod \"cilium-cqk4g\" (UID: \"2461eeeb-d0ca-43f3-b55b-f005c3167746\") " pod="kube-system/cilium-cqk4g" Jul 7 00:02:37.703172 kubelet[3484]: I0707 00:02:37.702622 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm7fk\" (UniqueName: \"kubernetes.io/projected/2461eeeb-d0ca-43f3-b55b-f005c3167746-kube-api-access-hm7fk\") pod \"cilium-cqk4g\" (UID: \"2461eeeb-d0ca-43f3-b55b-f005c3167746\") " pod="kube-system/cilium-cqk4g" Jul 7 00:02:37.703172 kubelet[3484]: I0707 00:02:37.702634 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-cilium-run\") pod \"cilium-cqk4g\" (UID: \"2461eeeb-d0ca-43f3-b55b-f005c3167746\") " pod="kube-system/cilium-cqk4g" Jul 7 00:02:37.703172 kubelet[3484]: I0707 00:02:37.702643 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-etc-cni-netd\") pod \"cilium-cqk4g\" (UID: \"2461eeeb-d0ca-43f3-b55b-f005c3167746\") " 
pod="kube-system/cilium-cqk4g" Jul 7 00:02:37.703172 kubelet[3484]: I0707 00:02:37.702651 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-xtables-lock\") pod \"cilium-cqk4g\" (UID: \"2461eeeb-d0ca-43f3-b55b-f005c3167746\") " pod="kube-system/cilium-cqk4g" Jul 7 00:02:37.703172 kubelet[3484]: I0707 00:02:37.702662 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-bpf-maps\") pod \"cilium-cqk4g\" (UID: \"2461eeeb-d0ca-43f3-b55b-f005c3167746\") " pod="kube-system/cilium-cqk4g" Jul 7 00:02:37.703172 kubelet[3484]: I0707 00:02:37.702677 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/adf32000-4e71-484d-8e6a-cef4a488b5d8-kube-proxy\") pod \"kube-proxy-wddbh\" (UID: \"adf32000-4e71-484d-8e6a-cef4a488b5d8\") " pod="kube-system/kube-proxy-wddbh" Jul 7 00:02:37.703409 kubelet[3484]: I0707 00:02:37.702686 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-host-proc-sys-net\") pod \"cilium-cqk4g\" (UID: \"2461eeeb-d0ca-43f3-b55b-f005c3167746\") " pod="kube-system/cilium-cqk4g" Jul 7 00:02:37.703409 kubelet[3484]: I0707 00:02:37.702694 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-host-proc-sys-kernel\") pod \"cilium-cqk4g\" (UID: \"2461eeeb-d0ca-43f3-b55b-f005c3167746\") " pod="kube-system/cilium-cqk4g" Jul 7 00:02:37.703409 kubelet[3484]: I0707 00:02:37.702705 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2461eeeb-d0ca-43f3-b55b-f005c3167746-hubble-tls\") pod \"cilium-cqk4g\" (UID: \"2461eeeb-d0ca-43f3-b55b-f005c3167746\") " pod="kube-system/cilium-cqk4g" Jul 7 00:02:37.703409 kubelet[3484]: I0707 00:02:37.702718 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-lib-modules\") pod \"cilium-cqk4g\" (UID: \"2461eeeb-d0ca-43f3-b55b-f005c3167746\") " pod="kube-system/cilium-cqk4g" Jul 7 00:02:37.703409 kubelet[3484]: I0707 00:02:37.702731 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/adf32000-4e71-484d-8e6a-cef4a488b5d8-xtables-lock\") pod \"kube-proxy-wddbh\" (UID: \"adf32000-4e71-484d-8e6a-cef4a488b5d8\") " pod="kube-system/kube-proxy-wddbh" Jul 7 00:02:37.703409 kubelet[3484]: I0707 00:02:37.702739 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/adf32000-4e71-484d-8e6a-cef4a488b5d8-lib-modules\") pod \"kube-proxy-wddbh\" (UID: \"adf32000-4e71-484d-8e6a-cef4a488b5d8\") " pod="kube-system/kube-proxy-wddbh" Jul 7 00:02:37.719976 systemd[1]: Created slice kubepods-besteffort-pod5201c41b_558a_4393_97fc_f0192cdf8ef8.slice - libcontainer container 
kubepods-besteffort-pod5201c41b_558a_4393_97fc_f0192cdf8ef8.slice. Jul 7 00:02:37.803142 kubelet[3484]: I0707 00:02:37.803092 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5201c41b-558a-4393-97fc-f0192cdf8ef8-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-r2445\" (UID: \"5201c41b-558a-4393-97fc-f0192cdf8ef8\") " pod="kube-system/cilium-operator-6c4d7847fc-r2445" Jul 7 00:02:37.803283 kubelet[3484]: I0707 00:02:37.803176 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg8df\" (UniqueName: \"kubernetes.io/projected/5201c41b-558a-4393-97fc-f0192cdf8ef8-kube-api-access-mg8df\") pod \"cilium-operator-6c4d7847fc-r2445\" (UID: \"5201c41b-558a-4393-97fc-f0192cdf8ef8\") " pod="kube-system/cilium-operator-6c4d7847fc-r2445" Jul 7 00:02:37.916243 containerd[1887]: time="2025-07-07T00:02:37.916203324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wddbh,Uid:adf32000-4e71-484d-8e6a-cef4a488b5d8,Namespace:kube-system,Attempt:0,}" Jul 7 00:02:37.929084 containerd[1887]: time="2025-07-07T00:02:37.929054633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cqk4g,Uid:2461eeeb-d0ca-43f3-b55b-f005c3167746,Namespace:kube-system,Attempt:0,}" Jul 7 00:02:38.002804 containerd[1887]: time="2025-07-07T00:02:38.002525074Z" level=info msg="connecting to shim 723a7382f5426430785118cb3f2e2217e11056c55a00dcdf9d60418d1b5f5aa3" address="unix:///run/containerd/s/8a37d12a97a01be7304a57ef9b77a09dd1cd50d3094e9a0ab0438d2ae86585ff" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:02:38.019634 systemd[1]: Started cri-containerd-723a7382f5426430785118cb3f2e2217e11056c55a00dcdf9d60418d1b5f5aa3.scope - libcontainer container 723a7382f5426430785118cb3f2e2217e11056c55a00dcdf9d60418d1b5f5aa3. Jul 7 00:02:38.024180 containerd[1887]: time="2025-07-07T00:02:38.024099931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-r2445,Uid:5201c41b-558a-4393-97fc-f0192cdf8ef8,Namespace:kube-system,Attempt:0,}" Jul 7 00:02:38.029463 containerd[1887]: time="2025-07-07T00:02:38.029357019Z" level=info msg="connecting to shim 7e16eedfdbe1a86479dc69d76ebc444ea5a7600f0171a1a1123f77b9243d2d64" address="unix:///run/containerd/s/e91c205eecdc57c7c691afa7d79c37989dfafa8fcaf7722cde4758e29c634b68" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:02:38.052562 containerd[1887]: time="2025-07-07T00:02:38.052526601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wddbh,Uid:adf32000-4e71-484d-8e6a-cef4a488b5d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"723a7382f5426430785118cb3f2e2217e11056c55a00dcdf9d60418d1b5f5aa3\"" Jul 7 00:02:38.053665 systemd[1]: Started cri-containerd-7e16eedfdbe1a86479dc69d76ebc444ea5a7600f0171a1a1123f77b9243d2d64.scope - libcontainer container 7e16eedfdbe1a86479dc69d76ebc444ea5a7600f0171a1a1123f77b9243d2d64. 
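
The repeated "connecting to shim ... address=unix:///run/containerd/s/..." entries describe containerd talking to its per-sandbox shims over local unix sockets (protocol=ttrpc). As a rough sketch only: such an address is an ordinary AF_UNIX stream socket, as the snippet below shows; the socket path is a placeholder, and real traffic would be ttrpc-framed rather than raw bytes.

    // shim_socket_sketch.go - dials a unix socket of the kind named in the log.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	const sock = "/run/containerd/s/example.sock" // hypothetical path

    	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
    	if err != nil {
    		fmt.Println("dial failed (expected unless a shim is listening):", err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println("connected to", conn.RemoteAddr())
    }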
Jul 7 00:02:38.055983 containerd[1887]: time="2025-07-07T00:02:38.055960236Z" level=info msg="CreateContainer within sandbox \"723a7382f5426430785118cb3f2e2217e11056c55a00dcdf9d60418d1b5f5aa3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 00:02:38.090086 containerd[1887]: time="2025-07-07T00:02:38.090046927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cqk4g,Uid:2461eeeb-d0ca-43f3-b55b-f005c3167746,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e16eedfdbe1a86479dc69d76ebc444ea5a7600f0171a1a1123f77b9243d2d64\"" Jul 7 00:02:38.092375 containerd[1887]: time="2025-07-07T00:02:38.092348716Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 7 00:02:38.112902 containerd[1887]: time="2025-07-07T00:02:38.112873059Z" level=info msg="Container ad1de6e1773aed44c1d4ad078be98207e823e550b7aa231acc82f620c6e48b8a: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:02:38.127646 containerd[1887]: time="2025-07-07T00:02:38.127609647Z" level=info msg="connecting to shim ab1936bf67181d458407e973f2d899ddbaa23015ad89e7af07a5492d3dca4a63" address="unix:///run/containerd/s/1300a557c742e070c5c47701e5b64d94876e8884c952e6199f1a3fd2e34bbe65" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:02:38.147771 containerd[1887]: time="2025-07-07T00:02:38.146882996Z" level=info msg="CreateContainer within sandbox \"723a7382f5426430785118cb3f2e2217e11056c55a00dcdf9d60418d1b5f5aa3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ad1de6e1773aed44c1d4ad078be98207e823e550b7aa231acc82f620c6e48b8a\"" Jul 7 00:02:38.147666 systemd[1]: Started cri-containerd-ab1936bf67181d458407e973f2d899ddbaa23015ad89e7af07a5492d3dca4a63.scope - libcontainer container ab1936bf67181d458407e973f2d899ddbaa23015ad89e7af07a5492d3dca4a63. Jul 7 00:02:38.150193 containerd[1887]: time="2025-07-07T00:02:38.150153473Z" level=info msg="StartContainer for \"ad1de6e1773aed44c1d4ad078be98207e823e550b7aa231acc82f620c6e48b8a\"" Jul 7 00:02:38.153094 containerd[1887]: time="2025-07-07T00:02:38.153054986Z" level=info msg="connecting to shim ad1de6e1773aed44c1d4ad078be98207e823e550b7aa231acc82f620c6e48b8a" address="unix:///run/containerd/s/8a37d12a97a01be7304a57ef9b77a09dd1cd50d3094e9a0ab0438d2ae86585ff" protocol=ttrpc version=3 Jul 7 00:02:38.173622 systemd[1]: Started cri-containerd-ad1de6e1773aed44c1d4ad078be98207e823e550b7aa231acc82f620c6e48b8a.scope - libcontainer container ad1de6e1773aed44c1d4ad078be98207e823e550b7aa231acc82f620c6e48b8a. 
Jul 7 00:02:38.191379 containerd[1887]: time="2025-07-07T00:02:38.191347786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-r2445,Uid:5201c41b-558a-4393-97fc-f0192cdf8ef8,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab1936bf67181d458407e973f2d899ddbaa23015ad89e7af07a5492d3dca4a63\"" Jul 7 00:02:38.213075 containerd[1887]: time="2025-07-07T00:02:38.212720181Z" level=info msg="StartContainer for \"ad1de6e1773aed44c1d4ad078be98207e823e550b7aa231acc82f620c6e48b8a\" returns successfully" Jul 7 00:02:38.240490 kubelet[3484]: I0707 00:02:38.240429 3484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wddbh" podStartSLOduration=1.240414706 podStartE2EDuration="1.240414706s" podCreationTimestamp="2025-07-07 00:02:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:02:38.239981292 +0000 UTC m=+7.133206667" watchObservedRunningTime="2025-07-07 00:02:38.240414706 +0000 UTC m=+7.133640081" Jul 7 00:02:47.693349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2682958666.mount: Deactivated successfully. Jul 7 00:02:49.552266 containerd[1887]: time="2025-07-07T00:02:49.552134115Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:49.789666 containerd[1887]: time="2025-07-07T00:02:49.789613004Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 7 00:02:50.136159 containerd[1887]: time="2025-07-07T00:02:50.136107478Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:50.137045 containerd[1887]: time="2025-07-07T00:02:50.136996381Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 12.04461736s" Jul 7 00:02:50.137220 containerd[1887]: time="2025-07-07T00:02:50.137149170Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 7 00:02:50.138460 containerd[1887]: time="2025-07-07T00:02:50.138434111Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 7 00:02:50.139688 containerd[1887]: time="2025-07-07T00:02:50.139561886Z" level=info msg="CreateContainer within sandbox \"7e16eedfdbe1a86479dc69d76ebc444ea5a7600f0171a1a1123f77b9243d2d64\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 7 00:02:50.174389 containerd[1887]: time="2025-07-07T00:02:50.174363794Z" level=info msg="Container d1acc36ba21cf6c815cec6637d9947e9b71c80c1e37c6324af64868c6ea92b62: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:02:50.176057 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3598935724.mount: Deactivated successfully. Jul 7 00:02:50.203249 containerd[1887]: time="2025-07-07T00:02:50.203205582Z" level=info msg="CreateContainer within sandbox \"7e16eedfdbe1a86479dc69d76ebc444ea5a7600f0171a1a1123f77b9243d2d64\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d1acc36ba21cf6c815cec6637d9947e9b71c80c1e37c6324af64868c6ea92b62\"" Jul 7 00:02:50.203883 containerd[1887]: time="2025-07-07T00:02:50.203858092Z" level=info msg="StartContainer for \"d1acc36ba21cf6c815cec6637d9947e9b71c80c1e37c6324af64868c6ea92b62\"" Jul 7 00:02:50.204854 containerd[1887]: time="2025-07-07T00:02:50.204806509Z" level=info msg="connecting to shim d1acc36ba21cf6c815cec6637d9947e9b71c80c1e37c6324af64868c6ea92b62" address="unix:///run/containerd/s/e91c205eecdc57c7c691afa7d79c37989dfafa8fcaf7722cde4758e29c634b68" protocol=ttrpc version=3 Jul 7 00:02:50.222731 systemd[1]: Started cri-containerd-d1acc36ba21cf6c815cec6637d9947e9b71c80c1e37c6324af64868c6ea92b62.scope - libcontainer container d1acc36ba21cf6c815cec6637d9947e9b71c80c1e37c6324af64868c6ea92b62. Jul 7 00:02:50.249140 containerd[1887]: time="2025-07-07T00:02:50.249106027Z" level=info msg="StartContainer for \"d1acc36ba21cf6c815cec6637d9947e9b71c80c1e37c6324af64868c6ea92b62\" returns successfully" Jul 7 00:02:50.254183 systemd[1]: cri-containerd-d1acc36ba21cf6c815cec6637d9947e9b71c80c1e37c6324af64868c6ea92b62.scope: Deactivated successfully. Jul 7 00:02:50.256026 containerd[1887]: time="2025-07-07T00:02:50.255913336Z" level=info msg="received exit event container_id:\"d1acc36ba21cf6c815cec6637d9947e9b71c80c1e37c6324af64868c6ea92b62\" id:\"d1acc36ba21cf6c815cec6637d9947e9b71c80c1e37c6324af64868c6ea92b62\" pid:3898 exited_at:{seconds:1751846570 nanos:255011321}" Jul 7 00:02:50.256245 containerd[1887]: time="2025-07-07T00:02:50.256223563Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1acc36ba21cf6c815cec6637d9947e9b71c80c1e37c6324af64868c6ea92b62\" id:\"d1acc36ba21cf6c815cec6637d9947e9b71c80c1e37c6324af64868c6ea92b62\" pid:3898 exited_at:{seconds:1751846570 nanos:255011321}" Jul 7 00:02:51.172599 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1acc36ba21cf6c815cec6637d9947e9b71c80c1e37c6324af64868c6ea92b62-rootfs.mount: Deactivated successfully. Jul 7 00:02:54.259606 containerd[1887]: time="2025-07-07T00:02:54.259083123Z" level=info msg="CreateContainer within sandbox \"7e16eedfdbe1a86479dc69d76ebc444ea5a7600f0171a1a1123f77b9243d2d64\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 7 00:02:54.553528 containerd[1887]: time="2025-07-07T00:02:54.551583447Z" level=info msg="Container c46847702bf821d76d009a9f5445c7db65cb93f723e91ef61726cbbf812af4ef: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:02:54.553218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3274026771.mount: Deactivated successfully. 
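
The cilium image pull a few entries earlier reports roughly 157646710 bytes read over 12.04461736s. A back-of-envelope Go sketch turning those two logged figures into an approximate transfer rate:

    // pull_rate_sketch.go - rough pull throughput from the figures in the log.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	bytesRead := 157646710.0 // "bytes read" from the stop-pulling entry
    	elapsed, _ := time.ParseDuration("12.04461736s")

    	mib := bytesRead / (1 << 20)
    	fmt.Printf("~%.1f MiB in %s (~%.1f MiB/s)\n", mib, elapsed, mib/elapsed.Seconds())
    }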
Jul 7 00:02:54.641250 containerd[1887]: time="2025-07-07T00:02:54.641185638Z" level=info msg="CreateContainer within sandbox \"7e16eedfdbe1a86479dc69d76ebc444ea5a7600f0171a1a1123f77b9243d2d64\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c46847702bf821d76d009a9f5445c7db65cb93f723e91ef61726cbbf812af4ef\"" Jul 7 00:02:54.642101 containerd[1887]: time="2025-07-07T00:02:54.642075237Z" level=info msg="StartContainer for \"c46847702bf821d76d009a9f5445c7db65cb93f723e91ef61726cbbf812af4ef\"" Jul 7 00:02:54.642936 containerd[1887]: time="2025-07-07T00:02:54.642733292Z" level=info msg="connecting to shim c46847702bf821d76d009a9f5445c7db65cb93f723e91ef61726cbbf812af4ef" address="unix:///run/containerd/s/e91c205eecdc57c7c691afa7d79c37989dfafa8fcaf7722cde4758e29c634b68" protocol=ttrpc version=3 Jul 7 00:02:54.661604 systemd[1]: Started cri-containerd-c46847702bf821d76d009a9f5445c7db65cb93f723e91ef61726cbbf812af4ef.scope - libcontainer container c46847702bf821d76d009a9f5445c7db65cb93f723e91ef61726cbbf812af4ef. Jul 7 00:02:54.696334 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 00:02:54.696506 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:02:54.697568 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:02:54.699352 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:02:54.700620 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 7 00:02:54.702098 systemd[1]: cri-containerd-c46847702bf821d76d009a9f5445c7db65cb93f723e91ef61726cbbf812af4ef.scope: Deactivated successfully. Jul 7 00:02:54.703292 containerd[1887]: time="2025-07-07T00:02:54.703262991Z" level=info msg="received exit event container_id:\"c46847702bf821d76d009a9f5445c7db65cb93f723e91ef61726cbbf812af4ef\" id:\"c46847702bf821d76d009a9f5445c7db65cb93f723e91ef61726cbbf812af4ef\" pid:3941 exited_at:{seconds:1751846574 nanos:702911378}" Jul 7 00:02:54.703914 containerd[1887]: time="2025-07-07T00:02:54.703822810Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c46847702bf821d76d009a9f5445c7db65cb93f723e91ef61726cbbf812af4ef\" id:\"c46847702bf821d76d009a9f5445c7db65cb93f723e91ef61726cbbf812af4ef\" pid:3941 exited_at:{seconds:1751846574 nanos:702911378}" Jul 7 00:02:54.703914 containerd[1887]: time="2025-07-07T00:02:54.703880740Z" level=info msg="StartContainer for \"c46847702bf821d76d009a9f5445c7db65cb93f723e91ef61726cbbf812af4ef\" returns successfully" Jul 7 00:02:54.719428 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:02:55.548417 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c46847702bf821d76d009a9f5445c7db65cb93f723e91ef61726cbbf812af4ef-rootfs.mount: Deactivated successfully. 
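
The TaskExit events above carry exit timestamps as {seconds, nanos} pairs (Unix epoch). A short Go sketch converting the pair from the apply-sysctl-overwrites exit event into a readable timestamp, which lines up with the journal time of the entry (00:02:54.702911378):

    // exit_time_sketch.go - converts an exited_at {seconds, nanos} pair to a time.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Values taken from the TaskExit event logged above.
    	exitedAt := time.Unix(1751846574, 702911378).UTC()
    	fmt.Println("container exited at", exitedAt.Format(time.RFC3339Nano))
    }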
Jul 7 00:02:56.265262 containerd[1887]: time="2025-07-07T00:02:56.265188649Z" level=info msg="CreateContainer within sandbox \"7e16eedfdbe1a86479dc69d76ebc444ea5a7600f0171a1a1123f77b9243d2d64\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 7 00:02:56.407644 containerd[1887]: time="2025-07-07T00:02:56.407605571Z" level=info msg="Container 2d5e81b114278fb17721842706154cffd22e85e3ae8fe496284a72bde45e41a2: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:02:56.546913 containerd[1887]: time="2025-07-07T00:02:56.546776448Z" level=info msg="CreateContainer within sandbox \"7e16eedfdbe1a86479dc69d76ebc444ea5a7600f0171a1a1123f77b9243d2d64\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2d5e81b114278fb17721842706154cffd22e85e3ae8fe496284a72bde45e41a2\"" Jul 7 00:02:56.548086 containerd[1887]: time="2025-07-07T00:02:56.547476519Z" level=info msg="StartContainer for \"2d5e81b114278fb17721842706154cffd22e85e3ae8fe496284a72bde45e41a2\"" Jul 7 00:02:56.548817 containerd[1887]: time="2025-07-07T00:02:56.548791307Z" level=info msg="connecting to shim 2d5e81b114278fb17721842706154cffd22e85e3ae8fe496284a72bde45e41a2" address="unix:///run/containerd/s/e91c205eecdc57c7c691afa7d79c37989dfafa8fcaf7722cde4758e29c634b68" protocol=ttrpc version=3 Jul 7 00:02:56.565598 systemd[1]: Started cri-containerd-2d5e81b114278fb17721842706154cffd22e85e3ae8fe496284a72bde45e41a2.scope - libcontainer container 2d5e81b114278fb17721842706154cffd22e85e3ae8fe496284a72bde45e41a2. Jul 7 00:02:56.589324 systemd[1]: cri-containerd-2d5e81b114278fb17721842706154cffd22e85e3ae8fe496284a72bde45e41a2.scope: Deactivated successfully. Jul 7 00:02:56.592118 containerd[1887]: time="2025-07-07T00:02:56.592004776Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d5e81b114278fb17721842706154cffd22e85e3ae8fe496284a72bde45e41a2\" id:\"2d5e81b114278fb17721842706154cffd22e85e3ae8fe496284a72bde45e41a2\" pid:3989 exited_at:{seconds:1751846576 nanos:591494287}" Jul 7 00:02:56.644705 containerd[1887]: time="2025-07-07T00:02:56.644608615Z" level=info msg="received exit event container_id:\"2d5e81b114278fb17721842706154cffd22e85e3ae8fe496284a72bde45e41a2\" id:\"2d5e81b114278fb17721842706154cffd22e85e3ae8fe496284a72bde45e41a2\" pid:3989 exited_at:{seconds:1751846576 nanos:591494287}" Jul 7 00:02:56.650519 containerd[1887]: time="2025-07-07T00:02:56.650476667Z" level=info msg="StartContainer for \"2d5e81b114278fb17721842706154cffd22e85e3ae8fe496284a72bde45e41a2\" returns successfully" Jul 7 00:02:56.660004 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d5e81b114278fb17721842706154cffd22e85e3ae8fe496284a72bde45e41a2-rootfs.mount: Deactivated successfully. Jul 7 00:02:58.152971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1712187882.mount: Deactivated successfully. 
Jul 7 00:02:58.273340 containerd[1887]: time="2025-07-07T00:02:58.273216981Z" level=info msg="CreateContainer within sandbox \"7e16eedfdbe1a86479dc69d76ebc444ea5a7600f0171a1a1123f77b9243d2d64\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 7 00:02:58.756505 containerd[1887]: time="2025-07-07T00:02:58.755642343Z" level=info msg="Container 958348ef3381f9ce559b1a8757be3362a4fee3be0724e57301404d932db3e5f7: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:02:58.896102 containerd[1887]: time="2025-07-07T00:02:58.896048662Z" level=info msg="CreateContainer within sandbox \"7e16eedfdbe1a86479dc69d76ebc444ea5a7600f0171a1a1123f77b9243d2d64\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"958348ef3381f9ce559b1a8757be3362a4fee3be0724e57301404d932db3e5f7\"" Jul 7 00:02:58.897146 containerd[1887]: time="2025-07-07T00:02:58.896585312Z" level=info msg="StartContainer for \"958348ef3381f9ce559b1a8757be3362a4fee3be0724e57301404d932db3e5f7\"" Jul 7 00:02:58.898574 containerd[1887]: time="2025-07-07T00:02:58.898551425Z" level=info msg="connecting to shim 958348ef3381f9ce559b1a8757be3362a4fee3be0724e57301404d932db3e5f7" address="unix:///run/containerd/s/e91c205eecdc57c7c691afa7d79c37989dfafa8fcaf7722cde4758e29c634b68" protocol=ttrpc version=3 Jul 7 00:02:58.918600 systemd[1]: Started cri-containerd-958348ef3381f9ce559b1a8757be3362a4fee3be0724e57301404d932db3e5f7.scope - libcontainer container 958348ef3381f9ce559b1a8757be3362a4fee3be0724e57301404d932db3e5f7. Jul 7 00:02:58.935029 systemd[1]: cri-containerd-958348ef3381f9ce559b1a8757be3362a4fee3be0724e57301404d932db3e5f7.scope: Deactivated successfully. Jul 7 00:02:58.936341 containerd[1887]: time="2025-07-07T00:02:58.936312504Z" level=info msg="TaskExit event in podsandbox handler container_id:\"958348ef3381f9ce559b1a8757be3362a4fee3be0724e57301404d932db3e5f7\" id:\"958348ef3381f9ce559b1a8757be3362a4fee3be0724e57301404d932db3e5f7\" pid:4032 exited_at:{seconds:1751846578 nanos:935707596}" Jul 7 00:02:58.990500 containerd[1887]: time="2025-07-07T00:02:58.990447314Z" level=info msg="received exit event container_id:\"958348ef3381f9ce559b1a8757be3362a4fee3be0724e57301404d932db3e5f7\" id:\"958348ef3381f9ce559b1a8757be3362a4fee3be0724e57301404d932db3e5f7\" pid:4032 exited_at:{seconds:1751846578 nanos:935707596}" Jul 7 00:02:58.995001 containerd[1887]: time="2025-07-07T00:02:58.994977097Z" level=info msg="StartContainer for \"958348ef3381f9ce559b1a8757be3362a4fee3be0724e57301404d932db3e5f7\" returns successfully" Jul 7 00:02:59.149191 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-958348ef3381f9ce559b1a8757be3362a4fee3be0724e57301404d932db3e5f7-rootfs.mount: Deactivated successfully. 
Jul 7 00:03:00.281048 containerd[1887]: time="2025-07-07T00:03:00.281008633Z" level=info msg="CreateContainer within sandbox \"7e16eedfdbe1a86479dc69d76ebc444ea5a7600f0171a1a1123f77b9243d2d64\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 7 00:03:01.200254 containerd[1887]: time="2025-07-07T00:03:01.199230934Z" level=info msg="Container 3e55efea29e21d63d8b7eabb60817c9665399d931ca74ac3b3142b4651e65dbc: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:03:01.300075 containerd[1887]: time="2025-07-07T00:03:01.300033809Z" level=info msg="CreateContainer within sandbox \"7e16eedfdbe1a86479dc69d76ebc444ea5a7600f0171a1a1123f77b9243d2d64\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3e55efea29e21d63d8b7eabb60817c9665399d931ca74ac3b3142b4651e65dbc\"" Jul 7 00:03:01.301408 containerd[1887]: time="2025-07-07T00:03:01.300683119Z" level=info msg="StartContainer for \"3e55efea29e21d63d8b7eabb60817c9665399d931ca74ac3b3142b4651e65dbc\"" Jul 7 00:03:01.303179 containerd[1887]: time="2025-07-07T00:03:01.303103096Z" level=info msg="connecting to shim 3e55efea29e21d63d8b7eabb60817c9665399d931ca74ac3b3142b4651e65dbc" address="unix:///run/containerd/s/e91c205eecdc57c7c691afa7d79c37989dfafa8fcaf7722cde4758e29c634b68" protocol=ttrpc version=3 Jul 7 00:03:01.327593 systemd[1]: Started cri-containerd-3e55efea29e21d63d8b7eabb60817c9665399d931ca74ac3b3142b4651e65dbc.scope - libcontainer container 3e55efea29e21d63d8b7eabb60817c9665399d931ca74ac3b3142b4651e65dbc. Jul 7 00:03:01.448205 containerd[1887]: time="2025-07-07T00:03:01.448164378Z" level=info msg="StartContainer for \"3e55efea29e21d63d8b7eabb60817c9665399d931ca74ac3b3142b4651e65dbc\" returns successfully" Jul 7 00:03:01.490553 containerd[1887]: time="2025-07-07T00:03:01.490375677Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3e55efea29e21d63d8b7eabb60817c9665399d931ca74ac3b3142b4651e65dbc\" id:\"195b1916a17d06431b44e961241b18d6ee715d148ac576b98a6dacc520180744\" pid:4117 exited_at:{seconds:1751846581 nanos:490050154}" Jul 7 00:03:01.522939 kubelet[3484]: I0707 00:03:01.522891 3484 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 7 00:03:01.562460 systemd[1]: Created slice kubepods-burstable-pod81318c3d_ccf1_43a8_8807_3addd448203a.slice - libcontainer container kubepods-burstable-pod81318c3d_ccf1_43a8_8807_3addd448203a.slice. Jul 7 00:03:01.568911 systemd[1]: Created slice kubepods-burstable-podbb106bde_6c12_4a05_bde2_6b44b37e55b7.slice - libcontainer container kubepods-burstable-podbb106bde_6c12_4a05_bde2_6b44b37e55b7.slice. 
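
With cgroupDriver=systemd, the "Created slice" entries follow a visible naming pattern: kubepods-<qos class>-pod<pod UID with dashes mapped to underscores>.slice. The Go sketch below reproduces that pattern for the pod UIDs seen in this log; it is a rough illustration of the convention, not the kubelet's own cgroup-name code, and it only covers the burstable/besteffort slices that appear here.

    // slice_name_sketch.go - mirrors the systemd slice names visible in the log.
    package main

    import (
    	"fmt"
    	"strings"
    )

    // sliceName builds e.g. "kubepods-burstable-pod2461eeeb_d0ca_....slice"
    // from a pod UID and its QoS class.
    func sliceName(qosClass, podUID string) string {
    	return fmt.Sprintf("kubepods-%s-pod%s.slice",
    		strings.ToLower(qosClass),
    		strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
    	fmt.Println(sliceName("Burstable", "2461eeeb-d0ca-43f3-b55b-f005c3167746"))
    	fmt.Println(sliceName("BestEffort", "adf32000-4e71-484d-8e6a-cef4a488b5d8"))
    }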
Jul 7 00:03:01.644264 kubelet[3484]: I0707 00:03:01.644232 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76rrk\" (UniqueName: \"kubernetes.io/projected/81318c3d-ccf1-43a8-8807-3addd448203a-kube-api-access-76rrk\") pod \"coredns-668d6bf9bc-lcqjr\" (UID: \"81318c3d-ccf1-43a8-8807-3addd448203a\") " pod="kube-system/coredns-668d6bf9bc-lcqjr" Jul 7 00:03:01.644380 kubelet[3484]: I0707 00:03:01.644269 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bb106bde-6c12-4a05-bde2-6b44b37e55b7-config-volume\") pod \"coredns-668d6bf9bc-dk42n\" (UID: \"bb106bde-6c12-4a05-bde2-6b44b37e55b7\") " pod="kube-system/coredns-668d6bf9bc-dk42n" Jul 7 00:03:01.644380 kubelet[3484]: I0707 00:03:01.644284 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jljq4\" (UniqueName: \"kubernetes.io/projected/bb106bde-6c12-4a05-bde2-6b44b37e55b7-kube-api-access-jljq4\") pod \"coredns-668d6bf9bc-dk42n\" (UID: \"bb106bde-6c12-4a05-bde2-6b44b37e55b7\") " pod="kube-system/coredns-668d6bf9bc-dk42n" Jul 7 00:03:01.644380 kubelet[3484]: I0707 00:03:01.644297 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81318c3d-ccf1-43a8-8807-3addd448203a-config-volume\") pod \"coredns-668d6bf9bc-lcqjr\" (UID: \"81318c3d-ccf1-43a8-8807-3addd448203a\") " pod="kube-system/coredns-668d6bf9bc-lcqjr" Jul 7 00:03:01.867206 containerd[1887]: time="2025-07-07T00:03:01.867091761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lcqjr,Uid:81318c3d-ccf1-43a8-8807-3addd448203a,Namespace:kube-system,Attempt:0,}" Jul 7 00:03:01.872554 containerd[1887]: time="2025-07-07T00:03:01.872520239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dk42n,Uid:bb106bde-6c12-4a05-bde2-6b44b37e55b7,Namespace:kube-system,Attempt:0,}" Jul 7 00:03:01.937149 containerd[1887]: time="2025-07-07T00:03:01.937117415Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:03:02.048713 containerd[1887]: time="2025-07-07T00:03:02.048666248Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 7 00:03:02.145084 containerd[1887]: time="2025-07-07T00:03:02.145040103Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:03:02.146348 containerd[1887]: time="2025-07-07T00:03:02.146323273Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 12.007861601s" Jul 7 00:03:02.146413 containerd[1887]: time="2025-07-07T00:03:02.146351914Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 7 00:03:02.149096 containerd[1887]: time="2025-07-07T00:03:02.149032404Z" level=info msg="CreateContainer within sandbox \"ab1936bf67181d458407e973f2d899ddbaa23015ad89e7af07a5492d3dca4a63\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 7 00:03:02.302606 kubelet[3484]: I0707 00:03:02.302556 3484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cqk4g" podStartSLOduration=13.255706646 podStartE2EDuration="25.302542641s" podCreationTimestamp="2025-07-07 00:02:37 +0000 UTC" firstStartedPulling="2025-07-07 00:02:38.091115227 +0000 UTC m=+6.984340602" lastFinishedPulling="2025-07-07 00:02:50.137951222 +0000 UTC m=+19.031176597" observedRunningTime="2025-07-07 00:03:02.301933061 +0000 UTC m=+31.195158436" watchObservedRunningTime="2025-07-07 00:03:02.302542641 +0000 UTC m=+31.195768032" Jul 7 00:03:02.308391 containerd[1887]: time="2025-07-07T00:03:02.308358619Z" level=info msg="Container fecf727af81e92c2c0743e116c5b48ff1981aa03e7950e8847092d1575936441: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:03:02.403132 containerd[1887]: time="2025-07-07T00:03:02.403030297Z" level=info msg="CreateContainer within sandbox \"ab1936bf67181d458407e973f2d899ddbaa23015ad89e7af07a5492d3dca4a63\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fecf727af81e92c2c0743e116c5b48ff1981aa03e7950e8847092d1575936441\"" Jul 7 00:03:02.403808 containerd[1887]: time="2025-07-07T00:03:02.403718000Z" level=info msg="StartContainer for \"fecf727af81e92c2c0743e116c5b48ff1981aa03e7950e8847092d1575936441\"" Jul 7 00:03:02.404814 containerd[1887]: time="2025-07-07T00:03:02.404789812Z" level=info msg="connecting to shim fecf727af81e92c2c0743e116c5b48ff1981aa03e7950e8847092d1575936441" address="unix:///run/containerd/s/1300a557c742e070c5c47701e5b64d94876e8884c952e6199f1a3fd2e34bbe65" protocol=ttrpc version=3 Jul 7 00:03:02.423601 systemd[1]: Started cri-containerd-fecf727af81e92c2c0743e116c5b48ff1981aa03e7950e8847092d1575936441.scope - libcontainer container fecf727af81e92c2c0743e116c5b48ff1981aa03e7950e8847092d1575936441. 
Jul 7 00:03:02.448068 containerd[1887]: time="2025-07-07T00:03:02.448013313Z" level=info msg="StartContainer for \"fecf727af81e92c2c0743e116c5b48ff1981aa03e7950e8847092d1575936441\" returns successfully" Jul 7 00:03:03.299681 kubelet[3484]: I0707 00:03:03.299451 3484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-r2445" podStartSLOduration=2.346604071 podStartE2EDuration="26.299436399s" podCreationTimestamp="2025-07-07 00:02:37 +0000 UTC" firstStartedPulling="2025-07-07 00:02:38.194163016 +0000 UTC m=+7.087388391" lastFinishedPulling="2025-07-07 00:03:02.146995344 +0000 UTC m=+31.040220719" observedRunningTime="2025-07-07 00:03:03.299212623 +0000 UTC m=+32.192437998" watchObservedRunningTime="2025-07-07 00:03:03.299436399 +0000 UTC m=+32.192661782" Jul 7 00:03:06.403854 systemd-networkd[1689]: cilium_host: Link UP Jul 7 00:03:06.405556 systemd-networkd[1689]: cilium_net: Link UP Jul 7 00:03:06.406325 systemd-networkd[1689]: cilium_net: Gained carrier Jul 7 00:03:06.406419 systemd-networkd[1689]: cilium_host: Gained carrier Jul 7 00:03:06.506802 systemd-networkd[1689]: cilium_vxlan: Link UP Jul 7 00:03:06.506962 systemd-networkd[1689]: cilium_vxlan: Gained carrier Jul 7 00:03:06.725530 kernel: NET: Registered PF_ALG protocol family Jul 7 00:03:06.985665 systemd-networkd[1689]: cilium_net: Gained IPv6LL Jul 7 00:03:07.255719 systemd-networkd[1689]: lxc_health: Link UP Jul 7 00:03:07.263876 systemd-networkd[1689]: lxc_health: Gained carrier Jul 7 00:03:07.369661 systemd-networkd[1689]: cilium_host: Gained IPv6LL Jul 7 00:03:07.584035 systemd-networkd[1689]: lxc8b517e7fd47a: Link UP Jul 7 00:03:07.591562 kernel: eth0: renamed from tmpc5716 Jul 7 00:03:07.593042 systemd-networkd[1689]: lxc8b517e7fd47a: Gained carrier Jul 7 00:03:07.622217 systemd-networkd[1689]: lxcd3ef4c6cbd59: Link UP Jul 7 00:03:07.629550 kernel: eth0: renamed from tmp96894 Jul 7 00:03:07.635009 systemd-networkd[1689]: lxcd3ef4c6cbd59: Gained carrier Jul 7 00:03:07.817768 systemd-networkd[1689]: cilium_vxlan: Gained IPv6LL Jul 7 00:03:08.457708 systemd-networkd[1689]: lxc_health: Gained IPv6LL Jul 7 00:03:08.905718 systemd-networkd[1689]: lxc8b517e7fd47a: Gained IPv6LL Jul 7 00:03:08.905968 systemd-networkd[1689]: lxcd3ef4c6cbd59: Gained IPv6LL Jul 7 00:03:10.553277 containerd[1887]: time="2025-07-07T00:03:10.552858255Z" level=info msg="connecting to shim c57167a103efd4a9aee8827dedbdc9a0b48c43b977040920208ab4c73763b442" address="unix:///run/containerd/s/5e11dc362f759ba3cabe46372d6788910440c88e0467b35fd92aa2fab00564c4" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:03:10.569609 systemd[1]: Started cri-containerd-c57167a103efd4a9aee8827dedbdc9a0b48c43b977040920208ab4c73763b442.scope - libcontainer container c57167a103efd4a9aee8827dedbdc9a0b48c43b977040920208ab4c73763b442. Jul 7 00:03:10.614493 containerd[1887]: time="2025-07-07T00:03:10.614431753Z" level=info msg="connecting to shim 96894c020265627a6fdc00d035ecae97ebc496ad13257cbe830092c2dee74a5b" address="unix:///run/containerd/s/ff8ed4913a268ff333a7819a4cc289492522e9da5552ceb37f359cb283dd0664" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:03:10.634598 systemd[1]: Started cri-containerd-96894c020265627a6fdc00d035ecae97ebc496ad13257cbe830092c2dee74a5b.scope - libcontainer container 96894c020265627a6fdc00d035ecae97ebc496ad13257cbe830092c2dee74a5b. 
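
The pod_startup_latency_tracker entries above report two figures per pod: podStartE2EDuration, which matches the gap between podCreationTimestamp and the observed running time, and podStartSLOduration, which matches that gap minus the image-pull window (lastFinishedPulling - firstStartedPulling). The Go sketch below recomputes both from the timestamps logged for cilium-cqk4g; it is a reconstruction from the logged values, not the kubelet's tracker code.

    // startup_latency_sketch.go - recomputes the cilium-cqk4g startup durations.
    package main

    import (
    	"fmt"
    	"time"
    )

    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
    	t, err := time.Parse(layout, s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	created := mustParse("2025-07-07 00:02:37 +0000 UTC")
    	firstPull := mustParse("2025-07-07 00:02:38.091115227 +0000 UTC")
    	lastPull := mustParse("2025-07-07 00:02:50.137951222 +0000 UTC")
    	running := mustParse("2025-07-07 00:03:02.302542641 +0000 UTC")

    	e2e := running.Sub(created)
    	slo := e2e - lastPull.Sub(firstPull)
    	fmt.Println("podStartE2EDuration:", e2e) // 25.302542641s, as logged
    	fmt.Println("podStartSLOduration:", slo) // 13.255706646s, as logged
    }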
Jul 7 00:03:10.648405 containerd[1887]: time="2025-07-07T00:03:10.648126743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lcqjr,Uid:81318c3d-ccf1-43a8-8807-3addd448203a,Namespace:kube-system,Attempt:0,} returns sandbox id \"c57167a103efd4a9aee8827dedbdc9a0b48c43b977040920208ab4c73763b442\"" Jul 7 00:03:10.652261 containerd[1887]: time="2025-07-07T00:03:10.652233333Z" level=info msg="CreateContainer within sandbox \"c57167a103efd4a9aee8827dedbdc9a0b48c43b977040920208ab4c73763b442\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:03:10.700857 containerd[1887]: time="2025-07-07T00:03:10.700830326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dk42n,Uid:bb106bde-6c12-4a05-bde2-6b44b37e55b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"96894c020265627a6fdc00d035ecae97ebc496ad13257cbe830092c2dee74a5b\"" Jul 7 00:03:10.703610 containerd[1887]: time="2025-07-07T00:03:10.703588110Z" level=info msg="CreateContainer within sandbox \"96894c020265627a6fdc00d035ecae97ebc496ad13257cbe830092c2dee74a5b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:03:11.008679 containerd[1887]: time="2025-07-07T00:03:11.008639829Z" level=info msg="Container 7e609c2e5407381653290732d64c3fdf8cbb024a9b83495db10d4b6f11c53e3f: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:03:11.062879 containerd[1887]: time="2025-07-07T00:03:11.062836067Z" level=info msg="Container 43c99de93ac52ab8774e9703e7ce934454ae8df08ce87474ea5cc46510ec4247: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:03:11.103739 containerd[1887]: time="2025-07-07T00:03:11.103697967Z" level=info msg="CreateContainer within sandbox \"c57167a103efd4a9aee8827dedbdc9a0b48c43b977040920208ab4c73763b442\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7e609c2e5407381653290732d64c3fdf8cbb024a9b83495db10d4b6f11c53e3f\"" Jul 7 00:03:11.104243 containerd[1887]: time="2025-07-07T00:03:11.104156966Z" level=info msg="StartContainer for \"7e609c2e5407381653290732d64c3fdf8cbb024a9b83495db10d4b6f11c53e3f\"" Jul 7 00:03:11.105281 containerd[1887]: time="2025-07-07T00:03:11.105254010Z" level=info msg="connecting to shim 7e609c2e5407381653290732d64c3fdf8cbb024a9b83495db10d4b6f11c53e3f" address="unix:///run/containerd/s/5e11dc362f759ba3cabe46372d6788910440c88e0467b35fd92aa2fab00564c4" protocol=ttrpc version=3 Jul 7 00:03:11.109965 containerd[1887]: time="2025-07-07T00:03:11.109892876Z" level=info msg="CreateContainer within sandbox \"96894c020265627a6fdc00d035ecae97ebc496ad13257cbe830092c2dee74a5b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"43c99de93ac52ab8774e9703e7ce934454ae8df08ce87474ea5cc46510ec4247\"" Jul 7 00:03:11.110890 containerd[1887]: time="2025-07-07T00:03:11.110860844Z" level=info msg="StartContainer for \"43c99de93ac52ab8774e9703e7ce934454ae8df08ce87474ea5cc46510ec4247\"" Jul 7 00:03:11.111524 containerd[1887]: time="2025-07-07T00:03:11.111502162Z" level=info msg="connecting to shim 43c99de93ac52ab8774e9703e7ce934454ae8df08ce87474ea5cc46510ec4247" address="unix:///run/containerd/s/ff8ed4913a268ff333a7819a4cc289492522e9da5552ceb37f359cb283dd0664" protocol=ttrpc version=3 Jul 7 00:03:11.122638 systemd[1]: Started cri-containerd-7e609c2e5407381653290732d64c3fdf8cbb024a9b83495db10d4b6f11c53e3f.scope - libcontainer container 7e609c2e5407381653290732d64c3fdf8cbb024a9b83495db10d4b6f11c53e3f. 
Jul 7 00:03:11.129601 systemd[1]: Started cri-containerd-43c99de93ac52ab8774e9703e7ce934454ae8df08ce87474ea5cc46510ec4247.scope - libcontainer container 43c99de93ac52ab8774e9703e7ce934454ae8df08ce87474ea5cc46510ec4247. Jul 7 00:03:11.162511 containerd[1887]: time="2025-07-07T00:03:11.161555086Z" level=info msg="StartContainer for \"7e609c2e5407381653290732d64c3fdf8cbb024a9b83495db10d4b6f11c53e3f\" returns successfully" Jul 7 00:03:11.173116 containerd[1887]: time="2025-07-07T00:03:11.173075901Z" level=info msg="StartContainer for \"43c99de93ac52ab8774e9703e7ce934454ae8df08ce87474ea5cc46510ec4247\" returns successfully" Jul 7 00:03:11.322633 kubelet[3484]: I0707 00:03:11.322167 3484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dk42n" podStartSLOduration=34.322136886 podStartE2EDuration="34.322136886s" podCreationTimestamp="2025-07-07 00:02:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:03:11.321184631 +0000 UTC m=+40.214410006" watchObservedRunningTime="2025-07-07 00:03:11.322136886 +0000 UTC m=+40.215362269" Jul 7 00:03:11.336119 kubelet[3484]: I0707 00:03:11.336073 3484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lcqjr" podStartSLOduration=34.336059308 podStartE2EDuration="34.336059308s" podCreationTimestamp="2025-07-07 00:02:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:03:11.334656902 +0000 UTC m=+40.227882277" watchObservedRunningTime="2025-07-07 00:03:11.336059308 +0000 UTC m=+40.229284683" Jul 7 00:04:36.214456 systemd[1]: Started sshd@7-10.200.20.36:22-10.200.16.10:47536.service - OpenSSH per-connection server daemon (10.200.16.10:47536). Jul 7 00:04:36.707011 sshd[4799]: Accepted publickey for core from 10.200.16.10 port 47536 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:04:36.708233 sshd-session[4799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:04:36.711847 systemd-logind[1860]: New session 10 of user core. Jul 7 00:04:36.721610 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 00:04:37.109151 sshd[4801]: Connection closed by 10.200.16.10 port 47536 Jul 7 00:04:37.109786 sshd-session[4799]: pam_unix(sshd:session): session closed for user core Jul 7 00:04:37.112853 systemd[1]: sshd@7-10.200.20.36:22-10.200.16.10:47536.service: Deactivated successfully. Jul 7 00:04:37.114305 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 00:04:37.114975 systemd-logind[1860]: Session 10 logged out. Waiting for processes to exit. Jul 7 00:04:37.116304 systemd-logind[1860]: Removed session 10. Jul 7 00:04:42.197158 systemd[1]: Started sshd@8-10.200.20.36:22-10.200.16.10:34470.service - OpenSSH per-connection server daemon (10.200.16.10:34470). Jul 7 00:04:42.681881 sshd[4816]: Accepted publickey for core from 10.200.16.10 port 34470 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:04:42.683029 sshd-session[4816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:04:42.686641 systemd-logind[1860]: New session 11 of user core. Jul 7 00:04:42.698629 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jul 7 00:04:43.078513 sshd[4818]: Connection closed by 10.200.16.10 port 34470 Jul 7 00:04:43.078311 sshd-session[4816]: pam_unix(sshd:session): session closed for user core Jul 7 00:04:43.081309 systemd-logind[1860]: Session 11 logged out. Waiting for processes to exit. Jul 7 00:04:43.081611 systemd[1]: sshd@8-10.200.20.36:22-10.200.16.10:34470.service: Deactivated successfully. Jul 7 00:04:43.083279 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 00:04:43.085704 systemd-logind[1860]: Removed session 11. Jul 7 00:04:48.164316 systemd[1]: Started sshd@9-10.200.20.36:22-10.200.16.10:34480.service - OpenSSH per-connection server daemon (10.200.16.10:34480). Jul 7 00:04:48.644514 sshd[4831]: Accepted publickey for core from 10.200.16.10 port 34480 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:04:48.645675 sshd-session[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:04:48.649629 systemd-logind[1860]: New session 12 of user core. Jul 7 00:04:48.655617 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 00:04:49.040736 sshd[4833]: Connection closed by 10.200.16.10 port 34480 Jul 7 00:04:49.041374 sshd-session[4831]: pam_unix(sshd:session): session closed for user core Jul 7 00:04:49.044587 systemd[1]: sshd@9-10.200.20.36:22-10.200.16.10:34480.service: Deactivated successfully. Jul 7 00:04:49.046781 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 00:04:49.047726 systemd-logind[1860]: Session 12 logged out. Waiting for processes to exit. Jul 7 00:04:49.049439 systemd-logind[1860]: Removed session 12. Jul 7 00:04:54.132127 systemd[1]: Started sshd@10-10.200.20.36:22-10.200.16.10:42542.service - OpenSSH per-connection server daemon (10.200.16.10:42542). Jul 7 00:04:54.613012 sshd[4845]: Accepted publickey for core from 10.200.16.10 port 42542 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:04:54.614092 sshd-session[4845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:04:54.617666 systemd-logind[1860]: New session 13 of user core. Jul 7 00:04:54.627706 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 00:04:54.998885 sshd[4847]: Connection closed by 10.200.16.10 port 42542 Jul 7 00:04:54.998190 sshd-session[4845]: pam_unix(sshd:session): session closed for user core Jul 7 00:04:55.001335 systemd[1]: sshd@10-10.200.20.36:22-10.200.16.10:42542.service: Deactivated successfully. Jul 7 00:04:55.003043 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 00:04:55.003810 systemd-logind[1860]: Session 13 logged out. Waiting for processes to exit. Jul 7 00:04:55.004939 systemd-logind[1860]: Removed session 13. Jul 7 00:04:55.090327 systemd[1]: Started sshd@11-10.200.20.36:22-10.200.16.10:42550.service - OpenSSH per-connection server daemon (10.200.16.10:42550). Jul 7 00:04:55.586621 sshd[4860]: Accepted publickey for core from 10.200.16.10 port 42550 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:04:55.587711 sshd-session[4860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:04:55.591368 systemd-logind[1860]: New session 14 of user core. Jul 7 00:04:55.598620 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jul 7 00:04:56.007592 sshd[4862]: Connection closed by 10.200.16.10 port 42550 Jul 7 00:04:56.007435 sshd-session[4860]: pam_unix(sshd:session): session closed for user core Jul 7 00:04:56.010500 systemd[1]: sshd@11-10.200.20.36:22-10.200.16.10:42550.service: Deactivated successfully. Jul 7 00:04:56.012757 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 00:04:56.014141 systemd-logind[1860]: Session 14 logged out. Waiting for processes to exit. Jul 7 00:04:56.015746 systemd-logind[1860]: Removed session 14. Jul 7 00:04:56.093819 systemd[1]: Started sshd@12-10.200.20.36:22-10.200.16.10:42558.service - OpenSSH per-connection server daemon (10.200.16.10:42558). Jul 7 00:04:56.572985 sshd[4872]: Accepted publickey for core from 10.200.16.10 port 42558 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:04:56.574090 sshd-session[4872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:04:56.577731 systemd-logind[1860]: New session 15 of user core. Jul 7 00:04:56.585613 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 00:04:56.960516 sshd[4874]: Connection closed by 10.200.16.10 port 42558 Jul 7 00:04:56.961069 sshd-session[4872]: pam_unix(sshd:session): session closed for user core Jul 7 00:04:56.964321 systemd[1]: sshd@12-10.200.20.36:22-10.200.16.10:42558.service: Deactivated successfully. Jul 7 00:04:56.965963 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 00:04:56.966935 systemd-logind[1860]: Session 15 logged out. Waiting for processes to exit. Jul 7 00:04:56.968434 systemd-logind[1860]: Removed session 15. Jul 7 00:05:02.054430 systemd[1]: Started sshd@13-10.200.20.36:22-10.200.16.10:55410.service - OpenSSH per-connection server daemon (10.200.16.10:55410). Jul 7 00:05:02.529666 sshd[4887]: Accepted publickey for core from 10.200.16.10 port 55410 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:05:02.530780 sshd-session[4887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:05:02.534435 systemd-logind[1860]: New session 16 of user core. Jul 7 00:05:02.540595 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 00:05:02.911538 sshd[4889]: Connection closed by 10.200.16.10 port 55410 Jul 7 00:05:02.912109 sshd-session[4887]: pam_unix(sshd:session): session closed for user core Jul 7 00:05:02.915153 systemd[1]: sshd@13-10.200.20.36:22-10.200.16.10:55410.service: Deactivated successfully. Jul 7 00:05:02.916710 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 00:05:02.917372 systemd-logind[1860]: Session 16 logged out. Waiting for processes to exit. Jul 7 00:05:02.919891 systemd-logind[1860]: Removed session 16. Jul 7 00:05:02.997356 systemd[1]: Started sshd@14-10.200.20.36:22-10.200.16.10:55420.service - OpenSSH per-connection server daemon (10.200.16.10:55420). Jul 7 00:05:03.477897 sshd[4900]: Accepted publickey for core from 10.200.16.10 port 55420 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:05:03.479186 sshd-session[4900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:05:03.483190 systemd-logind[1860]: New session 17 of user core. Jul 7 00:05:03.488606 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jul 7 00:05:03.944508 sshd[4902]: Connection closed by 10.200.16.10 port 55420 Jul 7 00:05:03.944673 sshd-session[4900]: pam_unix(sshd:session): session closed for user core Jul 7 00:05:03.947954 systemd[1]: sshd@14-10.200.20.36:22-10.200.16.10:55420.service: Deactivated successfully. Jul 7 00:05:03.949817 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 00:05:03.951132 systemd-logind[1860]: Session 17 logged out. Waiting for processes to exit. Jul 7 00:05:03.952824 systemd-logind[1860]: Removed session 17. Jul 7 00:05:04.030535 systemd[1]: Started sshd@15-10.200.20.36:22-10.200.16.10:55432.service - OpenSSH per-connection server daemon (10.200.16.10:55432). Jul 7 00:05:04.507708 sshd[4911]: Accepted publickey for core from 10.200.16.10 port 55432 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:05:04.508820 sshd-session[4911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:05:04.512448 systemd-logind[1860]: New session 18 of user core. Jul 7 00:05:04.520596 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 7 00:05:05.544588 sshd[4913]: Connection closed by 10.200.16.10 port 55432 Jul 7 00:05:05.545252 sshd-session[4911]: pam_unix(sshd:session): session closed for user core Jul 7 00:05:05.549419 systemd-logind[1860]: Session 18 logged out. Waiting for processes to exit. Jul 7 00:05:05.550224 systemd[1]: sshd@15-10.200.20.36:22-10.200.16.10:55432.service: Deactivated successfully. Jul 7 00:05:05.551792 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 00:05:05.554315 systemd-logind[1860]: Removed session 18. Jul 7 00:05:05.634980 systemd[1]: Started sshd@16-10.200.20.36:22-10.200.16.10:55438.service - OpenSSH per-connection server daemon (10.200.16.10:55438). Jul 7 00:05:06.122272 sshd[4931]: Accepted publickey for core from 10.200.16.10 port 55438 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:05:06.125538 sshd-session[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:05:06.129601 systemd-logind[1860]: New session 19 of user core. Jul 7 00:05:06.136608 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 7 00:05:06.584187 sshd[4933]: Connection closed by 10.200.16.10 port 55438 Jul 7 00:05:06.584811 sshd-session[4931]: pam_unix(sshd:session): session closed for user core Jul 7 00:05:06.588384 systemd[1]: sshd@16-10.200.20.36:22-10.200.16.10:55438.service: Deactivated successfully. Jul 7 00:05:06.590892 systemd[1]: session-19.scope: Deactivated successfully. Jul 7 00:05:06.591685 systemd-logind[1860]: Session 19 logged out. Waiting for processes to exit. Jul 7 00:05:06.593201 systemd-logind[1860]: Removed session 19. Jul 7 00:05:06.670374 systemd[1]: Started sshd@17-10.200.20.36:22-10.200.16.10:55448.service - OpenSSH per-connection server daemon (10.200.16.10:55448). Jul 7 00:05:07.150999 sshd[4943]: Accepted publickey for core from 10.200.16.10 port 55448 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:05:07.152154 sshd-session[4943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:05:07.156121 systemd-logind[1860]: New session 20 of user core. Jul 7 00:05:07.158598 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jul 7 00:05:07.531109 sshd[4945]: Connection closed by 10.200.16.10 port 55448 Jul 7 00:05:07.531905 sshd-session[4943]: pam_unix(sshd:session): session closed for user core Jul 7 00:05:07.535010 systemd-logind[1860]: Session 20 logged out. Waiting for processes to exit. Jul 7 00:05:07.535405 systemd[1]: sshd@17-10.200.20.36:22-10.200.16.10:55448.service: Deactivated successfully. Jul 7 00:05:07.538660 systemd[1]: session-20.scope: Deactivated successfully. Jul 7 00:05:07.541750 systemd-logind[1860]: Removed session 20. Jul 7 00:05:12.618789 systemd[1]: Started sshd@18-10.200.20.36:22-10.200.16.10:47408.service - OpenSSH per-connection server daemon (10.200.16.10:47408). Jul 7 00:05:13.099140 sshd[4960]: Accepted publickey for core from 10.200.16.10 port 47408 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:05:13.100258 sshd-session[4960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:05:13.104051 systemd-logind[1860]: New session 21 of user core. Jul 7 00:05:13.111608 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 7 00:05:13.494318 sshd[4962]: Connection closed by 10.200.16.10 port 47408 Jul 7 00:05:13.494693 sshd-session[4960]: pam_unix(sshd:session): session closed for user core Jul 7 00:05:13.498308 systemd[1]: sshd@18-10.200.20.36:22-10.200.16.10:47408.service: Deactivated successfully. Jul 7 00:05:13.499755 systemd[1]: session-21.scope: Deactivated successfully. Jul 7 00:05:13.501359 systemd-logind[1860]: Session 21 logged out. Waiting for processes to exit. Jul 7 00:05:13.502395 systemd-logind[1860]: Removed session 21. Jul 7 00:05:18.582879 systemd[1]: Started sshd@19-10.200.20.36:22-10.200.16.10:47418.service - OpenSSH per-connection server daemon (10.200.16.10:47418). Jul 7 00:05:19.064253 sshd[4974]: Accepted publickey for core from 10.200.16.10 port 47418 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:05:19.065339 sshd-session[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:05:19.069109 systemd-logind[1860]: New session 22 of user core. Jul 7 00:05:19.074693 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 7 00:05:19.446604 sshd[4976]: Connection closed by 10.200.16.10 port 47418 Jul 7 00:05:19.447172 sshd-session[4974]: pam_unix(sshd:session): session closed for user core Jul 7 00:05:19.450193 systemd[1]: sshd@19-10.200.20.36:22-10.200.16.10:47418.service: Deactivated successfully. Jul 7 00:05:19.451780 systemd[1]: session-22.scope: Deactivated successfully. Jul 7 00:05:19.452402 systemd-logind[1860]: Session 22 logged out. Waiting for processes to exit. Jul 7 00:05:19.453509 systemd-logind[1860]: Removed session 22. Jul 7 00:05:24.532098 systemd[1]: Started sshd@20-10.200.20.36:22-10.200.16.10:56976.service - OpenSSH per-connection server daemon (10.200.16.10:56976). Jul 7 00:05:25.011314 sshd[4988]: Accepted publickey for core from 10.200.16.10 port 56976 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:05:25.012390 sshd-session[4988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:05:25.016056 systemd-logind[1860]: New session 23 of user core. Jul 7 00:05:25.023608 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 7 00:05:25.398733 sshd[4990]: Connection closed by 10.200.16.10 port 56976 Jul 7 00:05:25.398639 sshd-session[4988]: pam_unix(sshd:session): session closed for user core Jul 7 00:05:25.401626 systemd[1]: sshd@20-10.200.20.36:22-10.200.16.10:56976.service: Deactivated successfully. Jul 7 00:05:25.403810 systemd[1]: session-23.scope: Deactivated successfully. Jul 7 00:05:25.404543 systemd-logind[1860]: Session 23 logged out. Waiting for processes to exit. Jul 7 00:05:25.406614 systemd-logind[1860]: Removed session 23. Jul 7 00:05:25.484367 systemd[1]: Started sshd@21-10.200.20.36:22-10.200.16.10:56980.service - OpenSSH per-connection server daemon (10.200.16.10:56980). Jul 7 00:05:25.964390 sshd[5001]: Accepted publickey for core from 10.200.16.10 port 56980 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:05:25.965445 sshd-session[5001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:05:25.969112 systemd-logind[1860]: New session 24 of user core. Jul 7 00:05:25.979693 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 7 00:05:27.538018 containerd[1887]: time="2025-07-07T00:05:27.537975217Z" level=info msg="StopContainer for \"fecf727af81e92c2c0743e116c5b48ff1981aa03e7950e8847092d1575936441\" with timeout 30 (s)" Jul 7 00:05:27.539209 containerd[1887]: time="2025-07-07T00:05:27.539182389Z" level=info msg="Stop container \"fecf727af81e92c2c0743e116c5b48ff1981aa03e7950e8847092d1575936441\" with signal terminated" Jul 7 00:05:27.546155 containerd[1887]: time="2025-07-07T00:05:27.546121301Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 00:05:27.550703 containerd[1887]: time="2025-07-07T00:05:27.550666054Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3e55efea29e21d63d8b7eabb60817c9665399d931ca74ac3b3142b4651e65dbc\" id:\"e24d354b97e1927f7c8c79fc97eac125bf7e729b6d999a067a9636dca5e458c3\" pid:5022 exited_at:{seconds:1751846727 nanos:550344726}" Jul 7 00:05:27.552929 systemd[1]: cri-containerd-fecf727af81e92c2c0743e116c5b48ff1981aa03e7950e8847092d1575936441.scope: Deactivated successfully. 
Jul 7 00:05:27.556851 containerd[1887]: time="2025-07-07T00:05:27.556231793Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fecf727af81e92c2c0743e116c5b48ff1981aa03e7950e8847092d1575936441\" id:\"fecf727af81e92c2c0743e116c5b48ff1981aa03e7950e8847092d1575936441\" pid:4217 exited_at:{seconds:1751846727 nanos:555815100}" Jul 7 00:05:27.556917 containerd[1887]: time="2025-07-07T00:05:27.556612956Z" level=info msg="StopContainer for \"3e55efea29e21d63d8b7eabb60817c9665399d931ca74ac3b3142b4651e65dbc\" with timeout 2 (s)" Jul 7 00:05:27.557013 containerd[1887]: time="2025-07-07T00:05:27.556674023Z" level=info msg="received exit event container_id:\"fecf727af81e92c2c0743e116c5b48ff1981aa03e7950e8847092d1575936441\" id:\"fecf727af81e92c2c0743e116c5b48ff1981aa03e7950e8847092d1575936441\" pid:4217 exited_at:{seconds:1751846727 nanos:555815100}" Jul 7 00:05:27.557390 containerd[1887]: time="2025-07-07T00:05:27.557368809Z" level=info msg="Stop container \"3e55efea29e21d63d8b7eabb60817c9665399d931ca74ac3b3142b4651e65dbc\" with signal terminated" Jul 7 00:05:27.565876 systemd-networkd[1689]: lxc_health: Link DOWN Jul 7 00:05:27.565881 systemd-networkd[1689]: lxc_health: Lost carrier Jul 7 00:05:27.578058 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fecf727af81e92c2c0743e116c5b48ff1981aa03e7950e8847092d1575936441-rootfs.mount: Deactivated successfully. Jul 7 00:05:27.581166 systemd[1]: cri-containerd-3e55efea29e21d63d8b7eabb60817c9665399d931ca74ac3b3142b4651e65dbc.scope: Deactivated successfully. Jul 7 00:05:27.581732 systemd[1]: cri-containerd-3e55efea29e21d63d8b7eabb60817c9665399d931ca74ac3b3142b4651e65dbc.scope: Consumed 4.329s CPU time, 123.6M memory peak, 152K read from disk, 12.9M written to disk. Jul 7 00:05:27.582321 containerd[1887]: time="2025-07-07T00:05:27.582292467Z" level=info msg="received exit event container_id:\"3e55efea29e21d63d8b7eabb60817c9665399d931ca74ac3b3142b4651e65dbc\" id:\"3e55efea29e21d63d8b7eabb60817c9665399d931ca74ac3b3142b4651e65dbc\" pid:4077 exited_at:{seconds:1751846727 nanos:581910336}" Jul 7 00:05:27.582531 containerd[1887]: time="2025-07-07T00:05:27.582474172Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3e55efea29e21d63d8b7eabb60817c9665399d931ca74ac3b3142b4651e65dbc\" id:\"3e55efea29e21d63d8b7eabb60817c9665399d931ca74ac3b3142b4651e65dbc\" pid:4077 exited_at:{seconds:1751846727 nanos:581910336}" Jul 7 00:05:27.597016 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e55efea29e21d63d8b7eabb60817c9665399d931ca74ac3b3142b4651e65dbc-rootfs.mount: Deactivated successfully. 
Jul 7 00:05:27.714031 containerd[1887]: time="2025-07-07T00:05:27.713991348Z" level=info msg="StopContainer for \"3e55efea29e21d63d8b7eabb60817c9665399d931ca74ac3b3142b4651e65dbc\" returns successfully" Jul 7 00:05:27.714748 containerd[1887]: time="2025-07-07T00:05:27.714709447Z" level=info msg="StopPodSandbox for \"7e16eedfdbe1a86479dc69d76ebc444ea5a7600f0171a1a1123f77b9243d2d64\"" Jul 7 00:05:27.714794 containerd[1887]: time="2025-07-07T00:05:27.714778083Z" level=info msg="Container to stop \"2d5e81b114278fb17721842706154cffd22e85e3ae8fe496284a72bde45e41a2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:05:27.714794 containerd[1887]: time="2025-07-07T00:05:27.714787355Z" level=info msg="Container to stop \"958348ef3381f9ce559b1a8757be3362a4fee3be0724e57301404d932db3e5f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:05:27.714829 containerd[1887]: time="2025-07-07T00:05:27.714793619Z" level=info msg="Container to stop \"d1acc36ba21cf6c815cec6637d9947e9b71c80c1e37c6324af64868c6ea92b62\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:05:27.714829 containerd[1887]: time="2025-07-07T00:05:27.714812780Z" level=info msg="Container to stop \"c46847702bf821d76d009a9f5445c7db65cb93f723e91ef61726cbbf812af4ef\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:05:27.714829 containerd[1887]: time="2025-07-07T00:05:27.714818309Z" level=info msg="Container to stop \"3e55efea29e21d63d8b7eabb60817c9665399d931ca74ac3b3142b4651e65dbc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:05:27.716502 containerd[1887]: time="2025-07-07T00:05:27.716207345Z" level=info msg="StopContainer for \"fecf727af81e92c2c0743e116c5b48ff1981aa03e7950e8847092d1575936441\" returns successfully" Jul 7 00:05:27.716682 containerd[1887]: time="2025-07-07T00:05:27.716661904Z" level=info msg="StopPodSandbox for \"ab1936bf67181d458407e973f2d899ddbaa23015ad89e7af07a5492d3dca4a63\"" Jul 7 00:05:27.716750 containerd[1887]: time="2025-07-07T00:05:27.716700106Z" level=info msg="Container to stop \"fecf727af81e92c2c0743e116c5b48ff1981aa03e7950e8847092d1575936441\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:05:27.720440 systemd[1]: cri-containerd-7e16eedfdbe1a86479dc69d76ebc444ea5a7600f0171a1a1123f77b9243d2d64.scope: Deactivated successfully. Jul 7 00:05:27.722229 containerd[1887]: time="2025-07-07T00:05:27.722121246Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7e16eedfdbe1a86479dc69d76ebc444ea5a7600f0171a1a1123f77b9243d2d64\" id:\"7e16eedfdbe1a86479dc69d76ebc444ea5a7600f0171a1a1123f77b9243d2d64\" pid:3635 exit_status:137 exited_at:{seconds:1751846727 nanos:721723450}" Jul 7 00:05:27.725268 systemd[1]: cri-containerd-ab1936bf67181d458407e973f2d899ddbaa23015ad89e7af07a5492d3dca4a63.scope: Deactivated successfully. Jul 7 00:05:27.745207 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab1936bf67181d458407e973f2d899ddbaa23015ad89e7af07a5492d3dca4a63-rootfs.mount: Deactivated successfully. Jul 7 00:05:27.749781 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e16eedfdbe1a86479dc69d76ebc444ea5a7600f0171a1a1123f77b9243d2d64-rootfs.mount: Deactivated successfully. 
Jul 7 00:05:27.764211 containerd[1887]: time="2025-07-07T00:05:27.764157671Z" level=info msg="TearDown network for sandbox \"7e16eedfdbe1a86479dc69d76ebc444ea5a7600f0171a1a1123f77b9243d2d64\" successfully" Jul 7 00:05:27.764211 containerd[1887]: time="2025-07-07T00:05:27.764190153Z" level=info msg="StopPodSandbox for \"7e16eedfdbe1a86479dc69d76ebc444ea5a7600f0171a1a1123f77b9243d2d64\" returns successfully" Jul 7 00:05:27.765361 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7e16eedfdbe1a86479dc69d76ebc444ea5a7600f0171a1a1123f77b9243d2d64-shm.mount: Deactivated successfully. Jul 7 00:05:27.765822 containerd[1887]: time="2025-07-07T00:05:27.765554524Z" level=info msg="shim disconnected" id=ab1936bf67181d458407e973f2d899ddbaa23015ad89e7af07a5492d3dca4a63 namespace=k8s.io Jul 7 00:05:27.765822 containerd[1887]: time="2025-07-07T00:05:27.765576910Z" level=warning msg="cleaning up after shim disconnected" id=ab1936bf67181d458407e973f2d899ddbaa23015ad89e7af07a5492d3dca4a63 namespace=k8s.io Jul 7 00:05:27.765822 containerd[1887]: time="2025-07-07T00:05:27.765605455Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:05:27.767209 containerd[1887]: time="2025-07-07T00:05:27.767107353Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab1936bf67181d458407e973f2d899ddbaa23015ad89e7af07a5492d3dca4a63\" id:\"ab1936bf67181d458407e973f2d899ddbaa23015ad89e7af07a5492d3dca4a63\" pid:3682 exit_status:137 exited_at:{seconds:1751846727 nanos:726228746}" Jul 7 00:05:27.767273 containerd[1887]: time="2025-07-07T00:05:27.767247592Z" level=info msg="received exit event sandbox_id:\"ab1936bf67181d458407e973f2d899ddbaa23015ad89e7af07a5492d3dca4a63\" exit_status:137 exited_at:{seconds:1751846727 nanos:726228746}" Jul 7 00:05:27.767652 containerd[1887]: time="2025-07-07T00:05:27.767622107Z" level=info msg="shim disconnected" id=7e16eedfdbe1a86479dc69d76ebc444ea5a7600f0171a1a1123f77b9243d2d64 namespace=k8s.io Jul 7 00:05:27.767715 containerd[1887]: time="2025-07-07T00:05:27.767647276Z" level=warning msg="cleaning up after shim disconnected" id=7e16eedfdbe1a86479dc69d76ebc444ea5a7600f0171a1a1123f77b9243d2d64 namespace=k8s.io Jul 7 00:05:27.767715 containerd[1887]: time="2025-07-07T00:05:27.767662845Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:05:27.768531 containerd[1887]: time="2025-07-07T00:05:27.767955235Z" level=info msg="received exit event sandbox_id:\"7e16eedfdbe1a86479dc69d76ebc444ea5a7600f0171a1a1123f77b9243d2d64\" exit_status:137 exited_at:{seconds:1751846727 nanos:721723450}" Jul 7 00:05:27.771111 containerd[1887]: time="2025-07-07T00:05:27.771083694Z" level=info msg="TearDown network for sandbox \"ab1936bf67181d458407e973f2d899ddbaa23015ad89e7af07a5492d3dca4a63\" successfully" Jul 7 00:05:27.771383 containerd[1887]: time="2025-07-07T00:05:27.771242494Z" level=info msg="StopPodSandbox for \"ab1936bf67181d458407e973f2d899ddbaa23015ad89e7af07a5492d3dca4a63\" returns successfully" Jul 7 00:05:27.842800 kubelet[3484]: I0707 00:05:27.842517 3484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2461eeeb-d0ca-43f3-b55b-f005c3167746-clustermesh-secrets\") pod \"2461eeeb-d0ca-43f3-b55b-f005c3167746\" (UID: \"2461eeeb-d0ca-43f3-b55b-f005c3167746\") " Jul 7 00:05:27.842800 kubelet[3484]: I0707 00:05:27.842719 3484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg8df\" (UniqueName: 
\"kubernetes.io/projected/5201c41b-558a-4393-97fc-f0192cdf8ef8-kube-api-access-mg8df\") pod \"5201c41b-558a-4393-97fc-f0192cdf8ef8\" (UID: \"5201c41b-558a-4393-97fc-f0192cdf8ef8\") " Jul 7 00:05:27.842800 kubelet[3484]: I0707 00:05:27.842741 3484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2461eeeb-d0ca-43f3-b55b-f005c3167746-cilium-config-path\") pod \"2461eeeb-d0ca-43f3-b55b-f005c3167746\" (UID: \"2461eeeb-d0ca-43f3-b55b-f005c3167746\") " Jul 7 00:05:27.842800 kubelet[3484]: I0707 00:05:27.842754 3484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-cilium-cgroup\") pod \"2461eeeb-d0ca-43f3-b55b-f005c3167746\" (UID: \"2461eeeb-d0ca-43f3-b55b-f005c3167746\") " Jul 7 00:05:27.843979 kubelet[3484]: I0707 00:05:27.842767 3484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-lib-modules\") pod \"2461eeeb-d0ca-43f3-b55b-f005c3167746\" (UID: \"2461eeeb-d0ca-43f3-b55b-f005c3167746\") " Jul 7 00:05:27.844151 kubelet[3484]: I0707 00:05:27.844102 3484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-host-proc-sys-kernel\") pod \"2461eeeb-d0ca-43f3-b55b-f005c3167746\" (UID: \"2461eeeb-d0ca-43f3-b55b-f005c3167746\") " Jul 7 00:05:27.844151 kubelet[3484]: I0707 00:05:27.844129 3484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5201c41b-558a-4393-97fc-f0192cdf8ef8-cilium-config-path\") pod \"5201c41b-558a-4393-97fc-f0192cdf8ef8\" (UID: \"5201c41b-558a-4393-97fc-f0192cdf8ef8\") " Jul 7 00:05:27.844304 kubelet[3484]: I0707 00:05:27.844247 3484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-cni-path\") pod \"2461eeeb-d0ca-43f3-b55b-f005c3167746\" (UID: \"2461eeeb-d0ca-43f3-b55b-f005c3167746\") " Jul 7 00:05:27.844304 kubelet[3484]: I0707 00:05:27.844272 3484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm7fk\" (UniqueName: \"kubernetes.io/projected/2461eeeb-d0ca-43f3-b55b-f005c3167746-kube-api-access-hm7fk\") pod \"2461eeeb-d0ca-43f3-b55b-f005c3167746\" (UID: \"2461eeeb-d0ca-43f3-b55b-f005c3167746\") " Jul 7 00:05:27.844304 kubelet[3484]: I0707 00:05:27.844282 3484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-etc-cni-netd\") pod \"2461eeeb-d0ca-43f3-b55b-f005c3167746\" (UID: \"2461eeeb-d0ca-43f3-b55b-f005c3167746\") " Jul 7 00:05:27.844463 kubelet[3484]: I0707 00:05:27.844392 3484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-cilium-run\") pod \"2461eeeb-d0ca-43f3-b55b-f005c3167746\" (UID: \"2461eeeb-d0ca-43f3-b55b-f005c3167746\") " Jul 7 00:05:27.844463 kubelet[3484]: I0707 00:05:27.844414 3484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-xtables-lock\") pod \"2461eeeb-d0ca-43f3-b55b-f005c3167746\" (UID: \"2461eeeb-d0ca-43f3-b55b-f005c3167746\") " Jul 7 00:05:27.844463 kubelet[3484]: I0707 00:05:27.844428 3484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2461eeeb-d0ca-43f3-b55b-f005c3167746-hubble-tls\") pod \"2461eeeb-d0ca-43f3-b55b-f005c3167746\" (UID: \"2461eeeb-d0ca-43f3-b55b-f005c3167746\") " Jul 7 00:05:27.844463 kubelet[3484]: I0707 00:05:27.844442 3484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-hostproc\") pod \"2461eeeb-d0ca-43f3-b55b-f005c3167746\" (UID: \"2461eeeb-d0ca-43f3-b55b-f005c3167746\") " Jul 7 00:05:27.844657 kubelet[3484]: I0707 00:05:27.844452 3484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-bpf-maps\") pod \"2461eeeb-d0ca-43f3-b55b-f005c3167746\" (UID: \"2461eeeb-d0ca-43f3-b55b-f005c3167746\") " Jul 7 00:05:27.844657 kubelet[3484]: I0707 00:05:27.844600 3484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-host-proc-sys-net\") pod \"2461eeeb-d0ca-43f3-b55b-f005c3167746\" (UID: \"2461eeeb-d0ca-43f3-b55b-f005c3167746\") " Jul 7 00:05:27.844794 kubelet[3484]: I0707 00:05:27.844724 3484 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2461eeeb-d0ca-43f3-b55b-f005c3167746" (UID: "2461eeeb-d0ca-43f3-b55b-f005c3167746"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:05:27.845552 kubelet[3484]: I0707 00:05:27.845458 3484 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2461eeeb-d0ca-43f3-b55b-f005c3167746-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2461eeeb-d0ca-43f3-b55b-f005c3167746" (UID: "2461eeeb-d0ca-43f3-b55b-f005c3167746"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 00:05:27.845837 kubelet[3484]: I0707 00:05:27.845783 3484 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2461eeeb-d0ca-43f3-b55b-f005c3167746" (UID: "2461eeeb-d0ca-43f3-b55b-f005c3167746"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:05:27.845837 kubelet[3484]: I0707 00:05:27.845812 3484 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2461eeeb-d0ca-43f3-b55b-f005c3167746" (UID: "2461eeeb-d0ca-43f3-b55b-f005c3167746"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:05:27.845837 kubelet[3484]: I0707 00:05:27.845822 3484 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2461eeeb-d0ca-43f3-b55b-f005c3167746" (UID: "2461eeeb-d0ca-43f3-b55b-f005c3167746"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:05:27.846370 kubelet[3484]: I0707 00:05:27.846315 3484 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-hostproc" (OuterVolumeSpecName: "hostproc") pod "2461eeeb-d0ca-43f3-b55b-f005c3167746" (UID: "2461eeeb-d0ca-43f3-b55b-f005c3167746"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:05:27.846370 kubelet[3484]: I0707 00:05:27.846347 3484 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2461eeeb-d0ca-43f3-b55b-f005c3167746" (UID: "2461eeeb-d0ca-43f3-b55b-f005c3167746"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:05:27.846572 kubelet[3484]: I0707 00:05:27.846360 3484 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2461eeeb-d0ca-43f3-b55b-f005c3167746" (UID: "2461eeeb-d0ca-43f3-b55b-f005c3167746"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:05:27.846833 kubelet[3484]: I0707 00:05:27.846811 3484 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2461eeeb-d0ca-43f3-b55b-f005c3167746" (UID: "2461eeeb-d0ca-43f3-b55b-f005c3167746"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:05:27.847332 kubelet[3484]: I0707 00:05:27.847309 3484 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2461eeeb-d0ca-43f3-b55b-f005c3167746" (UID: "2461eeeb-d0ca-43f3-b55b-f005c3167746"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:05:27.849063 kubelet[3484]: I0707 00:05:27.848540 3484 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2461eeeb-d0ca-43f3-b55b-f005c3167746-kube-api-access-hm7fk" (OuterVolumeSpecName: "kube-api-access-hm7fk") pod "2461eeeb-d0ca-43f3-b55b-f005c3167746" (UID: "2461eeeb-d0ca-43f3-b55b-f005c3167746"). InnerVolumeSpecName "kube-api-access-hm7fk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 00:05:27.849063 kubelet[3484]: I0707 00:05:27.848560 3484 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-cni-path" (OuterVolumeSpecName: "cni-path") pod "2461eeeb-d0ca-43f3-b55b-f005c3167746" (UID: "2461eeeb-d0ca-43f3-b55b-f005c3167746"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:05:27.849063 kubelet[3484]: I0707 00:05:27.849042 3484 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5201c41b-558a-4393-97fc-f0192cdf8ef8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5201c41b-558a-4393-97fc-f0192cdf8ef8" (UID: "5201c41b-558a-4393-97fc-f0192cdf8ef8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 00:05:27.849423 kubelet[3484]: I0707 00:05:27.849392 3484 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2461eeeb-d0ca-43f3-b55b-f005c3167746-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2461eeeb-d0ca-43f3-b55b-f005c3167746" (UID: "2461eeeb-d0ca-43f3-b55b-f005c3167746"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 00:05:27.850316 kubelet[3484]: I0707 00:05:27.850284 3484 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5201c41b-558a-4393-97fc-f0192cdf8ef8-kube-api-access-mg8df" (OuterVolumeSpecName: "kube-api-access-mg8df") pod "5201c41b-558a-4393-97fc-f0192cdf8ef8" (UID: "5201c41b-558a-4393-97fc-f0192cdf8ef8"). InnerVolumeSpecName "kube-api-access-mg8df". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 00:05:27.850874 kubelet[3484]: I0707 00:05:27.850832 3484 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2461eeeb-d0ca-43f3-b55b-f005c3167746-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2461eeeb-d0ca-43f3-b55b-f005c3167746" (UID: "2461eeeb-d0ca-43f3-b55b-f005c3167746"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 00:05:27.945371 kubelet[3484]: I0707 00:05:27.945318 3484 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-hostproc\") on node \"ci-4372.0.1-a-7cca70db3c\" DevicePath \"\"" Jul 7 00:05:27.945688 kubelet[3484]: I0707 00:05:27.945350 3484 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-bpf-maps\") on node \"ci-4372.0.1-a-7cca70db3c\" DevicePath \"\"" Jul 7 00:05:27.945688 kubelet[3484]: I0707 00:05:27.945575 3484 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-host-proc-sys-net\") on node \"ci-4372.0.1-a-7cca70db3c\" DevicePath \"\"" Jul 7 00:05:27.945688 kubelet[3484]: I0707 00:05:27.945591 3484 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mg8df\" (UniqueName: \"kubernetes.io/projected/5201c41b-558a-4393-97fc-f0192cdf8ef8-kube-api-access-mg8df\") on node \"ci-4372.0.1-a-7cca70db3c\" DevicePath \"\"" Jul 7 00:05:27.945688 kubelet[3484]: I0707 00:05:27.945599 3484 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2461eeeb-d0ca-43f3-b55b-f005c3167746-clustermesh-secrets\") on node \"ci-4372.0.1-a-7cca70db3c\" DevicePath \"\"" Jul 7 00:05:27.945688 kubelet[3484]: I0707 00:05:27.945605 3484 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2461eeeb-d0ca-43f3-b55b-f005c3167746-cilium-config-path\") on node \"ci-4372.0.1-a-7cca70db3c\" DevicePath \"\"" Jul 7 00:05:27.945688 kubelet[3484]: I0707 00:05:27.945612 3484 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-lib-modules\") on node \"ci-4372.0.1-a-7cca70db3c\" DevicePath \"\"" Jul 7 00:05:27.945688 kubelet[3484]: I0707 00:05:27.945618 3484 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-cilium-cgroup\") on node \"ci-4372.0.1-a-7cca70db3c\" DevicePath \"\"" Jul 7 00:05:27.945688 kubelet[3484]: I0707 00:05:27.945623 3484 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-host-proc-sys-kernel\") on node \"ci-4372.0.1-a-7cca70db3c\" DevicePath \"\"" Jul 7 00:05:27.945854 kubelet[3484]: I0707 00:05:27.945638 3484 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5201c41b-558a-4393-97fc-f0192cdf8ef8-cilium-config-path\") on node \"ci-4372.0.1-a-7cca70db3c\" DevicePath \"\"" Jul 7 00:05:27.945854 kubelet[3484]: I0707 00:05:27.945644 3484 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-cni-path\") on node \"ci-4372.0.1-a-7cca70db3c\" DevicePath \"\"" Jul 7 00:05:27.945854 kubelet[3484]: I0707 00:05:27.945650 3484 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm7fk\" (UniqueName: \"kubernetes.io/projected/2461eeeb-d0ca-43f3-b55b-f005c3167746-kube-api-access-hm7fk\") on node \"ci-4372.0.1-a-7cca70db3c\" DevicePath \"\"" Jul 7 00:05:27.945854 kubelet[3484]: I0707 
00:05:27.945656 3484 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-etc-cni-netd\") on node \"ci-4372.0.1-a-7cca70db3c\" DevicePath \"\"" Jul 7 00:05:27.945854 kubelet[3484]: I0707 00:05:27.945661 3484 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-cilium-run\") on node \"ci-4372.0.1-a-7cca70db3c\" DevicePath \"\"" Jul 7 00:05:27.945854 kubelet[3484]: I0707 00:05:27.945666 3484 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2461eeeb-d0ca-43f3-b55b-f005c3167746-xtables-lock\") on node \"ci-4372.0.1-a-7cca70db3c\" DevicePath \"\"" Jul 7 00:05:27.945854 kubelet[3484]: I0707 00:05:27.945672 3484 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2461eeeb-d0ca-43f3-b55b-f005c3167746-hubble-tls\") on node \"ci-4372.0.1-a-7cca70db3c\" DevicePath \"\"" Jul 7 00:05:28.546451 kubelet[3484]: I0707 00:05:28.546380 3484 scope.go:117] "RemoveContainer" containerID="fecf727af81e92c2c0743e116c5b48ff1981aa03e7950e8847092d1575936441" Jul 7 00:05:28.556511 containerd[1887]: time="2025-07-07T00:05:28.554752141Z" level=info msg="RemoveContainer for \"fecf727af81e92c2c0743e116c5b48ff1981aa03e7950e8847092d1575936441\"" Jul 7 00:05:28.556032 systemd[1]: Removed slice kubepods-besteffort-pod5201c41b_558a_4393_97fc_f0192cdf8ef8.slice - libcontainer container kubepods-besteffort-pod5201c41b_558a_4393_97fc_f0192cdf8ef8.slice. Jul 7 00:05:28.563901 systemd[1]: Removed slice kubepods-burstable-pod2461eeeb_d0ca_43f3_b55b_f005c3167746.slice - libcontainer container kubepods-burstable-pod2461eeeb_d0ca_43f3_b55b_f005c3167746.slice. Jul 7 00:05:28.564577 systemd[1]: kubepods-burstable-pod2461eeeb_d0ca_43f3_b55b_f005c3167746.slice: Consumed 4.385s CPU time, 124M memory peak, 152K read from disk, 12.9M written to disk. Jul 7 00:05:28.578043 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ab1936bf67181d458407e973f2d899ddbaa23015ad89e7af07a5492d3dca4a63-shm.mount: Deactivated successfully. Jul 7 00:05:28.578123 systemd[1]: var-lib-kubelet-pods-5201c41b\x2d558a\x2d4393\x2d97fc\x2df0192cdf8ef8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmg8df.mount: Deactivated successfully. Jul 7 00:05:28.578169 systemd[1]: var-lib-kubelet-pods-2461eeeb\x2dd0ca\x2d43f3\x2db55b\x2df005c3167746-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhm7fk.mount: Deactivated successfully. Jul 7 00:05:28.578204 systemd[1]: var-lib-kubelet-pods-2461eeeb\x2dd0ca\x2d43f3\x2db55b\x2df005c3167746-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 7 00:05:28.578246 systemd[1]: var-lib-kubelet-pods-2461eeeb\x2dd0ca\x2d43f3\x2db55b\x2df005c3167746-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 7 00:05:28.618847 containerd[1887]: time="2025-07-07T00:05:28.618788415Z" level=info msg="RemoveContainer for \"fecf727af81e92c2c0743e116c5b48ff1981aa03e7950e8847092d1575936441\" returns successfully" Jul 7 00:05:28.619069 kubelet[3484]: I0707 00:05:28.619040 3484 scope.go:117] "RemoveContainer" containerID="fecf727af81e92c2c0743e116c5b48ff1981aa03e7950e8847092d1575936441" Jul 7 00:05:28.619345 containerd[1887]: time="2025-07-07T00:05:28.619319242Z" level=error msg="ContainerStatus for \"fecf727af81e92c2c0743e116c5b48ff1981aa03e7950e8847092d1575936441\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fecf727af81e92c2c0743e116c5b48ff1981aa03e7950e8847092d1575936441\": not found" Jul 7 00:05:28.619568 kubelet[3484]: E0707 00:05:28.619544 3484 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fecf727af81e92c2c0743e116c5b48ff1981aa03e7950e8847092d1575936441\": not found" containerID="fecf727af81e92c2c0743e116c5b48ff1981aa03e7950e8847092d1575936441" Jul 7 00:05:28.619625 kubelet[3484]: I0707 00:05:28.619572 3484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fecf727af81e92c2c0743e116c5b48ff1981aa03e7950e8847092d1575936441"} err="failed to get container status \"fecf727af81e92c2c0743e116c5b48ff1981aa03e7950e8847092d1575936441\": rpc error: code = NotFound desc = an error occurred when try to find container \"fecf727af81e92c2c0743e116c5b48ff1981aa03e7950e8847092d1575936441\": not found" Jul 7 00:05:28.619625 kubelet[3484]: I0707 00:05:28.619623 3484 scope.go:117] "RemoveContainer" containerID="3e55efea29e21d63d8b7eabb60817c9665399d931ca74ac3b3142b4651e65dbc" Jul 7 00:05:28.620883 containerd[1887]: time="2025-07-07T00:05:28.620864406Z" level=info msg="RemoveContainer for \"3e55efea29e21d63d8b7eabb60817c9665399d931ca74ac3b3142b4651e65dbc\"" Jul 7 00:05:28.633902 containerd[1887]: time="2025-07-07T00:05:28.633830648Z" level=info msg="RemoveContainer for \"3e55efea29e21d63d8b7eabb60817c9665399d931ca74ac3b3142b4651e65dbc\" returns successfully" Jul 7 00:05:28.634045 kubelet[3484]: I0707 00:05:28.634028 3484 scope.go:117] "RemoveContainer" containerID="958348ef3381f9ce559b1a8757be3362a4fee3be0724e57301404d932db3e5f7" Jul 7 00:05:28.635311 containerd[1887]: time="2025-07-07T00:05:28.635288504Z" level=info msg="RemoveContainer for \"958348ef3381f9ce559b1a8757be3362a4fee3be0724e57301404d932db3e5f7\"" Jul 7 00:05:28.648127 containerd[1887]: time="2025-07-07T00:05:28.648056616Z" level=info msg="RemoveContainer for \"958348ef3381f9ce559b1a8757be3362a4fee3be0724e57301404d932db3e5f7\" returns successfully" Jul 7 00:05:28.649864 kubelet[3484]: I0707 00:05:28.649606 3484 scope.go:117] "RemoveContainer" containerID="2d5e81b114278fb17721842706154cffd22e85e3ae8fe496284a72bde45e41a2" Jul 7 00:05:28.652582 containerd[1887]: time="2025-07-07T00:05:28.652552743Z" level=info msg="RemoveContainer for \"2d5e81b114278fb17721842706154cffd22e85e3ae8fe496284a72bde45e41a2\"" Jul 7 00:05:28.667233 containerd[1887]: time="2025-07-07T00:05:28.667205373Z" level=info msg="RemoveContainer for \"2d5e81b114278fb17721842706154cffd22e85e3ae8fe496284a72bde45e41a2\" returns successfully" Jul 7 00:05:28.667459 kubelet[3484]: I0707 00:05:28.667427 3484 scope.go:117] "RemoveContainer" containerID="c46847702bf821d76d009a9f5445c7db65cb93f723e91ef61726cbbf812af4ef" Jul 7 00:05:28.668569 containerd[1887]: time="2025-07-07T00:05:28.668546943Z" 
level=info msg="RemoveContainer for \"c46847702bf821d76d009a9f5445c7db65cb93f723e91ef61726cbbf812af4ef\"" Jul 7 00:05:28.680875 containerd[1887]: time="2025-07-07T00:05:28.680838616Z" level=info msg="RemoveContainer for \"c46847702bf821d76d009a9f5445c7db65cb93f723e91ef61726cbbf812af4ef\" returns successfully" Jul 7 00:05:28.681222 kubelet[3484]: I0707 00:05:28.681141 3484 scope.go:117] "RemoveContainer" containerID="d1acc36ba21cf6c815cec6637d9947e9b71c80c1e37c6324af64868c6ea92b62" Jul 7 00:05:28.682629 containerd[1887]: time="2025-07-07T00:05:28.682606895Z" level=info msg="RemoveContainer for \"d1acc36ba21cf6c815cec6637d9947e9b71c80c1e37c6324af64868c6ea92b62\"" Jul 7 00:05:28.699222 containerd[1887]: time="2025-07-07T00:05:28.699194484Z" level=info msg="RemoveContainer for \"d1acc36ba21cf6c815cec6637d9947e9b71c80c1e37c6324af64868c6ea92b62\" returns successfully" Jul 7 00:05:28.699473 kubelet[3484]: I0707 00:05:28.699454 3484 scope.go:117] "RemoveContainer" containerID="3e55efea29e21d63d8b7eabb60817c9665399d931ca74ac3b3142b4651e65dbc" Jul 7 00:05:28.699789 containerd[1887]: time="2025-07-07T00:05:28.699759696Z" level=error msg="ContainerStatus for \"3e55efea29e21d63d8b7eabb60817c9665399d931ca74ac3b3142b4651e65dbc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3e55efea29e21d63d8b7eabb60817c9665399d931ca74ac3b3142b4651e65dbc\": not found" Jul 7 00:05:28.699915 kubelet[3484]: E0707 00:05:28.699895 3484 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3e55efea29e21d63d8b7eabb60817c9665399d931ca74ac3b3142b4651e65dbc\": not found" containerID="3e55efea29e21d63d8b7eabb60817c9665399d931ca74ac3b3142b4651e65dbc" Jul 7 00:05:28.699961 kubelet[3484]: I0707 00:05:28.699940 3484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3e55efea29e21d63d8b7eabb60817c9665399d931ca74ac3b3142b4651e65dbc"} err="failed to get container status \"3e55efea29e21d63d8b7eabb60817c9665399d931ca74ac3b3142b4651e65dbc\": rpc error: code = NotFound desc = an error occurred when try to find container \"3e55efea29e21d63d8b7eabb60817c9665399d931ca74ac3b3142b4651e65dbc\": not found" Jul 7 00:05:28.699961 kubelet[3484]: I0707 00:05:28.699960 3484 scope.go:117] "RemoveContainer" containerID="958348ef3381f9ce559b1a8757be3362a4fee3be0724e57301404d932db3e5f7" Jul 7 00:05:28.700122 containerd[1887]: time="2025-07-07T00:05:28.700093521Z" level=error msg="ContainerStatus for \"958348ef3381f9ce559b1a8757be3362a4fee3be0724e57301404d932db3e5f7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"958348ef3381f9ce559b1a8757be3362a4fee3be0724e57301404d932db3e5f7\": not found" Jul 7 00:05:28.700333 kubelet[3484]: E0707 00:05:28.700312 3484 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"958348ef3381f9ce559b1a8757be3362a4fee3be0724e57301404d932db3e5f7\": not found" containerID="958348ef3381f9ce559b1a8757be3362a4fee3be0724e57301404d932db3e5f7" Jul 7 00:05:28.700378 kubelet[3484]: I0707 00:05:28.700333 3484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"958348ef3381f9ce559b1a8757be3362a4fee3be0724e57301404d932db3e5f7"} err="failed to get container status \"958348ef3381f9ce559b1a8757be3362a4fee3be0724e57301404d932db3e5f7\": rpc error: code = NotFound desc = an error 
occurred when try to find container \"958348ef3381f9ce559b1a8757be3362a4fee3be0724e57301404d932db3e5f7\": not found" Jul 7 00:05:28.700378 kubelet[3484]: I0707 00:05:28.700348 3484 scope.go:117] "RemoveContainer" containerID="2d5e81b114278fb17721842706154cffd22e85e3ae8fe496284a72bde45e41a2" Jul 7 00:05:28.700532 containerd[1887]: time="2025-07-07T00:05:28.700500093Z" level=error msg="ContainerStatus for \"2d5e81b114278fb17721842706154cffd22e85e3ae8fe496284a72bde45e41a2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2d5e81b114278fb17721842706154cffd22e85e3ae8fe496284a72bde45e41a2\": not found" Jul 7 00:05:28.700720 kubelet[3484]: E0707 00:05:28.700666 3484 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2d5e81b114278fb17721842706154cffd22e85e3ae8fe496284a72bde45e41a2\": not found" containerID="2d5e81b114278fb17721842706154cffd22e85e3ae8fe496284a72bde45e41a2" Jul 7 00:05:28.700807 kubelet[3484]: I0707 00:05:28.700786 3484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2d5e81b114278fb17721842706154cffd22e85e3ae8fe496284a72bde45e41a2"} err="failed to get container status \"2d5e81b114278fb17721842706154cffd22e85e3ae8fe496284a72bde45e41a2\": rpc error: code = NotFound desc = an error occurred when try to find container \"2d5e81b114278fb17721842706154cffd22e85e3ae8fe496284a72bde45e41a2\": not found" Jul 7 00:05:28.700860 kubelet[3484]: I0707 00:05:28.700849 3484 scope.go:117] "RemoveContainer" containerID="c46847702bf821d76d009a9f5445c7db65cb93f723e91ef61726cbbf812af4ef" Jul 7 00:05:28.701082 containerd[1887]: time="2025-07-07T00:05:28.701049032Z" level=error msg="ContainerStatus for \"c46847702bf821d76d009a9f5445c7db65cb93f723e91ef61726cbbf812af4ef\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c46847702bf821d76d009a9f5445c7db65cb93f723e91ef61726cbbf812af4ef\": not found" Jul 7 00:05:28.701315 kubelet[3484]: E0707 00:05:28.701284 3484 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c46847702bf821d76d009a9f5445c7db65cb93f723e91ef61726cbbf812af4ef\": not found" containerID="c46847702bf821d76d009a9f5445c7db65cb93f723e91ef61726cbbf812af4ef" Jul 7 00:05:28.701315 kubelet[3484]: I0707 00:05:28.701304 3484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c46847702bf821d76d009a9f5445c7db65cb93f723e91ef61726cbbf812af4ef"} err="failed to get container status \"c46847702bf821d76d009a9f5445c7db65cb93f723e91ef61726cbbf812af4ef\": rpc error: code = NotFound desc = an error occurred when try to find container \"c46847702bf821d76d009a9f5445c7db65cb93f723e91ef61726cbbf812af4ef\": not found" Jul 7 00:05:28.701315 kubelet[3484]: I0707 00:05:28.701315 3484 scope.go:117] "RemoveContainer" containerID="d1acc36ba21cf6c815cec6637d9947e9b71c80c1e37c6324af64868c6ea92b62" Jul 7 00:05:28.701647 containerd[1887]: time="2025-07-07T00:05:28.701608372Z" level=error msg="ContainerStatus for \"d1acc36ba21cf6c815cec6637d9947e9b71c80c1e37c6324af64868c6ea92b62\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d1acc36ba21cf6c815cec6637d9947e9b71c80c1e37c6324af64868c6ea92b62\": not found" Jul 7 00:05:28.701790 kubelet[3484]: E0707 00:05:28.701769 3484 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d1acc36ba21cf6c815cec6637d9947e9b71c80c1e37c6324af64868c6ea92b62\": not found" containerID="d1acc36ba21cf6c815cec6637d9947e9b71c80c1e37c6324af64868c6ea92b62" Jul 7 00:05:28.701790 kubelet[3484]: I0707 00:05:28.701786 3484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d1acc36ba21cf6c815cec6637d9947e9b71c80c1e37c6324af64868c6ea92b62"} err="failed to get container status \"d1acc36ba21cf6c815cec6637d9947e9b71c80c1e37c6324af64868c6ea92b62\": rpc error: code = NotFound desc = an error occurred when try to find container \"d1acc36ba21cf6c815cec6637d9947e9b71c80c1e37c6324af64868c6ea92b62\": not found" Jul 7 00:05:29.172754 kubelet[3484]: I0707 00:05:29.172719 3484 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2461eeeb-d0ca-43f3-b55b-f005c3167746" path="/var/lib/kubelet/pods/2461eeeb-d0ca-43f3-b55b-f005c3167746/volumes" Jul 7 00:05:29.173112 kubelet[3484]: I0707 00:05:29.173094 3484 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5201c41b-558a-4393-97fc-f0192cdf8ef8" path="/var/lib/kubelet/pods/5201c41b-558a-4393-97fc-f0192cdf8ef8/volumes" Jul 7 00:05:29.543961 sshd[5003]: Connection closed by 10.200.16.10 port 56980 Jul 7 00:05:29.544608 sshd-session[5001]: pam_unix(sshd:session): session closed for user core Jul 7 00:05:29.549293 systemd[1]: sshd@21-10.200.20.36:22-10.200.16.10:56980.service: Deactivated successfully. Jul 7 00:05:29.551269 systemd[1]: session-24.scope: Deactivated successfully. Jul 7 00:05:29.552072 systemd-logind[1860]: Session 24 logged out. Waiting for processes to exit. Jul 7 00:05:29.553430 systemd-logind[1860]: Removed session 24. Jul 7 00:05:29.632261 systemd[1]: Started sshd@22-10.200.20.36:22-10.200.16.10:60974.service - OpenSSH per-connection server daemon (10.200.16.10:60974). Jul 7 00:05:30.136055 sshd[5155]: Accepted publickey for core from 10.200.16.10 port 60974 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:05:30.137158 sshd-session[5155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:05:30.140750 systemd-logind[1860]: New session 25 of user core. Jul 7 00:05:30.152618 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 7 00:05:30.875732 kubelet[3484]: I0707 00:05:30.875394 3484 memory_manager.go:355] "RemoveStaleState removing state" podUID="2461eeeb-d0ca-43f3-b55b-f005c3167746" containerName="cilium-agent" Jul 7 00:05:30.877176 kubelet[3484]: I0707 00:05:30.876376 3484 memory_manager.go:355] "RemoveStaleState removing state" podUID="5201c41b-558a-4393-97fc-f0192cdf8ef8" containerName="cilium-operator" Jul 7 00:05:30.885016 systemd[1]: Created slice kubepods-burstable-pod8fce3b4a_bef4_485f_95d9_1130ef6b06b5.slice - libcontainer container kubepods-burstable-pod8fce3b4a_bef4_485f_95d9_1130ef6b06b5.slice. Jul 7 00:05:30.930966 sshd[5157]: Connection closed by 10.200.16.10 port 60974 Jul 7 00:05:30.931901 sshd-session[5155]: pam_unix(sshd:session): session closed for user core Jul 7 00:05:30.935296 systemd-logind[1860]: Session 25 logged out. Waiting for processes to exit. Jul 7 00:05:30.935672 systemd[1]: sshd@22-10.200.20.36:22-10.200.16.10:60974.service: Deactivated successfully. Jul 7 00:05:30.937437 systemd[1]: session-25.scope: Deactivated successfully. Jul 7 00:05:30.940115 systemd-logind[1860]: Removed session 25. 
Jul 7 00:05:30.965144 kubelet[3484]: I0707 00:05:30.965037 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8fce3b4a-bef4-485f-95d9-1130ef6b06b5-cilium-cgroup\") pod \"cilium-4cr4g\" (UID: \"8fce3b4a-bef4-485f-95d9-1130ef6b06b5\") " pod="kube-system/cilium-4cr4g" Jul 7 00:05:30.965144 kubelet[3484]: I0707 00:05:30.965078 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8fce3b4a-bef4-485f-95d9-1130ef6b06b5-cilium-ipsec-secrets\") pod \"cilium-4cr4g\" (UID: \"8fce3b4a-bef4-485f-95d9-1130ef6b06b5\") " pod="kube-system/cilium-4cr4g" Jul 7 00:05:30.965144 kubelet[3484]: I0707 00:05:30.965095 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27p44\" (UniqueName: \"kubernetes.io/projected/8fce3b4a-bef4-485f-95d9-1130ef6b06b5-kube-api-access-27p44\") pod \"cilium-4cr4g\" (UID: \"8fce3b4a-bef4-485f-95d9-1130ef6b06b5\") " pod="kube-system/cilium-4cr4g" Jul 7 00:05:30.965302 kubelet[3484]: I0707 00:05:30.965145 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8fce3b4a-bef4-485f-95d9-1130ef6b06b5-etc-cni-netd\") pod \"cilium-4cr4g\" (UID: \"8fce3b4a-bef4-485f-95d9-1130ef6b06b5\") " pod="kube-system/cilium-4cr4g" Jul 7 00:05:30.965302 kubelet[3484]: I0707 00:05:30.965178 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8fce3b4a-bef4-485f-95d9-1130ef6b06b5-lib-modules\") pod \"cilium-4cr4g\" (UID: \"8fce3b4a-bef4-485f-95d9-1130ef6b06b5\") " pod="kube-system/cilium-4cr4g" Jul 7 00:05:30.965302 kubelet[3484]: I0707 00:05:30.965190 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8fce3b4a-bef4-485f-95d9-1130ef6b06b5-hubble-tls\") pod \"cilium-4cr4g\" (UID: \"8fce3b4a-bef4-485f-95d9-1130ef6b06b5\") " pod="kube-system/cilium-4cr4g" Jul 7 00:05:30.965302 kubelet[3484]: I0707 00:05:30.965203 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8fce3b4a-bef4-485f-95d9-1130ef6b06b5-hostproc\") pod \"cilium-4cr4g\" (UID: \"8fce3b4a-bef4-485f-95d9-1130ef6b06b5\") " pod="kube-system/cilium-4cr4g" Jul 7 00:05:30.965302 kubelet[3484]: I0707 00:05:30.965216 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8fce3b4a-bef4-485f-95d9-1130ef6b06b5-bpf-maps\") pod \"cilium-4cr4g\" (UID: \"8fce3b4a-bef4-485f-95d9-1130ef6b06b5\") " pod="kube-system/cilium-4cr4g" Jul 7 00:05:30.965302 kubelet[3484]: I0707 00:05:30.965226 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8fce3b4a-bef4-485f-95d9-1130ef6b06b5-xtables-lock\") pod \"cilium-4cr4g\" (UID: \"8fce3b4a-bef4-485f-95d9-1130ef6b06b5\") " pod="kube-system/cilium-4cr4g" Jul 7 00:05:30.965390 kubelet[3484]: I0707 00:05:30.965243 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/8fce3b4a-bef4-485f-95d9-1130ef6b06b5-cilium-config-path\") pod \"cilium-4cr4g\" (UID: \"8fce3b4a-bef4-485f-95d9-1130ef6b06b5\") " pod="kube-system/cilium-4cr4g" Jul 7 00:05:30.965390 kubelet[3484]: I0707 00:05:30.965254 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8fce3b4a-bef4-485f-95d9-1130ef6b06b5-host-proc-sys-kernel\") pod \"cilium-4cr4g\" (UID: \"8fce3b4a-bef4-485f-95d9-1130ef6b06b5\") " pod="kube-system/cilium-4cr4g" Jul 7 00:05:30.965390 kubelet[3484]: I0707 00:05:30.965266 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8fce3b4a-bef4-485f-95d9-1130ef6b06b5-clustermesh-secrets\") pod \"cilium-4cr4g\" (UID: \"8fce3b4a-bef4-485f-95d9-1130ef6b06b5\") " pod="kube-system/cilium-4cr4g" Jul 7 00:05:30.965390 kubelet[3484]: I0707 00:05:30.965277 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8fce3b4a-bef4-485f-95d9-1130ef6b06b5-cilium-run\") pod \"cilium-4cr4g\" (UID: \"8fce3b4a-bef4-485f-95d9-1130ef6b06b5\") " pod="kube-system/cilium-4cr4g" Jul 7 00:05:30.965390 kubelet[3484]: I0707 00:05:30.965285 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8fce3b4a-bef4-485f-95d9-1130ef6b06b5-cni-path\") pod \"cilium-4cr4g\" (UID: \"8fce3b4a-bef4-485f-95d9-1130ef6b06b5\") " pod="kube-system/cilium-4cr4g" Jul 7 00:05:30.965390 kubelet[3484]: I0707 00:05:30.965293 3484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8fce3b4a-bef4-485f-95d9-1130ef6b06b5-host-proc-sys-net\") pod \"cilium-4cr4g\" (UID: \"8fce3b4a-bef4-485f-95d9-1130ef6b06b5\") " pod="kube-system/cilium-4cr4g" Jul 7 00:05:31.018144 systemd[1]: Started sshd@23-10.200.20.36:22-10.200.16.10:60984.service - OpenSSH per-connection server daemon (10.200.16.10:60984). 
Jul 7 00:05:31.185478 containerd[1887]: time="2025-07-07T00:05:31.185268181Z" level=info msg="StopPodSandbox for \"7e16eedfdbe1a86479dc69d76ebc444ea5a7600f0171a1a1123f77b9243d2d64\"" Jul 7 00:05:31.185478 containerd[1887]: time="2025-07-07T00:05:31.185387954Z" level=info msg="TearDown network for sandbox \"7e16eedfdbe1a86479dc69d76ebc444ea5a7600f0171a1a1123f77b9243d2d64\" successfully" Jul 7 00:05:31.185478 containerd[1887]: time="2025-07-07T00:05:31.185396459Z" level=info msg="StopPodSandbox for \"7e16eedfdbe1a86479dc69d76ebc444ea5a7600f0171a1a1123f77b9243d2d64\" returns successfully" Jul 7 00:05:31.185853 containerd[1887]: time="2025-07-07T00:05:31.185712059Z" level=info msg="RemovePodSandbox for \"7e16eedfdbe1a86479dc69d76ebc444ea5a7600f0171a1a1123f77b9243d2d64\"" Jul 7 00:05:31.185937 containerd[1887]: time="2025-07-07T00:05:31.185916685Z" level=info msg="Forcibly stopping sandbox \"7e16eedfdbe1a86479dc69d76ebc444ea5a7600f0171a1a1123f77b9243d2d64\"" Jul 7 00:05:31.186035 containerd[1887]: time="2025-07-07T00:05:31.186014481Z" level=info msg="TearDown network for sandbox \"7e16eedfdbe1a86479dc69d76ebc444ea5a7600f0171a1a1123f77b9243d2d64\" successfully" Jul 7 00:05:31.187910 containerd[1887]: time="2025-07-07T00:05:31.187887454Z" level=info msg="Ensure that sandbox 7e16eedfdbe1a86479dc69d76ebc444ea5a7600f0171a1a1123f77b9243d2d64 in task-service has been cleanup successfully" Jul 7 00:05:31.188666 containerd[1887]: time="2025-07-07T00:05:31.188529518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4cr4g,Uid:8fce3b4a-bef4-485f-95d9-1130ef6b06b5,Namespace:kube-system,Attempt:0,}" Jul 7 00:05:31.213589 containerd[1887]: time="2025-07-07T00:05:31.213556461Z" level=info msg="RemovePodSandbox \"7e16eedfdbe1a86479dc69d76ebc444ea5a7600f0171a1a1123f77b9243d2d64\" returns successfully" Jul 7 00:05:31.214279 containerd[1887]: time="2025-07-07T00:05:31.214140522Z" level=info msg="StopPodSandbox for \"ab1936bf67181d458407e973f2d899ddbaa23015ad89e7af07a5492d3dca4a63\"" Jul 7 00:05:31.214279 containerd[1887]: time="2025-07-07T00:05:31.214230159Z" level=info msg="TearDown network for sandbox \"ab1936bf67181d458407e973f2d899ddbaa23015ad89e7af07a5492d3dca4a63\" successfully" Jul 7 00:05:31.214279 containerd[1887]: time="2025-07-07T00:05:31.214237951Z" level=info msg="StopPodSandbox for \"ab1936bf67181d458407e973f2d899ddbaa23015ad89e7af07a5492d3dca4a63\" returns successfully" Jul 7 00:05:31.214737 containerd[1887]: time="2025-07-07T00:05:31.214662436Z" level=info msg="RemovePodSandbox for \"ab1936bf67181d458407e973f2d899ddbaa23015ad89e7af07a5492d3dca4a63\"" Jul 7 00:05:31.214737 containerd[1887]: time="2025-07-07T00:05:31.214685397Z" level=info msg="Forcibly stopping sandbox \"ab1936bf67181d458407e973f2d899ddbaa23015ad89e7af07a5492d3dca4a63\"" Jul 7 00:05:31.214847 containerd[1887]: time="2025-07-07T00:05:31.214832308Z" level=info msg="TearDown network for sandbox \"ab1936bf67181d458407e973f2d899ddbaa23015ad89e7af07a5492d3dca4a63\" successfully" Jul 7 00:05:31.215995 containerd[1887]: time="2025-07-07T00:05:31.215935579Z" level=info msg="Ensure that sandbox ab1936bf67181d458407e973f2d899ddbaa23015ad89e7af07a5492d3dca4a63 in task-service has been cleanup successfully" Jul 7 00:05:31.240852 containerd[1887]: time="2025-07-07T00:05:31.240259111Z" level=info msg="RemovePodSandbox \"ab1936bf67181d458407e973f2d899ddbaa23015ad89e7af07a5492d3dca4a63\" returns successfully" Jul 7 00:05:31.264914 kubelet[3484]: E0707 00:05:31.264883 3484 kubelet.go:3002] "Container runtime network not ready" 
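The StopPodSandbox / "Forcibly stopping sandbox" / RemovePodSandbox entries above are kubelet garbage-collecting the sandboxes left behind by the pods deleted earlier, while the RunPodSandbox entry requests the sandbox for the new cilium-4cr4g pod. A minimal sketch of the cleanup half of that sequence against the CRI API, assuming an already connected RuntimeServiceClient such as the one in the earlier sketch:

    package crisketch

    import (
        "context"

        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // removeSandbox mirrors the order in the log: stop the sandbox first (which
    // tears down its network), then remove its record from the runtime.
    func removeSandbox(ctx context.Context, rt runtimeapi.RuntimeServiceClient, sandboxID string) error {
        if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: sandboxID}); err != nil {
            return err
        }
        _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: sandboxID})
        return err
    }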
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 7 00:05:31.265765 containerd[1887]: time="2025-07-07T00:05:31.265724876Z" level=info msg="connecting to shim 98ae7662d197f59cfd5c64989ba2673cd64d4b7e898b2a45d385c3c1ab914415" address="unix:///run/containerd/s/af8c1fbada0eff583a0af06a5a1607d35df021340eb28327ffd3b90f4ee95582" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:05:31.286608 systemd[1]: Started cri-containerd-98ae7662d197f59cfd5c64989ba2673cd64d4b7e898b2a45d385c3c1ab914415.scope - libcontainer container 98ae7662d197f59cfd5c64989ba2673cd64d4b7e898b2a45d385c3c1ab914415. Jul 7 00:05:31.314163 containerd[1887]: time="2025-07-07T00:05:31.314128496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4cr4g,Uid:8fce3b4a-bef4-485f-95d9-1130ef6b06b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"98ae7662d197f59cfd5c64989ba2673cd64d4b7e898b2a45d385c3c1ab914415\"" Jul 7 00:05:31.317572 containerd[1887]: time="2025-07-07T00:05:31.317537169Z" level=info msg="CreateContainer within sandbox \"98ae7662d197f59cfd5c64989ba2673cd64d4b7e898b2a45d385c3c1ab914415\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 7 00:05:31.342398 containerd[1887]: time="2025-07-07T00:05:31.342357654Z" level=info msg="Container a3f93d56cda9f8d48478a0095c547455aa988323c549c621f966f15e135e0e28: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:05:31.360692 containerd[1887]: time="2025-07-07T00:05:31.360653560Z" level=info msg="CreateContainer within sandbox \"98ae7662d197f59cfd5c64989ba2673cd64d4b7e898b2a45d385c3c1ab914415\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a3f93d56cda9f8d48478a0095c547455aa988323c549c621f966f15e135e0e28\"" Jul 7 00:05:31.361633 containerd[1887]: time="2025-07-07T00:05:31.361559708Z" level=info msg="StartContainer for \"a3f93d56cda9f8d48478a0095c547455aa988323c549c621f966f15e135e0e28\"" Jul 7 00:05:31.363525 containerd[1887]: time="2025-07-07T00:05:31.363288354Z" level=info msg="connecting to shim a3f93d56cda9f8d48478a0095c547455aa988323c549c621f966f15e135e0e28" address="unix:///run/containerd/s/af8c1fbada0eff583a0af06a5a1607d35df021340eb28327ffd3b90f4ee95582" protocol=ttrpc version=3 Jul 7 00:05:31.379616 systemd[1]: Started cri-containerd-a3f93d56cda9f8d48478a0095c547455aa988323c549c621f966f15e135e0e28.scope - libcontainer container a3f93d56cda9f8d48478a0095c547455aa988323c549c621f966f15e135e0e28. Jul 7 00:05:31.407906 systemd[1]: cri-containerd-a3f93d56cda9f8d48478a0095c547455aa988323c549c621f966f15e135e0e28.scope: Deactivated successfully. 
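The "RunPodSandbox ... returns sandbox id" entry above is the runtime's answer to the request logged just before it, and the returned sandbox id (98ae7662...) is what every later CreateContainer call for this pod is issued against. A sketch of that single call, reusing only the pod metadata that appears in the log; everything else a real request carries (DNS config, port mappings, labels, security context) is omitted here:

    package crisketch

    import (
        "context"

        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // runSandbox issues the RunPodSandbox call whose result is logged above.
    // Only the metadata visible in the log is filled in.
    func runSandbox(ctx context.Context, rt runtimeapi.RuntimeServiceClient) (string, error) {
        resp, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "cilium-4cr4g",
                    Uid:       "8fce3b4a-bef4-485f-95d9-1130ef6b06b5",
                    Namespace: "kube-system",
                    Attempt:   0,
                },
            },
        })
        if err != nil {
            return "", err
        }
        return resp.PodSandboxId, nil
    }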
Jul 7 00:05:31.408262 containerd[1887]: time="2025-07-07T00:05:31.408207554Z" level=info msg="StartContainer for \"a3f93d56cda9f8d48478a0095c547455aa988323c549c621f966f15e135e0e28\" returns successfully" Jul 7 00:05:31.410384 containerd[1887]: time="2025-07-07T00:05:31.410355284Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a3f93d56cda9f8d48478a0095c547455aa988323c549c621f966f15e135e0e28\" id:\"a3f93d56cda9f8d48478a0095c547455aa988323c549c621f966f15e135e0e28\" pid:5232 exited_at:{seconds:1751846731 nanos:409346674}" Jul 7 00:05:31.410601 containerd[1887]: time="2025-07-07T00:05:31.410574599Z" level=info msg="received exit event container_id:\"a3f93d56cda9f8d48478a0095c547455aa988323c549c621f966f15e135e0e28\" id:\"a3f93d56cda9f8d48478a0095c547455aa988323c549c621f966f15e135e0e28\" pid:5232 exited_at:{seconds:1751846731 nanos:409346674}" Jul 7 00:05:31.499788 sshd[5167]: Accepted publickey for core from 10.200.16.10 port 60984 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:05:31.501016 sshd-session[5167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:05:31.504851 systemd-logind[1860]: New session 26 of user core. Jul 7 00:05:31.513609 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 7 00:05:31.571211 containerd[1887]: time="2025-07-07T00:05:31.570882576Z" level=info msg="CreateContainer within sandbox \"98ae7662d197f59cfd5c64989ba2673cd64d4b7e898b2a45d385c3c1ab914415\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 7 00:05:31.596420 containerd[1887]: time="2025-07-07T00:05:31.596387463Z" level=info msg="Container 974c2b98573c9de507948b25e1883a132d3ab2246fdc31790c01079268ec53aa: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:05:31.612868 containerd[1887]: time="2025-07-07T00:05:31.612813892Z" level=info msg="CreateContainer within sandbox \"98ae7662d197f59cfd5c64989ba2673cd64d4b7e898b2a45d385c3c1ab914415\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"974c2b98573c9de507948b25e1883a132d3ab2246fdc31790c01079268ec53aa\"" Jul 7 00:05:31.613732 containerd[1887]: time="2025-07-07T00:05:31.613584314Z" level=info msg="StartContainer for \"974c2b98573c9de507948b25e1883a132d3ab2246fdc31790c01079268ec53aa\"" Jul 7 00:05:31.614510 containerd[1887]: time="2025-07-07T00:05:31.614491015Z" level=info msg="connecting to shim 974c2b98573c9de507948b25e1883a132d3ab2246fdc31790c01079268ec53aa" address="unix:///run/containerd/s/af8c1fbada0eff583a0af06a5a1607d35df021340eb28327ffd3b90f4ee95582" protocol=ttrpc version=3 Jul 7 00:05:31.629611 systemd[1]: Started cri-containerd-974c2b98573c9de507948b25e1883a132d3ab2246fdc31790c01079268ec53aa.scope - libcontainer container 974c2b98573c9de507948b25e1883a132d3ab2246fdc31790c01079268ec53aa. Jul 7 00:05:31.654232 systemd[1]: cri-containerd-974c2b98573c9de507948b25e1883a132d3ab2246fdc31790c01079268ec53aa.scope: Deactivated successfully. 
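Each short-lived init container above (mount-cgroup, then apply-sysctl-overwrites) produces a matching "TaskExit event" with an exit timestamp as soon as its task finishes; those events come from containerd's event bus. A rough sketch of watching that topic with the containerd 1.7-era Go client follows. The socket path and the k8s.io namespace are taken from the shim addresses in the log, but the client packages differ between containerd releases, so treat this as an approximation rather than the exact code path:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        apievents "github.com/containerd/containerd/api/events"
        "github.com/containerd/typeurl/v2"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock",
            containerd.WithDefaultNamespace("k8s.io"))
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Subscribe only to task exit events, the same topic behind the
        // "TaskExit event in podsandbox handler" lines above.
        envelopes, errs := client.Subscribe(context.Background(), `topic=="/tasks/exit"`)
        for {
            select {
            case env := <-envelopes:
                decoded, err := typeurl.UnmarshalAny(env.Event)
                if err != nil {
                    log.Println("decode:", err)
                    continue
                }
                if exit, ok := decoded.(*apievents.TaskExit); ok {
                    fmt.Printf("container %s exited with status %d at %v\n",
                        exit.ContainerID, exit.ExitStatus, exit.ExitedAt)
                }
            case err := <-errs:
                log.Fatal(err)
            }
        }
    }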
Jul 7 00:05:31.655709 containerd[1887]: time="2025-07-07T00:05:31.655656917Z" level=info msg="TaskExit event in podsandbox handler container_id:\"974c2b98573c9de507948b25e1883a132d3ab2246fdc31790c01079268ec53aa\" id:\"974c2b98573c9de507948b25e1883a132d3ab2246fdc31790c01079268ec53aa\" pid:5278 exited_at:{seconds:1751846731 nanos:655288027}" Jul 7 00:05:31.655885 containerd[1887]: time="2025-07-07T00:05:31.655782051Z" level=info msg="received exit event container_id:\"974c2b98573c9de507948b25e1883a132d3ab2246fdc31790c01079268ec53aa\" id:\"974c2b98573c9de507948b25e1883a132d3ab2246fdc31790c01079268ec53aa\" pid:5278 exited_at:{seconds:1751846731 nanos:655288027}" Jul 7 00:05:31.656888 containerd[1887]: time="2025-07-07T00:05:31.656781124Z" level=info msg="StartContainer for \"974c2b98573c9de507948b25e1883a132d3ab2246fdc31790c01079268ec53aa\" returns successfully" Jul 7 00:05:31.849261 sshd[5265]: Connection closed by 10.200.16.10 port 60984 Jul 7 00:05:31.848607 sshd-session[5167]: pam_unix(sshd:session): session closed for user core Jul 7 00:05:31.851190 systemd[1]: session-26.scope: Deactivated successfully. Jul 7 00:05:31.852432 systemd[1]: sshd@23-10.200.20.36:22-10.200.16.10:60984.service: Deactivated successfully. Jul 7 00:05:31.855757 systemd-logind[1860]: Session 26 logged out. Waiting for processes to exit. Jul 7 00:05:31.857040 systemd-logind[1860]: Removed session 26. Jul 7 00:05:31.941021 systemd[1]: Started sshd@24-10.200.20.36:22-10.200.16.10:60994.service - OpenSSH per-connection server daemon (10.200.16.10:60994). Jul 7 00:05:32.446522 sshd[5315]: Accepted publickey for core from 10.200.16.10 port 60994 ssh2: RSA SHA256:jqhyWZ4ohIpoDXTKwGqZcjJ0o+/xEZ+M9M3EOPsEPgk Jul 7 00:05:32.447636 sshd-session[5315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:05:32.451734 systemd-logind[1860]: New session 27 of user core. Jul 7 00:05:32.458625 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 7 00:05:32.575874 containerd[1887]: time="2025-07-07T00:05:32.575828946Z" level=info msg="CreateContainer within sandbox \"98ae7662d197f59cfd5c64989ba2673cd64d4b7e898b2a45d385c3c1ab914415\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 7 00:05:32.616364 containerd[1887]: time="2025-07-07T00:05:32.616240339Z" level=info msg="Container 37aa5d7fd780fb33937fb8614ae211df95aa9504249949e94b5f671ac485c9c2: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:05:32.633903 containerd[1887]: time="2025-07-07T00:05:32.633839394Z" level=info msg="CreateContainer within sandbox \"98ae7662d197f59cfd5c64989ba2673cd64d4b7e898b2a45d385c3c1ab914415\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"37aa5d7fd780fb33937fb8614ae211df95aa9504249949e94b5f671ac485c9c2\"" Jul 7 00:05:32.634534 containerd[1887]: time="2025-07-07T00:05:32.634451400Z" level=info msg="StartContainer for \"37aa5d7fd780fb33937fb8614ae211df95aa9504249949e94b5f671ac485c9c2\"" Jul 7 00:05:32.636045 containerd[1887]: time="2025-07-07T00:05:32.636008869Z" level=info msg="connecting to shim 37aa5d7fd780fb33937fb8614ae211df95aa9504249949e94b5f671ac485c9c2" address="unix:///run/containerd/s/af8c1fbada0eff583a0af06a5a1607d35df021340eb28327ffd3b90f4ee95582" protocol=ttrpc version=3 Jul 7 00:05:32.656638 systemd[1]: Started cri-containerd-37aa5d7fd780fb33937fb8614ae211df95aa9504249949e94b5f671ac485c9c2.scope - libcontainer container 37aa5d7fd780fb33937fb8614ae211df95aa9504249949e94b5f671ac485c9c2. 
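The repeated "CreateContainer within sandbox ... returns container id" / "StartContainer ... returns successfully" pairs above are kubelet stepping through the cilium pod's init containers one at a time inside the sandbox created earlier. Continuing the CRI sketch from before, the two calls behind each pair look roughly like this; the config is cut down to a name and an image, whereas the real init containers also carry commands, mounts and a security context:

    package crisketch

    import (
        "context"

        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // createAndStart issues the CreateContainer/StartContainer pair seen for each
    // init step (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, ...).
    func createAndStart(ctx context.Context, rt runtimeapi.RuntimeServiceClient,
        sandboxID string, sandboxCfg *runtimeapi.PodSandboxConfig, name, image string) (string, error) {

        created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sandboxID,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: name},
                Image:    &runtimeapi.ImageSpec{Image: image},
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            return "", err
        }
        _, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId})
        return created.ContainerId, err
    }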
Jul 7 00:05:32.685254 systemd[1]: cri-containerd-37aa5d7fd780fb33937fb8614ae211df95aa9504249949e94b5f671ac485c9c2.scope: Deactivated successfully. Jul 7 00:05:32.687343 containerd[1887]: time="2025-07-07T00:05:32.687313033Z" level=info msg="TaskExit event in podsandbox handler container_id:\"37aa5d7fd780fb33937fb8614ae211df95aa9504249949e94b5f671ac485c9c2\" id:\"37aa5d7fd780fb33937fb8614ae211df95aa9504249949e94b5f671ac485c9c2\" pid:5330 exited_at:{seconds:1751846732 nanos:686131887}" Jul 7 00:05:32.689786 containerd[1887]: time="2025-07-07T00:05:32.689690927Z" level=info msg="received exit event container_id:\"37aa5d7fd780fb33937fb8614ae211df95aa9504249949e94b5f671ac485c9c2\" id:\"37aa5d7fd780fb33937fb8614ae211df95aa9504249949e94b5f671ac485c9c2\" pid:5330 exited_at:{seconds:1751846732 nanos:686131887}" Jul 7 00:05:32.696214 containerd[1887]: time="2025-07-07T00:05:32.695462445Z" level=info msg="StartContainer for \"37aa5d7fd780fb33937fb8614ae211df95aa9504249949e94b5f671ac485c9c2\" returns successfully" Jul 7 00:05:32.706452 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37aa5d7fd780fb33937fb8614ae211df95aa9504249949e94b5f671ac485c9c2-rootfs.mount: Deactivated successfully. Jul 7 00:05:33.577930 containerd[1887]: time="2025-07-07T00:05:33.577889556Z" level=info msg="CreateContainer within sandbox \"98ae7662d197f59cfd5c64989ba2673cd64d4b7e898b2a45d385c3c1ab914415\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 7 00:05:33.621549 containerd[1887]: time="2025-07-07T00:05:33.621511804Z" level=info msg="Container 4c048201d47acfbd345af003976fdaf8089d32b78328cc16bb87a0d0221923e0: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:05:33.624268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1609153333.mount: Deactivated successfully. Jul 7 00:05:33.652451 containerd[1887]: time="2025-07-07T00:05:33.652412190Z" level=info msg="CreateContainer within sandbox \"98ae7662d197f59cfd5c64989ba2673cd64d4b7e898b2a45d385c3c1ab914415\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4c048201d47acfbd345af003976fdaf8089d32b78328cc16bb87a0d0221923e0\"" Jul 7 00:05:33.652962 containerd[1887]: time="2025-07-07T00:05:33.652945816Z" level=info msg="StartContainer for \"4c048201d47acfbd345af003976fdaf8089d32b78328cc16bb87a0d0221923e0\"" Jul 7 00:05:33.653842 containerd[1887]: time="2025-07-07T00:05:33.653766257Z" level=info msg="connecting to shim 4c048201d47acfbd345af003976fdaf8089d32b78328cc16bb87a0d0221923e0" address="unix:///run/containerd/s/af8c1fbada0eff583a0af06a5a1607d35df021340eb28327ffd3b90f4ee95582" protocol=ttrpc version=3 Jul 7 00:05:33.676625 systemd[1]: Started cri-containerd-4c048201d47acfbd345af003976fdaf8089d32b78328cc16bb87a0d0221923e0.scope - libcontainer container 4c048201d47acfbd345af003976fdaf8089d32b78328cc16bb87a0d0221923e0. Jul 7 00:05:33.695889 systemd[1]: cri-containerd-4c048201d47acfbd345af003976fdaf8089d32b78328cc16bb87a0d0221923e0.scope: Deactivated successfully. 
Jul 7 00:05:33.697593 containerd[1887]: time="2025-07-07T00:05:33.697517703Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4c048201d47acfbd345af003976fdaf8089d32b78328cc16bb87a0d0221923e0\" id:\"4c048201d47acfbd345af003976fdaf8089d32b78328cc16bb87a0d0221923e0\" pid:5376 exited_at:{seconds:1751846733 nanos:696638356}" Jul 7 00:05:33.708083 containerd[1887]: time="2025-07-07T00:05:33.707983277Z" level=info msg="received exit event container_id:\"4c048201d47acfbd345af003976fdaf8089d32b78328cc16bb87a0d0221923e0\" id:\"4c048201d47acfbd345af003976fdaf8089d32b78328cc16bb87a0d0221923e0\" pid:5376 exited_at:{seconds:1751846733 nanos:696638356}" Jul 7 00:05:33.709323 containerd[1887]: time="2025-07-07T00:05:33.709301927Z" level=info msg="StartContainer for \"4c048201d47acfbd345af003976fdaf8089d32b78328cc16bb87a0d0221923e0\" returns successfully" Jul 7 00:05:33.721578 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c048201d47acfbd345af003976fdaf8089d32b78328cc16bb87a0d0221923e0-rootfs.mount: Deactivated successfully. Jul 7 00:05:34.584720 containerd[1887]: time="2025-07-07T00:05:34.584679890Z" level=info msg="CreateContainer within sandbox \"98ae7662d197f59cfd5c64989ba2673cd64d4b7e898b2a45d385c3c1ab914415\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 7 00:05:34.616307 containerd[1887]: time="2025-07-07T00:05:34.616268070Z" level=info msg="Container b1017e1f0771be4e60133f6832a8dabd315b2f08f59d78a725da1dd76afa46ee: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:05:34.636007 containerd[1887]: time="2025-07-07T00:05:34.635921539Z" level=info msg="CreateContainer within sandbox \"98ae7662d197f59cfd5c64989ba2673cd64d4b7e898b2a45d385c3c1ab914415\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b1017e1f0771be4e60133f6832a8dabd315b2f08f59d78a725da1dd76afa46ee\"" Jul 7 00:05:34.636565 containerd[1887]: time="2025-07-07T00:05:34.636433756Z" level=info msg="StartContainer for \"b1017e1f0771be4e60133f6832a8dabd315b2f08f59d78a725da1dd76afa46ee\"" Jul 7 00:05:34.637698 containerd[1887]: time="2025-07-07T00:05:34.637534234Z" level=info msg="connecting to shim b1017e1f0771be4e60133f6832a8dabd315b2f08f59d78a725da1dd76afa46ee" address="unix:///run/containerd/s/af8c1fbada0eff583a0af06a5a1607d35df021340eb28327ffd3b90f4ee95582" protocol=ttrpc version=3 Jul 7 00:05:34.659628 systemd[1]: Started cri-containerd-b1017e1f0771be4e60133f6832a8dabd315b2f08f59d78a725da1dd76afa46ee.scope - libcontainer container b1017e1f0771be4e60133f6832a8dabd315b2f08f59d78a725da1dd76afa46ee. 
Jul 7 00:05:34.691331 containerd[1887]: time="2025-07-07T00:05:34.691294344Z" level=info msg="StartContainer for \"b1017e1f0771be4e60133f6832a8dabd315b2f08f59d78a725da1dd76afa46ee\" returns successfully" Jul 7 00:05:34.751264 containerd[1887]: time="2025-07-07T00:05:34.750906679Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b1017e1f0771be4e60133f6832a8dabd315b2f08f59d78a725da1dd76afa46ee\" id:\"cf1a06e9ee9d6508f0fc79a27e79e7511e9c9396bfa15d2a04aa98b359b41e93\" pid:5449 exited_at:{seconds:1751846734 nanos:750107568}" Jul 7 00:05:35.036540 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jul 7 00:05:35.603927 kubelet[3484]: I0707 00:05:35.603874 3484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4cr4g" podStartSLOduration=5.6038598010000005 podStartE2EDuration="5.603859801s" podCreationTimestamp="2025-07-07 00:05:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:05:35.602418865 +0000 UTC m=+184.495644272" watchObservedRunningTime="2025-07-07 00:05:35.603859801 +0000 UTC m=+184.497085184" Jul 7 00:05:35.637729 kubelet[3484]: I0707 00:05:35.637604 3484 setters.go:602] "Node became not ready" node="ci-4372.0.1-a-7cca70db3c" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-07T00:05:35Z","lastTransitionTime":"2025-07-07T00:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 7 00:05:36.868698 containerd[1887]: time="2025-07-07T00:05:36.868624290Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b1017e1f0771be4e60133f6832a8dabd315b2f08f59d78a725da1dd76afa46ee\" id:\"bd1cb0b60554abf206521df6236c4a7c2f8f211535cdbbe25657dc2626af8fc3\" pid:5730 exit_status:1 exited_at:{seconds:1751846736 nanos:868217814}" Jul 7 00:05:37.433599 systemd-networkd[1689]: lxc_health: Link UP Jul 7 00:05:37.449631 systemd-networkd[1689]: lxc_health: Gained carrier Jul 7 00:05:38.473717 systemd-networkd[1689]: lxc_health: Gained IPv6LL Jul 7 00:05:38.960494 containerd[1887]: time="2025-07-07T00:05:38.960445136Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b1017e1f0771be4e60133f6832a8dabd315b2f08f59d78a725da1dd76afa46ee\" id:\"257a35600d1de3f74bdf71bd529c9f1ee177b7b113c22311840069897ebadc58\" pid:5987 exited_at:{seconds:1751846738 nanos:959945367}" Jul 7 00:05:41.054233 containerd[1887]: time="2025-07-07T00:05:41.054184302Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b1017e1f0771be4e60133f6832a8dabd315b2f08f59d78a725da1dd76afa46ee\" id:\"32ca77a5a601609222d1466ad2eaacc56c9e9c0ef5085e6c25b6bcc5cdb67a43\" pid:6018 exited_at:{seconds:1751846741 nanos:53620386}" Jul 7 00:05:43.129921 containerd[1887]: time="2025-07-07T00:05:43.129855368Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b1017e1f0771be4e60133f6832a8dabd315b2f08f59d78a725da1dd76afa46ee\" id:\"524e0aab4bdbe29a882a3673c27a04bc3eb69dc4dd384900258ca96dce3894c6\" pid:6040 exited_at:{seconds:1751846743 nanos:129250036}" Jul 7 00:05:43.217530 sshd[5317]: Connection closed by 10.200.16.10 port 60994 Jul 7 00:05:43.218083 sshd-session[5315]: pam_unix(sshd:session): session closed for user core Jul 7 00:05:43.221057 systemd[1]: sshd@24-10.200.20.36:22-10.200.16.10:60994.service: Deactivated successfully. 
Jul 7 00:05:43.222800 systemd[1]: session-27.scope: Deactivated successfully. Jul 7 00:05:43.223508 systemd-logind[1860]: Session 27 logged out. Waiting for processes to exit. Jul 7 00:05:43.225063 systemd-logind[1860]: Removed session 27.
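A few entries back, kubelet marked the node NotReady with reason KubeletNotReady because the CNI plugin was not yet initialized, a condition that is expected to clear once the cilium-agent finishes bringing up the datapath (the lxc_health link gaining carrier is one sign of that). A small client-go sketch for inspecting that Ready condition from outside the node; the node name is the one in the log, while the kubeconfig path is an assumption:

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Node name taken from the log; kubeconfig location is assumed.
        const nodeName = "ci-4372.0.1-a-7cca70db3c"

        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        node, err := clientset.CoreV1().Nodes().Get(context.Background(), nodeName, metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                // While the CNI is uninitialized this prints Ready=False with
                // reason KubeletNotReady, matching the condition logged above.
                fmt.Printf("Ready=%s reason=%s message=%s\n", cond.Status, cond.Reason, cond.Message)
            }
        }
    }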