Jul 6 23:45:50.110812 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490] Jul 6 23:45:50.110832 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Sun Jul 6 21:52:18 -00 2025 Jul 6 23:45:50.110838 kernel: KASLR enabled Jul 6 23:45:50.110842 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jul 6 23:45:50.110847 kernel: printk: legacy bootconsole [pl11] enabled Jul 6 23:45:50.110851 kernel: efi: EFI v2.7 by EDK II Jul 6 23:45:50.110856 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e018 RNG=0x3fd5f998 MEMRESERVE=0x3e471598 Jul 6 23:45:50.110860 kernel: random: crng init done Jul 6 23:45:50.110864 kernel: secureboot: Secure boot disabled Jul 6 23:45:50.110868 kernel: ACPI: Early table checksum verification disabled Jul 6 23:45:50.110872 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Jul 6 23:45:50.110876 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 6 23:45:50.110880 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 6 23:45:50.110884 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jul 6 23:45:50.110890 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 6 23:45:50.110894 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 6 23:45:50.110898 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 6 23:45:50.110903 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 6 23:45:50.110908 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 6 23:45:50.110912 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 6 23:45:50.110916 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jul 6 23:45:50.110920 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 6 23:45:50.110924 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jul 6 23:45:50.110929 kernel: ACPI: Use ACPI SPCR as default console: Yes Jul 6 23:45:50.110933 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jul 6 23:45:50.110937 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug Jul 6 23:45:50.110941 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug Jul 6 23:45:50.110945 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jul 6 23:45:50.110949 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jul 6 23:45:50.110954 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jul 6 23:45:50.110959 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jul 6 23:45:50.110963 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jul 6 23:45:50.110967 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jul 6 23:45:50.110971 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jul 6 23:45:50.110975 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jul 6 23:45:50.110979 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] 
hotplug Jul 6 23:45:50.110984 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff] Jul 6 23:45:50.110988 kernel: NODE_DATA(0) allocated [mem 0x1bf7fddc0-0x1bf804fff] Jul 6 23:45:50.110992 kernel: Zone ranges: Jul 6 23:45:50.110996 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jul 6 23:45:50.111003 kernel: DMA32 empty Jul 6 23:45:50.111007 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jul 6 23:45:50.111012 kernel: Device empty Jul 6 23:45:50.111016 kernel: Movable zone start for each node Jul 6 23:45:50.111020 kernel: Early memory node ranges Jul 6 23:45:50.111026 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jul 6 23:45:50.111030 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff] Jul 6 23:45:50.111035 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff] Jul 6 23:45:50.111039 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff] Jul 6 23:45:50.111043 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Jul 6 23:45:50.111048 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Jul 6 23:45:50.111052 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Jul 6 23:45:50.111056 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Jul 6 23:45:50.111061 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jul 6 23:45:50.111065 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jul 6 23:45:50.111069 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jul 6 23:45:50.111074 kernel: psci: probing for conduit method from ACPI. Jul 6 23:45:50.111079 kernel: psci: PSCIv1.1 detected in firmware. Jul 6 23:45:50.111083 kernel: psci: Using standard PSCI v0.2 function IDs Jul 6 23:45:50.111087 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Jul 6 23:45:50.111092 kernel: psci: SMC Calling Convention v1.4 Jul 6 23:45:50.111096 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jul 6 23:45:50.111100 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jul 6 23:45:50.111104 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jul 6 23:45:50.111109 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jul 6 23:45:50.111113 kernel: pcpu-alloc: [0] 0 [0] 1 Jul 6 23:45:50.111118 kernel: Detected PIPT I-cache on CPU0 Jul 6 23:45:50.111122 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) Jul 6 23:45:50.111127 kernel: CPU features: detected: GIC system register CPU interface Jul 6 23:45:50.111132 kernel: CPU features: detected: Spectre-v4 Jul 6 23:45:50.111136 kernel: CPU features: detected: Spectre-BHB Jul 6 23:45:50.111140 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 6 23:45:50.111145 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 6 23:45:50.111149 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 Jul 6 23:45:50.111153 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 6 23:45:50.111158 kernel: alternatives: applying boot alternatives Jul 6 23:45:50.111163 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=dd2d39de40482a23e9bb75390ff5ca85cd9bd34d902b8049121a8373f8cb2ef2 Jul 6 23:45:50.111168 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 6 23:45:50.111172 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 6 23:45:50.111177 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 6 23:45:50.111182 kernel: Fallback order for Node 0: 0 Jul 6 23:45:50.111186 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 Jul 6 23:45:50.111190 kernel: Policy zone: Normal Jul 6 23:45:50.111195 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 6 23:45:50.111199 kernel: software IO TLB: area num 2. Jul 6 23:45:50.111203 kernel: software IO TLB: mapped [mem 0x000000003a460000-0x000000003e460000] (64MB) Jul 6 23:45:50.111208 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 6 23:45:50.111212 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 6 23:45:50.111217 kernel: rcu: RCU event tracing is enabled. Jul 6 23:45:50.111222 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 6 23:45:50.111227 kernel: Trampoline variant of Tasks RCU enabled. Jul 6 23:45:50.111232 kernel: Tracing variant of Tasks RCU enabled. Jul 6 23:45:50.111236 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 6 23:45:50.111240 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 6 23:45:50.111245 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 6 23:45:50.111249 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jul 6 23:45:50.111254 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 6 23:45:50.111258 kernel: GICv3: 960 SPIs implemented Jul 6 23:45:50.111262 kernel: GICv3: 0 Extended SPIs implemented Jul 6 23:45:50.111266 kernel: Root IRQ handler: gic_handle_irq Jul 6 23:45:50.111271 kernel: GICv3: GICv3 features: 16 PPIs, RSS Jul 6 23:45:50.111275 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 Jul 6 23:45:50.111280 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jul 6 23:45:50.111285 kernel: ITS: No ITS available, not enabling LPIs Jul 6 23:45:50.111289 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 6 23:45:50.111294 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). Jul 6 23:45:50.111298 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 6 23:45:50.111302 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns Jul 6 23:45:50.111307 kernel: Console: colour dummy device 80x25 Jul 6 23:45:50.111312 kernel: printk: legacy console [tty1] enabled Jul 6 23:45:50.111316 kernel: ACPI: Core revision 20240827 Jul 6 23:45:50.111321 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) Jul 6 23:45:50.111326 kernel: pid_max: default: 32768 minimum: 301 Jul 6 23:45:50.111331 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 6 23:45:50.111335 kernel: landlock: Up and running. Jul 6 23:45:50.111340 kernel: SELinux: Initializing. Jul 6 23:45:50.111344 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 6 23:45:50.111349 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 6 23:45:50.111357 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x1a0000e, misc 0x31e1 Jul 6 23:45:50.111362 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0 Jul 6 23:45:50.111367 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jul 6 23:45:50.111372 kernel: rcu: Hierarchical SRCU implementation. Jul 6 23:45:50.111376 kernel: rcu: Max phase no-delay instances is 400. Jul 6 23:45:50.111381 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 6 23:45:50.111387 kernel: Remapping and enabling EFI services. Jul 6 23:45:50.111391 kernel: smp: Bringing up secondary CPUs ... Jul 6 23:45:50.111396 kernel: Detected PIPT I-cache on CPU1 Jul 6 23:45:50.111401 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jul 6 23:45:50.111406 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] Jul 6 23:45:50.111411 kernel: smp: Brought up 1 node, 2 CPUs Jul 6 23:45:50.111416 kernel: SMP: Total of 2 processors activated. 
Jul 6 23:45:50.111421 kernel: CPU: All CPU(s) started at EL1 Jul 6 23:45:50.111425 kernel: CPU features: detected: 32-bit EL0 Support Jul 6 23:45:50.111430 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jul 6 23:45:50.111435 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 6 23:45:50.111440 kernel: CPU features: detected: Common not Private translations Jul 6 23:45:50.111444 kernel: CPU features: detected: CRC32 instructions Jul 6 23:45:50.111449 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) Jul 6 23:45:50.111455 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 6 23:45:50.111460 kernel: CPU features: detected: LSE atomic instructions Jul 6 23:45:50.111464 kernel: CPU features: detected: Privileged Access Never Jul 6 23:45:50.111469 kernel: CPU features: detected: Speculation barrier (SB) Jul 6 23:45:50.111474 kernel: CPU features: detected: TLB range maintenance instructions Jul 6 23:45:50.111478 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 6 23:45:50.111483 kernel: CPU features: detected: Scalable Vector Extension Jul 6 23:45:50.111488 kernel: alternatives: applying system-wide alternatives Jul 6 23:45:50.111492 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Jul 6 23:45:50.111498 kernel: SVE: maximum available vector length 16 bytes per vector Jul 6 23:45:50.111503 kernel: SVE: default vector length 16 bytes per vector Jul 6 23:45:50.111508 kernel: Memory: 3975672K/4194160K available (11072K kernel code, 2428K rwdata, 9032K rodata, 39424K init, 1035K bss, 213688K reserved, 0K cma-reserved) Jul 6 23:45:50.111513 kernel: devtmpfs: initialized Jul 6 23:45:50.111518 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 6 23:45:50.111522 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 6 23:45:50.111527 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 6 23:45:50.111532 kernel: 0 pages in range for non-PLT usage Jul 6 23:45:50.111536 kernel: 508480 pages in range for PLT usage Jul 6 23:45:50.111542 kernel: pinctrl core: initialized pinctrl subsystem Jul 6 23:45:50.111547 kernel: SMBIOS 3.1.0 present. Jul 6 23:45:50.111551 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jul 6 23:45:50.111556 kernel: DMI: Memory slots populated: 2/2 Jul 6 23:45:50.111561 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 6 23:45:50.111565 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 6 23:45:50.111570 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 6 23:45:50.111575 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 6 23:45:50.111580 kernel: audit: initializing netlink subsys (disabled) Jul 6 23:45:50.111602 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1 Jul 6 23:45:50.111606 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 6 23:45:50.111611 kernel: cpuidle: using governor menu Jul 6 23:45:50.111616 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jul 6 23:45:50.111620 kernel: ASID allocator initialised with 32768 entries Jul 6 23:45:50.111625 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 6 23:45:50.111630 kernel: Serial: AMBA PL011 UART driver Jul 6 23:45:50.111635 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 6 23:45:50.111639 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 6 23:45:50.111645 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 6 23:45:50.111650 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 6 23:45:50.111655 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 6 23:45:50.111659 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 6 23:45:50.111664 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 6 23:45:50.111669 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 6 23:45:50.111673 kernel: ACPI: Added _OSI(Module Device) Jul 6 23:45:50.111678 kernel: ACPI: Added _OSI(Processor Device) Jul 6 23:45:50.111683 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 6 23:45:50.111688 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 6 23:45:50.111693 kernel: ACPI: Interpreter enabled Jul 6 23:45:50.111698 kernel: ACPI: Using GIC for interrupt routing Jul 6 23:45:50.111703 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jul 6 23:45:50.111707 kernel: printk: legacy console [ttyAMA0] enabled Jul 6 23:45:50.111712 kernel: printk: legacy bootconsole [pl11] disabled Jul 6 23:45:50.111717 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jul 6 23:45:50.111721 kernel: ACPI: CPU0 has been hot-added Jul 6 23:45:50.111726 kernel: ACPI: CPU1 has been hot-added Jul 6 23:45:50.111732 kernel: iommu: Default domain type: Translated Jul 6 23:45:50.111736 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 6 23:45:50.111741 kernel: efivars: Registered efivars operations Jul 6 23:45:50.111746 kernel: vgaarb: loaded Jul 6 23:45:50.111750 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 6 23:45:50.111755 kernel: VFS: Disk quotas dquot_6.6.0 Jul 6 23:45:50.111760 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 6 23:45:50.111764 kernel: pnp: PnP ACPI init Jul 6 23:45:50.111769 kernel: pnp: PnP ACPI: found 0 devices Jul 6 23:45:50.111775 kernel: NET: Registered PF_INET protocol family Jul 6 23:45:50.111780 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 6 23:45:50.111785 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 6 23:45:50.111789 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 6 23:45:50.111794 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 6 23:45:50.111799 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 6 23:45:50.111803 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 6 23:45:50.111808 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 6 23:45:50.111813 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 6 23:45:50.111819 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 6 23:45:50.111823 kernel: PCI: CLS 0 bytes, default 64 Jul 6 23:45:50.111828 kernel: kvm [1]: HYP mode not available Jul 6 23:45:50.111833 kernel: Initialise system 
trusted keyrings Jul 6 23:45:50.111837 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 6 23:45:50.111842 kernel: Key type asymmetric registered Jul 6 23:45:50.111847 kernel: Asymmetric key parser 'x509' registered Jul 6 23:45:50.111851 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 6 23:45:50.111856 kernel: io scheduler mq-deadline registered Jul 6 23:45:50.111862 kernel: io scheduler kyber registered Jul 6 23:45:50.111866 kernel: io scheduler bfq registered Jul 6 23:45:50.111871 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 6 23:45:50.111876 kernel: thunder_xcv, ver 1.0 Jul 6 23:45:50.111880 kernel: thunder_bgx, ver 1.0 Jul 6 23:45:50.111885 kernel: nicpf, ver 1.0 Jul 6 23:45:50.111890 kernel: nicvf, ver 1.0 Jul 6 23:45:50.112017 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 6 23:45:50.112070 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-06T23:45:49 UTC (1751845549) Jul 6 23:45:50.112076 kernel: efifb: probing for efifb Jul 6 23:45:50.112081 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jul 6 23:45:50.112086 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jul 6 23:45:50.112091 kernel: efifb: scrolling: redraw Jul 6 23:45:50.112095 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 6 23:45:50.112100 kernel: Console: switching to colour frame buffer device 128x48 Jul 6 23:45:50.112105 kernel: fb0: EFI VGA frame buffer device Jul 6 23:45:50.112110 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jul 6 23:45:50.112116 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 6 23:45:50.112121 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jul 6 23:45:50.112125 kernel: NET: Registered PF_INET6 protocol family Jul 6 23:45:50.112130 kernel: watchdog: NMI not fully supported Jul 6 23:45:50.112135 kernel: watchdog: Hard watchdog permanently disabled Jul 6 23:45:50.112139 kernel: Segment Routing with IPv6 Jul 6 23:45:50.112144 kernel: In-situ OAM (IOAM) with IPv6 Jul 6 23:45:50.112149 kernel: NET: Registered PF_PACKET protocol family Jul 6 23:45:50.112154 kernel: Key type dns_resolver registered Jul 6 23:45:50.112160 kernel: registered taskstats version 1 Jul 6 23:45:50.112164 kernel: Loading compiled-in X.509 certificates Jul 6 23:45:50.112169 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: 90fb300ebe1fa0773739bb35dad461c5679d8dfb' Jul 6 23:45:50.112174 kernel: Demotion targets for Node 0: null Jul 6 23:45:50.112179 kernel: Key type .fscrypt registered Jul 6 23:45:50.112183 kernel: Key type fscrypt-provisioning registered Jul 6 23:45:50.112188 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 6 23:45:50.112193 kernel: ima: Allocated hash algorithm: sha1 Jul 6 23:45:50.112198 kernel: ima: No architecture policies found Jul 6 23:45:50.112204 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 6 23:45:50.112208 kernel: clk: Disabling unused clocks Jul 6 23:45:50.112213 kernel: PM: genpd: Disabling unused power domains Jul 6 23:45:50.112218 kernel: Warning: unable to open an initial console. 
Jul 6 23:45:50.112223 kernel: Freeing unused kernel memory: 39424K Jul 6 23:45:50.112227 kernel: Run /init as init process Jul 6 23:45:50.112232 kernel: with arguments: Jul 6 23:45:50.112237 kernel: /init Jul 6 23:45:50.112241 kernel: with environment: Jul 6 23:45:50.112247 kernel: HOME=/ Jul 6 23:45:50.112251 kernel: TERM=linux Jul 6 23:45:50.112256 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 6 23:45:50.112263 systemd[1]: Successfully made /usr/ read-only. Jul 6 23:45:50.112270 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 6 23:45:50.112275 systemd[1]: Detected virtualization microsoft. Jul 6 23:45:50.112280 systemd[1]: Detected architecture arm64. Jul 6 23:45:50.112286 systemd[1]: Running in initrd. Jul 6 23:45:50.112291 systemd[1]: No hostname configured, using default hostname. Jul 6 23:45:50.112297 systemd[1]: Hostname set to . Jul 6 23:45:50.112302 systemd[1]: Initializing machine ID from random generator. Jul 6 23:45:50.112307 systemd[1]: Queued start job for default target initrd.target. Jul 6 23:45:50.112312 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:45:50.112317 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:45:50.112322 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 6 23:45:50.112329 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:45:50.112334 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 6 23:45:50.112339 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 6 23:45:50.112345 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 6 23:45:50.112350 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 6 23:45:50.112356 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:45:50.112361 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:45:50.112367 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:45:50.112372 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:45:50.112377 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:45:50.112382 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:45:50.112388 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:45:50.112393 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:45:50.112398 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 6 23:45:50.112403 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 6 23:45:50.112409 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:45:50.112414 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:45:50.112420 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jul 6 23:45:50.112425 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:45:50.112430 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 6 23:45:50.112435 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:45:50.112440 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 6 23:45:50.112446 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 6 23:45:50.112451 systemd[1]: Starting systemd-fsck-usr.service... Jul 6 23:45:50.112457 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:45:50.112463 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:45:50.112483 systemd-journald[224]: Collecting audit messages is disabled. Jul 6 23:45:50.112497 systemd-journald[224]: Journal started Jul 6 23:45:50.112512 systemd-journald[224]: Runtime Journal (/run/log/journal/2b012fa7414a45198db23b267bd34abf) is 8M, max 78.5M, 70.5M free. Jul 6 23:45:50.136637 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:45:50.148133 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:45:50.148571 systemd-modules-load[226]: Inserted module 'overlay' Jul 6 23:45:50.156044 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 6 23:45:50.188919 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 6 23:45:50.188944 kernel: Bridge firewalling registered Jul 6 23:45:50.176353 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:45:50.192913 systemd[1]: Finished systemd-fsck-usr.service. Jul 6 23:45:50.195644 systemd-modules-load[226]: Inserted module 'br_netfilter' Jul 6 23:45:50.197473 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:45:50.209054 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:45:50.220518 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:45:50.249174 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:45:50.256982 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:45:50.275237 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:45:50.300850 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:45:50.307379 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:45:50.320645 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:45:50.324839 systemd-tmpfiles[246]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 6 23:45:50.333328 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:45:50.339809 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 6 23:45:50.370541 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:45:50.377243 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jul 6 23:45:50.400071 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:45:50.412210 dracut-cmdline[260]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=dd2d39de40482a23e9bb75390ff5ca85cd9bd34d902b8049121a8373f8cb2ef2 Jul 6 23:45:50.449200 systemd-resolved[265]: Positive Trust Anchors: Jul 6 23:45:50.449220 systemd-resolved[265]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:45:50.449239 systemd-resolved[265]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:45:50.450933 systemd-resolved[265]: Defaulting to hostname 'linux'. Jul 6 23:45:50.451675 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:45:50.464946 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:45:50.562610 kernel: SCSI subsystem initialized Jul 6 23:45:50.568602 kernel: Loading iSCSI transport class v2.0-870. Jul 6 23:45:50.576608 kernel: iscsi: registered transport (tcp) Jul 6 23:45:50.590244 kernel: iscsi: registered transport (qla4xxx) Jul 6 23:45:50.590257 kernel: QLogic iSCSI HBA Driver Jul 6 23:45:50.604758 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 6 23:45:50.635487 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:45:50.642617 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 6 23:45:50.692919 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 6 23:45:50.698658 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 6 23:45:50.764614 kernel: raid6: neonx8 gen() 18521 MB/s Jul 6 23:45:50.783593 kernel: raid6: neonx4 gen() 18563 MB/s Jul 6 23:45:50.802590 kernel: raid6: neonx2 gen() 17096 MB/s Jul 6 23:45:50.822712 kernel: raid6: neonx1 gen() 15051 MB/s Jul 6 23:45:50.841695 kernel: raid6: int64x8 gen() 10558 MB/s Jul 6 23:45:50.860687 kernel: raid6: int64x4 gen() 10617 MB/s Jul 6 23:45:50.881591 kernel: raid6: int64x2 gen() 8982 MB/s Jul 6 23:45:50.905446 kernel: raid6: int64x1 gen() 7010 MB/s Jul 6 23:45:50.905458 kernel: raid6: using algorithm neonx4 gen() 18563 MB/s Jul 6 23:45:50.931186 kernel: raid6: .... 
xor() 15150 MB/s, rmw enabled Jul 6 23:45:50.931272 kernel: raid6: using neon recovery algorithm Jul 6 23:45:50.940456 kernel: xor: measuring software checksum speed Jul 6 23:45:50.940540 kernel: 8regs : 28595 MB/sec Jul 6 23:45:50.947587 kernel: 32regs : 27693 MB/sec Jul 6 23:45:50.947599 kernel: arm64_neon : 37398 MB/sec Jul 6 23:45:50.951104 kernel: xor: using function: arm64_neon (37398 MB/sec) Jul 6 23:45:50.991608 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 6 23:45:50.996903 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:45:51.008438 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:45:51.043115 systemd-udevd[473]: Using default interface naming scheme 'v255'. Jul 6 23:45:51.049079 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:45:51.060751 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 6 23:45:51.099619 dracut-pre-trigger[486]: rd.md=0: removing MD RAID activation Jul 6 23:45:51.125046 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:45:51.133086 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:45:51.186368 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:45:51.201729 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 6 23:45:51.263602 kernel: hv_vmbus: Vmbus version:5.3 Jul 6 23:45:51.265330 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:45:51.265471 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:45:51.295521 kernel: hv_vmbus: registering driver hyperv_keyboard Jul 6 23:45:51.300988 kernel: hv_vmbus: registering driver hv_storvsc Jul 6 23:45:51.300997 kernel: scsi host0: storvsc_host_t Jul 6 23:45:51.301179 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jul 6 23:45:51.301208 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 6 23:45:51.313123 kernel: scsi host1: storvsc_host_t Jul 6 23:45:51.307535 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:45:51.349342 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jul 6 23:45:51.349375 kernel: hv_vmbus: registering driver hv_netvsc Jul 6 23:45:51.349382 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 6 23:45:51.349389 kernel: hv_vmbus: registering driver hid_hyperv Jul 6 23:45:51.333772 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:45:51.387267 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jul 6 23:45:51.387285 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 6 23:45:51.387403 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jul 6 23:45:51.354520 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:45:51.385954 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:45:51.386050 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:45:51.396936 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 6 23:45:51.425365 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:45:51.453608 kernel: PTP clock support registered Jul 6 23:45:51.462581 kernel: hv_utils: Registering HyperV Utility Driver Jul 6 23:45:51.462649 kernel: hv_vmbus: registering driver hv_utils Jul 6 23:45:51.463609 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jul 6 23:45:51.475838 kernel: hv_utils: Heartbeat IC version 3.0 Jul 6 23:45:51.475887 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jul 6 23:45:51.476060 kernel: hv_utils: Shutdown IC version 3.2 Jul 6 23:45:51.482529 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 6 23:45:51.482719 kernel: hv_netvsc 002248c0-2c98-0022-48c0-2c98002248c0 eth0: VF slot 1 added Jul 6 23:45:51.490462 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jul 6 23:45:51.492717 kernel: hv_utils: TimeSync IC version 4.0 Jul 6 23:45:51.492763 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jul 6 23:45:51.489571 systemd-resolved[265]: Clock change detected. Flushing caches. Jul 6 23:45:51.510758 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#79 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jul 6 23:45:51.519740 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#86 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jul 6 23:45:51.529356 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:45:51.529410 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 6 23:45:51.540140 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jul 6 23:45:51.540390 kernel: hv_vmbus: registering driver hv_pci Jul 6 23:45:51.540399 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 6 23:45:51.549441 kernel: hv_pci ad9aeb58-4a1b-4ea9-9f32-7d978a234eaf: PCI VMBus probing: Using version 0x10004 Jul 6 23:45:51.549652 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jul 6 23:45:51.564107 kernel: hv_pci ad9aeb58-4a1b-4ea9-9f32-7d978a234eaf: PCI host bridge to bus 4a1b:00 Jul 6 23:45:51.564336 kernel: pci_bus 4a1b:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jul 6 23:45:51.564429 kernel: pci_bus 4a1b:00: No busn resource found for root bus, will use [bus 00-ff] Jul 6 23:45:51.577227 kernel: pci 4a1b:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Jul 6 23:45:51.583746 kernel: pci 4a1b:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Jul 6 23:45:51.588726 kernel: pci 4a1b:00:02.0: enabling Extended Tags Jul 6 23:45:51.606787 kernel: pci 4a1b:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 4a1b:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Jul 6 23:45:51.620456 kernel: pci_bus 4a1b:00: busn_res: [bus 00-ff] end is updated to 00 Jul 6 23:45:51.620676 kernel: pci 4a1b:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Jul 6 23:45:51.639787 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#79 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 6 23:45:51.666788 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#250 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 6 23:45:51.690949 kernel: mlx5_core 4a1b:00:02.0: enabling device (0000 -> 0002) Jul 6 23:45:51.700874 kernel: mlx5_core 4a1b:00:02.0: PTM is not supported by PCIe Jul 6 23:45:51.701060 kernel: mlx5_core 4a1b:00:02.0: firmware version: 16.30.5006 Jul 6 23:45:51.877972 kernel: hv_netvsc 002248c0-2c98-0022-48c0-2c98002248c0 eth0: VF registering: eth1 Jul 6 23:45:51.878194 kernel: mlx5_core 4a1b:00:02.0 eth1: joined to eth0 Jul 6 
23:45:51.883517 kernel: mlx5_core 4a1b:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jul 6 23:45:51.892792 kernel: mlx5_core 4a1b:00:02.0 enP18971s1: renamed from eth1 Jul 6 23:45:52.204325 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jul 6 23:45:52.244010 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 6 23:45:52.268950 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jul 6 23:45:52.274712 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jul 6 23:45:52.294834 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jul 6 23:45:52.302176 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 6 23:45:52.448747 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 6 23:45:52.454843 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:45:52.464879 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:45:52.477179 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:45:52.488342 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 6 23:45:52.512287 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:45:53.338430 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#247 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jul 6 23:45:53.351800 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:45:53.351854 disk-uuid[649]: The operation has completed successfully. Jul 6 23:45:53.422592 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 6 23:45:53.427031 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 6 23:45:53.452176 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 6 23:45:53.467938 sh[822]: Success Jul 6 23:45:53.503004 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 6 23:45:53.503073 kernel: device-mapper: uevent: version 1.0.3 Jul 6 23:45:53.508413 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 6 23:45:53.523754 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jul 6 23:45:53.723691 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 6 23:45:53.732075 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 6 23:45:53.744876 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 6 23:45:53.817654 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 6 23:45:53.817743 kernel: BTRFS: device fsid aa7ffdf7-f152-4ceb-bd0e-b3b3f8f8b296 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (840) Jul 6 23:45:53.824378 kernel: BTRFS info (device dm-0): first mount of filesystem aa7ffdf7-f152-4ceb-bd0e-b3b3f8f8b296 Jul 6 23:45:53.828670 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:45:53.832345 kernel: BTRFS info (device dm-0): using free-space-tree Jul 6 23:45:54.408390 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 6 23:45:54.412939 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
Jul 6 23:45:54.421572 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 6 23:45:54.422426 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 6 23:45:54.446522 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 6 23:45:54.473739 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (863) Jul 6 23:45:54.484798 kernel: BTRFS info (device sda6): first mount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93 Jul 6 23:45:54.484858 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:45:54.488959 kernel: BTRFS info (device sda6): using free-space-tree Jul 6 23:45:54.525743 kernel: BTRFS info (device sda6): last unmount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93 Jul 6 23:45:54.527811 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 6 23:45:54.539403 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 6 23:45:54.570607 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:45:54.583893 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:45:54.616631 systemd-networkd[1009]: lo: Link UP Jul 6 23:45:54.616642 systemd-networkd[1009]: lo: Gained carrier Jul 6 23:45:54.617890 systemd-networkd[1009]: Enumeration completed Jul 6 23:45:54.620232 systemd-networkd[1009]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:45:54.620235 systemd-networkd[1009]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:45:54.622808 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:45:54.628160 systemd[1]: Reached target network.target - Network. Jul 6 23:45:54.701721 kernel: mlx5_core 4a1b:00:02.0 enP18971s1: Link up Jul 6 23:45:54.737825 kernel: hv_netvsc 002248c0-2c98-0022-48c0-2c98002248c0 eth0: Data path switched to VF: enP18971s1 Jul 6 23:45:54.738043 systemd-networkd[1009]: enP18971s1: Link UP Jul 6 23:45:54.738094 systemd-networkd[1009]: eth0: Link UP Jul 6 23:45:54.738156 systemd-networkd[1009]: eth0: Gained carrier Jul 6 23:45:54.738165 systemd-networkd[1009]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:45:54.742882 systemd-networkd[1009]: enP18971s1: Gained carrier Jul 6 23:45:54.761743 systemd-networkd[1009]: eth0: DHCPv4 address 10.200.20.39/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 6 23:45:55.985912 systemd-networkd[1009]: enP18971s1: Gained IPv6LL Jul 6 23:45:56.020290 ignition[965]: Ignition 2.21.0 Jul 6 23:45:56.020306 ignition[965]: Stage: fetch-offline Jul 6 23:45:56.024979 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:45:56.020382 ignition[965]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:45:56.035146 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 6 23:45:56.020388 ignition[965]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:45:56.020493 ignition[965]: parsed url from cmdline: "" Jul 6 23:45:56.020495 ignition[965]: no config URL provided Jul 6 23:45:56.020498 ignition[965]: reading system config file "/usr/lib/ignition/user.ign" Jul 6 23:45:56.020503 ignition[965]: no config at "/usr/lib/ignition/user.ign" Jul 6 23:45:56.020506 ignition[965]: failed to fetch config: resource requires networking Jul 6 23:45:56.023286 ignition[965]: Ignition finished successfully Jul 6 23:45:56.070584 ignition[1021]: Ignition 2.21.0 Jul 6 23:45:56.070591 ignition[1021]: Stage: fetch Jul 6 23:45:56.070787 ignition[1021]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:45:56.070794 ignition[1021]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:45:56.070875 ignition[1021]: parsed url from cmdline: "" Jul 6 23:45:56.070878 ignition[1021]: no config URL provided Jul 6 23:45:56.070881 ignition[1021]: reading system config file "/usr/lib/ignition/user.ign" Jul 6 23:45:56.070886 ignition[1021]: no config at "/usr/lib/ignition/user.ign" Jul 6 23:45:56.070909 ignition[1021]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jul 6 23:45:56.178075 systemd-networkd[1009]: eth0: Gained IPv6LL Jul 6 23:45:56.230738 ignition[1021]: GET result: OK Jul 6 23:45:56.230802 ignition[1021]: config has been read from IMDS userdata Jul 6 23:45:56.237122 unknown[1021]: fetched base config from "system" Jul 6 23:45:56.231353 ignition[1021]: parsing config with SHA512: 721a0d2bc01f9b1a6c83a4d00e9307bf783aa11b5c5757a487cfa2b6551a38662765cc6fd0b21a4df7449cd477106a1f1a69c06eff77358ff1e75b8c87c79961 Jul 6 23:45:56.237128 unknown[1021]: fetched base config from "system" Jul 6 23:45:56.237396 ignition[1021]: fetch: fetch complete Jul 6 23:45:56.237144 unknown[1021]: fetched user config from "azure" Jul 6 23:45:56.237400 ignition[1021]: fetch: fetch passed Jul 6 23:45:56.240743 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 6 23:45:56.237465 ignition[1021]: Ignition finished successfully Jul 6 23:45:56.248721 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 6 23:45:56.286182 ignition[1028]: Ignition 2.21.0 Jul 6 23:45:56.286197 ignition[1028]: Stage: kargs Jul 6 23:45:56.286356 ignition[1028]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:45:56.293744 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 6 23:45:56.286364 ignition[1028]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:45:56.301284 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 6 23:45:56.287167 ignition[1028]: kargs: kargs passed Jul 6 23:45:56.287222 ignition[1028]: Ignition finished successfully Jul 6 23:45:56.339462 ignition[1034]: Ignition 2.21.0 Jul 6 23:45:56.339479 ignition[1034]: Stage: disks Jul 6 23:45:56.344175 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 6 23:45:56.339649 ignition[1034]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:45:56.353091 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 6 23:45:56.339657 ignition[1034]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:45:56.361833 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 6 23:45:56.340218 ignition[1034]: disks: disks passed Jul 6 23:45:56.371066 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jul 6 23:45:56.340263 ignition[1034]: Ignition finished successfully Jul 6 23:45:56.384185 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:45:56.392748 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:45:56.402569 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 6 23:45:56.493306 systemd-fsck[1042]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Jul 6 23:45:56.501940 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 6 23:45:56.508556 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 6 23:45:56.702718 kernel: EXT4-fs (sda9): mounted filesystem a6b10247-fbe6-4a25-95d9-ddd4b58604ec r/w with ordered data mode. Quota mode: none. Jul 6 23:45:56.703916 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 6 23:45:56.707723 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 6 23:45:56.735804 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:45:56.754371 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 6 23:45:56.763561 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 6 23:45:56.789300 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1056) Jul 6 23:45:56.789357 kernel: BTRFS info (device sda6): first mount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93 Jul 6 23:45:56.789554 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 6 23:45:56.818040 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:45:56.818060 kernel: BTRFS info (device sda6): using free-space-tree Jul 6 23:45:56.789601 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:45:56.819551 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 6 23:45:56.827545 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 6 23:45:56.842862 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 6 23:45:57.333130 coreos-metadata[1058]: Jul 06 23:45:57.333 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 6 23:45:57.341377 coreos-metadata[1058]: Jul 06 23:45:57.340 INFO Fetch successful Jul 6 23:45:57.341377 coreos-metadata[1058]: Jul 06 23:45:57.340 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jul 6 23:45:57.355846 coreos-metadata[1058]: Jul 06 23:45:57.351 INFO Fetch successful Jul 6 23:45:57.366245 coreos-metadata[1058]: Jul 06 23:45:57.366 INFO wrote hostname ci-4344.1.1-a-5eeae23dc4 to /sysroot/etc/hostname Jul 6 23:45:57.375377 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 6 23:45:57.593378 initrd-setup-root[1086]: cut: /sysroot/etc/passwd: No such file or directory Jul 6 23:45:57.630963 initrd-setup-root[1093]: cut: /sysroot/etc/group: No such file or directory Jul 6 23:45:57.650440 initrd-setup-root[1100]: cut: /sysroot/etc/shadow: No such file or directory Jul 6 23:45:57.658262 initrd-setup-root[1107]: cut: /sysroot/etc/gshadow: No such file or directory Jul 6 23:45:58.589777 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 6 23:45:58.596955 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Jul 6 23:45:58.617507 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 6 23:45:58.630492 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 6 23:45:58.640979 kernel: BTRFS info (device sda6): last unmount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93 Jul 6 23:45:58.658141 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 6 23:45:58.665214 ignition[1175]: INFO : Ignition 2.21.0 Jul 6 23:45:58.665214 ignition[1175]: INFO : Stage: mount Jul 6 23:45:58.665214 ignition[1175]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:45:58.665214 ignition[1175]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:45:58.665214 ignition[1175]: INFO : mount: mount passed Jul 6 23:45:58.665214 ignition[1175]: INFO : Ignition finished successfully Jul 6 23:45:58.666867 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 6 23:45:58.677048 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 6 23:45:58.704619 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:45:58.750068 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1187) Jul 6 23:45:58.750122 kernel: BTRFS info (device sda6): first mount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93 Jul 6 23:45:58.754545 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:45:58.758021 kernel: BTRFS info (device sda6): using free-space-tree Jul 6 23:45:58.760624 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 6 23:45:58.789605 ignition[1205]: INFO : Ignition 2.21.0 Jul 6 23:45:58.789605 ignition[1205]: INFO : Stage: files Jul 6 23:45:58.796591 ignition[1205]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:45:58.796591 ignition[1205]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:45:58.796591 ignition[1205]: DEBUG : files: compiled without relabeling support, skipping Jul 6 23:45:58.823241 ignition[1205]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 6 23:45:58.823241 ignition[1205]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 6 23:45:58.862797 ignition[1205]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 6 23:45:58.869052 ignition[1205]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 6 23:45:58.869052 ignition[1205]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 6 23:45:58.863243 unknown[1205]: wrote ssh authorized keys file for user: core Jul 6 23:45:58.886550 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jul 6 23:45:58.886550 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jul 6 23:45:58.997095 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 6 23:45:59.247858 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jul 6 23:45:59.247858 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 6 23:45:59.265635 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jul 6 23:45:59.728429 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 6 23:45:59.794882 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 6 23:45:59.794882 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 6 23:45:59.814246 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 6 23:45:59.814246 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:45:59.814246 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:45:59.814246 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:45:59.814246 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:45:59.814246 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:45:59.814246 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:45:59.879793 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:45:59.879793 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:45:59.879793 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 6 23:45:59.879793 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 6 23:45:59.879793 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 6 23:45:59.879793 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jul 6 23:46:00.524933 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 6 23:46:00.780418 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 6 23:46:00.780418 ignition[1205]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 6 23:46:00.811215 ignition[1205]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:46:00.825344 ignition[1205]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:46:00.825344 ignition[1205]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" 
Jul 6 23:46:00.825344 ignition[1205]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jul 6 23:46:00.865388 ignition[1205]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jul 6 23:46:00.865388 ignition[1205]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:46:00.865388 ignition[1205]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:46:00.865388 ignition[1205]: INFO : files: files passed Jul 6 23:46:00.865388 ignition[1205]: INFO : Ignition finished successfully Jul 6 23:46:00.842979 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 6 23:46:00.851141 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 6 23:46:00.896961 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 6 23:46:00.916293 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 6 23:46:00.959676 initrd-setup-root-after-ignition[1233]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:46:00.959676 initrd-setup-root-after-ignition[1233]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:46:00.916381 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 6 23:46:00.979261 initrd-setup-root-after-ignition[1237]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:46:00.927510 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:46:00.935453 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 6 23:46:00.949086 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 6 23:46:00.996032 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 6 23:46:00.996213 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 6 23:46:01.005257 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 6 23:46:01.016398 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 6 23:46:01.028227 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 6 23:46:01.029047 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 6 23:46:01.071387 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:46:01.078420 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 6 23:46:01.106248 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:46:01.111628 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:46:01.122042 systemd[1]: Stopped target timers.target - Timer Units. Jul 6 23:46:01.131188 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 6 23:46:01.131308 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:46:01.144779 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 6 23:46:01.149857 systemd[1]: Stopped target basic.target - Basic System. Jul 6 23:46:01.160805 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
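(Aside for readers tracing this boot: the Ignition "files" stage above fetched the helm and cilium archives, wrote the install script and yaml manifests, and enabled prepare-helm.service, all driven by a provisioning config that this log does not reproduce. The Python sketch below emits a minimal Ignition v3-style document of the same general shape; every path, URL, mode and unit body in it is a hypothetical placeholder, not the config this machine actually received.)

# Illustrative sketch only: build a minimal Ignition-style config similar in
# shape to what the "files" stage above appears to have consumed.
# All names, URLs and contents here are hypothetical placeholders.
import json

config = {
    "ignition": {"version": "3.4.0"},            # assumed spec version
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.17.3-linux-arm64.tar.gz",
                "mode": 420,                      # 0644
                "contents": {
                    "source": "https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz"
                },
            }
        ]
    },
    "systemd": {
        "units": [
            {
                "name": "prepare-helm.service",
                "enabled": True,                  # matches "setting preset to enabled" above
                "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n"
                            "[Service]\nType=oneshot\nExecStart=/usr/bin/true\n"
                            "[Install]\nWantedBy=multi-user.target\n",
            }
        ]
    },
}

print(json.dumps(config, indent=2))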
Jul 6 23:46:01.170905 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:46:01.180185 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 6 23:46:01.190025 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 6 23:46:01.200282 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 6 23:46:01.209563 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:46:01.220150 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 6 23:46:01.229687 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 6 23:46:01.240000 systemd[1]: Stopped target swap.target - Swaps. Jul 6 23:46:01.248482 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 6 23:46:01.248635 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:46:01.260113 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:46:01.269245 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:46:01.280139 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 6 23:46:01.280228 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:46:01.289954 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 6 23:46:01.290106 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 6 23:46:01.305812 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 6 23:46:01.305971 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:46:01.319112 systemd[1]: ignition-files.service: Deactivated successfully. Jul 6 23:46:01.319233 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 6 23:46:01.328630 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 6 23:46:01.328754 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 6 23:46:01.414760 ignition[1258]: INFO : Ignition 2.21.0 Jul 6 23:46:01.414760 ignition[1258]: INFO : Stage: umount Jul 6 23:46:01.414760 ignition[1258]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:46:01.414760 ignition[1258]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:46:01.414760 ignition[1258]: INFO : umount: umount passed Jul 6 23:46:01.414760 ignition[1258]: INFO : Ignition finished successfully Jul 6 23:46:01.341805 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 6 23:46:01.357391 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 6 23:46:01.357600 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:46:01.370828 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 6 23:46:01.383284 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 6 23:46:01.383457 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:46:01.389959 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 6 23:46:01.390043 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:46:01.415899 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 6 23:46:01.416502 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 6 23:46:01.416582 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Jul 6 23:46:01.423891 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 6 23:46:01.423980 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 6 23:46:01.431552 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 6 23:46:01.431597 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 6 23:46:01.440772 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 6 23:46:01.440809 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 6 23:46:01.444850 systemd[1]: Stopped target network.target - Network. Jul 6 23:46:01.453203 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 6 23:46:01.453247 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:46:01.461006 systemd[1]: Stopped target paths.target - Path Units. Jul 6 23:46:01.468505 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 6 23:46:01.472316 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:46:01.477200 systemd[1]: Stopped target slices.target - Slice Units. Jul 6 23:46:01.484348 systemd[1]: Stopped target sockets.target - Socket Units. Jul 6 23:46:01.493605 systemd[1]: iscsid.socket: Deactivated successfully. Jul 6 23:46:01.493650 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:46:01.498161 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 6 23:46:01.498191 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:46:01.506096 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 6 23:46:01.506143 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 6 23:46:01.514268 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 6 23:46:01.514295 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 6 23:46:01.522647 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 6 23:46:01.757864 kernel: hv_netvsc 002248c0-2c98-0022-48c0-2c98002248c0 eth0: Data path switched from VF: enP18971s1 Jul 6 23:46:01.530464 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 6 23:46:01.540083 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 6 23:46:01.540162 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 6 23:46:01.553610 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 6 23:46:01.553710 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:46:01.566610 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 6 23:46:01.568750 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 6 23:46:01.580894 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:46:01.580943 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:46:01.594820 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 6 23:46:01.608317 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 6 23:46:01.608400 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:46:01.618121 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:46:01.626653 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Jul 6 23:46:01.626795 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 6 23:46:01.646393 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 6 23:46:01.657619 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:46:01.657748 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:46:01.675367 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:46:01.675424 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 6 23:46:01.680480 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 6 23:46:01.680539 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:46:01.693680 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 6 23:46:01.693758 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:46:01.693992 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 6 23:46:01.694127 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:46:01.703885 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 6 23:46:01.703962 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 6 23:46:01.712590 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 6 23:46:01.712620 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:46:01.722043 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 6 23:46:01.722093 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:46:01.743818 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 6 23:46:01.743888 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 6 23:46:01.757935 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:46:01.757995 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:46:01.773853 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 6 23:46:01.788680 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 6 23:46:01.788782 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:46:01.807969 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 6 23:46:02.027405 systemd-journald[224]: Received SIGTERM from PID 1 (systemd). Jul 6 23:46:01.808062 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:46:01.821537 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:46:01.821604 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:46:01.836813 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 6 23:46:01.836873 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 6 23:46:01.836899 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:46:01.837248 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 6 23:46:01.838730 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Jul 6 23:46:01.848326 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 6 23:46:01.849727 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 6 23:46:01.861990 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 6 23:46:01.863735 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 6 23:46:01.874964 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 6 23:46:01.883387 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 6 23:46:01.883483 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 6 23:46:01.894324 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 6 23:46:01.923178 systemd[1]: Switching root. Jul 6 23:46:02.110513 systemd-journald[224]: Journal stopped Jul 6 23:46:06.281996 kernel: SELinux: policy capability network_peer_controls=1 Jul 6 23:46:06.282019 kernel: SELinux: policy capability open_perms=1 Jul 6 23:46:06.282027 kernel: SELinux: policy capability extended_socket_class=1 Jul 6 23:46:06.282032 kernel: SELinux: policy capability always_check_network=0 Jul 6 23:46:06.282038 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 6 23:46:06.282044 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 6 23:46:06.282050 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 6 23:46:06.282055 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 6 23:46:06.282060 kernel: SELinux: policy capability userspace_initial_context=0 Jul 6 23:46:06.282065 kernel: audit: type=1403 audit(1751845562.828:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 6 23:46:06.282072 systemd[1]: Successfully loaded SELinux policy in 161.736ms. Jul 6 23:46:06.282080 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.156ms. Jul 6 23:46:06.282086 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 6 23:46:06.282092 systemd[1]: Detected virtualization microsoft. Jul 6 23:46:06.282099 systemd[1]: Detected architecture arm64. Jul 6 23:46:06.282106 systemd[1]: Detected first boot. Jul 6 23:46:06.282112 systemd[1]: Hostname set to . Jul 6 23:46:06.282118 systemd[1]: Initializing machine ID from random generator. Jul 6 23:46:06.282124 zram_generator::config[1300]: No configuration found. Jul 6 23:46:06.282130 kernel: NET: Registered PF_VSOCK protocol family Jul 6 23:46:06.282136 systemd[1]: Populated /etc with preset unit settings. Jul 6 23:46:06.282142 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 6 23:46:06.282149 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 6 23:46:06.282155 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 6 23:46:06.282161 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 6 23:46:06.282167 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 6 23:46:06.282174 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 6 23:46:06.282180 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 6 23:46:06.282186 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
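(Aside: the entries above, and the rest of this transcript, follow the console format "Mon DD HH:MM:SS.micros source[pid]: message". A minimal sketch of tallying such entries by source, assuming one entry per input line; sources containing a bare colon, such as "zram_generator::config[1300]", are skipped by this simple pattern.)

# Illustrative sketch: count boot-log entries by their source
# ("systemd[1]", "kernel", "ignition[1258]", ...).
import re
from collections import Counter

ENTRY = re.compile(
    r"^(?P<month>[A-Z][a-z]{2}) +(?P<day>\d+) "
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d+) "
    r"(?P<source>[^:\s]+): (?P<message>.*)$"
)

def tally(lines):
    counts = Counter()
    for line in lines:
        m = ENTRY.match(line.strip())
        if m:
            counts[m.group("source")] += 1
    return counts

sample = [
    "Jul 6 23:46:01.848326 systemd[1]: network-cleanup.service: Deactivated successfully.",
    "Jul 6 23:46:06.282019 kernel: SELinux: policy capability open_perms=1",
    "Jul 6 23:46:01.414760 ignition[1258]: INFO : Stage: umount",
]
for source, n in tally(sample).most_common():
    print(f"{n}  {source}")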
Jul 6 23:46:06.282192 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 6 23:46:06.282199 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 6 23:46:06.282205 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 6 23:46:06.282211 systemd[1]: Created slice user.slice - User and Session Slice. Jul 6 23:46:06.282217 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:46:06.282223 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:46:06.282229 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 6 23:46:06.282235 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 6 23:46:06.282241 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 6 23:46:06.282248 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:46:06.282254 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 6 23:46:06.282263 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:46:06.282269 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:46:06.282276 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 6 23:46:06.282282 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 6 23:46:06.282288 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 6 23:46:06.282294 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 6 23:46:06.282301 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:46:06.282307 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:46:06.282313 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:46:06.282319 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:46:06.282325 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 6 23:46:06.282331 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 6 23:46:06.282338 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 6 23:46:06.282345 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:46:06.282351 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:46:06.282357 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:46:06.282363 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 6 23:46:06.282369 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 6 23:46:06.282376 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 6 23:46:06.282382 systemd[1]: Mounting media.mount - External Media Directory... Jul 6 23:46:06.282389 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 6 23:46:06.282396 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 6 23:46:06.282402 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jul 6 23:46:06.282409 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 6 23:46:06.282415 systemd[1]: Reached target machines.target - Containers. Jul 6 23:46:06.282421 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 6 23:46:06.282428 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:46:06.282434 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:46:06.282441 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 6 23:46:06.282447 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:46:06.282453 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:46:06.282459 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:46:06.282465 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 6 23:46:06.282471 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:46:06.282478 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 6 23:46:06.282485 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 6 23:46:06.282491 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 6 23:46:06.282497 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 6 23:46:06.282503 systemd[1]: Stopped systemd-fsck-usr.service. Jul 6 23:46:06.282510 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:46:06.282516 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:46:06.282522 kernel: fuse: init (API version 7.41) Jul 6 23:46:06.282529 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:46:06.282535 kernel: loop: module loaded Jul 6 23:46:06.282541 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 6 23:46:06.282547 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 6 23:46:06.282553 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 6 23:46:06.282559 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:46:06.282578 systemd-journald[1404]: Collecting audit messages is disabled. Jul 6 23:46:06.282593 systemd-journald[1404]: Journal started Jul 6 23:46:06.282608 systemd-journald[1404]: Runtime Journal (/run/log/journal/276accf024d8488d8f1309475fb34d1d) is 8M, max 78.5M, 70.5M free. Jul 6 23:46:05.418610 systemd[1]: Queued start job for default target multi-user.target. Jul 6 23:46:05.424205 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 6 23:46:05.424606 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 6 23:46:05.425942 systemd[1]: systemd-journald.service: Consumed 2.839s CPU time. Jul 6 23:46:06.296973 systemd[1]: verity-setup.service: Deactivated successfully. 
Jul 6 23:46:06.297045 systemd[1]: Stopped verity-setup.service. Jul 6 23:46:06.297055 kernel: ACPI: bus type drm_connector registered Jul 6 23:46:06.319294 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:46:06.320103 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 6 23:46:06.325134 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 6 23:46:06.330000 systemd[1]: Mounted media.mount - External Media Directory. Jul 6 23:46:06.334953 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 6 23:46:06.340425 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 6 23:46:06.345859 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 6 23:46:06.350465 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 6 23:46:06.355871 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:46:06.362046 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 6 23:46:06.362210 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 6 23:46:06.368639 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:46:06.368879 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:46:06.374517 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:46:06.374658 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:46:06.380275 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:46:06.380415 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:46:06.386972 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 6 23:46:06.387104 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 6 23:46:06.392302 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:46:06.392439 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:46:06.398406 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:46:06.404074 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:46:06.410244 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 6 23:46:06.417362 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 6 23:46:06.423634 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:46:06.439252 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 6 23:46:06.446054 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 6 23:46:06.458803 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 6 23:46:06.464962 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 6 23:46:06.465000 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:46:06.471114 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 6 23:46:06.481046 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 6 23:46:06.486359 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 6 23:46:06.491303 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 6 23:46:06.497378 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 6 23:46:06.503561 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:46:06.504803 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 6 23:46:06.510548 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:46:06.513856 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:46:06.526872 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 6 23:46:06.536917 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 6 23:46:06.544170 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 6 23:46:06.552599 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 6 23:46:06.559860 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 6 23:46:06.572838 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 6 23:46:06.576838 systemd-journald[1404]: Time spent on flushing to /var/log/journal/276accf024d8488d8f1309475fb34d1d is 8.681ms for 939 entries. Jul 6 23:46:06.576838 systemd-journald[1404]: System Journal (/var/log/journal/276accf024d8488d8f1309475fb34d1d) is 8M, max 2.6G, 2.6G free. Jul 6 23:46:06.635178 systemd-journald[1404]: Received client request to flush runtime journal. Jul 6 23:46:06.635243 kernel: loop0: detected capacity change from 0 to 107312 Jul 6 23:46:06.589907 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 6 23:46:06.597049 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:46:06.639859 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 6 23:46:06.650633 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 6 23:46:06.658862 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:46:06.681837 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 6 23:46:06.682765 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 6 23:46:06.761400 systemd-tmpfiles[1452]: ACLs are not supported, ignoring. Jul 6 23:46:06.761414 systemd-tmpfiles[1452]: ACLs are not supported, ignoring. Jul 6 23:46:06.766466 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:46:07.048730 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 6 23:46:07.113725 kernel: loop1: detected capacity change from 0 to 28936 Jul 6 23:46:07.420731 kernel: loop2: detected capacity change from 0 to 211168 Jul 6 23:46:07.492799 kernel: loop3: detected capacity change from 0 to 138376 Jul 6 23:46:07.853604 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 6 23:46:07.861587 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:46:07.886554 systemd-udevd[1461]: Using default interface naming scheme 'v255'. 
Jul 6 23:46:07.889813 kernel: loop4: detected capacity change from 0 to 107312 Jul 6 23:46:07.896730 kernel: loop5: detected capacity change from 0 to 28936 Jul 6 23:46:07.903729 kernel: loop6: detected capacity change from 0 to 211168 Jul 6 23:46:07.911713 kernel: loop7: detected capacity change from 0 to 138376 Jul 6 23:46:07.916117 (sd-merge)[1463]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jul 6 23:46:07.916496 (sd-merge)[1463]: Merged extensions into '/usr'. Jul 6 23:46:07.919304 systemd[1]: Reload requested from client PID 1439 ('systemd-sysext') (unit systemd-sysext.service)... Jul 6 23:46:07.919437 systemd[1]: Reloading... Jul 6 23:46:07.979816 zram_generator::config[1495]: No configuration found. Jul 6 23:46:08.048881 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:46:08.155931 systemd[1]: Reloading finished in 236 ms. Jul 6 23:46:08.210209 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:46:08.222180 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 6 23:46:08.242327 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 6 23:46:08.249045 systemd[1]: Starting ensure-sysext.service... Jul 6 23:46:08.258266 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:46:08.281160 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#203 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 6 23:46:08.275614 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:46:08.301284 systemd-tmpfiles[1588]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 6 23:46:08.301608 systemd-tmpfiles[1588]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 6 23:46:08.301851 systemd-tmpfiles[1588]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 6 23:46:08.301985 systemd-tmpfiles[1588]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 6 23:46:08.302413 systemd-tmpfiles[1588]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 6 23:46:08.302546 systemd-tmpfiles[1588]: ACLs are not supported, ignoring. Jul 6 23:46:08.302581 systemd-tmpfiles[1588]: ACLs are not supported, ignoring. Jul 6 23:46:08.305878 systemd[1]: Reload requested from client PID 1581 ('systemctl') (unit ensure-sysext.service)... Jul 6 23:46:08.306006 systemd[1]: Reloading... Jul 6 23:46:08.319680 systemd-tmpfiles[1588]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:46:08.319693 systemd-tmpfiles[1588]: Skipping /boot Jul 6 23:46:08.338739 kernel: mousedev: PS/2 mouse device common for all mice Jul 6 23:46:08.345980 systemd-tmpfiles[1588]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:46:08.345993 systemd-tmpfiles[1588]: Skipping /boot Jul 6 23:46:08.363924 kernel: hv_vmbus: registering driver hv_balloon Jul 6 23:46:08.368870 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jul 6 23:46:08.368930 kernel: hv_balloon: Memory hot add disabled on ARM64 Jul 6 23:46:08.431833 zram_generator::config[1630]: No configuration found. 
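(Aside: the sd-merge lines above name the system extensions merged into /usr on this image. A small self-contained sketch of pulling those names out of such a message; the sample line is copied from the log above.)

# Illustrative sketch: extract the extension names from an sd-merge line like
# "(sd-merge)[1463]: Using extensions 'containerd-flatcar', 'docker-flatcar', ...".
import re

def merged_extensions(message: str) -> list[str]:
    m = re.search(r"Using extensions (.+?)\.", message)
    if not m:
        return []
    return re.findall(r"'([^']+)'", m.group(1))

line = ("(sd-merge)[1463]: Using extensions 'containerd-flatcar', "
        "'docker-flatcar', 'kubernetes', 'oem-azure'.")
print(merged_extensions(line))
# ['containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure']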
Jul 6 23:46:08.501717 kernel: hv_vmbus: registering driver hyperv_fb Jul 6 23:46:08.513235 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jul 6 23:46:08.513335 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jul 6 23:46:08.519968 kernel: Console: switching to colour dummy device 80x25 Jul 6 23:46:08.524739 kernel: Console: switching to colour frame buffer device 128x48 Jul 6 23:46:08.569669 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:46:08.649457 systemd[1]: Reloading finished in 343 ms. Jul 6 23:46:08.652732 kernel: MACsec IEEE 802.1AE Jul 6 23:46:08.658918 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:46:08.708614 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 6 23:46:08.714508 systemd[1]: Finished ensure-sysext.service. Jul 6 23:46:08.727232 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 6 23:46:08.738877 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 6 23:46:08.743975 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:46:08.753476 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:46:08.761883 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:46:08.767390 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:46:08.778869 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:46:08.785178 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:46:08.786322 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 6 23:46:08.792135 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:46:08.798097 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 6 23:46:08.808861 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:46:08.813822 systemd[1]: Reached target time-set.target - System Time Set. Jul 6 23:46:08.820933 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 6 23:46:08.835205 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 6 23:46:08.842114 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:46:08.847423 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:46:08.847759 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:46:08.853412 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:46:08.854750 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:46:08.861217 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:46:08.866740 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jul 6 23:46:08.873629 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:46:08.873940 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:46:08.881739 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 6 23:46:08.894004 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:46:08.894176 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:46:08.899323 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 6 23:46:08.908374 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 6 23:46:08.914227 augenrules[1793]: No rules Jul 6 23:46:08.915065 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:46:08.915269 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:46:08.921263 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 6 23:46:09.013967 systemd-resolved[1771]: Positive Trust Anchors: Jul 6 23:46:09.014332 systemd-resolved[1771]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:46:09.014397 systemd-resolved[1771]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:46:09.017511 systemd-resolved[1771]: Using system hostname 'ci-4344.1.1-a-5eeae23dc4'. Jul 6 23:46:09.019044 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:46:09.024776 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:46:09.090814 systemd-networkd[1583]: lo: Link UP Jul 6 23:46:09.090823 systemd-networkd[1583]: lo: Gained carrier Jul 6 23:46:09.092830 systemd-networkd[1583]: Enumeration completed Jul 6 23:46:09.093262 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:46:09.093823 systemd-networkd[1583]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:46:09.093890 systemd-networkd[1583]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:46:09.099390 systemd[1]: Reached target network.target - Network. Jul 6 23:46:09.105077 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 6 23:46:09.111631 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jul 6 23:46:09.156721 kernel: mlx5_core 4a1b:00:02.0 enP18971s1: Link up Jul 6 23:46:09.179714 kernel: hv_netvsc 002248c0-2c98-0022-48c0-2c98002248c0 eth0: Data path switched to VF: enP18971s1 Jul 6 23:46:09.181130 systemd-networkd[1583]: enP18971s1: Link UP Jul 6 23:46:09.181208 systemd-networkd[1583]: eth0: Link UP Jul 6 23:46:09.181211 systemd-networkd[1583]: eth0: Gained carrier Jul 6 23:46:09.181229 systemd-networkd[1583]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:46:09.182597 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 6 23:46:09.191575 systemd-networkd[1583]: enP18971s1: Gained carrier Jul 6 23:46:09.202763 systemd-networkd[1583]: eth0: DHCPv4 address 10.200.20.39/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 6 23:46:09.236330 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 6 23:46:09.243153 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:46:09.246555 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:46:10.449871 systemd-networkd[1583]: eth0: Gained IPv6LL Jul 6 23:46:10.452217 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 6 23:46:10.459746 systemd[1]: Reached target network-online.target - Network is Online. Jul 6 23:46:10.833887 systemd-networkd[1583]: enP18971s1: Gained IPv6LL Jul 6 23:46:11.804574 ldconfig[1434]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 6 23:46:11.819913 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 6 23:46:11.827215 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 6 23:46:11.853536 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 6 23:46:11.859261 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:46:11.864147 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 6 23:46:11.872165 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 6 23:46:11.879020 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 6 23:46:11.883835 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 6 23:46:11.891572 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 6 23:46:11.897840 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 6 23:46:11.897868 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:46:11.901957 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:46:11.907388 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 6 23:46:11.914315 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 6 23:46:11.921318 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 6 23:46:11.927949 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). 
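(Aside: systemd-networkd reports above that eth0 acquired 10.200.20.39/24 with gateway 10.200.20.1 from 168.63.129.16. A short sketch interpreting that lease with the standard-library ipaddress module.)

# Illustrative sketch: the DHCPv4 lease reported above, examined with ipaddress.
import ipaddress

iface = ipaddress.ip_interface("10.200.20.39/24")   # address/prefix from the lease line
gateway = ipaddress.ip_address("10.200.20.1")

print(iface.network)                 # 10.200.20.0/24
print(iface.network.num_addresses)   # 256
print(gateway in iface.network)      # True, gateway lies inside the leased subnet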
Jul 6 23:46:11.933883 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 6 23:46:11.953466 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 6 23:46:11.958959 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 6 23:46:11.966066 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 6 23:46:11.972062 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:46:11.976521 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:46:11.981047 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:46:11.981072 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:46:11.983151 systemd[1]: Starting chronyd.service - NTP client/server... Jul 6 23:46:11.996822 systemd[1]: Starting containerd.service - containerd container runtime... Jul 6 23:46:12.004863 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 6 23:46:12.016989 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 6 23:46:12.024276 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 6 23:46:12.032229 (chronyd)[1821]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jul 6 23:46:12.035499 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 6 23:46:12.042145 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 6 23:46:12.049741 jq[1829]: false Jul 6 23:46:12.051200 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 6 23:46:12.053772 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jul 6 23:46:12.058940 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jul 6 23:46:12.060834 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:46:12.068859 KVP[1831]: KVP starting; pid is:1831 Jul 6 23:46:12.074112 chronyd[1837]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jul 6 23:46:12.075071 KVP[1831]: KVP LIC Version: 3.1 Jul 6 23:46:12.075723 kernel: hv_utils: KVP IC version 4.0 Jul 6 23:46:12.077172 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 6 23:46:12.085886 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 6 23:46:12.092424 extend-filesystems[1830]: Found /dev/sda6 Jul 6 23:46:12.097524 chronyd[1837]: Timezone right/UTC failed leap second check, ignoring Jul 6 23:46:12.099229 chronyd[1837]: Loaded seccomp filter (level 2) Jul 6 23:46:12.099310 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 6 23:46:12.109364 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 6 23:46:12.120483 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 6 23:46:12.128975 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 6 23:46:12.137534 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jul 6 23:46:12.138033 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 6 23:46:12.138933 extend-filesystems[1830]: Found /dev/sda9 Jul 6 23:46:12.143357 systemd[1]: Starting update-engine.service - Update Engine... Jul 6 23:46:12.149729 extend-filesystems[1830]: Checking size of /dev/sda9 Jul 6 23:46:12.159609 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 6 23:46:12.169539 jq[1863]: true Jul 6 23:46:12.171101 systemd[1]: Started chronyd.service - NTP client/server. Jul 6 23:46:12.182801 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 6 23:46:12.193027 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 6 23:46:12.198034 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 6 23:46:12.199011 systemd[1]: motdgen.service: Deactivated successfully. Jul 6 23:46:12.199170 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 6 23:46:12.206232 extend-filesystems[1830]: Old size kept for /dev/sda9 Jul 6 23:46:12.206674 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 6 23:46:12.207943 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 6 23:46:12.227669 update_engine[1859]: I20250706 23:46:12.226742 1859 main.cc:92] Flatcar Update Engine starting Jul 6 23:46:12.231405 systemd-logind[1853]: New seat seat0. Jul 6 23:46:12.232074 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 6 23:46:12.239357 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 6 23:46:12.239546 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 6 23:46:12.250358 systemd-logind[1853]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 6 23:46:12.251222 systemd[1]: Started systemd-logind.service - User Login Management. Jul 6 23:46:12.291065 (ntainerd)[1882]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 6 23:46:12.298707 jq[1878]: true Jul 6 23:46:12.330401 tar[1870]: linux-arm64/LICENSE Jul 6 23:46:12.334864 tar[1870]: linux-arm64/helm Jul 6 23:46:12.425326 bash[1918]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:46:12.430063 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 6 23:46:12.439409 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 6 23:46:12.444657 dbus-daemon[1824]: [system] SELinux support is enabled Jul 6 23:46:12.444876 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 6 23:46:12.454773 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 6 23:46:12.454806 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 6 23:46:12.457594 update_engine[1859]: I20250706 23:46:12.457537 1859 update_check_scheduler.cc:74] Next update check in 3m40s Jul 6 23:46:12.464686 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Jul 6 23:46:12.464755 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 6 23:46:12.476521 systemd[1]: Started update-engine.service - Update Engine. Jul 6 23:46:12.477970 dbus-daemon[1824]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 6 23:46:12.493899 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 6 23:46:12.539866 coreos-metadata[1823]: Jul 06 23:46:12.539 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 6 23:46:12.553105 coreos-metadata[1823]: Jul 06 23:46:12.552 INFO Fetch successful Jul 6 23:46:12.553105 coreos-metadata[1823]: Jul 06 23:46:12.553 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jul 6 23:46:12.559166 coreos-metadata[1823]: Jul 06 23:46:12.558 INFO Fetch successful Jul 6 23:46:12.559166 coreos-metadata[1823]: Jul 06 23:46:12.559 INFO Fetching http://168.63.129.16/machine/95e65504-6958-41fe-a913-50f512a8bb81/5146c966%2D1715%2D453f%2D86f2%2D735a0fe92e32.%5Fci%2D4344.1.1%2Da%2D5eeae23dc4?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jul 6 23:46:12.564564 coreos-metadata[1823]: Jul 06 23:46:12.564 INFO Fetch successful Jul 6 23:46:12.564564 coreos-metadata[1823]: Jul 06 23:46:12.564 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jul 6 23:46:12.572054 sshd_keygen[1861]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 6 23:46:12.578111 coreos-metadata[1823]: Jul 06 23:46:12.577 INFO Fetch successful Jul 6 23:46:12.637929 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 6 23:46:12.644189 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 6 23:46:12.652596 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 6 23:46:12.658418 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 6 23:46:12.660416 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jul 6 23:46:12.691038 systemd[1]: issuegen.service: Deactivated successfully. Jul 6 23:46:12.693692 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 6 23:46:12.705635 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jul 6 23:46:12.716992 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 6 23:46:12.737768 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 6 23:46:12.745991 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 6 23:46:12.755752 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 6 23:46:12.765407 systemd[1]: Reached target getty.target - Login Prompts. 
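(Aside: the metadata agent above fetches the Azure instance metadata service at 169.254.169.254, including the vmSize endpoint. A minimal sketch of the same request from Python; IMDS only answers requests carrying a "Metadata: true" header, and this naturally only works from inside an Azure VM. The returned size string is not shown in this log.)

# Illustrative sketch: query the same IMDS endpoint the metadata agent used above.
import urllib.request

URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
       "?api-version=2017-08-01&format=text")

req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    print(resp.read().decode())      # the VM size string (value varies per instance)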
Jul 6 23:46:12.813940 locksmithd[1949]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 6 23:46:12.934782 containerd[1882]: time="2025-07-06T23:46:12Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 6 23:46:12.934782 containerd[1882]: time="2025-07-06T23:46:12.932943256Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 6 23:46:12.940723 containerd[1882]: time="2025-07-06T23:46:12.940392848Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.384µs" Jul 6 23:46:12.940723 containerd[1882]: time="2025-07-06T23:46:12.940439520Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 6 23:46:12.940723 containerd[1882]: time="2025-07-06T23:46:12.940458752Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 6 23:46:12.940723 containerd[1882]: time="2025-07-06T23:46:12.940636776Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 6 23:46:12.940723 containerd[1882]: time="2025-07-06T23:46:12.940651144Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 6 23:46:12.940723 containerd[1882]: time="2025-07-06T23:46:12.940676064Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 6 23:46:12.941405 containerd[1882]: time="2025-07-06T23:46:12.941196424Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 6 23:46:12.941405 containerd[1882]: time="2025-07-06T23:46:12.941221088Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 6 23:46:12.941501 containerd[1882]: time="2025-07-06T23:46:12.941458528Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 6 23:46:12.941501 containerd[1882]: time="2025-07-06T23:46:12.941470256Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 6 23:46:12.941501 containerd[1882]: time="2025-07-06T23:46:12.941478152Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 6 23:46:12.941501 containerd[1882]: time="2025-07-06T23:46:12.941482936Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 6 23:46:12.941562 containerd[1882]: time="2025-07-06T23:46:12.941544960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 6 23:46:12.942414 containerd[1882]: time="2025-07-06T23:46:12.941718840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 6 23:46:12.942414 containerd[1882]: time="2025-07-06T23:46:12.941745560Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 6 23:46:12.942414 containerd[1882]: time="2025-07-06T23:46:12.941751544Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 6 23:46:12.942414 containerd[1882]: time="2025-07-06T23:46:12.941772160Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 6 23:46:12.942414 containerd[1882]: time="2025-07-06T23:46:12.941953496Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 6 23:46:12.942414 containerd[1882]: time="2025-07-06T23:46:12.942013184Z" level=info msg="metadata content store policy set" policy=shared Jul 6 23:46:12.956871 containerd[1882]: time="2025-07-06T23:46:12.956607304Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 6 23:46:12.956871 containerd[1882]: time="2025-07-06T23:46:12.956680256Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 6 23:46:12.956871 containerd[1882]: time="2025-07-06T23:46:12.956692472Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 6 23:46:12.956871 containerd[1882]: time="2025-07-06T23:46:12.956716768Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 6 23:46:12.956871 containerd[1882]: time="2025-07-06T23:46:12.956725576Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 6 23:46:12.956871 containerd[1882]: time="2025-07-06T23:46:12.956734744Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 6 23:46:12.956871 containerd[1882]: time="2025-07-06T23:46:12.956743504Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 6 23:46:12.956871 containerd[1882]: time="2025-07-06T23:46:12.956751840Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 6 23:46:12.956871 containerd[1882]: time="2025-07-06T23:46:12.956764472Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 6 23:46:12.956871 containerd[1882]: time="2025-07-06T23:46:12.956770576Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 6 23:46:12.956871 containerd[1882]: time="2025-07-06T23:46:12.956778728Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 6 23:46:12.956871 containerd[1882]: time="2025-07-06T23:46:12.956790488Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 6 23:46:12.957175 containerd[1882]: time="2025-07-06T23:46:12.956957568Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 6 23:46:12.957175 containerd[1882]: time="2025-07-06T23:46:12.956972664Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 6 23:46:12.957175 containerd[1882]: time="2025-07-06T23:46:12.956983760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 6 23:46:12.957175 
containerd[1882]: time="2025-07-06T23:46:12.956990976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 6 23:46:12.957175 containerd[1882]: time="2025-07-06T23:46:12.956998016Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 6 23:46:12.957175 containerd[1882]: time="2025-07-06T23:46:12.957005536Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 6 23:46:12.957175 containerd[1882]: time="2025-07-06T23:46:12.957016024Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 6 23:46:12.957175 containerd[1882]: time="2025-07-06T23:46:12.957023120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 6 23:46:12.957175 containerd[1882]: time="2025-07-06T23:46:12.957030808Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 6 23:46:12.957175 containerd[1882]: time="2025-07-06T23:46:12.957037112Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 6 23:46:12.957175 containerd[1882]: time="2025-07-06T23:46:12.957043776Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 6 23:46:12.957175 containerd[1882]: time="2025-07-06T23:46:12.957105048Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 6 23:46:12.957175 containerd[1882]: time="2025-07-06T23:46:12.957116880Z" level=info msg="Start snapshots syncer" Jul 6 23:46:12.957175 containerd[1882]: time="2025-07-06T23:46:12.957134744Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 6 23:46:12.957353 containerd[1882]: time="2025-07-06T23:46:12.957302672Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 6 23:46:12.957353 containerd[1882]: time="2025-07-06T23:46:12.957337728Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 6 23:46:12.958189 containerd[1882]: time="2025-07-06T23:46:12.957474576Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 6 23:46:12.958189 containerd[1882]: time="2025-07-06T23:46:12.957604216Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 6 23:46:12.958189 containerd[1882]: time="2025-07-06T23:46:12.957620776Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 6 23:46:12.958189 containerd[1882]: time="2025-07-06T23:46:12.957627576Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 6 23:46:12.958189 containerd[1882]: time="2025-07-06T23:46:12.957635376Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 6 23:46:12.958189 containerd[1882]: time="2025-07-06T23:46:12.957642896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 6 23:46:12.958189 containerd[1882]: time="2025-07-06T23:46:12.957668512Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 6 23:46:12.958189 containerd[1882]: time="2025-07-06T23:46:12.957675424Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 6 23:46:12.958189 containerd[1882]: time="2025-07-06T23:46:12.957695408Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 6 23:46:12.958189 containerd[1882]: 
time="2025-07-06T23:46:12.957725448Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 6 23:46:12.958189 containerd[1882]: time="2025-07-06T23:46:12.957733464Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 6 23:46:12.958189 containerd[1882]: time="2025-07-06T23:46:12.957767336Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 6 23:46:12.958189 containerd[1882]: time="2025-07-06T23:46:12.957779624Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 6 23:46:12.958189 containerd[1882]: time="2025-07-06T23:46:12.957785888Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 6 23:46:12.958417 containerd[1882]: time="2025-07-06T23:46:12.957791288Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 6 23:46:12.958417 containerd[1882]: time="2025-07-06T23:46:12.957795896Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 6 23:46:12.958417 containerd[1882]: time="2025-07-06T23:46:12.957801904Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 6 23:46:12.958417 containerd[1882]: time="2025-07-06T23:46:12.957809640Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 6 23:46:12.958417 containerd[1882]: time="2025-07-06T23:46:12.957857640Z" level=info msg="runtime interface created" Jul 6 23:46:12.958417 containerd[1882]: time="2025-07-06T23:46:12.957862696Z" level=info msg="created NRI interface" Jul 6 23:46:12.958417 containerd[1882]: time="2025-07-06T23:46:12.957872120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 6 23:46:12.958417 containerd[1882]: time="2025-07-06T23:46:12.957881112Z" level=info msg="Connect containerd service" Jul 6 23:46:12.958417 containerd[1882]: time="2025-07-06T23:46:12.957906448Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 6 23:46:12.959569 containerd[1882]: time="2025-07-06T23:46:12.958616064Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:46:12.959596 tar[1870]: linux-arm64/README.md Jul 6 23:46:12.977481 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 6 23:46:13.079859 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 6 23:46:13.085655 (kubelet)[2016]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:46:13.331461 kubelet[2016]: E0706 23:46:13.331321 2016 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:46:13.334812 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:46:13.334939 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:46:13.335234 systemd[1]: kubelet.service: Consumed 560ms CPU time, 256.3M memory peak. Jul 6 23:46:13.858886 containerd[1882]: time="2025-07-06T23:46:13.858757496Z" level=info msg="Start subscribing containerd event" Jul 6 23:46:13.858886 containerd[1882]: time="2025-07-06T23:46:13.858769152Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 6 23:46:13.858886 containerd[1882]: time="2025-07-06T23:46:13.858847072Z" level=info msg="Start recovering state" Jul 6 23:46:13.858886 containerd[1882]: time="2025-07-06T23:46:13.858892832Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 6 23:46:13.859229 containerd[1882]: time="2025-07-06T23:46:13.859158680Z" level=info msg="Start event monitor" Jul 6 23:46:13.859229 containerd[1882]: time="2025-07-06T23:46:13.859180744Z" level=info msg="Start cni network conf syncer for default" Jul 6 23:46:13.859229 containerd[1882]: time="2025-07-06T23:46:13.859187880Z" level=info msg="Start streaming server" Jul 6 23:46:13.859229 containerd[1882]: time="2025-07-06T23:46:13.859196472Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 6 23:46:13.859229 containerd[1882]: time="2025-07-06T23:46:13.859201304Z" level=info msg="runtime interface starting up..." Jul 6 23:46:13.859229 containerd[1882]: time="2025-07-06T23:46:13.859206496Z" level=info msg="starting plugins..." Jul 6 23:46:13.859229 containerd[1882]: time="2025-07-06T23:46:13.859218256Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 6 23:46:13.864285 containerd[1882]: time="2025-07-06T23:46:13.859479488Z" level=info msg="containerd successfully booted in 0.928052s" Jul 6 23:46:13.859599 systemd[1]: Started containerd.service - containerd container runtime. Jul 6 23:46:13.864961 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 6 23:46:13.872906 systemd[1]: Startup finished in 1.669s (kernel) + 13.043s (initrd) + 11.205s (userspace) = 25.918s. Jul 6 23:46:14.062681 login[1994]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:46:14.062833 login[1993]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:46:14.086828 systemd-logind[1853]: New session 2 of user core. Jul 6 23:46:14.088471 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 6 23:46:14.090350 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 6 23:46:14.095109 systemd-logind[1853]: New session 1 of user core. Jul 6 23:46:14.110769 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 6 23:46:14.114819 systemd[1]: Starting user@500.service - User Manager for UID 500... 
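The kubelet exit above (missing /var/lib/kubelet/config.yaml) is the normal state of a node that has not yet joined a cluster; kubeadm generates that file during `kubeadm init` or `kubeadm join`. As a hedged sketch only, with placeholder values rather than anything kubeadm would actually render for this node, the file is a KubeletConfiguration along these lines:

    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                    # matches SystemdCgroup=true in the containerd CRI config above
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    clusterDomain: cluster.local
    clusterDNS:
      - 10.96.0.10                           # placeholder cluster DNS service address
    EOF
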
Jul 6 23:46:14.123572 (systemd)[2039]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 6 23:46:14.125970 systemd-logind[1853]: New session c1 of user core. Jul 6 23:46:14.274712 waagent[1989]: 2025-07-06T23:46:14.273389Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jul 6 23:46:14.280003 waagent[1989]: 2025-07-06T23:46:14.279947Z INFO Daemon Daemon OS: flatcar 4344.1.1 Jul 6 23:46:14.280240 waagent[1989]: 2025-07-06T23:46:14.280215Z INFO Daemon Daemon Python: 3.11.12 Jul 6 23:46:14.280645 waagent[1989]: 2025-07-06T23:46:14.280612Z INFO Daemon Daemon Run daemon Jul 6 23:46:14.282934 waagent[1989]: 2025-07-06T23:46:14.282897Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4344.1.1' Jul 6 23:46:14.283147 waagent[1989]: 2025-07-06T23:46:14.283122Z INFO Daemon Daemon Using waagent for provisioning Jul 6 23:46:14.283453 waagent[1989]: 2025-07-06T23:46:14.283428Z INFO Daemon Daemon Activate resource disk Jul 6 23:46:14.283649 waagent[1989]: 2025-07-06T23:46:14.283627Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 6 23:46:14.286151 waagent[1989]: 2025-07-06T23:46:14.286114Z INFO Daemon Daemon Found device: None Jul 6 23:46:14.286389 waagent[1989]: 2025-07-06T23:46:14.286363Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 6 23:46:14.286615 waagent[1989]: 2025-07-06T23:46:14.286594Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 6 23:46:14.287346 waagent[1989]: 2025-07-06T23:46:14.287302Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 6 23:46:14.287595 waagent[1989]: 2025-07-06T23:46:14.287568Z INFO Daemon Daemon Running default provisioning handler Jul 6 23:46:14.292913 waagent[1989]: 2025-07-06T23:46:14.292863Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jul 6 23:46:14.293684 waagent[1989]: 2025-07-06T23:46:14.293641Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 6 23:46:14.293918 waagent[1989]: 2025-07-06T23:46:14.293889Z INFO Daemon Daemon cloud-init is enabled: False Jul 6 23:46:14.294085 waagent[1989]: 2025-07-06T23:46:14.294063Z INFO Daemon Daemon Copying ovf-env.xml Jul 6 23:46:14.296695 systemd[2039]: Queued start job for default target default.target. Jul 6 23:46:14.508324 systemd[2039]: Created slice app.slice - User Application Slice. Jul 6 23:46:14.508825 systemd[2039]: Reached target paths.target - Paths. Jul 6 23:46:14.508876 systemd[2039]: Reached target timers.target - Timers. Jul 6 23:46:14.510516 systemd[2039]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:46:14.518936 systemd[2039]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 6 23:46:14.519001 systemd[2039]: Reached target sockets.target - Sockets. Jul 6 23:46:14.519044 systemd[2039]: Reached target basic.target - Basic System. Jul 6 23:46:14.519067 systemd[2039]: Reached target default.target - Main User Target. Jul 6 23:46:14.519093 systemd[2039]: Startup finished in 387ms. Jul 6 23:46:14.519245 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:46:14.520608 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jul 6 23:46:14.521177 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 6 23:46:14.547865 waagent[1989]: 2025-07-06T23:46:14.547738Z INFO Daemon Daemon Successfully mounted dvd Jul 6 23:46:14.591154 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 6 23:46:14.594756 waagent[1989]: 2025-07-06T23:46:14.594538Z INFO Daemon Daemon Detect protocol endpoint Jul 6 23:46:14.600576 waagent[1989]: 2025-07-06T23:46:14.600170Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 6 23:46:14.605564 waagent[1989]: 2025-07-06T23:46:14.605497Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jul 6 23:46:14.611077 waagent[1989]: 2025-07-06T23:46:14.611039Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 6 23:46:14.615459 waagent[1989]: 2025-07-06T23:46:14.615422Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 6 23:46:14.620165 waagent[1989]: 2025-07-06T23:46:14.620131Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 6 23:46:14.752585 waagent[1989]: 2025-07-06T23:46:14.752532Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 6 23:46:14.757894 waagent[1989]: 2025-07-06T23:46:14.757871Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 6 23:46:14.762290 waagent[1989]: 2025-07-06T23:46:14.762207Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 6 23:46:14.891872 waagent[1989]: 2025-07-06T23:46:14.891787Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 6 23:46:14.898778 waagent[1989]: 2025-07-06T23:46:14.898727Z INFO Daemon Daemon Forcing an update of the goal state. Jul 6 23:46:14.908793 waagent[1989]: 2025-07-06T23:46:14.908751Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 6 23:46:14.969289 waagent[1989]: 2025-07-06T23:46:14.969250Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jul 6 23:46:14.974286 waagent[1989]: 2025-07-06T23:46:14.974246Z INFO Daemon Jul 6 23:46:14.977627 waagent[1989]: 2025-07-06T23:46:14.977595Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 0b81cf32-e0f5-4728-a25d-3c0d91201b97 eTag: 1043958903616742487 source: Fabric] Jul 6 23:46:14.986252 waagent[1989]: 2025-07-06T23:46:14.986220Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jul 6 23:46:14.991555 waagent[1989]: 2025-07-06T23:46:14.991524Z INFO Daemon Jul 6 23:46:14.993701 waagent[1989]: 2025-07-06T23:46:14.993674Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 6 23:46:15.003562 waagent[1989]: 2025-07-06T23:46:15.003530Z INFO Daemon Daemon Downloading artifacts profile blob Jul 6 23:46:15.071765 waagent[1989]: 2025-07-06T23:46:15.071612Z INFO Daemon Downloaded certificate {'thumbprint': 'FF7C357209160D9C49AA8B9D1ACED75534AE3FC9', 'hasPrivateKey': False} Jul 6 23:46:15.080292 waagent[1989]: 2025-07-06T23:46:15.080248Z INFO Daemon Downloaded certificate {'thumbprint': 'C084FE94F1396A4BE0ED65313A146D3E898F1CB7', 'hasPrivateKey': True} Jul 6 23:46:15.092131 waagent[1989]: 2025-07-06T23:46:15.092093Z INFO Daemon Fetch goal state completed Jul 6 23:46:15.107096 waagent[1989]: 2025-07-06T23:46:15.107062Z INFO Daemon Daemon Starting provisioning Jul 6 23:46:15.112230 waagent[1989]: 2025-07-06T23:46:15.112184Z INFO Daemon Daemon Handle ovf-env.xml. 
Jul 6 23:46:15.117497 waagent[1989]: 2025-07-06T23:46:15.117459Z INFO Daemon Daemon Set hostname [ci-4344.1.1-a-5eeae23dc4] Jul 6 23:46:15.128626 waagent[1989]: 2025-07-06T23:46:15.128561Z INFO Daemon Daemon Publish hostname [ci-4344.1.1-a-5eeae23dc4] Jul 6 23:46:15.135407 waagent[1989]: 2025-07-06T23:46:15.135359Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 6 23:46:15.141478 waagent[1989]: 2025-07-06T23:46:15.141438Z INFO Daemon Daemon Primary interface is [eth0] Jul 6 23:46:15.152718 systemd-networkd[1583]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:46:15.152724 systemd-networkd[1583]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:46:15.152757 systemd-networkd[1583]: eth0: DHCP lease lost Jul 6 23:46:15.154381 waagent[1989]: 2025-07-06T23:46:15.154323Z INFO Daemon Daemon Create user account if not exists Jul 6 23:46:15.159826 waagent[1989]: 2025-07-06T23:46:15.159782Z INFO Daemon Daemon User core already exists, skip useradd Jul 6 23:46:15.165511 waagent[1989]: 2025-07-06T23:46:15.165476Z INFO Daemon Daemon Configure sudoer Jul 6 23:46:15.173415 waagent[1989]: 2025-07-06T23:46:15.173354Z INFO Daemon Daemon Configure sshd Jul 6 23:46:15.180875 waagent[1989]: 2025-07-06T23:46:15.180783Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 6 23:46:15.191269 waagent[1989]: 2025-07-06T23:46:15.191107Z INFO Daemon Daemon Deploy ssh public key. Jul 6 23:46:15.202768 systemd-networkd[1583]: eth0: DHCPv4 address 10.200.20.39/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 6 23:46:16.279913 waagent[1989]: 2025-07-06T23:46:16.279866Z INFO Daemon Daemon Provisioning complete Jul 6 23:46:16.296920 waagent[1989]: 2025-07-06T23:46:16.296882Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 6 23:46:16.303959 waagent[1989]: 2025-07-06T23:46:16.303915Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
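The daemon above verifies a route to the wire server and then completes provisioning. Roughly equivalent manual checks, as a sketch (assuming the stock waagent CLI and iproute2 are on the path, as the log itself implies):

    ip route get 168.63.129.16      # should go via eth0, matching "Route to 168.63.129.16 exists"
    waagent --version               # reports the installed agent, 2.12.0.4 in this boot
    systemctl status waagent.service --no-pager
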
Jul 6 23:46:16.312680 waagent[1989]: 2025-07-06T23:46:16.312639Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jul 6 23:46:16.415748 waagent[2095]: 2025-07-06T23:46:16.415350Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jul 6 23:46:16.415748 waagent[2095]: 2025-07-06T23:46:16.415494Z INFO ExtHandler ExtHandler OS: flatcar 4344.1.1 Jul 6 23:46:16.415748 waagent[2095]: 2025-07-06T23:46:16.415531Z INFO ExtHandler ExtHandler Python: 3.11.12 Jul 6 23:46:16.415748 waagent[2095]: 2025-07-06T23:46:16.415568Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jul 6 23:46:16.451738 waagent[2095]: 2025-07-06T23:46:16.451162Z INFO ExtHandler ExtHandler Distro: flatcar-4344.1.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jul 6 23:46:16.451738 waagent[2095]: 2025-07-06T23:46:16.451378Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 6 23:46:16.451738 waagent[2095]: 2025-07-06T23:46:16.451423Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 6 23:46:16.457904 waagent[2095]: 2025-07-06T23:46:16.457850Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 6 23:46:16.464267 waagent[2095]: 2025-07-06T23:46:16.464227Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 6 23:46:16.464763 waagent[2095]: 2025-07-06T23:46:16.464729Z INFO ExtHandler Jul 6 23:46:16.464821 waagent[2095]: 2025-07-06T23:46:16.464804Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: d25269f3-f32f-4a5a-a13b-2b55a8753ff3 eTag: 1043958903616742487 source: Fabric] Jul 6 23:46:16.465052 waagent[2095]: 2025-07-06T23:46:16.465026Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jul 6 23:46:16.465450 waagent[2095]: 2025-07-06T23:46:16.465421Z INFO ExtHandler Jul 6 23:46:16.465485 waagent[2095]: 2025-07-06T23:46:16.465470Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 6 23:46:16.470462 waagent[2095]: 2025-07-06T23:46:16.470431Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 6 23:46:16.536004 waagent[2095]: 2025-07-06T23:46:16.535869Z INFO ExtHandler Downloaded certificate {'thumbprint': 'FF7C357209160D9C49AA8B9D1ACED75534AE3FC9', 'hasPrivateKey': False} Jul 6 23:46:16.536305 waagent[2095]: 2025-07-06T23:46:16.536269Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C084FE94F1396A4BE0ED65313A146D3E898F1CB7', 'hasPrivateKey': True} Jul 6 23:46:16.536619 waagent[2095]: 2025-07-06T23:46:16.536590Z INFO ExtHandler Fetch goal state completed Jul 6 23:46:16.549918 waagent[2095]: 2025-07-06T23:46:16.549863Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Jul 6 23:46:16.553564 waagent[2095]: 2025-07-06T23:46:16.553511Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2095 Jul 6 23:46:16.553679 waagent[2095]: 2025-07-06T23:46:16.553656Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 6 23:46:16.553968 waagent[2095]: 2025-07-06T23:46:16.553940Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jul 6 23:46:16.555101 waagent[2095]: 2025-07-06T23:46:16.555068Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4344.1.1', '', 'Flatcar Container Linux by Kinvolk'] Jul 6 23:46:16.555424 waagent[2095]: 2025-07-06T23:46:16.555396Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4344.1.1', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jul 6 23:46:16.555542 waagent[2095]: 2025-07-06T23:46:16.555521Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jul 6 23:46:16.556013 waagent[2095]: 2025-07-06T23:46:16.555979Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 6 23:46:16.590968 waagent[2095]: 2025-07-06T23:46:16.590931Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 6 23:46:16.591161 waagent[2095]: 2025-07-06T23:46:16.591134Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 6 23:46:16.595661 waagent[2095]: 2025-07-06T23:46:16.595627Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 6 23:46:16.600981 systemd[1]: Reload requested from client PID 2112 ('systemctl') (unit waagent.service)... Jul 6 23:46:16.601235 systemd[1]: Reloading... Jul 6 23:46:16.681756 zram_generator::config[2159]: No configuration found. Jul 6 23:46:16.747886 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:46:16.831165 systemd[1]: Reloading finished in 229 ms. 
Jul 6 23:46:16.855558 waagent[2095]: 2025-07-06T23:46:16.854840Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 6 23:46:16.855558 waagent[2095]: 2025-07-06T23:46:16.854994Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 6 23:46:17.649215 waagent[2095]: 2025-07-06T23:46:17.649129Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 6 23:46:17.649533 waagent[2095]: 2025-07-06T23:46:17.649472Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jul 6 23:46:17.650203 waagent[2095]: 2025-07-06T23:46:17.650160Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 6 23:46:17.650495 waagent[2095]: 2025-07-06T23:46:17.650458Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 6 23:46:17.650856 waagent[2095]: 2025-07-06T23:46:17.650815Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 6 23:46:17.650978 waagent[2095]: 2025-07-06T23:46:17.650927Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 6 23:46:17.651107 waagent[2095]: 2025-07-06T23:46:17.651077Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 6 23:46:17.651725 waagent[2095]: 2025-07-06T23:46:17.651346Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 6 23:46:17.651725 waagent[2095]: 2025-07-06T23:46:17.651399Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 6 23:46:17.651725 waagent[2095]: 2025-07-06T23:46:17.651506Z INFO EnvHandler ExtHandler Configure routes Jul 6 23:46:17.651725 waagent[2095]: 2025-07-06T23:46:17.651543Z INFO EnvHandler ExtHandler Gateway:None Jul 6 23:46:17.651725 waagent[2095]: 2025-07-06T23:46:17.651565Z INFO EnvHandler ExtHandler Routes:None Jul 6 23:46:17.651919 waagent[2095]: 2025-07-06T23:46:17.651794Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 6 23:46:17.651986 waagent[2095]: 2025-07-06T23:46:17.651942Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jul 6 23:46:17.652323 waagent[2095]: 2025-07-06T23:46:17.652244Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 6 23:46:17.652323 waagent[2095]: 2025-07-06T23:46:17.652284Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 6 23:46:17.653169 waagent[2095]: 2025-07-06T23:46:17.653129Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Jul 6 23:46:17.653431 waagent[2095]: 2025-07-06T23:46:17.653399Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 6 23:46:17.653431 waagent[2095]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 6 23:46:17.653431 waagent[2095]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jul 6 23:46:17.653431 waagent[2095]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 6 23:46:17.653431 waagent[2095]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 6 23:46:17.653431 waagent[2095]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 6 23:46:17.653431 waagent[2095]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 6 23:46:17.662847 waagent[2095]: 2025-07-06T23:46:17.662795Z INFO ExtHandler ExtHandler Jul 6 23:46:17.662966 waagent[2095]: 2025-07-06T23:46:17.662886Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 3db888ad-bd76-41e0-9d36-61b9e63c4f3e correlation b2b81b67-aee0-46f8-aaaa-cb3924e03df4 created: 2025-07-06T23:45:02.218224Z] Jul 6 23:46:17.663202 waagent[2095]: 2025-07-06T23:46:17.663170Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jul 6 23:46:17.663613 waagent[2095]: 2025-07-06T23:46:17.663587Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jul 6 23:46:17.732009 waagent[2095]: 2025-07-06T23:46:17.731939Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jul 6 23:46:17.732009 waagent[2095]: Try `iptables -h' or 'iptables --help' for more information.) Jul 6 23:46:17.732806 waagent[2095]: 2025-07-06T23:46:17.732692Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 1DFEFAA5-C0BF-47CD-9971-ED4E355B304F;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jul 6 23:46:17.784469 waagent[2095]: 2025-07-06T23:46:17.784395Z INFO MonitorHandler ExtHandler Network interfaces: Jul 6 23:46:17.784469 waagent[2095]: Executing ['ip', '-a', '-o', 'link']: Jul 6 23:46:17.784469 waagent[2095]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 6 23:46:17.784469 waagent[2095]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:c0:2c:98 brd ff:ff:ff:ff:ff:ff Jul 6 23:46:17.784469 waagent[2095]: 3: enP18971s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:c0:2c:98 brd ff:ff:ff:ff:ff:ff\ altname enP18971p0s2 Jul 6 23:46:17.784469 waagent[2095]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 6 23:46:17.784469 waagent[2095]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 6 23:46:17.784469 waagent[2095]: 2: eth0 inet 10.200.20.39/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 6 23:46:17.784469 waagent[2095]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 6 23:46:17.784469 waagent[2095]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jul 6 23:46:17.784469 waagent[2095]: 2: eth0 inet6 fe80::222:48ff:fec0:2c98/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 6 23:46:17.784469 waagent[2095]: 3: 
enP18971s1 inet6 fe80::222:48ff:fec0:2c98/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 6 23:46:17.976174 waagent[2095]: 2025-07-06T23:46:17.976041Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jul 6 23:46:17.976174 waagent[2095]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:46:17.976174 waagent[2095]: pkts bytes target prot opt in out source destination Jul 6 23:46:17.976174 waagent[2095]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:46:17.976174 waagent[2095]: pkts bytes target prot opt in out source destination Jul 6 23:46:17.976174 waagent[2095]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:46:17.976174 waagent[2095]: pkts bytes target prot opt in out source destination Jul 6 23:46:17.976174 waagent[2095]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 6 23:46:17.976174 waagent[2095]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 6 23:46:17.976174 waagent[2095]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 6 23:46:17.978664 waagent[2095]: 2025-07-06T23:46:17.978603Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 6 23:46:17.978664 waagent[2095]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:46:17.978664 waagent[2095]: pkts bytes target prot opt in out source destination Jul 6 23:46:17.978664 waagent[2095]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:46:17.978664 waagent[2095]: pkts bytes target prot opt in out source destination Jul 6 23:46:17.978664 waagent[2095]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:46:17.978664 waagent[2095]: pkts bytes target prot opt in out source destination Jul 6 23:46:17.978664 waagent[2095]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 6 23:46:17.978664 waagent[2095]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 6 23:46:17.978664 waagent[2095]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 6 23:46:17.978908 waagent[2095]: 2025-07-06T23:46:17.978880Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 6 23:46:18.717541 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 6 23:46:18.718782 systemd[1]: Started sshd@0-10.200.20.39:22-10.200.16.10:44638.service - OpenSSH per-connection server daemon (10.200.16.10:44638). Jul 6 23:46:19.303827 sshd[2238]: Accepted publickey for core from 10.200.16.10 port 44638 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:46:19.304975 sshd-session[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:46:19.308947 systemd-logind[1853]: New session 3 of user core. Jul 6 23:46:19.316080 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 6 23:46:19.725802 systemd[1]: Started sshd@1-10.200.20.39:22-10.200.16.10:45264.service - OpenSSH per-connection server daemon (10.200.16.10:45264). Jul 6 23:46:20.205523 sshd[2243]: Accepted publickey for core from 10.200.16.10 port 45264 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:46:20.208226 sshd-session[2243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:46:20.212099 systemd-logind[1853]: New session 4 of user core. Jul 6 23:46:20.222993 systemd[1]: Started session-4.scope - Session 4 of User core. 
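The "Failed to get firewall packets" warning above is only the read-back query tripping over iptables-nft option handling; the listings above show the rules were still programmed. A rough, approximate sketch of equivalent commands for the three security-table rules (not the agent's exact code path):

    iptables -w -t security -A OUTPUT -d 168.63.129.16 -p tcp --dport 53 -j ACCEPT
    iptables -w -t security -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT
    iptables -w -t security -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP
    iptables -w -t security -L OUTPUT -n -v -x     # list the chain
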
Jul 6 23:46:20.553246 sshd[2245]: Connection closed by 10.200.16.10 port 45264 Jul 6 23:46:20.553854 sshd-session[2243]: pam_unix(sshd:session): session closed for user core Jul 6 23:46:20.556829 systemd[1]: sshd@1-10.200.20.39:22-10.200.16.10:45264.service: Deactivated successfully. Jul 6 23:46:20.558269 systemd[1]: session-4.scope: Deactivated successfully. Jul 6 23:46:20.558971 systemd-logind[1853]: Session 4 logged out. Waiting for processes to exit. Jul 6 23:46:20.560167 systemd-logind[1853]: Removed session 4. Jul 6 23:46:20.643530 systemd[1]: Started sshd@2-10.200.20.39:22-10.200.16.10:45280.service - OpenSSH per-connection server daemon (10.200.16.10:45280). Jul 6 23:46:21.128401 sshd[2251]: Accepted publickey for core from 10.200.16.10 port 45280 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:46:21.129500 sshd-session[2251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:46:21.133324 systemd-logind[1853]: New session 5 of user core. Jul 6 23:46:21.144879 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 6 23:46:21.467205 sshd[2253]: Connection closed by 10.200.16.10 port 45280 Jul 6 23:46:21.467002 sshd-session[2251]: pam_unix(sshd:session): session closed for user core Jul 6 23:46:21.470334 systemd[1]: sshd@2-10.200.20.39:22-10.200.16.10:45280.service: Deactivated successfully. Jul 6 23:46:21.471786 systemd[1]: session-5.scope: Deactivated successfully. Jul 6 23:46:21.473315 systemd-logind[1853]: Session 5 logged out. Waiting for processes to exit. Jul 6 23:46:21.474591 systemd-logind[1853]: Removed session 5. Jul 6 23:46:21.557143 systemd[1]: Started sshd@3-10.200.20.39:22-10.200.16.10:45284.service - OpenSSH per-connection server daemon (10.200.16.10:45284). Jul 6 23:46:22.039679 sshd[2259]: Accepted publickey for core from 10.200.16.10 port 45284 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:46:22.040807 sshd-session[2259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:46:22.044530 systemd-logind[1853]: New session 6 of user core. Jul 6 23:46:22.051860 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 6 23:46:22.387233 sshd[2261]: Connection closed by 10.200.16.10 port 45284 Jul 6 23:46:22.387816 sshd-session[2259]: pam_unix(sshd:session): session closed for user core Jul 6 23:46:22.391374 systemd[1]: sshd@3-10.200.20.39:22-10.200.16.10:45284.service: Deactivated successfully. Jul 6 23:46:22.393389 systemd[1]: session-6.scope: Deactivated successfully. Jul 6 23:46:22.394796 systemd-logind[1853]: Session 6 logged out. Waiting for processes to exit. Jul 6 23:46:22.395711 systemd-logind[1853]: Removed session 6. Jul 6 23:46:22.477486 systemd[1]: Started sshd@4-10.200.20.39:22-10.200.16.10:45292.service - OpenSSH per-connection server daemon (10.200.16.10:45292). Jul 6 23:46:22.958000 sshd[2267]: Accepted publickey for core from 10.200.16.10 port 45292 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:46:22.959176 sshd-session[2267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:46:22.962934 systemd-logind[1853]: New session 7 of user core. Jul 6 23:46:22.970021 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jul 6 23:46:23.328692 sudo[2270]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 6 23:46:23.328958 sudo[2270]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:46:23.353571 sudo[2270]: pam_unix(sudo:session): session closed for user root Jul 6 23:46:23.436371 sshd[2269]: Connection closed by 10.200.16.10 port 45292 Jul 6 23:46:23.435592 sshd-session[2267]: pam_unix(sshd:session): session closed for user core Jul 6 23:46:23.439258 systemd[1]: sshd@4-10.200.20.39:22-10.200.16.10:45292.service: Deactivated successfully. Jul 6 23:46:23.440923 systemd[1]: session-7.scope: Deactivated successfully. Jul 6 23:46:23.441850 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 6 23:46:23.442775 systemd-logind[1853]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:46:23.444719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:46:23.446134 systemd-logind[1853]: Removed session 7. Jul 6 23:46:23.522925 systemd[1]: Started sshd@5-10.200.20.39:22-10.200.16.10:45306.service - OpenSSH per-connection server daemon (10.200.16.10:45306). Jul 6 23:46:23.551435 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:46:23.554220 (kubelet)[2286]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:46:23.666528 kubelet[2286]: E0706 23:46:23.666363 2286 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:46:23.669589 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:46:23.669716 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:46:23.670235 systemd[1]: kubelet.service: Consumed 118ms CPU time, 106M memory peak. Jul 6 23:46:24.009620 sshd[2279]: Accepted publickey for core from 10.200.16.10 port 45306 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:46:24.010595 sshd-session[2279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:46:24.014642 systemd-logind[1853]: New session 8 of user core. Jul 6 23:46:24.032873 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 6 23:46:24.277555 sudo[2294]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 6 23:46:24.277816 sudo[2294]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:46:24.284536 sudo[2294]: pam_unix(sudo:session): session closed for user root Jul 6 23:46:24.288592 sudo[2293]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 6 23:46:24.288826 sudo[2293]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:46:24.296688 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 6 23:46:24.326208 augenrules[2316]: No rules Jul 6 23:46:24.327378 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:46:24.327563 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jul 6 23:46:24.329386 sudo[2293]: pam_unix(sudo:session): session closed for user root Jul 6 23:46:24.403728 sshd[2292]: Connection closed by 10.200.16.10 port 45306 Jul 6 23:46:24.403925 sshd-session[2279]: pam_unix(sshd:session): session closed for user core Jul 6 23:46:24.406883 systemd[1]: sshd@5-10.200.20.39:22-10.200.16.10:45306.service: Deactivated successfully. Jul 6 23:46:24.408426 systemd[1]: session-8.scope: Deactivated successfully. Jul 6 23:46:24.409649 systemd-logind[1853]: Session 8 logged out. Waiting for processes to exit. Jul 6 23:46:24.411518 systemd-logind[1853]: Removed session 8. Jul 6 23:46:24.490938 systemd[1]: Started sshd@6-10.200.20.39:22-10.200.16.10:45312.service - OpenSSH per-connection server daemon (10.200.16.10:45312). Jul 6 23:46:24.970601 sshd[2325]: Accepted publickey for core from 10.200.16.10 port 45312 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:46:24.971813 sshd-session[2325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:46:24.975630 systemd-logind[1853]: New session 9 of user core. Jul 6 23:46:24.984154 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 6 23:46:25.237693 sudo[2328]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 6 23:46:25.238402 sudo[2328]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:46:26.721004 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 6 23:46:26.733024 (dockerd)[2345]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 6 23:46:27.452241 dockerd[2345]: time="2025-07-06T23:46:27.451971824Z" level=info msg="Starting up" Jul 6 23:46:27.452781 dockerd[2345]: time="2025-07-06T23:46:27.452757232Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 6 23:46:27.544978 systemd[1]: var-lib-docker-metacopy\x2dcheck1064196638-merged.mount: Deactivated successfully. Jul 6 23:46:27.560117 dockerd[2345]: time="2025-07-06T23:46:27.559908296Z" level=info msg="Loading containers: start." Jul 6 23:46:27.603748 kernel: Initializing XFRM netlink socket Jul 6 23:46:27.887815 systemd-networkd[1583]: docker0: Link UP Jul 6 23:46:27.904927 dockerd[2345]: time="2025-07-06T23:46:27.904869960Z" level=info msg="Loading containers: done." Jul 6 23:46:27.916283 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4287526931-merged.mount: Deactivated successfully. Jul 6 23:46:27.936618 dockerd[2345]: time="2025-07-06T23:46:27.936235744Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 6 23:46:27.936618 dockerd[2345]: time="2025-07-06T23:46:27.936338544Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 6 23:46:27.936618 dockerd[2345]: time="2025-07-06T23:46:27.936461808Z" level=info msg="Initializing buildkit" Jul 6 23:46:27.982199 dockerd[2345]: time="2025-07-06T23:46:27.982149152Z" level=info msg="Completed buildkit initialization" Jul 6 23:46:27.987796 dockerd[2345]: time="2025-07-06T23:46:27.987731712Z" level=info msg="Daemon has completed initialization" Jul 6 23:46:27.988099 systemd[1]: Started docker.service - Docker Application Container Engine. 
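With docker.service started, the daemon serves its HTTP API on the unix socket announced in the next entry. A quick sketch for confirming the engine version and storage driver reported above:

    docker version                                                   # client and daemon, 28.0.1 here
    docker info --format '{{.Driver}}'                               # expect "overlay2"
    curl -s --unix-socket /run/docker.sock http://localhost/_ping    # raw API liveness check, prints "OK"
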
Jul 6 23:46:27.988226 dockerd[2345]: time="2025-07-06T23:46:27.988172424Z" level=info msg="API listen on /run/docker.sock" Jul 6 23:46:28.545306 containerd[1882]: time="2025-07-06T23:46:28.545266800Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 6 23:46:29.262431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1782922015.mount: Deactivated successfully. Jul 6 23:46:30.387251 containerd[1882]: time="2025-07-06T23:46:30.387184304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:30.390173 containerd[1882]: time="2025-07-06T23:46:30.390130296Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351716" Jul 6 23:46:30.393647 containerd[1882]: time="2025-07-06T23:46:30.393600744Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:30.397557 containerd[1882]: time="2025-07-06T23:46:30.397485872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:30.398353 containerd[1882]: time="2025-07-06T23:46:30.397939768Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 1.852592456s" Jul 6 23:46:30.398353 containerd[1882]: time="2025-07-06T23:46:30.397971168Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\"" Jul 6 23:46:30.399040 containerd[1882]: time="2025-07-06T23:46:30.399018696Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 6 23:46:31.581758 containerd[1882]: time="2025-07-06T23:46:31.581694744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:31.585599 containerd[1882]: time="2025-07-06T23:46:31.585384104Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537623" Jul 6 23:46:31.589723 containerd[1882]: time="2025-07-06T23:46:31.589692952Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:31.595254 containerd[1882]: time="2025-07-06T23:46:31.595193032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:31.595892 containerd[1882]: time="2025-07-06T23:46:31.595753912Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.196709904s" Jul 6 23:46:31.595892 containerd[1882]: time="2025-07-06T23:46:31.595788384Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\"" Jul 6 23:46:31.596295 containerd[1882]: time="2025-07-06T23:46:31.596258560Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 6 23:46:32.667594 containerd[1882]: time="2025-07-06T23:46:32.667432328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:32.670120 containerd[1882]: time="2025-07-06T23:46:32.670081448Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293515" Jul 6 23:46:32.674296 containerd[1882]: time="2025-07-06T23:46:32.674242968Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:32.679578 containerd[1882]: time="2025-07-06T23:46:32.679516848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:32.680187 containerd[1882]: time="2025-07-06T23:46:32.680017160Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.083729472s" Jul 6 23:46:32.680187 containerd[1882]: time="2025-07-06T23:46:32.680046928Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\"" Jul 6 23:46:32.680704 containerd[1882]: time="2025-07-06T23:46:32.680657736Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 6 23:46:33.671294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1569362986.mount: Deactivated successfully. Jul 6 23:46:33.673118 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 6 23:46:33.676926 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:46:33.815251 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:46:33.818047 (kubelet)[2618]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:46:33.960629 kubelet[2618]: E0706 23:46:33.960469 2618 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:46:33.963417 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:46:33.963532 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 6 23:46:33.964081 systemd[1]: kubelet.service: Consumed 115ms CPU time, 106.9M memory peak. Jul 6 23:46:35.020453 containerd[1882]: time="2025-07-06T23:46:35.020394488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:35.023158 containerd[1882]: time="2025-07-06T23:46:35.023112952Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199472" Jul 6 23:46:35.027541 containerd[1882]: time="2025-07-06T23:46:35.027488808Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:35.031665 containerd[1882]: time="2025-07-06T23:46:35.031614112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:35.032023 containerd[1882]: time="2025-07-06T23:46:35.031857848Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 2.35114344s" Jul 6 23:46:35.032023 containerd[1882]: time="2025-07-06T23:46:35.031887704Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\"" Jul 6 23:46:35.032311 containerd[1882]: time="2025-07-06T23:46:35.032287864Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 6 23:46:35.799731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2803412201.mount: Deactivated successfully. 
Jul 6 23:46:36.041742 chronyd[1837]: Selected source PHC0 Jul 6 23:46:36.783447 containerd[1882]: time="2025-07-06T23:46:36.783367260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:36.787770 containerd[1882]: time="2025-07-06T23:46:36.787719119Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Jul 6 23:46:36.793574 containerd[1882]: time="2025-07-06T23:46:36.793493344Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:36.799693 containerd[1882]: time="2025-07-06T23:46:36.799627334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:36.800531 containerd[1882]: time="2025-07-06T23:46:36.800383433Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.768063747s" Jul 6 23:46:36.800531 containerd[1882]: time="2025-07-06T23:46:36.800424343Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jul 6 23:46:36.801116 containerd[1882]: time="2025-07-06T23:46:36.801012002Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 6 23:46:37.379960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1034878259.mount: Deactivated successfully. 
Jul 6 23:46:37.410373 containerd[1882]: time="2025-07-06T23:46:37.409867271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:46:37.413431 containerd[1882]: time="2025-07-06T23:46:37.413398126Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jul 6 23:46:37.418941 containerd[1882]: time="2025-07-06T23:46:37.418907284Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:46:37.425117 containerd[1882]: time="2025-07-06T23:46:37.425066930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:46:37.425566 containerd[1882]: time="2025-07-06T23:46:37.425538090Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 624.498757ms" Jul 6 23:46:37.425657 containerd[1882]: time="2025-07-06T23:46:37.425644922Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 6 23:46:37.426225 containerd[1882]: time="2025-07-06T23:46:37.426181378Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 6 23:46:38.055919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount614557627.mount: Deactivated successfully. 
Jul 6 23:46:40.214125 containerd[1882]: time="2025-07-06T23:46:40.214065732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:40.637206 containerd[1882]: time="2025-07-06T23:46:40.637156516Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334599" Jul 6 23:46:40.688798 containerd[1882]: time="2025-07-06T23:46:40.688729844Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:40.694191 containerd[1882]: time="2025-07-06T23:46:40.694107804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:40.694816 containerd[1882]: time="2025-07-06T23:46:40.694680532Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.268326442s" Jul 6 23:46:40.694816 containerd[1882]: time="2025-07-06T23:46:40.694723132Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jul 6 23:46:44.077552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 6 23:46:44.079994 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:46:44.092042 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 6 23:46:44.092298 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 6 23:46:44.092774 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:46:44.095730 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:46:44.119391 systemd[1]: Reload requested from client PID 2769 ('systemctl') (unit session-9.scope)... Jul 6 23:46:44.119405 systemd[1]: Reloading... Jul 6 23:46:44.228111 zram_generator::config[2814]: No configuration found. Jul 6 23:46:44.298574 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:46:44.384143 systemd[1]: Reloading finished in 264 ms. Jul 6 23:46:44.432121 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 6 23:46:44.432185 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 6 23:46:44.432404 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:46:44.432451 systemd[1]: kubelet.service: Consumed 76ms CPU time, 95M memory peak. Jul 6 23:46:44.433868 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:46:44.676352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 6 23:46:44.683188 (kubelet)[2882]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:46:44.805650 kubelet[2882]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:46:44.805650 kubelet[2882]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:46:44.805650 kubelet[2882]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:46:44.806016 kubelet[2882]: I0706 23:46:44.805679 2882 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:46:45.403430 kubelet[2882]: I0706 23:46:45.403387 2882 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 6 23:46:45.403430 kubelet[2882]: I0706 23:46:45.403421 2882 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:46:45.403605 kubelet[2882]: I0706 23:46:45.403589 2882 server.go:956] "Client rotation is on, will bootstrap in background" Jul 6 23:46:45.655467 kubelet[2882]: E0706 23:46:45.655104 2882 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.39:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 6 23:46:45.655930 kubelet[2882]: I0706 23:46:45.655793 2882 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:46:45.660795 kubelet[2882]: I0706 23:46:45.660765 2882 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 6 23:46:45.664363 kubelet[2882]: I0706 23:46:45.664333 2882 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:46:45.665142 kubelet[2882]: I0706 23:46:45.665101 2882 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:46:45.665271 kubelet[2882]: I0706 23:46:45.665145 2882 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-a-5eeae23dc4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:46:45.665345 kubelet[2882]: I0706 23:46:45.665278 2882 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:46:45.665345 kubelet[2882]: I0706 23:46:45.665285 2882 container_manager_linux.go:303] "Creating device plugin manager" Jul 6 23:46:45.665453 kubelet[2882]: I0706 23:46:45.665437 2882 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:46:45.689291 kubelet[2882]: I0706 23:46:45.689257 2882 kubelet.go:480] "Attempting to sync node with API server" Jul 6 23:46:45.689399 kubelet[2882]: I0706 23:46:45.689383 2882 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:46:45.691163 kubelet[2882]: I0706 23:46:45.691140 2882 kubelet.go:386] "Adding apiserver pod source" Jul 6 23:46:45.692058 kubelet[2882]: I0706 23:46:45.692045 2882 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:46:45.693050 kubelet[2882]: E0706 23:46:45.692825 2882 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-a-5eeae23dc4&limit=500&resourceVersion=0\": dial tcp 10.200.20.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 6 23:46:45.694230 kubelet[2882]: E0706 23:46:45.694206 2882 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Jul 6 23:46:45.694402 kubelet[2882]: I0706 23:46:45.694390 2882 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 6 23:46:45.694892 kubelet[2882]: I0706 23:46:45.694875 2882 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 6 23:46:45.695008 kubelet[2882]: W0706 23:46:45.694998 2882 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 6 23:46:45.697730 kubelet[2882]: I0706 23:46:45.697604 2882 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:46:45.697730 kubelet[2882]: I0706 23:46:45.697646 2882 server.go:1289] "Started kubelet" Jul 6 23:46:45.700093 kubelet[2882]: I0706 23:46:45.700018 2882 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:46:45.700359 kubelet[2882]: I0706 23:46:45.700336 2882 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:46:45.700602 kubelet[2882]: I0706 23:46:45.700583 2882 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:46:45.701557 kubelet[2882]: I0706 23:46:45.701046 2882 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:46:45.701830 kubelet[2882]: I0706 23:46:45.701813 2882 server.go:317] "Adding debug handlers to kubelet server" Jul 6 23:46:45.705199 kubelet[2882]: I0706 23:46:45.705158 2882 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:46:45.706039 kubelet[2882]: I0706 23:46:45.706013 2882 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:46:45.706205 kubelet[2882]: E0706 23:46:45.706178 2882 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-5eeae23dc4\" not found" Jul 6 23:46:45.707466 kubelet[2882]: I0706 23:46:45.707436 2882 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:46:45.707531 kubelet[2882]: I0706 23:46:45.707490 2882 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:46:45.708893 kubelet[2882]: E0706 23:46:45.708202 2882 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.39:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.39:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.1.1-a-5eeae23dc4.184fce4bf4ef9c0d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.1-a-5eeae23dc4,UID:ci-4344.1.1-a-5eeae23dc4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.1-a-5eeae23dc4,},FirstTimestamp:2025-07-06 23:46:45.697625101 +0000 UTC m=+1.011216990,LastTimestamp:2025-07-06 23:46:45.697625101 +0000 UTC m=+1.011216990,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.1-a-5eeae23dc4,}" Jul 6 23:46:45.710358 kubelet[2882]: E0706 23:46:45.710320 2882 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
10.200.20.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 6 23:46:45.710415 kubelet[2882]: E0706 23:46:45.710384 2882 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-a-5eeae23dc4?timeout=10s\": dial tcp 10.200.20.39:6443: connect: connection refused" interval="200ms" Jul 6 23:46:45.711136 kubelet[2882]: I0706 23:46:45.710505 2882 factory.go:223] Registration of the systemd container factory successfully Jul 6 23:46:45.711136 kubelet[2882]: I0706 23:46:45.710575 2882 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:46:45.715928 kubelet[2882]: I0706 23:46:45.715903 2882 factory.go:223] Registration of the containerd container factory successfully Jul 6 23:46:45.719806 kubelet[2882]: E0706 23:46:45.719779 2882 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:46:45.731061 kubelet[2882]: I0706 23:46:45.731036 2882 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:46:45.731061 kubelet[2882]: I0706 23:46:45.731068 2882 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:46:45.731186 kubelet[2882]: I0706 23:46:45.731088 2882 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:46:45.806425 kubelet[2882]: E0706 23:46:45.806379 2882 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-5eeae23dc4\" not found" Jul 6 23:46:45.907505 kubelet[2882]: E0706 23:46:45.906745 2882 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-5eeae23dc4\" not found" Jul 6 23:46:45.911341 kubelet[2882]: E0706 23:46:45.911299 2882 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-a-5eeae23dc4?timeout=10s\": dial tcp 10.200.20.39:6443: connect: connection refused" interval="400ms" Jul 6 23:46:46.007612 kubelet[2882]: E0706 23:46:46.007556 2882 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-5eeae23dc4\" not found" Jul 6 23:46:46.108022 kubelet[2882]: E0706 23:46:46.107975 2882 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-5eeae23dc4\" not found" Jul 6 23:46:46.176107 kubelet[2882]: I0706 23:46:46.123496 2882 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 6 23:46:46.176107 kubelet[2882]: I0706 23:46:46.124395 2882 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 6 23:46:46.176107 kubelet[2882]: I0706 23:46:46.124412 2882 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 6 23:46:46.176107 kubelet[2882]: I0706 23:46:46.124432 2882 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 6 23:46:46.176107 kubelet[2882]: I0706 23:46:46.124437 2882 kubelet.go:2436] "Starting kubelet main sync loop" Jul 6 23:46:46.176107 kubelet[2882]: E0706 23:46:46.124474 2882 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:46:46.176107 kubelet[2882]: E0706 23:46:46.125637 2882 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 6 23:46:46.177651 kubelet[2882]: I0706 23:46:46.177626 2882 policy_none.go:49] "None policy: Start" Jul 6 23:46:46.177651 kubelet[2882]: I0706 23:46:46.177656 2882 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:46:46.177749 kubelet[2882]: I0706 23:46:46.177668 2882 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:46:46.202595 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 6 23:46:46.208503 kubelet[2882]: E0706 23:46:46.208452 2882 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-5eeae23dc4\" not found" Jul 6 23:46:46.216389 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 6 23:46:46.219517 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 6 23:46:46.225238 kubelet[2882]: E0706 23:46:46.225212 2882 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 6 23:46:46.227558 kubelet[2882]: E0706 23:46:46.227534 2882 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 6 23:46:46.227869 kubelet[2882]: I0706 23:46:46.227854 2882 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:46:46.228000 kubelet[2882]: I0706 23:46:46.227968 2882 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:46:46.228326 kubelet[2882]: I0706 23:46:46.228309 2882 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:46:46.229861 kubelet[2882]: E0706 23:46:46.229728 2882 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 6 23:46:46.229861 kubelet[2882]: E0706 23:46:46.229767 2882 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.1.1-a-5eeae23dc4\" not found" Jul 6 23:46:46.312322 kubelet[2882]: E0706 23:46:46.312272 2882 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-a-5eeae23dc4?timeout=10s\": dial tcp 10.200.20.39:6443: connect: connection refused" interval="800ms" Jul 6 23:46:46.329995 kubelet[2882]: I0706 23:46:46.329968 2882 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:46.330319 kubelet[2882]: E0706 23:46:46.330298 2882 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.39:6443/api/v1/nodes\": dial tcp 10.200.20.39:6443: connect: connection refused" node="ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:46.438054 systemd[1]: Created slice kubepods-burstable-podb94547ce88ddd2fbde7661e2646e4dc8.slice - libcontainer container kubepods-burstable-podb94547ce88ddd2fbde7661e2646e4dc8.slice. Jul 6 23:46:46.448724 kubelet[2882]: E0706 23:46:46.448397 2882 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-5eeae23dc4\" not found" node="ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:46.452341 systemd[1]: Created slice kubepods-burstable-pod68b882555a1df3a1df39673196721cc8.slice - libcontainer container kubepods-burstable-pod68b882555a1df3a1df39673196721cc8.slice. Jul 6 23:46:46.454113 kubelet[2882]: E0706 23:46:46.453965 2882 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-5eeae23dc4\" not found" node="ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:46.469633 systemd[1]: Created slice kubepods-burstable-pod9e2c2fd244687c66e5b863090aa328bf.slice - libcontainer container kubepods-burstable-pod9e2c2fd244687c66e5b863090aa328bf.slice. 
Jul 6 23:46:46.471377 kubelet[2882]: E0706 23:46:46.471339 2882 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-5eeae23dc4\" not found" node="ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:46.512686 kubelet[2882]: I0706 23:46:46.512642 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b94547ce88ddd2fbde7661e2646e4dc8-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-a-5eeae23dc4\" (UID: \"b94547ce88ddd2fbde7661e2646e4dc8\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:46.512779 kubelet[2882]: I0706 23:46:46.512690 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/68b882555a1df3a1df39673196721cc8-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-a-5eeae23dc4\" (UID: \"68b882555a1df3a1df39673196721cc8\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:46.512779 kubelet[2882]: I0706 23:46:46.512748 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/68b882555a1df3a1df39673196721cc8-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-a-5eeae23dc4\" (UID: \"68b882555a1df3a1df39673196721cc8\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:46.512779 kubelet[2882]: I0706 23:46:46.512759 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/68b882555a1df3a1df39673196721cc8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-a-5eeae23dc4\" (UID: \"68b882555a1df3a1df39673196721cc8\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:46.512832 kubelet[2882]: I0706 23:46:46.512775 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9e2c2fd244687c66e5b863090aa328bf-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-a-5eeae23dc4\" (UID: \"9e2c2fd244687c66e5b863090aa328bf\") " pod="kube-system/kube-scheduler-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:46.512832 kubelet[2882]: I0706 23:46:46.512822 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b94547ce88ddd2fbde7661e2646e4dc8-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-a-5eeae23dc4\" (UID: \"b94547ce88ddd2fbde7661e2646e4dc8\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:46.512869 kubelet[2882]: I0706 23:46:46.512833 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b94547ce88ddd2fbde7661e2646e4dc8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-a-5eeae23dc4\" (UID: \"b94547ce88ddd2fbde7661e2646e4dc8\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:46.512869 kubelet[2882]: I0706 23:46:46.512853 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/68b882555a1df3a1df39673196721cc8-ca-certs\") pod 
\"kube-controller-manager-ci-4344.1.1-a-5eeae23dc4\" (UID: \"68b882555a1df3a1df39673196721cc8\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:46.512899 kubelet[2882]: I0706 23:46:46.512879 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/68b882555a1df3a1df39673196721cc8-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-a-5eeae23dc4\" (UID: \"68b882555a1df3a1df39673196721cc8\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:46.532286 kubelet[2882]: I0706 23:46:46.532201 2882 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:46.532561 kubelet[2882]: E0706 23:46:46.532533 2882 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.39:6443/api/v1/nodes\": dial tcp 10.200.20.39:6443: connect: connection refused" node="ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:46.548096 kubelet[2882]: E0706 23:46:46.548055 2882 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 6 23:46:46.714567 kubelet[2882]: E0706 23:46:46.714440 2882 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-a-5eeae23dc4&limit=500&resourceVersion=0\": dial tcp 10.200.20.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 6 23:46:46.750056 containerd[1882]: time="2025-07-06T23:46:46.750005960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-a-5eeae23dc4,Uid:b94547ce88ddd2fbde7661e2646e4dc8,Namespace:kube-system,Attempt:0,}" Jul 6 23:46:46.755169 containerd[1882]: time="2025-07-06T23:46:46.755030190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-a-5eeae23dc4,Uid:68b882555a1df3a1df39673196721cc8,Namespace:kube-system,Attempt:0,}" Jul 6 23:46:46.773144 containerd[1882]: time="2025-07-06T23:46:46.773001203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-a-5eeae23dc4,Uid:9e2c2fd244687c66e5b863090aa328bf,Namespace:kube-system,Attempt:0,}" Jul 6 23:46:46.846465 containerd[1882]: time="2025-07-06T23:46:46.846418351Z" level=info msg="connecting to shim 12f20424501c33e71bbfa58d795259abfc399866773c374e29d242fbe48f5e96" address="unix:///run/containerd/s/2727347cf73ca01a7b2ecaa007d7b41b18596792fac4c779211267d89d75f455" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:46:46.847166 containerd[1882]: time="2025-07-06T23:46:46.847088884Z" level=info msg="connecting to shim 8e0b852be13a41abae1dcab1d89e22122d65d11a6f712e8cc188f1cf9385b0dc" address="unix:///run/containerd/s/c8869da462e44be26e85fb2f32f465e287c72e26d266497ad42512e14a163b31" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:46:46.872870 systemd[1]: Started cri-containerd-8e0b852be13a41abae1dcab1d89e22122d65d11a6f712e8cc188f1cf9385b0dc.scope - libcontainer container 8e0b852be13a41abae1dcab1d89e22122d65d11a6f712e8cc188f1cf9385b0dc. 
Jul 6 23:46:46.877256 systemd[1]: Started cri-containerd-12f20424501c33e71bbfa58d795259abfc399866773c374e29d242fbe48f5e96.scope - libcontainer container 12f20424501c33e71bbfa58d795259abfc399866773c374e29d242fbe48f5e96. Jul 6 23:46:46.878816 containerd[1882]: time="2025-07-06T23:46:46.878595698Z" level=info msg="connecting to shim c7bdb8948b90ce0777ed91d024812e90737d8fa9c1468af36e9eea2625ce6c19" address="unix:///run/containerd/s/e8109beb214c9d85711ec7f4c16dba00b98cf4c3fbcb5982a592f55e4be48429" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:46:46.902863 systemd[1]: Started cri-containerd-c7bdb8948b90ce0777ed91d024812e90737d8fa9c1468af36e9eea2625ce6c19.scope - libcontainer container c7bdb8948b90ce0777ed91d024812e90737d8fa9c1468af36e9eea2625ce6c19. Jul 6 23:46:46.937227 kubelet[2882]: I0706 23:46:46.936914 2882 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:46.937779 kubelet[2882]: E0706 23:46:46.937612 2882 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.39:6443/api/v1/nodes\": dial tcp 10.200.20.39:6443: connect: connection refused" node="ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:46.938678 containerd[1882]: time="2025-07-06T23:46:46.938537614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-a-5eeae23dc4,Uid:68b882555a1df3a1df39673196721cc8,Namespace:kube-system,Attempt:0,} returns sandbox id \"12f20424501c33e71bbfa58d795259abfc399866773c374e29d242fbe48f5e96\"" Jul 6 23:46:46.944376 containerd[1882]: time="2025-07-06T23:46:46.944325788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-a-5eeae23dc4,Uid:b94547ce88ddd2fbde7661e2646e4dc8,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e0b852be13a41abae1dcab1d89e22122d65d11a6f712e8cc188f1cf9385b0dc\"" Jul 6 23:46:46.951563 containerd[1882]: time="2025-07-06T23:46:46.951517638Z" level=info msg="CreateContainer within sandbox \"12f20424501c33e71bbfa58d795259abfc399866773c374e29d242fbe48f5e96\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:46:46.956875 containerd[1882]: time="2025-07-06T23:46:46.956833229Z" level=info msg="CreateContainer within sandbox \"8e0b852be13a41abae1dcab1d89e22122d65d11a6f712e8cc188f1cf9385b0dc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:46:46.957639 containerd[1882]: time="2025-07-06T23:46:46.956849757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-a-5eeae23dc4,Uid:9e2c2fd244687c66e5b863090aa328bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7bdb8948b90ce0777ed91d024812e90737d8fa9c1468af36e9eea2625ce6c19\"" Jul 6 23:46:46.965668 containerd[1882]: time="2025-07-06T23:46:46.965562095Z" level=info msg="CreateContainer within sandbox \"c7bdb8948b90ce0777ed91d024812e90737d8fa9c1468af36e9eea2625ce6c19\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:46:46.989239 containerd[1882]: time="2025-07-06T23:46:46.989145524Z" level=info msg="Container 14acd1028765a1bf84669ff0b6e17dfdc6083af78058ee889ab5819577cdaa2d: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:46:47.002407 containerd[1882]: time="2025-07-06T23:46:47.002354412Z" level=info msg="Container 58b6e3d6861ce34aeb3ae74379e548f72aa779465404c4aa5ed2b8f5f75d8d13: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:46:47.017342 containerd[1882]: time="2025-07-06T23:46:47.016872316Z" level=info msg="Container 
710d2841c3a5766db8954ffe55e4ff667256b6829673dcd0f2fc5d630af3075c: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:46:47.033134 containerd[1882]: time="2025-07-06T23:46:47.033092226Z" level=info msg="CreateContainer within sandbox \"8e0b852be13a41abae1dcab1d89e22122d65d11a6f712e8cc188f1cf9385b0dc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"58b6e3d6861ce34aeb3ae74379e548f72aa779465404c4aa5ed2b8f5f75d8d13\"" Jul 6 23:46:47.034209 containerd[1882]: time="2025-07-06T23:46:47.034166531Z" level=info msg="StartContainer for \"58b6e3d6861ce34aeb3ae74379e548f72aa779465404c4aa5ed2b8f5f75d8d13\"" Jul 6 23:46:47.036846 containerd[1882]: time="2025-07-06T23:46:47.036814775Z" level=info msg="connecting to shim 58b6e3d6861ce34aeb3ae74379e548f72aa779465404c4aa5ed2b8f5f75d8d13" address="unix:///run/containerd/s/c8869da462e44be26e85fb2f32f465e287c72e26d266497ad42512e14a163b31" protocol=ttrpc version=3 Jul 6 23:46:47.039875 containerd[1882]: time="2025-07-06T23:46:47.039832509Z" level=info msg="CreateContainer within sandbox \"12f20424501c33e71bbfa58d795259abfc399866773c374e29d242fbe48f5e96\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"14acd1028765a1bf84669ff0b6e17dfdc6083af78058ee889ab5819577cdaa2d\"" Jul 6 23:46:47.041753 containerd[1882]: time="2025-07-06T23:46:47.041063524Z" level=info msg="StartContainer for \"14acd1028765a1bf84669ff0b6e17dfdc6083af78058ee889ab5819577cdaa2d\"" Jul 6 23:46:47.042092 containerd[1882]: time="2025-07-06T23:46:47.042055747Z" level=info msg="connecting to shim 14acd1028765a1bf84669ff0b6e17dfdc6083af78058ee889ab5819577cdaa2d" address="unix:///run/containerd/s/2727347cf73ca01a7b2ecaa007d7b41b18596792fac4c779211267d89d75f455" protocol=ttrpc version=3 Jul 6 23:46:47.051245 containerd[1882]: time="2025-07-06T23:46:47.051197867Z" level=info msg="CreateContainer within sandbox \"c7bdb8948b90ce0777ed91d024812e90737d8fa9c1468af36e9eea2625ce6c19\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"710d2841c3a5766db8954ffe55e4ff667256b6829673dcd0f2fc5d630af3075c\"" Jul 6 23:46:47.051978 containerd[1882]: time="2025-07-06T23:46:47.051950602Z" level=info msg="StartContainer for \"710d2841c3a5766db8954ffe55e4ff667256b6829673dcd0f2fc5d630af3075c\"" Jul 6 23:46:47.052813 containerd[1882]: time="2025-07-06T23:46:47.052681225Z" level=info msg="connecting to shim 710d2841c3a5766db8954ffe55e4ff667256b6829673dcd0f2fc5d630af3075c" address="unix:///run/containerd/s/e8109beb214c9d85711ec7f4c16dba00b98cf4c3fbcb5982a592f55e4be48429" protocol=ttrpc version=3 Jul 6 23:46:47.057881 systemd[1]: Started cri-containerd-58b6e3d6861ce34aeb3ae74379e548f72aa779465404c4aa5ed2b8f5f75d8d13.scope - libcontainer container 58b6e3d6861ce34aeb3ae74379e548f72aa779465404c4aa5ed2b8f5f75d8d13. Jul 6 23:46:47.062033 systemd[1]: Started cri-containerd-14acd1028765a1bf84669ff0b6e17dfdc6083af78058ee889ab5819577cdaa2d.scope - libcontainer container 14acd1028765a1bf84669ff0b6e17dfdc6083af78058ee889ab5819577cdaa2d. Jul 6 23:46:47.083985 systemd[1]: Started cri-containerd-710d2841c3a5766db8954ffe55e4ff667256b6829673dcd0f2fc5d630af3075c.scope - libcontainer container 710d2841c3a5766db8954ffe55e4ff667256b6829673dcd0f2fc5d630af3075c. 
Jul 6 23:46:47.113183 kubelet[2882]: E0706 23:46:47.113137 2882 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-a-5eeae23dc4?timeout=10s\": dial tcp 10.200.20.39:6443: connect: connection refused" interval="1.6s" Jul 6 23:46:47.126331 containerd[1882]: time="2025-07-06T23:46:47.125828580Z" level=info msg="StartContainer for \"58b6e3d6861ce34aeb3ae74379e548f72aa779465404c4aa5ed2b8f5f75d8d13\" returns successfully" Jul 6 23:46:47.126331 containerd[1882]: time="2025-07-06T23:46:47.126087268Z" level=info msg="StartContainer for \"14acd1028765a1bf84669ff0b6e17dfdc6083af78058ee889ab5819577cdaa2d\" returns successfully" Jul 6 23:46:47.141727 kubelet[2882]: E0706 23:46:47.141687 2882 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-5eeae23dc4\" not found" node="ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:47.147018 kubelet[2882]: E0706 23:46:47.146991 2882 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-5eeae23dc4\" not found" node="ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:47.173072 containerd[1882]: time="2025-07-06T23:46:47.172918804Z" level=info msg="StartContainer for \"710d2841c3a5766db8954ffe55e4ff667256b6829673dcd0f2fc5d630af3075c\" returns successfully" Jul 6 23:46:47.739550 kubelet[2882]: I0706 23:46:47.739516 2882 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:48.149332 kubelet[2882]: E0706 23:46:48.149301 2882 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-5eeae23dc4\" not found" node="ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:48.149975 kubelet[2882]: E0706 23:46:48.149374 2882 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-5eeae23dc4\" not found" node="ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:48.659370 kubelet[2882]: I0706 23:46:48.659248 2882 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:48.659370 kubelet[2882]: E0706 23:46:48.659287 2882 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4344.1.1-a-5eeae23dc4\": node \"ci-4344.1.1-a-5eeae23dc4\" not found" Jul 6 23:46:48.685442 kubelet[2882]: E0706 23:46:48.685402 2882 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-5eeae23dc4\" not found" Jul 6 23:46:48.786554 kubelet[2882]: E0706 23:46:48.786515 2882 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-5eeae23dc4\" not found" Jul 6 23:46:48.887110 kubelet[2882]: E0706 23:46:48.887057 2882 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-5eeae23dc4\" not found" Jul 6 23:46:48.987656 kubelet[2882]: E0706 23:46:48.987529 2882 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-5eeae23dc4\" not found" Jul 6 23:46:49.088012 kubelet[2882]: E0706 23:46:49.087968 2882 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-5eeae23dc4\" not found" Jul 6 23:46:49.150951 kubelet[2882]: E0706 23:46:49.150927 2882 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the 
cluster" err="node \"ci-4344.1.1-a-5eeae23dc4\" not found" node="ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:49.188117 kubelet[2882]: E0706 23:46:49.188069 2882 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-5eeae23dc4\" not found" Jul 6 23:46:49.289060 kubelet[2882]: E0706 23:46:49.288932 2882 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-5eeae23dc4\" not found" Jul 6 23:46:49.389448 kubelet[2882]: E0706 23:46:49.389401 2882 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-5eeae23dc4\" not found" Jul 6 23:46:49.489899 kubelet[2882]: E0706 23:46:49.489855 2882 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-5eeae23dc4\" not found" Jul 6 23:46:49.590584 kubelet[2882]: E0706 23:46:49.590326 2882 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-5eeae23dc4\" not found" Jul 6 23:46:49.690593 kubelet[2882]: E0706 23:46:49.690553 2882 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-5eeae23dc4\" not found" Jul 6 23:46:49.790758 kubelet[2882]: E0706 23:46:49.790711 2882 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-5eeae23dc4\" not found" Jul 6 23:46:49.892101 kubelet[2882]: E0706 23:46:49.891273 2882 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-5eeae23dc4\" not found" Jul 6 23:46:49.907944 kubelet[2882]: I0706 23:46:49.907817 2882 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:49.919466 kubelet[2882]: I0706 23:46:49.919425 2882 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 6 23:46:49.919619 kubelet[2882]: I0706 23:46:49.919558 2882 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:49.929058 kubelet[2882]: I0706 23:46:49.929022 2882 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 6 23:46:49.929270 kubelet[2882]: I0706 23:46:49.929117 2882 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:49.938324 kubelet[2882]: I0706 23:46:49.938292 2882 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 6 23:46:50.152540 kubelet[2882]: I0706 23:46:50.152391 2882 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:50.162796 kubelet[2882]: I0706 23:46:50.162737 2882 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 6 23:46:50.162954 kubelet[2882]: E0706 23:46:50.162807 2882 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.1-a-5eeae23dc4\" already exists" pod="kube-system/kube-scheduler-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:50.695112 kubelet[2882]: I0706 23:46:50.695075 
2882 apiserver.go:52] "Watching apiserver" Jul 6 23:46:50.708487 kubelet[2882]: I0706 23:46:50.708442 2882 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:46:51.222414 systemd[1]: Reload requested from client PID 3162 ('systemctl') (unit session-9.scope)... Jul 6 23:46:51.222428 systemd[1]: Reloading... Jul 6 23:46:51.291823 zram_generator::config[3208]: No configuration found. Jul 6 23:46:51.380687 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:46:51.476720 systemd[1]: Reloading finished in 254 ms. Jul 6 23:46:51.506881 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:46:51.520072 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:46:51.520262 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:46:51.520313 systemd[1]: kubelet.service: Consumed 933ms CPU time, 128.2M memory peak. Jul 6 23:46:51.522837 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:46:51.811634 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:46:51.817983 (kubelet)[3272]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:46:51.847193 kubelet[3272]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:46:51.847193 kubelet[3272]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:46:51.847193 kubelet[3272]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:46:51.847193 kubelet[3272]: I0706 23:46:51.847019 3272 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:46:51.853428 kubelet[3272]: I0706 23:46:51.853387 3272 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 6 23:46:51.853428 kubelet[3272]: I0706 23:46:51.853421 3272 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:46:51.853606 kubelet[3272]: I0706 23:46:51.853588 3272 server.go:956] "Client rotation is on, will bootstrap in background" Jul 6 23:46:51.854666 kubelet[3272]: I0706 23:46:51.854578 3272 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 6 23:46:52.048342 kubelet[3272]: I0706 23:46:52.048110 3272 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:46:52.053119 kubelet[3272]: I0706 23:46:52.052992 3272 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 6 23:46:52.055455 kubelet[3272]: I0706 23:46:52.055411 3272 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:46:52.055626 kubelet[3272]: I0706 23:46:52.055597 3272 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:46:52.055766 kubelet[3272]: I0706 23:46:52.055624 3272 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-a-5eeae23dc4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:46:52.055864 kubelet[3272]: I0706 23:46:52.055771 3272 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:46:52.055864 kubelet[3272]: I0706 23:46:52.055778 3272 container_manager_linux.go:303] "Creating device plugin manager" Jul 6 23:46:52.055864 kubelet[3272]: I0706 23:46:52.055816 3272 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:46:52.055942 kubelet[3272]: I0706 23:46:52.055929 3272 kubelet.go:480] "Attempting to sync node with API server" Jul 6 23:46:52.055942 kubelet[3272]: I0706 23:46:52.055938 3272 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:46:52.055979 kubelet[3272]: I0706 23:46:52.055959 3272 kubelet.go:386] "Adding apiserver pod source" Jul 6 23:46:52.055979 kubelet[3272]: I0706 23:46:52.055969 3272 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:46:52.061665 kubelet[3272]: I0706 23:46:52.061114 3272 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 6 23:46:52.061665 kubelet[3272]: I0706 23:46:52.061614 3272 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 6 23:46:52.064445 kubelet[3272]: I0706 23:46:52.063855 3272 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:46:52.064445 kubelet[3272]: I0706 23:46:52.063893 3272 server.go:1289] "Started kubelet" Jul 6 23:46:52.066720 kubelet[3272]: I0706 23:46:52.066683 3272 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:46:52.074745 kubelet[3272]: I0706 23:46:52.074639 
3272 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:46:52.075778 kubelet[3272]: E0706 23:46:52.075089 3272 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:46:52.075778 kubelet[3272]: I0706 23:46:52.075561 3272 server.go:317] "Adding debug handlers to kubelet server" Jul 6 23:46:52.076780 kubelet[3272]: I0706 23:46:52.076628 3272 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:46:52.078230 kubelet[3272]: I0706 23:46:52.078185 3272 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:46:52.078417 kubelet[3272]: I0706 23:46:52.078401 3272 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:46:52.078562 kubelet[3272]: I0706 23:46:52.078551 3272 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:46:52.078878 kubelet[3272]: E0706 23:46:52.078858 3272 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-5eeae23dc4\" not found" Jul 6 23:46:52.080530 kubelet[3272]: I0706 23:46:52.080508 3272 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:46:52.080758 kubelet[3272]: I0706 23:46:52.080746 3272 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:46:52.085061 kubelet[3272]: I0706 23:46:52.082674 3272 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 6 23:46:52.087383 kubelet[3272]: I0706 23:46:52.087355 3272 factory.go:223] Registration of the systemd container factory successfully Jul 6 23:46:52.087615 kubelet[3272]: I0706 23:46:52.087592 3272 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:46:52.088541 kubelet[3272]: I0706 23:46:52.088513 3272 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 6 23:46:52.088718 kubelet[3272]: I0706 23:46:52.088625 3272 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 6 23:46:52.088718 kubelet[3272]: I0706 23:46:52.088653 3272 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 6 23:46:52.088718 kubelet[3272]: I0706 23:46:52.088658 3272 kubelet.go:2436] "Starting kubelet main sync loop" Jul 6 23:46:52.088888 kubelet[3272]: E0706 23:46:52.088867 3272 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:46:52.103687 kubelet[3272]: I0706 23:46:52.103661 3272 factory.go:223] Registration of the containerd container factory successfully Jul 6 23:46:52.159388 kubelet[3272]: I0706 23:46:52.159360 3272 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:46:52.159388 kubelet[3272]: I0706 23:46:52.159379 3272 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:46:52.159545 kubelet[3272]: I0706 23:46:52.159417 3272 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:46:52.159581 kubelet[3272]: I0706 23:46:52.159560 3272 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:46:52.159600 kubelet[3272]: I0706 23:46:52.159573 3272 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:46:52.159600 kubelet[3272]: I0706 23:46:52.159590 3272 policy_none.go:49] "None policy: Start" Jul 6 23:46:52.159600 kubelet[3272]: I0706 23:46:52.159599 3272 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:46:52.159653 kubelet[3272]: I0706 23:46:52.159606 3272 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:46:52.159688 kubelet[3272]: I0706 23:46:52.159675 3272 state_mem.go:75] "Updated machine memory state" Jul 6 23:46:52.168977 kubelet[3272]: E0706 23:46:52.168940 3272 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 6 23:46:52.169751 kubelet[3272]: I0706 23:46:52.169186 3272 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:46:52.169751 kubelet[3272]: I0706 23:46:52.169215 3272 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:46:52.540639 kubelet[3272]: I0706 23:46:52.169971 3272 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:46:52.540639 kubelet[3272]: E0706 23:46:52.173655 3272 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 6 23:46:52.540639 kubelet[3272]: I0706 23:46:52.190412 3272 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:52.540639 kubelet[3272]: I0706 23:46:52.190488 3272 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:52.540639 kubelet[3272]: I0706 23:46:52.190418 3272 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:52.540639 kubelet[3272]: I0706 23:46:52.206950 3272 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 6 23:46:52.540639 kubelet[3272]: E0706 23:46:52.207023 3272 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.1.1-a-5eeae23dc4\" already exists" pod="kube-system/kube-controller-manager-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:52.540639 kubelet[3272]: I0706 23:46:52.211513 3272 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 6 23:46:52.540639 kubelet[3272]: I0706 23:46:52.211559 3272 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 6 23:46:52.540639 kubelet[3272]: E0706 23:46:52.211593 3272 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.1-a-5eeae23dc4\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:52.540639 kubelet[3272]: E0706 23:46:52.211652 3272 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.1-a-5eeae23dc4\" already exists" pod="kube-system/kube-scheduler-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:52.541101 kubelet[3272]: I0706 23:46:52.273495 3272 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:52.541101 kubelet[3272]: I0706 23:46:52.314897 3272 kubelet_node_status.go:124] "Node was previously registered" node="ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:52.541101 kubelet[3272]: I0706 23:46:52.381438 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b94547ce88ddd2fbde7661e2646e4dc8-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-a-5eeae23dc4\" (UID: \"b94547ce88ddd2fbde7661e2646e4dc8\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:52.541101 kubelet[3272]: I0706 23:46:52.381472 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b94547ce88ddd2fbde7661e2646e4dc8-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-a-5eeae23dc4\" (UID: \"b94547ce88ddd2fbde7661e2646e4dc8\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:52.541101 kubelet[3272]: I0706 23:46:52.381488 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/68b882555a1df3a1df39673196721cc8-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-a-5eeae23dc4\" (UID: \"68b882555a1df3a1df39673196721cc8\") " 
pod="kube-system/kube-controller-manager-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:52.541101 kubelet[3272]: I0706 23:46:52.381501 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/68b882555a1df3a1df39673196721cc8-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-a-5eeae23dc4\" (UID: \"68b882555a1df3a1df39673196721cc8\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:52.541342 kubelet[3272]: I0706 23:46:52.381543 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9e2c2fd244687c66e5b863090aa328bf-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-a-5eeae23dc4\" (UID: \"9e2c2fd244687c66e5b863090aa328bf\") " pod="kube-system/kube-scheduler-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:52.541342 kubelet[3272]: I0706 23:46:52.381572 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b94547ce88ddd2fbde7661e2646e4dc8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-a-5eeae23dc4\" (UID: \"b94547ce88ddd2fbde7661e2646e4dc8\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:52.541342 kubelet[3272]: I0706 23:46:52.381602 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/68b882555a1df3a1df39673196721cc8-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-a-5eeae23dc4\" (UID: \"68b882555a1df3a1df39673196721cc8\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:52.541342 kubelet[3272]: I0706 23:46:52.381641 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/68b882555a1df3a1df39673196721cc8-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-a-5eeae23dc4\" (UID: \"68b882555a1df3a1df39673196721cc8\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:52.541342 kubelet[3272]: I0706 23:46:52.381664 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/68b882555a1df3a1df39673196721cc8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-a-5eeae23dc4\" (UID: \"68b882555a1df3a1df39673196721cc8\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:52.541575 kubelet[3272]: I0706 23:46:52.541257 3272 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:52.564755 sudo[3309]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 6 23:46:52.564985 sudo[3309]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 6 23:46:52.937961 sudo[3309]: pam_unix(sudo:session): session closed for user root Jul 6 23:46:53.057381 kubelet[3272]: I0706 23:46:53.057341 3272 apiserver.go:52] "Watching apiserver" Jul 6 23:46:53.081572 kubelet[3272]: I0706 23:46:53.081538 3272 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:46:53.144371 kubelet[3272]: I0706 23:46:53.144293 3272 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:53.144985 kubelet[3272]: I0706 23:46:53.144802 3272 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:53.145301 kubelet[3272]: I0706 23:46:53.145196 3272 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:53.183057 kubelet[3272]: I0706 23:46:53.183009 3272 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 6 23:46:53.183470 kubelet[3272]: E0706 23:46:53.183242 3272 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.1.1-a-5eeae23dc4\" already exists" pod="kube-system/kube-controller-manager-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:53.186466 kubelet[3272]: I0706 23:46:53.185480 3272 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 6 23:46:53.186466 kubelet[3272]: E0706 23:46:53.186131 3272 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.1-a-5eeae23dc4\" already exists" pod="kube-system/kube-scheduler-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:53.186899 kubelet[3272]: I0706 23:46:53.186874 3272 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 6 23:46:53.187196 kubelet[3272]: E0706 23:46:53.187098 3272 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.1-a-5eeae23dc4\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.1-a-5eeae23dc4" Jul 6 23:46:53.199610 kubelet[3272]: I0706 23:46:53.199071 3272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.1.1-a-5eeae23dc4" podStartSLOduration=4.199054288 podStartE2EDuration="4.199054288s" podCreationTimestamp="2025-07-06 23:46:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:46:53.185727848 +0000 UTC m=+1.363875092" watchObservedRunningTime="2025-07-06 23:46:53.199054288 +0000 UTC m=+1.377201524" Jul 6 23:46:53.213023 kubelet[3272]: I0706 23:46:53.212958 3272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.1.1-a-5eeae23dc4" podStartSLOduration=4.212939872 podStartE2EDuration="4.212939872s" podCreationTimestamp="2025-07-06 23:46:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:46:53.200204472 +0000 UTC m=+1.378351708" watchObservedRunningTime="2025-07-06 23:46:53.212939872 +0000 UTC m=+1.391087108" Jul 6 23:46:53.229079 kubelet[3272]: I0706 23:46:53.228803 3272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.1.1-a-5eeae23dc4" podStartSLOduration=4.228784567 podStartE2EDuration="4.228784567s" podCreationTimestamp="2025-07-06 23:46:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:46:53.213497775 +0000 UTC m=+1.391645019" watchObservedRunningTime="2025-07-06 
23:46:53.228784567 +0000 UTC m=+1.406931803" Jul 6 23:46:54.283763 sudo[2328]: pam_unix(sudo:session): session closed for user root Jul 6 23:46:54.372857 sshd[2327]: Connection closed by 10.200.16.10 port 45312 Jul 6 23:46:54.373436 sshd-session[2325]: pam_unix(sshd:session): session closed for user core Jul 6 23:46:54.377139 systemd-logind[1853]: Session 9 logged out. Waiting for processes to exit. Jul 6 23:46:54.377725 systemd[1]: sshd@6-10.200.20.39:22-10.200.16.10:45312.service: Deactivated successfully. Jul 6 23:46:54.379955 systemd[1]: session-9.scope: Deactivated successfully. Jul 6 23:46:54.380274 systemd[1]: session-9.scope: Consumed 4.709s CPU time, 271.3M memory peak. Jul 6 23:46:54.382670 systemd-logind[1853]: Removed session 9. Jul 6 23:46:56.516724 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jul 6 23:46:56.774622 kubelet[3272]: I0706 23:46:56.774165 3272 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:46:56.775172 containerd[1882]: time="2025-07-06T23:46:56.775072450Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:46:56.775503 kubelet[3272]: I0706 23:46:56.775326 3272 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:46:57.531937 systemd[1]: Created slice kubepods-besteffort-pod89e8924b_64f8_48fd_b4d1_b0730b5ce88b.slice - libcontainer container kubepods-besteffort-pod89e8924b_64f8_48fd_b4d1_b0730b5ce88b.slice. Jul 6 23:46:57.543368 systemd[1]: Created slice kubepods-burstable-podfba76c19_3ff4_44c4_9758_45a547d100b2.slice - libcontainer container kubepods-burstable-podfba76c19_3ff4_44c4_9758_45a547d100b2.slice. Jul 6 23:46:57.613950 kubelet[3272]: I0706 23:46:57.613902 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-cni-path\") pod \"cilium-4mx7q\" (UID: \"fba76c19-3ff4-44c4-9758-45a547d100b2\") " pod="kube-system/cilium-4mx7q" Jul 6 23:46:57.614096 kubelet[3272]: I0706 23:46:57.613969 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fba76c19-3ff4-44c4-9758-45a547d100b2-cilium-config-path\") pod \"cilium-4mx7q\" (UID: \"fba76c19-3ff4-44c4-9758-45a547d100b2\") " pod="kube-system/cilium-4mx7q" Jul 6 23:46:57.614096 kubelet[3272]: I0706 23:46:57.613988 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9lqz\" (UniqueName: \"kubernetes.io/projected/89e8924b-64f8-48fd-b4d1-b0730b5ce88b-kube-api-access-x9lqz\") pod \"kube-proxy-7zqdz\" (UID: \"89e8924b-64f8-48fd-b4d1-b0730b5ce88b\") " pod="kube-system/kube-proxy-7zqdz" Jul 6 23:46:57.614144 kubelet[3272]: I0706 23:46:57.614069 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-cilium-run\") pod \"cilium-4mx7q\" (UID: \"fba76c19-3ff4-44c4-9758-45a547d100b2\") " pod="kube-system/cilium-4mx7q" Jul 6 23:46:57.614144 kubelet[3272]: I0706 23:46:57.614113 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-hostproc\") pod \"cilium-4mx7q\" (UID: 
\"fba76c19-3ff4-44c4-9758-45a547d100b2\") " pod="kube-system/cilium-4mx7q" Jul 6 23:46:57.614144 kubelet[3272]: I0706 23:46:57.614125 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-etc-cni-netd\") pod \"cilium-4mx7q\" (UID: \"fba76c19-3ff4-44c4-9758-45a547d100b2\") " pod="kube-system/cilium-4mx7q" Jul 6 23:46:57.614144 kubelet[3272]: I0706 23:46:57.614134 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-xtables-lock\") pod \"cilium-4mx7q\" (UID: \"fba76c19-3ff4-44c4-9758-45a547d100b2\") " pod="kube-system/cilium-4mx7q" Jul 6 23:46:57.614144 kubelet[3272]: I0706 23:46:57.614144 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrk7j\" (UniqueName: \"kubernetes.io/projected/fba76c19-3ff4-44c4-9758-45a547d100b2-kube-api-access-hrk7j\") pod \"cilium-4mx7q\" (UID: \"fba76c19-3ff4-44c4-9758-45a547d100b2\") " pod="kube-system/cilium-4mx7q" Jul 6 23:46:57.614220 kubelet[3272]: I0706 23:46:57.614190 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/89e8924b-64f8-48fd-b4d1-b0730b5ce88b-kube-proxy\") pod \"kube-proxy-7zqdz\" (UID: \"89e8924b-64f8-48fd-b4d1-b0730b5ce88b\") " pod="kube-system/kube-proxy-7zqdz" Jul 6 23:46:57.614220 kubelet[3272]: I0706 23:46:57.614200 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89e8924b-64f8-48fd-b4d1-b0730b5ce88b-xtables-lock\") pod \"kube-proxy-7zqdz\" (UID: \"89e8924b-64f8-48fd-b4d1-b0730b5ce88b\") " pod="kube-system/kube-proxy-7zqdz" Jul 6 23:46:57.614220 kubelet[3272]: I0706 23:46:57.614208 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89e8924b-64f8-48fd-b4d1-b0730b5ce88b-lib-modules\") pod \"kube-proxy-7zqdz\" (UID: \"89e8924b-64f8-48fd-b4d1-b0730b5ce88b\") " pod="kube-system/kube-proxy-7zqdz" Jul 6 23:46:57.614220 kubelet[3272]: I0706 23:46:57.614220 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fba76c19-3ff4-44c4-9758-45a547d100b2-clustermesh-secrets\") pod \"cilium-4mx7q\" (UID: \"fba76c19-3ff4-44c4-9758-45a547d100b2\") " pod="kube-system/cilium-4mx7q" Jul 6 23:46:57.614295 kubelet[3272]: I0706 23:46:57.614229 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-host-proc-sys-net\") pod \"cilium-4mx7q\" (UID: \"fba76c19-3ff4-44c4-9758-45a547d100b2\") " pod="kube-system/cilium-4mx7q" Jul 6 23:46:57.614316 kubelet[3272]: I0706 23:46:57.614300 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-cilium-cgroup\") pod \"cilium-4mx7q\" (UID: \"fba76c19-3ff4-44c4-9758-45a547d100b2\") " pod="kube-system/cilium-4mx7q" Jul 6 23:46:57.614386 kubelet[3272]: I0706 23:46:57.614367 3272 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-lib-modules\") pod \"cilium-4mx7q\" (UID: \"fba76c19-3ff4-44c4-9758-45a547d100b2\") " pod="kube-system/cilium-4mx7q" Jul 6 23:46:57.614427 kubelet[3272]: I0706 23:46:57.614388 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-host-proc-sys-kernel\") pod \"cilium-4mx7q\" (UID: \"fba76c19-3ff4-44c4-9758-45a547d100b2\") " pod="kube-system/cilium-4mx7q" Jul 6 23:46:57.614452 kubelet[3272]: I0706 23:46:57.614429 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fba76c19-3ff4-44c4-9758-45a547d100b2-hubble-tls\") pod \"cilium-4mx7q\" (UID: \"fba76c19-3ff4-44c4-9758-45a547d100b2\") " pod="kube-system/cilium-4mx7q" Jul 6 23:46:57.614452 kubelet[3272]: I0706 23:46:57.614447 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-bpf-maps\") pod \"cilium-4mx7q\" (UID: \"fba76c19-3ff4-44c4-9758-45a547d100b2\") " pod="kube-system/cilium-4mx7q" Jul 6 23:46:57.739393 kubelet[3272]: E0706 23:46:57.739331 3272 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 6 23:46:57.739393 kubelet[3272]: E0706 23:46:57.739376 3272 projected.go:194] Error preparing data for projected volume kube-api-access-x9lqz for pod kube-system/kube-proxy-7zqdz: configmap "kube-root-ca.crt" not found Jul 6 23:46:57.739588 kubelet[3272]: E0706 23:46:57.739478 3272 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89e8924b-64f8-48fd-b4d1-b0730b5ce88b-kube-api-access-x9lqz podName:89e8924b-64f8-48fd-b4d1-b0730b5ce88b nodeName:}" failed. No retries permitted until 2025-07-06 23:46:58.239443043 +0000 UTC m=+6.417590279 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-x9lqz" (UniqueName: "kubernetes.io/projected/89e8924b-64f8-48fd-b4d1-b0730b5ce88b-kube-api-access-x9lqz") pod "kube-proxy-7zqdz" (UID: "89e8924b-64f8-48fd-b4d1-b0730b5ce88b") : configmap "kube-root-ca.crt" not found Jul 6 23:46:57.741304 kubelet[3272]: E0706 23:46:57.741282 3272 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 6 23:46:57.741304 kubelet[3272]: E0706 23:46:57.741306 3272 projected.go:194] Error preparing data for projected volume kube-api-access-hrk7j for pod kube-system/cilium-4mx7q: configmap "kube-root-ca.crt" not found Jul 6 23:46:57.741404 kubelet[3272]: E0706 23:46:57.741384 3272 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fba76c19-3ff4-44c4-9758-45a547d100b2-kube-api-access-hrk7j podName:fba76c19-3ff4-44c4-9758-45a547d100b2 nodeName:}" failed. No retries permitted until 2025-07-06 23:46:58.241370287 +0000 UTC m=+6.419517523 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hrk7j" (UniqueName: "kubernetes.io/projected/fba76c19-3ff4-44c4-9758-45a547d100b2-kube-api-access-hrk7j") pod "cilium-4mx7q" (UID: "fba76c19-3ff4-44c4-9758-45a547d100b2") : configmap "kube-root-ca.crt" not found Jul 6 23:46:57.957710 update_engine[1859]: I20250706 23:46:57.957623 1859 update_attempter.cc:509] Updating boot flags... Jul 6 23:46:58.120028 kubelet[3272]: I0706 23:46:58.119997 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf88a9f0-831b-4530-9bb8-e8176f95342c-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-x49dc\" (UID: \"cf88a9f0-831b-4530-9bb8-e8176f95342c\") " pod="kube-system/cilium-operator-6c4d7847fc-x49dc" Jul 6 23:46:58.122688 kubelet[3272]: I0706 23:46:58.122620 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nsdk\" (UniqueName: \"kubernetes.io/projected/cf88a9f0-831b-4530-9bb8-e8176f95342c-kube-api-access-2nsdk\") pod \"cilium-operator-6c4d7847fc-x49dc\" (UID: \"cf88a9f0-831b-4530-9bb8-e8176f95342c\") " pod="kube-system/cilium-operator-6c4d7847fc-x49dc" Jul 6 23:46:58.131414 systemd[1]: Created slice kubepods-besteffort-podcf88a9f0_831b_4530_9bb8_e8176f95342c.slice - libcontainer container kubepods-besteffort-podcf88a9f0_831b_4530_9bb8_e8176f95342c.slice. Jul 6 23:46:58.441336 containerd[1882]: time="2025-07-06T23:46:58.441280972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7zqdz,Uid:89e8924b-64f8-48fd-b4d1-b0730b5ce88b,Namespace:kube-system,Attempt:0,}" Jul 6 23:46:58.446093 containerd[1882]: time="2025-07-06T23:46:58.445976790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-x49dc,Uid:cf88a9f0-831b-4530-9bb8-e8176f95342c,Namespace:kube-system,Attempt:0,}" Jul 6 23:46:58.448976 containerd[1882]: time="2025-07-06T23:46:58.448855160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4mx7q,Uid:fba76c19-3ff4-44c4-9758-45a547d100b2,Namespace:kube-system,Attempt:0,}" Jul 6 23:46:58.543647 containerd[1882]: time="2025-07-06T23:46:58.543527007Z" level=info msg="connecting to shim 8a0f9ad7ea2395b5ed4a14266e0e6963ed60c65d4cea8504550dab6e18d18dc9" address="unix:///run/containerd/s/897993c0b7c915ff597362c80373f1f91802aa0175857b6b891cf96e123debf4" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:46:58.563013 systemd[1]: Started cri-containerd-8a0f9ad7ea2395b5ed4a14266e0e6963ed60c65d4cea8504550dab6e18d18dc9.scope - libcontainer container 8a0f9ad7ea2395b5ed4a14266e0e6963ed60c65d4cea8504550dab6e18d18dc9. 
Jul 6 23:46:58.578481 containerd[1882]: time="2025-07-06T23:46:58.578118834Z" level=info msg="connecting to shim 2daaf30177d0477c00b6c008bcf38829c9f7337c8b3ccbbc13f66b668df4e98a" address="unix:///run/containerd/s/8ce90650b8b910d509c1acdf9c283c6035db3a293620e5a25a2f9a72634ffb72" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:46:58.589372 containerd[1882]: time="2025-07-06T23:46:58.589224787Z" level=info msg="connecting to shim 1e6f7a7ccf4f6dbae6ac2e336af24a68cf013d791718f000a0e2cd5eeb83cc95" address="unix:///run/containerd/s/c58a091d2f48d15d1d68d13c3386aedebf40e15e9fcd936c707405940a7536b9" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:46:58.592369 containerd[1882]: time="2025-07-06T23:46:58.592334764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7zqdz,Uid:89e8924b-64f8-48fd-b4d1-b0730b5ce88b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a0f9ad7ea2395b5ed4a14266e0e6963ed60c65d4cea8504550dab6e18d18dc9\"" Jul 6 23:46:58.605378 containerd[1882]: time="2025-07-06T23:46:58.605274926Z" level=info msg="CreateContainer within sandbox \"8a0f9ad7ea2395b5ed4a14266e0e6963ed60c65d4cea8504550dab6e18d18dc9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:46:58.611895 systemd[1]: Started cri-containerd-2daaf30177d0477c00b6c008bcf38829c9f7337c8b3ccbbc13f66b668df4e98a.scope - libcontainer container 2daaf30177d0477c00b6c008bcf38829c9f7337c8b3ccbbc13f66b668df4e98a. Jul 6 23:46:58.615923 systemd[1]: Started cri-containerd-1e6f7a7ccf4f6dbae6ac2e336af24a68cf013d791718f000a0e2cd5eeb83cc95.scope - libcontainer container 1e6f7a7ccf4f6dbae6ac2e336af24a68cf013d791718f000a0e2cd5eeb83cc95. Jul 6 23:46:58.636062 containerd[1882]: time="2025-07-06T23:46:58.636000473Z" level=info msg="Container d67501824c72c84f5799b306293ef10f752e528ec0f290f42f2ecf9cf9a9dc94: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:46:58.662182 containerd[1882]: time="2025-07-06T23:46:58.662136358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4mx7q,Uid:fba76c19-3ff4-44c4-9758-45a547d100b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e6f7a7ccf4f6dbae6ac2e336af24a68cf013d791718f000a0e2cd5eeb83cc95\"" Jul 6 23:46:58.662713 containerd[1882]: time="2025-07-06T23:46:58.662639429Z" level=info msg="CreateContainer within sandbox \"8a0f9ad7ea2395b5ed4a14266e0e6963ed60c65d4cea8504550dab6e18d18dc9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d67501824c72c84f5799b306293ef10f752e528ec0f290f42f2ecf9cf9a9dc94\"" Jul 6 23:46:58.663412 containerd[1882]: time="2025-07-06T23:46:58.663377100Z" level=info msg="StartContainer for \"d67501824c72c84f5799b306293ef10f752e528ec0f290f42f2ecf9cf9a9dc94\"" Jul 6 23:46:58.663672 containerd[1882]: time="2025-07-06T23:46:58.663616740Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 6 23:46:58.666132 containerd[1882]: time="2025-07-06T23:46:58.666101257Z" level=info msg="connecting to shim d67501824c72c84f5799b306293ef10f752e528ec0f290f42f2ecf9cf9a9dc94" address="unix:///run/containerd/s/897993c0b7c915ff597362c80373f1f91802aa0175857b6b891cf96e123debf4" protocol=ttrpc version=3 Jul 6 23:46:58.667049 containerd[1882]: time="2025-07-06T23:46:58.667011197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-x49dc,Uid:cf88a9f0-831b-4530-9bb8-e8176f95342c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2daaf30177d0477c00b6c008bcf38829c9f7337c8b3ccbbc13f66b668df4e98a\"" Jul 
6 23:46:58.685890 systemd[1]: Started cri-containerd-d67501824c72c84f5799b306293ef10f752e528ec0f290f42f2ecf9cf9a9dc94.scope - libcontainer container d67501824c72c84f5799b306293ef10f752e528ec0f290f42f2ecf9cf9a9dc94. Jul 6 23:46:58.724385 containerd[1882]: time="2025-07-06T23:46:58.724159622Z" level=info msg="StartContainer for \"d67501824c72c84f5799b306293ef10f752e528ec0f290f42f2ecf9cf9a9dc94\" returns successfully" Jul 6 23:46:59.444430 kubelet[3272]: I0706 23:46:59.444073 3272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7zqdz" podStartSLOduration=2.444058191 podStartE2EDuration="2.444058191s" podCreationTimestamp="2025-07-06 23:46:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:46:59.175688513 +0000 UTC m=+7.353835749" watchObservedRunningTime="2025-07-06 23:46:59.444058191 +0000 UTC m=+7.622205427" Jul 6 23:47:02.212730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2101764767.mount: Deactivated successfully. Jul 6 23:47:03.949999 containerd[1882]: time="2025-07-06T23:47:03.949939554Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:47:03.955772 containerd[1882]: time="2025-07-06T23:47:03.955713529Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 6 23:47:03.962143 containerd[1882]: time="2025-07-06T23:47:03.962083948Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:47:03.964756 containerd[1882]: time="2025-07-06T23:47:03.963296547Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.299644327s" Jul 6 23:47:03.964756 containerd[1882]: time="2025-07-06T23:47:03.963335132Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 6 23:47:03.965179 containerd[1882]: time="2025-07-06T23:47:03.964992297Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 6 23:47:04.143407 containerd[1882]: time="2025-07-06T23:47:04.143045958Z" level=info msg="CreateContainer within sandbox \"1e6f7a7ccf4f6dbae6ac2e336af24a68cf013d791718f000a0e2cd5eeb83cc95\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 6 23:47:04.781463 containerd[1882]: time="2025-07-06T23:47:04.780928085Z" level=info msg="Container fd6153cdf4c09c045b4ac89c0bd925286127117aebd20991b5288bd2d3095333: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:47:04.797968 containerd[1882]: time="2025-07-06T23:47:04.797925130Z" level=info msg="CreateContainer within sandbox \"1e6f7a7ccf4f6dbae6ac2e336af24a68cf013d791718f000a0e2cd5eeb83cc95\" for 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fd6153cdf4c09c045b4ac89c0bd925286127117aebd20991b5288bd2d3095333\"" Jul 6 23:47:04.798648 containerd[1882]: time="2025-07-06T23:47:04.798623625Z" level=info msg="StartContainer for \"fd6153cdf4c09c045b4ac89c0bd925286127117aebd20991b5288bd2d3095333\"" Jul 6 23:47:04.800026 containerd[1882]: time="2025-07-06T23:47:04.799992660Z" level=info msg="connecting to shim fd6153cdf4c09c045b4ac89c0bd925286127117aebd20991b5288bd2d3095333" address="unix:///run/containerd/s/c58a091d2f48d15d1d68d13c3386aedebf40e15e9fcd936c707405940a7536b9" protocol=ttrpc version=3 Jul 6 23:47:04.817882 systemd[1]: Started cri-containerd-fd6153cdf4c09c045b4ac89c0bd925286127117aebd20991b5288bd2d3095333.scope - libcontainer container fd6153cdf4c09c045b4ac89c0bd925286127117aebd20991b5288bd2d3095333. Jul 6 23:47:04.845245 containerd[1882]: time="2025-07-06T23:47:04.845154754Z" level=info msg="StartContainer for \"fd6153cdf4c09c045b4ac89c0bd925286127117aebd20991b5288bd2d3095333\" returns successfully" Jul 6 23:47:04.852457 systemd[1]: cri-containerd-fd6153cdf4c09c045b4ac89c0bd925286127117aebd20991b5288bd2d3095333.scope: Deactivated successfully. Jul 6 23:47:04.854465 containerd[1882]: time="2025-07-06T23:47:04.854411553Z" level=info msg="received exit event container_id:\"fd6153cdf4c09c045b4ac89c0bd925286127117aebd20991b5288bd2d3095333\" id:\"fd6153cdf4c09c045b4ac89c0bd925286127117aebd20991b5288bd2d3095333\" pid:3855 exited_at:{seconds:1751845624 nanos:853093791}" Jul 6 23:47:04.854958 containerd[1882]: time="2025-07-06T23:47:04.854442306Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd6153cdf4c09c045b4ac89c0bd925286127117aebd20991b5288bd2d3095333\" id:\"fd6153cdf4c09c045b4ac89c0bd925286127117aebd20991b5288bd2d3095333\" pid:3855 exited_at:{seconds:1751845624 nanos:853093791}" Jul 6 23:47:04.875737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd6153cdf4c09c045b4ac89c0bd925286127117aebd20991b5288bd2d3095333-rootfs.mount: Deactivated successfully. 
Jul 6 23:47:06.188548 containerd[1882]: time="2025-07-06T23:47:06.188501160Z" level=info msg="CreateContainer within sandbox \"1e6f7a7ccf4f6dbae6ac2e336af24a68cf013d791718f000a0e2cd5eeb83cc95\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 6 23:47:06.229928 containerd[1882]: time="2025-07-06T23:47:06.229429895Z" level=info msg="Container 5c49923edab74781a961b78d4c778c64eb5b5c8657c29400ed30790047eb988f: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:47:06.246033 containerd[1882]: time="2025-07-06T23:47:06.245989311Z" level=info msg="CreateContainer within sandbox \"1e6f7a7ccf4f6dbae6ac2e336af24a68cf013d791718f000a0e2cd5eeb83cc95\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5c49923edab74781a961b78d4c778c64eb5b5c8657c29400ed30790047eb988f\"" Jul 6 23:47:06.247005 containerd[1882]: time="2025-07-06T23:47:06.246980598Z" level=info msg="StartContainer for \"5c49923edab74781a961b78d4c778c64eb5b5c8657c29400ed30790047eb988f\"" Jul 6 23:47:06.247822 containerd[1882]: time="2025-07-06T23:47:06.247797128Z" level=info msg="connecting to shim 5c49923edab74781a961b78d4c778c64eb5b5c8657c29400ed30790047eb988f" address="unix:///run/containerd/s/c58a091d2f48d15d1d68d13c3386aedebf40e15e9fcd936c707405940a7536b9" protocol=ttrpc version=3 Jul 6 23:47:06.268904 systemd[1]: Started cri-containerd-5c49923edab74781a961b78d4c778c64eb5b5c8657c29400ed30790047eb988f.scope - libcontainer container 5c49923edab74781a961b78d4c778c64eb5b5c8657c29400ed30790047eb988f. Jul 6 23:47:06.306334 containerd[1882]: time="2025-07-06T23:47:06.306297559Z" level=info msg="StartContainer for \"5c49923edab74781a961b78d4c778c64eb5b5c8657c29400ed30790047eb988f\" returns successfully" Jul 6 23:47:06.315353 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:47:06.315537 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:47:06.316366 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:47:06.320994 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:47:06.322405 containerd[1882]: time="2025-07-06T23:47:06.322279188Z" level=info msg="received exit event container_id:\"5c49923edab74781a961b78d4c778c64eb5b5c8657c29400ed30790047eb988f\" id:\"5c49923edab74781a961b78d4c778c64eb5b5c8657c29400ed30790047eb988f\" pid:3899 exited_at:{seconds:1751845626 nanos:321417216}" Jul 6 23:47:06.323031 containerd[1882]: time="2025-07-06T23:47:06.322897591Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5c49923edab74781a961b78d4c778c64eb5b5c8657c29400ed30790047eb988f\" id:\"5c49923edab74781a961b78d4c778c64eb5b5c8657c29400ed30790047eb988f\" pid:3899 exited_at:{seconds:1751845626 nanos:321417216}" Jul 6 23:47:06.323381 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 6 23:47:06.324288 systemd[1]: cri-containerd-5c49923edab74781a961b78d4c778c64eb5b5c8657c29400ed30790047eb988f.scope: Deactivated successfully. Jul 6 23:47:06.347615 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 6 23:47:06.893718 containerd[1882]: time="2025-07-06T23:47:06.893644029Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:47:06.896582 containerd[1882]: time="2025-07-06T23:47:06.896528880Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 6 23:47:06.900801 containerd[1882]: time="2025-07-06T23:47:06.900123075Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:47:06.900801 containerd[1882]: time="2025-07-06T23:47:06.900762223Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.935743582s" Jul 6 23:47:06.900801 containerd[1882]: time="2025-07-06T23:47:06.900800313Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 6 23:47:06.909299 containerd[1882]: time="2025-07-06T23:47:06.908696092Z" level=info msg="CreateContainer within sandbox \"2daaf30177d0477c00b6c008bcf38829c9f7337c8b3ccbbc13f66b668df4e98a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 6 23:47:06.928104 containerd[1882]: time="2025-07-06T23:47:06.928056124Z" level=info msg="Container 51d4aded514cc283157853d7e0a4217185632617ee2b278c0ea2d482eae0a53e: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:47:06.942933 containerd[1882]: time="2025-07-06T23:47:06.942809162Z" level=info msg="CreateContainer within sandbox \"2daaf30177d0477c00b6c008bcf38829c9f7337c8b3ccbbc13f66b668df4e98a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"51d4aded514cc283157853d7e0a4217185632617ee2b278c0ea2d482eae0a53e\"" Jul 6 23:47:06.943371 containerd[1882]: time="2025-07-06T23:47:06.943331619Z" level=info msg="StartContainer for \"51d4aded514cc283157853d7e0a4217185632617ee2b278c0ea2d482eae0a53e\"" Jul 6 23:47:06.944436 containerd[1882]: time="2025-07-06T23:47:06.944393125Z" level=info msg="connecting to shim 51d4aded514cc283157853d7e0a4217185632617ee2b278c0ea2d482eae0a53e" address="unix:///run/containerd/s/8ce90650b8b910d509c1acdf9c283c6035db3a293620e5a25a2f9a72634ffb72" protocol=ttrpc version=3 Jul 6 23:47:06.961883 systemd[1]: Started cri-containerd-51d4aded514cc283157853d7e0a4217185632617ee2b278c0ea2d482eae0a53e.scope - libcontainer container 51d4aded514cc283157853d7e0a4217185632617ee2b278c0ea2d482eae0a53e. 
Jul 6 23:47:06.988418 containerd[1882]: time="2025-07-06T23:47:06.988216952Z" level=info msg="StartContainer for \"51d4aded514cc283157853d7e0a4217185632617ee2b278c0ea2d482eae0a53e\" returns successfully" Jul 6 23:47:07.191870 containerd[1882]: time="2025-07-06T23:47:07.190578227Z" level=info msg="CreateContainer within sandbox \"1e6f7a7ccf4f6dbae6ac2e336af24a68cf013d791718f000a0e2cd5eeb83cc95\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 6 23:47:07.223269 kubelet[3272]: I0706 23:47:07.222822 3272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-x49dc" podStartSLOduration=0.989893206 podStartE2EDuration="9.22280363s" podCreationTimestamp="2025-07-06 23:46:58 +0000 UTC" firstStartedPulling="2025-07-06 23:46:58.668574686 +0000 UTC m=+6.846721922" lastFinishedPulling="2025-07-06 23:47:06.90148511 +0000 UTC m=+15.079632346" observedRunningTime="2025-07-06 23:47:07.222454834 +0000 UTC m=+15.400602070" watchObservedRunningTime="2025-07-06 23:47:07.22280363 +0000 UTC m=+15.400950866" Jul 6 23:47:07.223671 containerd[1882]: time="2025-07-06T23:47:07.222883328Z" level=info msg="Container 545245a6dc8dc382f9ee0b0d0b61888d996e663f53bf62fe9526e494e224ea94: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:47:07.230595 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c49923edab74781a961b78d4c778c64eb5b5c8657c29400ed30790047eb988f-rootfs.mount: Deactivated successfully. Jul 6 23:47:07.245467 containerd[1882]: time="2025-07-06T23:47:07.245226416Z" level=info msg="CreateContainer within sandbox \"1e6f7a7ccf4f6dbae6ac2e336af24a68cf013d791718f000a0e2cd5eeb83cc95\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"545245a6dc8dc382f9ee0b0d0b61888d996e663f53bf62fe9526e494e224ea94\"" Jul 6 23:47:07.246142 containerd[1882]: time="2025-07-06T23:47:07.246118580Z" level=info msg="StartContainer for \"545245a6dc8dc382f9ee0b0d0b61888d996e663f53bf62fe9526e494e224ea94\"" Jul 6 23:47:07.247195 containerd[1882]: time="2025-07-06T23:47:07.247165701Z" level=info msg="connecting to shim 545245a6dc8dc382f9ee0b0d0b61888d996e663f53bf62fe9526e494e224ea94" address="unix:///run/containerd/s/c58a091d2f48d15d1d68d13c3386aedebf40e15e9fcd936c707405940a7536b9" protocol=ttrpc version=3 Jul 6 23:47:07.270904 systemd[1]: Started cri-containerd-545245a6dc8dc382f9ee0b0d0b61888d996e663f53bf62fe9526e494e224ea94.scope - libcontainer container 545245a6dc8dc382f9ee0b0d0b61888d996e663f53bf62fe9526e494e224ea94. Jul 6 23:47:07.357901 systemd[1]: cri-containerd-545245a6dc8dc382f9ee0b0d0b61888d996e663f53bf62fe9526e494e224ea94.scope: Deactivated successfully. 
Jul 6 23:47:07.360264 containerd[1882]: time="2025-07-06T23:47:07.360151683Z" level=info msg="received exit event container_id:\"545245a6dc8dc382f9ee0b0d0b61888d996e663f53bf62fe9526e494e224ea94\" id:\"545245a6dc8dc382f9ee0b0d0b61888d996e663f53bf62fe9526e494e224ea94\" pid:3993 exited_at:{seconds:1751845627 nanos:358377794}" Jul 6 23:47:07.361233 containerd[1882]: time="2025-07-06T23:47:07.361201460Z" level=info msg="TaskExit event in podsandbox handler container_id:\"545245a6dc8dc382f9ee0b0d0b61888d996e663f53bf62fe9526e494e224ea94\" id:\"545245a6dc8dc382f9ee0b0d0b61888d996e663f53bf62fe9526e494e224ea94\" pid:3993 exited_at:{seconds:1751845627 nanos:358377794}" Jul 6 23:47:07.361603 containerd[1882]: time="2025-07-06T23:47:07.361585193Z" level=info msg="StartContainer for \"545245a6dc8dc382f9ee0b0d0b61888d996e663f53bf62fe9526e494e224ea94\" returns successfully" Jul 6 23:47:07.384553 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-545245a6dc8dc382f9ee0b0d0b61888d996e663f53bf62fe9526e494e224ea94-rootfs.mount: Deactivated successfully. Jul 6 23:47:08.195265 containerd[1882]: time="2025-07-06T23:47:08.195218435Z" level=info msg="CreateContainer within sandbox \"1e6f7a7ccf4f6dbae6ac2e336af24a68cf013d791718f000a0e2cd5eeb83cc95\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 6 23:47:08.221314 containerd[1882]: time="2025-07-06T23:47:08.221271318Z" level=info msg="Container c6caec2589e12b5232e308772f37d728e402775ba4c123b87f36e8ff496b1ae2: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:47:08.240914 containerd[1882]: time="2025-07-06T23:47:08.240782603Z" level=info msg="CreateContainer within sandbox \"1e6f7a7ccf4f6dbae6ac2e336af24a68cf013d791718f000a0e2cd5eeb83cc95\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c6caec2589e12b5232e308772f37d728e402775ba4c123b87f36e8ff496b1ae2\"" Jul 6 23:47:08.241687 containerd[1882]: time="2025-07-06T23:47:08.241651128Z" level=info msg="StartContainer for \"c6caec2589e12b5232e308772f37d728e402775ba4c123b87f36e8ff496b1ae2\"" Jul 6 23:47:08.245861 containerd[1882]: time="2025-07-06T23:47:08.245793079Z" level=info msg="connecting to shim c6caec2589e12b5232e308772f37d728e402775ba4c123b87f36e8ff496b1ae2" address="unix:///run/containerd/s/c58a091d2f48d15d1d68d13c3386aedebf40e15e9fcd936c707405940a7536b9" protocol=ttrpc version=3 Jul 6 23:47:08.266864 systemd[1]: Started cri-containerd-c6caec2589e12b5232e308772f37d728e402775ba4c123b87f36e8ff496b1ae2.scope - libcontainer container c6caec2589e12b5232e308772f37d728e402775ba4c123b87f36e8ff496b1ae2. Jul 6 23:47:08.286842 systemd[1]: cri-containerd-c6caec2589e12b5232e308772f37d728e402775ba4c123b87f36e8ff496b1ae2.scope: Deactivated successfully. 
Jul 6 23:47:08.290405 containerd[1882]: time="2025-07-06T23:47:08.289058052Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c6caec2589e12b5232e308772f37d728e402775ba4c123b87f36e8ff496b1ae2\" id:\"c6caec2589e12b5232e308772f37d728e402775ba4c123b87f36e8ff496b1ae2\" pid:4035 exited_at:{seconds:1751845628 nanos:288641094}" Jul 6 23:47:08.292280 containerd[1882]: time="2025-07-06T23:47:08.292221931Z" level=info msg="received exit event container_id:\"c6caec2589e12b5232e308772f37d728e402775ba4c123b87f36e8ff496b1ae2\" id:\"c6caec2589e12b5232e308772f37d728e402775ba4c123b87f36e8ff496b1ae2\" pid:4035 exited_at:{seconds:1751845628 nanos:288641094}" Jul 6 23:47:08.307707 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6caec2589e12b5232e308772f37d728e402775ba4c123b87f36e8ff496b1ae2-rootfs.mount: Deactivated successfully. Jul 6 23:47:08.318689 containerd[1882]: time="2025-07-06T23:47:08.318651722Z" level=info msg="StartContainer for \"c6caec2589e12b5232e308772f37d728e402775ba4c123b87f36e8ff496b1ae2\" returns successfully" Jul 6 23:47:08.319074 containerd[1882]: time="2025-07-06T23:47:08.318661242Z" level=error msg="copy shim log after reload" error="read /proc/self/fd/40: file already closed" Jul 6 23:47:09.210744 containerd[1882]: time="2025-07-06T23:47:09.210299351Z" level=info msg="CreateContainer within sandbox \"1e6f7a7ccf4f6dbae6ac2e336af24a68cf013d791718f000a0e2cd5eeb83cc95\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 6 23:47:09.241743 containerd[1882]: time="2025-07-06T23:47:09.241685392Z" level=info msg="Container 09f98a2c27cfb185063927d4284d7a1abe7a10eee97d0bc02f51bb6380fb1f0d: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:47:09.257731 containerd[1882]: time="2025-07-06T23:47:09.257120968Z" level=info msg="CreateContainer within sandbox \"1e6f7a7ccf4f6dbae6ac2e336af24a68cf013d791718f000a0e2cd5eeb83cc95\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"09f98a2c27cfb185063927d4284d7a1abe7a10eee97d0bc02f51bb6380fb1f0d\"" Jul 6 23:47:09.258621 containerd[1882]: time="2025-07-06T23:47:09.258592200Z" level=info msg="StartContainer for \"09f98a2c27cfb185063927d4284d7a1abe7a10eee97d0bc02f51bb6380fb1f0d\"" Jul 6 23:47:09.259522 containerd[1882]: time="2025-07-06T23:47:09.259379130Z" level=info msg="connecting to shim 09f98a2c27cfb185063927d4284d7a1abe7a10eee97d0bc02f51bb6380fb1f0d" address="unix:///run/containerd/s/c58a091d2f48d15d1d68d13c3386aedebf40e15e9fcd936c707405940a7536b9" protocol=ttrpc version=3 Jul 6 23:47:09.281872 systemd[1]: Started cri-containerd-09f98a2c27cfb185063927d4284d7a1abe7a10eee97d0bc02f51bb6380fb1f0d.scope - libcontainer container 09f98a2c27cfb185063927d4284d7a1abe7a10eee97d0bc02f51bb6380fb1f0d. 
Jul 6 23:47:09.319464 containerd[1882]: time="2025-07-06T23:47:09.319421195Z" level=info msg="StartContainer for \"09f98a2c27cfb185063927d4284d7a1abe7a10eee97d0bc02f51bb6380fb1f0d\" returns successfully" Jul 6 23:47:09.389958 containerd[1882]: time="2025-07-06T23:47:09.389900840Z" level=info msg="TaskExit event in podsandbox handler container_id:\"09f98a2c27cfb185063927d4284d7a1abe7a10eee97d0bc02f51bb6380fb1f0d\" id:\"f3127439fa3fcd9ebccf9ac303e9a09f32273d212c595b64e87291d770f23f51\" pid:4105 exited_at:{seconds:1751845629 nanos:389453625}" Jul 6 23:47:09.480926 kubelet[3272]: I0706 23:47:09.480822 3272 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 6 23:47:09.550129 systemd[1]: Created slice kubepods-burstable-pod173ca19d_a274_458f_88d7_5b19d352c269.slice - libcontainer container kubepods-burstable-pod173ca19d_a274_458f_88d7_5b19d352c269.slice. Jul 6 23:47:09.554732 systemd[1]: Created slice kubepods-burstable-pod78ab7b94_a234_4f2d_a8c0_d858c7b5ec38.slice - libcontainer container kubepods-burstable-pod78ab7b94_a234_4f2d_a8c0_d858c7b5ec38.slice. Jul 6 23:47:09.601200 kubelet[3272]: I0706 23:47:09.601145 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjtht\" (UniqueName: \"kubernetes.io/projected/173ca19d-a274-458f-88d7-5b19d352c269-kube-api-access-kjtht\") pod \"coredns-674b8bbfcf-4sw4x\" (UID: \"173ca19d-a274-458f-88d7-5b19d352c269\") " pod="kube-system/coredns-674b8bbfcf-4sw4x" Jul 6 23:47:09.601200 kubelet[3272]: I0706 23:47:09.601200 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78ab7b94-a234-4f2d-a8c0-d858c7b5ec38-config-volume\") pod \"coredns-674b8bbfcf-fxfhn\" (UID: \"78ab7b94-a234-4f2d-a8c0-d858c7b5ec38\") " pod="kube-system/coredns-674b8bbfcf-fxfhn" Jul 6 23:47:09.601379 kubelet[3272]: I0706 23:47:09.601254 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/173ca19d-a274-458f-88d7-5b19d352c269-config-volume\") pod \"coredns-674b8bbfcf-4sw4x\" (UID: \"173ca19d-a274-458f-88d7-5b19d352c269\") " pod="kube-system/coredns-674b8bbfcf-4sw4x" Jul 6 23:47:09.601379 kubelet[3272]: I0706 23:47:09.601268 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l26fk\" (UniqueName: \"kubernetes.io/projected/78ab7b94-a234-4f2d-a8c0-d858c7b5ec38-kube-api-access-l26fk\") pod \"coredns-674b8bbfcf-fxfhn\" (UID: \"78ab7b94-a234-4f2d-a8c0-d858c7b5ec38\") " pod="kube-system/coredns-674b8bbfcf-fxfhn" Jul 6 23:47:09.854423 containerd[1882]: time="2025-07-06T23:47:09.854097798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4sw4x,Uid:173ca19d-a274-458f-88d7-5b19d352c269,Namespace:kube-system,Attempt:0,}" Jul 6 23:47:09.858330 containerd[1882]: time="2025-07-06T23:47:09.858195908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fxfhn,Uid:78ab7b94-a234-4f2d-a8c0-d858c7b5ec38,Namespace:kube-system,Attempt:0,}" Jul 6 23:47:10.218800 kubelet[3272]: I0706 23:47:10.218381 3272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4mx7q" podStartSLOduration=7.91756533 podStartE2EDuration="13.218366517s" podCreationTimestamp="2025-07-06 23:46:57 +0000 UTC" firstStartedPulling="2025-07-06 23:46:58.663374532 +0000 UTC m=+6.841521768" 
lastFinishedPulling="2025-07-06 23:47:03.964175719 +0000 UTC m=+12.142322955" observedRunningTime="2025-07-06 23:47:10.218244321 +0000 UTC m=+18.396391557" watchObservedRunningTime="2025-07-06 23:47:10.218366517 +0000 UTC m=+18.396513761" Jul 6 23:47:11.360222 systemd-networkd[1583]: cilium_host: Link UP Jul 6 23:47:11.362382 systemd-networkd[1583]: cilium_net: Link UP Jul 6 23:47:11.363450 systemd-networkd[1583]: cilium_net: Gained carrier Jul 6 23:47:11.364231 systemd-networkd[1583]: cilium_host: Gained carrier Jul 6 23:47:11.378438 systemd-networkd[1583]: cilium_net: Gained IPv6LL Jul 6 23:47:11.496036 systemd-networkd[1583]: cilium_vxlan: Link UP Jul 6 23:47:11.496042 systemd-networkd[1583]: cilium_vxlan: Gained carrier Jul 6 23:47:11.861728 kernel: NET: Registered PF_ALG protocol family Jul 6 23:47:11.873905 systemd-networkd[1583]: cilium_host: Gained IPv6LL Jul 6 23:47:12.428687 systemd-networkd[1583]: lxc_health: Link UP Jul 6 23:47:12.440333 systemd-networkd[1583]: lxc_health: Gained carrier Jul 6 23:47:12.898942 systemd-networkd[1583]: lxc2f6164d1994a: Link UP Jul 6 23:47:12.903157 kernel: eth0: renamed from tmp62c11 Jul 6 23:47:12.917901 kernel: eth0: renamed from tmp937ee Jul 6 23:47:12.921276 systemd-networkd[1583]: lxcff94a81768df: Link UP Jul 6 23:47:12.921571 systemd-networkd[1583]: lxc2f6164d1994a: Gained carrier Jul 6 23:47:12.923839 systemd-networkd[1583]: lxcff94a81768df: Gained carrier Jul 6 23:47:13.297901 systemd-networkd[1583]: cilium_vxlan: Gained IPv6LL Jul 6 23:47:13.554840 systemd-networkd[1583]: lxc_health: Gained IPv6LL Jul 6 23:47:14.705929 systemd-networkd[1583]: lxc2f6164d1994a: Gained IPv6LL Jul 6 23:47:14.769875 systemd-networkd[1583]: lxcff94a81768df: Gained IPv6LL Jul 6 23:47:14.967787 kubelet[3272]: I0706 23:47:14.967064 3272 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:47:15.584720 containerd[1882]: time="2025-07-06T23:47:15.584302560Z" level=info msg="connecting to shim 937eecc2c2d713ec01d3ee2c79d2f4b8b4920f84bcfc86efb7961deac0d7c702" address="unix:///run/containerd/s/3a73b7195bf7f242eb9f121f7cb0625f09a482ae25adb4dd45d60c4eed649510" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:47:15.601045 containerd[1882]: time="2025-07-06T23:47:15.600981169Z" level=info msg="connecting to shim 62c11a7e4490cf94cc6e9243e46604fe6747d414896dc133e1cb40e786425131" address="unix:///run/containerd/s/c38232a33580a9f0fe1fcf7b6947f28c2e3ea02b95e9fc511ccb79c26e668318" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:47:15.620886 systemd[1]: Started cri-containerd-937eecc2c2d713ec01d3ee2c79d2f4b8b4920f84bcfc86efb7961deac0d7c702.scope - libcontainer container 937eecc2c2d713ec01d3ee2c79d2f4b8b4920f84bcfc86efb7961deac0d7c702. Jul 6 23:47:15.624232 systemd[1]: Started cri-containerd-62c11a7e4490cf94cc6e9243e46604fe6747d414896dc133e1cb40e786425131.scope - libcontainer container 62c11a7e4490cf94cc6e9243e46604fe6747d414896dc133e1cb40e786425131. 
Jul 6 23:47:15.672296 containerd[1882]: time="2025-07-06T23:47:15.672245512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fxfhn,Uid:78ab7b94-a234-4f2d-a8c0-d858c7b5ec38,Namespace:kube-system,Attempt:0,} returns sandbox id \"937eecc2c2d713ec01d3ee2c79d2f4b8b4920f84bcfc86efb7961deac0d7c702\"" Jul 6 23:47:15.680132 containerd[1882]: time="2025-07-06T23:47:15.679822720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4sw4x,Uid:173ca19d-a274-458f-88d7-5b19d352c269,Namespace:kube-system,Attempt:0,} returns sandbox id \"62c11a7e4490cf94cc6e9243e46604fe6747d414896dc133e1cb40e786425131\"" Jul 6 23:47:15.684837 containerd[1882]: time="2025-07-06T23:47:15.683949326Z" level=info msg="CreateContainer within sandbox \"937eecc2c2d713ec01d3ee2c79d2f4b8b4920f84bcfc86efb7961deac0d7c702\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:47:15.691976 containerd[1882]: time="2025-07-06T23:47:15.691846632Z" level=info msg="CreateContainer within sandbox \"62c11a7e4490cf94cc6e9243e46604fe6747d414896dc133e1cb40e786425131\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:47:15.719976 containerd[1882]: time="2025-07-06T23:47:15.719925229Z" level=info msg="Container d528315a6ab8ecac8640644ef8297b114ffdcb272fa34858ebfd1effcc02f3bb: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:47:15.727607 containerd[1882]: time="2025-07-06T23:47:15.727496748Z" level=info msg="Container 23a3cff9fac2331e905dbaa55fab728800d3a496e16f1dea8d1e01e62f81d499: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:47:15.749579 containerd[1882]: time="2025-07-06T23:47:15.749446441Z" level=info msg="CreateContainer within sandbox \"937eecc2c2d713ec01d3ee2c79d2f4b8b4920f84bcfc86efb7961deac0d7c702\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d528315a6ab8ecac8640644ef8297b114ffdcb272fa34858ebfd1effcc02f3bb\"" Jul 6 23:47:15.749988 containerd[1882]: time="2025-07-06T23:47:15.749962914Z" level=info msg="StartContainer for \"d528315a6ab8ecac8640644ef8297b114ffdcb272fa34858ebfd1effcc02f3bb\"" Jul 6 23:47:15.752391 containerd[1882]: time="2025-07-06T23:47:15.752355424Z" level=info msg="connecting to shim d528315a6ab8ecac8640644ef8297b114ffdcb272fa34858ebfd1effcc02f3bb" address="unix:///run/containerd/s/3a73b7195bf7f242eb9f121f7cb0625f09a482ae25adb4dd45d60c4eed649510" protocol=ttrpc version=3 Jul 6 23:47:15.754869 containerd[1882]: time="2025-07-06T23:47:15.754771655Z" level=info msg="CreateContainer within sandbox \"62c11a7e4490cf94cc6e9243e46604fe6747d414896dc133e1cb40e786425131\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"23a3cff9fac2331e905dbaa55fab728800d3a496e16f1dea8d1e01e62f81d499\"" Jul 6 23:47:15.755551 containerd[1882]: time="2025-07-06T23:47:15.755477550Z" level=info msg="StartContainer for \"23a3cff9fac2331e905dbaa55fab728800d3a496e16f1dea8d1e01e62f81d499\"" Jul 6 23:47:15.756183 containerd[1882]: time="2025-07-06T23:47:15.756085338Z" level=info msg="connecting to shim 23a3cff9fac2331e905dbaa55fab728800d3a496e16f1dea8d1e01e62f81d499" address="unix:///run/containerd/s/c38232a33580a9f0fe1fcf7b6947f28c2e3ea02b95e9fc511ccb79c26e668318" protocol=ttrpc version=3 Jul 6 23:47:15.772884 systemd[1]: Started cri-containerd-d528315a6ab8ecac8640644ef8297b114ffdcb272fa34858ebfd1effcc02f3bb.scope - libcontainer container d528315a6ab8ecac8640644ef8297b114ffdcb272fa34858ebfd1effcc02f3bb. 
Jul 6 23:47:15.776026 systemd[1]: Started cri-containerd-23a3cff9fac2331e905dbaa55fab728800d3a496e16f1dea8d1e01e62f81d499.scope - libcontainer container 23a3cff9fac2331e905dbaa55fab728800d3a496e16f1dea8d1e01e62f81d499. Jul 6 23:47:15.827696 containerd[1882]: time="2025-07-06T23:47:15.827639867Z" level=info msg="StartContainer for \"d528315a6ab8ecac8640644ef8297b114ffdcb272fa34858ebfd1effcc02f3bb\" returns successfully" Jul 6 23:47:15.829128 containerd[1882]: time="2025-07-06T23:47:15.828686533Z" level=info msg="StartContainer for \"23a3cff9fac2331e905dbaa55fab728800d3a496e16f1dea8d1e01e62f81d499\" returns successfully" Jul 6 23:47:16.234939 kubelet[3272]: I0706 23:47:16.234873 3272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4sw4x" podStartSLOduration=18.234832897 podStartE2EDuration="18.234832897s" podCreationTimestamp="2025-07-06 23:46:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:47:16.23367322 +0000 UTC m=+24.411820488" watchObservedRunningTime="2025-07-06 23:47:16.234832897 +0000 UTC m=+24.412980141" Jul 6 23:47:16.569219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3319798314.mount: Deactivated successfully. Jul 6 23:47:26.234876 kubelet[3272]: I0706 23:47:26.234806 3272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fxfhn" podStartSLOduration=28.234785785 podStartE2EDuration="28.234785785s" podCreationTimestamp="2025-07-06 23:46:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:47:16.253642118 +0000 UTC m=+24.431789354" watchObservedRunningTime="2025-07-06 23:47:26.234785785 +0000 UTC m=+34.412933021" Jul 6 23:48:27.434727 systemd[1]: Started sshd@7-10.200.20.39:22-10.200.16.10:55522.service - OpenSSH per-connection server daemon (10.200.16.10:55522). Jul 6 23:48:27.914953 sshd[4769]: Accepted publickey for core from 10.200.16.10 port 55522 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:48:27.916375 sshd-session[4769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:27.923110 systemd-logind[1853]: New session 10 of user core. Jul 6 23:48:27.926855 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 6 23:48:28.328327 sshd[4771]: Connection closed by 10.200.16.10 port 55522 Jul 6 23:48:28.329074 sshd-session[4769]: pam_unix(sshd:session): session closed for user core Jul 6 23:48:28.332839 systemd[1]: sshd@7-10.200.20.39:22-10.200.16.10:55522.service: Deactivated successfully. Jul 6 23:48:28.334581 systemd[1]: session-10.scope: Deactivated successfully. Jul 6 23:48:28.335354 systemd-logind[1853]: Session 10 logged out. Waiting for processes to exit. Jul 6 23:48:28.337037 systemd-logind[1853]: Removed session 10. Jul 6 23:48:33.415939 systemd[1]: Started sshd@8-10.200.20.39:22-10.200.16.10:35054.service - OpenSSH per-connection server daemon (10.200.16.10:35054). Jul 6 23:48:33.899595 sshd[4786]: Accepted publickey for core from 10.200.16.10 port 35054 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:48:33.900734 sshd-session[4786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:33.904856 systemd-logind[1853]: New session 11 of user core. 
Jul 6 23:48:33.913897 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 6 23:48:34.299724 sshd[4788]: Connection closed by 10.200.16.10 port 35054 Jul 6 23:48:34.300388 sshd-session[4786]: pam_unix(sshd:session): session closed for user core Jul 6 23:48:34.303862 systemd[1]: sshd@8-10.200.20.39:22-10.200.16.10:35054.service: Deactivated successfully. Jul 6 23:48:34.306085 systemd[1]: session-11.scope: Deactivated successfully. Jul 6 23:48:34.307138 systemd-logind[1853]: Session 11 logged out. Waiting for processes to exit. Jul 6 23:48:34.308549 systemd-logind[1853]: Removed session 11. Jul 6 23:48:39.391905 systemd[1]: Started sshd@9-10.200.20.39:22-10.200.16.10:35068.service - OpenSSH per-connection server daemon (10.200.16.10:35068). Jul 6 23:48:39.872571 sshd[4801]: Accepted publickey for core from 10.200.16.10 port 35068 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:48:39.873696 sshd-session[4801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:39.877599 systemd-logind[1853]: New session 12 of user core. Jul 6 23:48:39.883877 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 6 23:48:40.255014 sshd[4803]: Connection closed by 10.200.16.10 port 35068 Jul 6 23:48:40.255666 sshd-session[4801]: pam_unix(sshd:session): session closed for user core Jul 6 23:48:40.259121 systemd[1]: sshd@9-10.200.20.39:22-10.200.16.10:35068.service: Deactivated successfully. Jul 6 23:48:40.261050 systemd[1]: session-12.scope: Deactivated successfully. Jul 6 23:48:40.262263 systemd-logind[1853]: Session 12 logged out. Waiting for processes to exit. Jul 6 23:48:40.263940 systemd-logind[1853]: Removed session 12. Jul 6 23:48:45.345164 systemd[1]: Started sshd@10-10.200.20.39:22-10.200.16.10:39782.service - OpenSSH per-connection server daemon (10.200.16.10:39782). Jul 6 23:48:45.832654 sshd[4816]: Accepted publickey for core from 10.200.16.10 port 39782 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:48:45.833795 sshd-session[4816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:45.837691 systemd-logind[1853]: New session 13 of user core. Jul 6 23:48:45.846068 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 6 23:48:46.215284 sshd[4819]: Connection closed by 10.200.16.10 port 39782 Jul 6 23:48:46.215921 sshd-session[4816]: pam_unix(sshd:session): session closed for user core Jul 6 23:48:46.219487 systemd[1]: sshd@10-10.200.20.39:22-10.200.16.10:39782.service: Deactivated successfully. Jul 6 23:48:46.221921 systemd[1]: session-13.scope: Deactivated successfully. Jul 6 23:48:46.223629 systemd-logind[1853]: Session 13 logged out. Waiting for processes to exit. Jul 6 23:48:46.225126 systemd-logind[1853]: Removed session 13. Jul 6 23:48:46.302825 systemd[1]: Started sshd@11-10.200.20.39:22-10.200.16.10:39798.service - OpenSSH per-connection server daemon (10.200.16.10:39798). Jul 6 23:48:46.785317 sshd[4831]: Accepted publickey for core from 10.200.16.10 port 39798 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:48:46.786520 sshd-session[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:46.790389 systemd-logind[1853]: New session 14 of user core. Jul 6 23:48:46.796082 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jul 6 23:48:47.191834 sshd[4833]: Connection closed by 10.200.16.10 port 39798 Jul 6 23:48:47.194318 sshd-session[4831]: pam_unix(sshd:session): session closed for user core Jul 6 23:48:47.198349 systemd[1]: sshd@11-10.200.20.39:22-10.200.16.10:39798.service: Deactivated successfully. Jul 6 23:48:47.200264 systemd[1]: session-14.scope: Deactivated successfully. Jul 6 23:48:47.201744 systemd-logind[1853]: Session 14 logged out. Waiting for processes to exit. Jul 6 23:48:47.203302 systemd-logind[1853]: Removed session 14. Jul 6 23:48:47.289027 systemd[1]: Started sshd@12-10.200.20.39:22-10.200.16.10:39812.service - OpenSSH per-connection server daemon (10.200.16.10:39812). Jul 6 23:48:47.786995 sshd[4843]: Accepted publickey for core from 10.200.16.10 port 39812 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:48:47.788638 sshd-session[4843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:47.792693 systemd-logind[1853]: New session 15 of user core. Jul 6 23:48:47.801882 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 6 23:48:48.184545 sshd[4845]: Connection closed by 10.200.16.10 port 39812 Jul 6 23:48:48.185147 sshd-session[4843]: pam_unix(sshd:session): session closed for user core Jul 6 23:48:48.189244 systemd[1]: sshd@12-10.200.20.39:22-10.200.16.10:39812.service: Deactivated successfully. Jul 6 23:48:48.190730 systemd[1]: session-15.scope: Deactivated successfully. Jul 6 23:48:48.191955 systemd-logind[1853]: Session 15 logged out. Waiting for processes to exit. Jul 6 23:48:48.193198 systemd-logind[1853]: Removed session 15. Jul 6 23:48:53.272862 systemd[1]: Started sshd@13-10.200.20.39:22-10.200.16.10:45740.service - OpenSSH per-connection server daemon (10.200.16.10:45740). Jul 6 23:48:53.759055 sshd[4858]: Accepted publickey for core from 10.200.16.10 port 45740 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:48:53.760197 sshd-session[4858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:53.764187 systemd-logind[1853]: New session 16 of user core. Jul 6 23:48:53.779880 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 6 23:48:54.139762 sshd[4860]: Connection closed by 10.200.16.10 port 45740 Jul 6 23:48:54.140465 sshd-session[4858]: pam_unix(sshd:session): session closed for user core Jul 6 23:48:54.144033 systemd[1]: sshd@13-10.200.20.39:22-10.200.16.10:45740.service: Deactivated successfully. Jul 6 23:48:54.146617 systemd[1]: session-16.scope: Deactivated successfully. Jul 6 23:48:54.148033 systemd-logind[1853]: Session 16 logged out. Waiting for processes to exit. Jul 6 23:48:54.149800 systemd-logind[1853]: Removed session 16. Jul 6 23:48:54.235908 systemd[1]: Started sshd@14-10.200.20.39:22-10.200.16.10:45754.service - OpenSSH per-connection server daemon (10.200.16.10:45754). Jul 6 23:48:54.718331 sshd[4871]: Accepted publickey for core from 10.200.16.10 port 45754 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:48:54.719605 sshd-session[4871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:54.723789 systemd-logind[1853]: New session 17 of user core. Jul 6 23:48:54.730862 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jul 6 23:48:55.123579 sshd[4873]: Connection closed by 10.200.16.10 port 45754 Jul 6 23:48:55.123473 sshd-session[4871]: pam_unix(sshd:session): session closed for user core Jul 6 23:48:55.126871 systemd[1]: sshd@14-10.200.20.39:22-10.200.16.10:45754.service: Deactivated successfully. Jul 6 23:48:55.128592 systemd[1]: session-17.scope: Deactivated successfully. Jul 6 23:48:55.130180 systemd-logind[1853]: Session 17 logged out. Waiting for processes to exit. Jul 6 23:48:55.131608 systemd-logind[1853]: Removed session 17. Jul 6 23:48:55.213787 systemd[1]: Started sshd@15-10.200.20.39:22-10.200.16.10:45758.service - OpenSSH per-connection server daemon (10.200.16.10:45758). Jul 6 23:48:55.695174 sshd[4882]: Accepted publickey for core from 10.200.16.10 port 45758 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:48:55.696336 sshd-session[4882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:55.700328 systemd-logind[1853]: New session 18 of user core. Jul 6 23:48:55.715880 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 6 23:48:56.766500 sshd[4884]: Connection closed by 10.200.16.10 port 45758 Jul 6 23:48:56.766876 sshd-session[4882]: pam_unix(sshd:session): session closed for user core Jul 6 23:48:56.770614 systemd[1]: sshd@15-10.200.20.39:22-10.200.16.10:45758.service: Deactivated successfully. Jul 6 23:48:56.772551 systemd[1]: session-18.scope: Deactivated successfully. Jul 6 23:48:56.773404 systemd-logind[1853]: Session 18 logged out. Waiting for processes to exit. Jul 6 23:48:56.775258 systemd-logind[1853]: Removed session 18. Jul 6 23:48:56.860864 systemd[1]: Started sshd@16-10.200.20.39:22-10.200.16.10:45770.service - OpenSSH per-connection server daemon (10.200.16.10:45770). Jul 6 23:48:57.345443 sshd[4901]: Accepted publickey for core from 10.200.16.10 port 45770 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:48:57.348581 sshd-session[4901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:57.352796 systemd-logind[1853]: New session 19 of user core. Jul 6 23:48:57.358863 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 6 23:48:57.819815 sshd[4903]: Connection closed by 10.200.16.10 port 45770 Jul 6 23:48:57.820199 sshd-session[4901]: pam_unix(sshd:session): session closed for user core Jul 6 23:48:57.823344 systemd-logind[1853]: Session 19 logged out. Waiting for processes to exit. Jul 6 23:48:57.824215 systemd[1]: sshd@16-10.200.20.39:22-10.200.16.10:45770.service: Deactivated successfully. Jul 6 23:48:57.826020 systemd[1]: session-19.scope: Deactivated successfully. Jul 6 23:48:57.828630 systemd-logind[1853]: Removed session 19. Jul 6 23:48:57.909972 systemd[1]: Started sshd@17-10.200.20.39:22-10.200.16.10:45772.service - OpenSSH per-connection server daemon (10.200.16.10:45772). Jul 6 23:48:58.406803 sshd[4913]: Accepted publickey for core from 10.200.16.10 port 45772 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:48:58.408040 sshd-session[4913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:58.412232 systemd-logind[1853]: New session 20 of user core. Jul 6 23:48:58.423068 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jul 6 23:48:58.802495 sshd[4915]: Connection closed by 10.200.16.10 port 45772 Jul 6 23:48:58.803053 sshd-session[4913]: pam_unix(sshd:session): session closed for user core Jul 6 23:48:58.806838 systemd[1]: sshd@17-10.200.20.39:22-10.200.16.10:45772.service: Deactivated successfully. Jul 6 23:48:58.807032 systemd-logind[1853]: Session 20 logged out. Waiting for processes to exit. Jul 6 23:48:58.809604 systemd[1]: session-20.scope: Deactivated successfully. Jul 6 23:48:58.811431 systemd-logind[1853]: Removed session 20. Jul 6 23:49:03.891043 systemd[1]: Started sshd@18-10.200.20.39:22-10.200.16.10:57492.service - OpenSSH per-connection server daemon (10.200.16.10:57492). Jul 6 23:49:04.370442 sshd[4931]: Accepted publickey for core from 10.200.16.10 port 57492 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:49:04.371598 sshd-session[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:49:04.375828 systemd-logind[1853]: New session 21 of user core. Jul 6 23:49:04.382870 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 6 23:49:04.768831 sshd[4933]: Connection closed by 10.200.16.10 port 57492 Jul 6 23:49:04.769495 sshd-session[4931]: pam_unix(sshd:session): session closed for user core Jul 6 23:49:04.773222 systemd-logind[1853]: Session 21 logged out. Waiting for processes to exit. Jul 6 23:49:04.773287 systemd[1]: sshd@18-10.200.20.39:22-10.200.16.10:57492.service: Deactivated successfully. Jul 6 23:49:04.776585 systemd[1]: session-21.scope: Deactivated successfully. Jul 6 23:49:04.778762 systemd-logind[1853]: Removed session 21. Jul 6 23:49:09.858254 systemd[1]: Started sshd@19-10.200.20.39:22-10.200.16.10:43780.service - OpenSSH per-connection server daemon (10.200.16.10:43780). Jul 6 23:49:10.337021 sshd[4944]: Accepted publickey for core from 10.200.16.10 port 43780 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:49:10.338167 sshd-session[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:49:10.342874 systemd-logind[1853]: New session 22 of user core. Jul 6 23:49:10.351883 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 6 23:49:10.750748 sshd[4946]: Connection closed by 10.200.16.10 port 43780 Jul 6 23:49:10.751350 sshd-session[4944]: pam_unix(sshd:session): session closed for user core Jul 6 23:49:10.754791 systemd[1]: sshd@19-10.200.20.39:22-10.200.16.10:43780.service: Deactivated successfully. Jul 6 23:49:10.757155 systemd[1]: session-22.scope: Deactivated successfully. Jul 6 23:49:10.758444 systemd-logind[1853]: Session 22 logged out. Waiting for processes to exit. Jul 6 23:49:10.759882 systemd-logind[1853]: Removed session 22. Jul 6 23:49:10.839287 systemd[1]: Started sshd@20-10.200.20.39:22-10.200.16.10:43796.service - OpenSSH per-connection server daemon (10.200.16.10:43796). Jul 6 23:49:11.317081 sshd[4958]: Accepted publickey for core from 10.200.16.10 port 43796 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:49:11.318333 sshd-session[4958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:49:11.323143 systemd-logind[1853]: New session 23 of user core. Jul 6 23:49:11.325849 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 6 23:49:13.052565 containerd[1882]: time="2025-07-06T23:49:13.051866851Z" level=info msg="StopContainer for \"51d4aded514cc283157853d7e0a4217185632617ee2b278c0ea2d482eae0a53e\" with timeout 30 (s)" Jul 6 23:49:13.052565 containerd[1882]: time="2025-07-06T23:49:13.052469605Z" level=info msg="Stop container \"51d4aded514cc283157853d7e0a4217185632617ee2b278c0ea2d482eae0a53e\" with signal terminated" Jul 6 23:49:13.054630 containerd[1882]: time="2025-07-06T23:49:13.054585237Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:49:13.064445 containerd[1882]: time="2025-07-06T23:49:13.064073974Z" level=info msg="TaskExit event in podsandbox handler container_id:\"09f98a2c27cfb185063927d4284d7a1abe7a10eee97d0bc02f51bb6380fb1f0d\" id:\"c943af5376761d98b5bba0b13b661a0d81c9f7326435590023d3759f499b8f7d\" pid:4978 exited_at:{seconds:1751845753 nanos:63050447}" Jul 6 23:49:13.068092 containerd[1882]: time="2025-07-06T23:49:13.068060503Z" level=info msg="StopContainer for \"09f98a2c27cfb185063927d4284d7a1abe7a10eee97d0bc02f51bb6380fb1f0d\" with timeout 2 (s)" Jul 6 23:49:13.068455 containerd[1882]: time="2025-07-06T23:49:13.068435187Z" level=info msg="Stop container \"09f98a2c27cfb185063927d4284d7a1abe7a10eee97d0bc02f51bb6380fb1f0d\" with signal terminated" Jul 6 23:49:13.069358 systemd[1]: cri-containerd-51d4aded514cc283157853d7e0a4217185632617ee2b278c0ea2d482eae0a53e.scope: Deactivated successfully. Jul 6 23:49:13.071800 containerd[1882]: time="2025-07-06T23:49:13.071764992Z" level=info msg="TaskExit event in podsandbox handler container_id:\"51d4aded514cc283157853d7e0a4217185632617ee2b278c0ea2d482eae0a53e\" id:\"51d4aded514cc283157853d7e0a4217185632617ee2b278c0ea2d482eae0a53e\" pid:3962 exited_at:{seconds:1751845753 nanos:71423662}" Jul 6 23:49:13.071973 containerd[1882]: time="2025-07-06T23:49:13.071935517Z" level=info msg="received exit event container_id:\"51d4aded514cc283157853d7e0a4217185632617ee2b278c0ea2d482eae0a53e\" id:\"51d4aded514cc283157853d7e0a4217185632617ee2b278c0ea2d482eae0a53e\" pid:3962 exited_at:{seconds:1751845753 nanos:71423662}" Jul 6 23:49:13.080635 systemd-networkd[1583]: lxc_health: Link DOWN Jul 6 23:49:13.080642 systemd-networkd[1583]: lxc_health: Lost carrier Jul 6 23:49:13.099843 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51d4aded514cc283157853d7e0a4217185632617ee2b278c0ea2d482eae0a53e-rootfs.mount: Deactivated successfully. Jul 6 23:49:13.100165 containerd[1882]: time="2025-07-06T23:49:13.099372361Z" level=info msg="received exit event container_id:\"09f98a2c27cfb185063927d4284d7a1abe7a10eee97d0bc02f51bb6380fb1f0d\" id:\"09f98a2c27cfb185063927d4284d7a1abe7a10eee97d0bc02f51bb6380fb1f0d\" pid:4072 exited_at:{seconds:1751845753 nanos:99162202}" Jul 6 23:49:13.100501 containerd[1882]: time="2025-07-06T23:49:13.099461771Z" level=info msg="TaskExit event in podsandbox handler container_id:\"09f98a2c27cfb185063927d4284d7a1abe7a10eee97d0bc02f51bb6380fb1f0d\" id:\"09f98a2c27cfb185063927d4284d7a1abe7a10eee97d0bc02f51bb6380fb1f0d\" pid:4072 exited_at:{seconds:1751845753 nanos:99162202}" Jul 6 23:49:13.101082 systemd[1]: cri-containerd-09f98a2c27cfb185063927d4284d7a1abe7a10eee97d0bc02f51bb6380fb1f0d.scope: Deactivated successfully. 
Jul 6 23:49:13.101660 systemd[1]: cri-containerd-09f98a2c27cfb185063927d4284d7a1abe7a10eee97d0bc02f51bb6380fb1f0d.scope: Consumed 4.609s CPU time, 124.3M memory peak, 152K read from disk, 12.9M written to disk. Jul 6 23:49:13.117830 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09f98a2c27cfb185063927d4284d7a1abe7a10eee97d0bc02f51bb6380fb1f0d-rootfs.mount: Deactivated successfully. Jul 6 23:49:13.167490 containerd[1882]: time="2025-07-06T23:49:13.167441289Z" level=info msg="StopContainer for \"09f98a2c27cfb185063927d4284d7a1abe7a10eee97d0bc02f51bb6380fb1f0d\" returns successfully" Jul 6 23:49:13.168087 containerd[1882]: time="2025-07-06T23:49:13.168056291Z" level=info msg="StopContainer for \"51d4aded514cc283157853d7e0a4217185632617ee2b278c0ea2d482eae0a53e\" returns successfully" Jul 6 23:49:13.168474 containerd[1882]: time="2025-07-06T23:49:13.168446959Z" level=info msg="StopPodSandbox for \"1e6f7a7ccf4f6dbae6ac2e336af24a68cf013d791718f000a0e2cd5eeb83cc95\"" Jul 6 23:49:13.168616 containerd[1882]: time="2025-07-06T23:49:13.168599124Z" level=info msg="Container to stop \"09f98a2c27cfb185063927d4284d7a1abe7a10eee97d0bc02f51bb6380fb1f0d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:49:13.168684 containerd[1882]: time="2025-07-06T23:49:13.168672758Z" level=info msg="Container to stop \"fd6153cdf4c09c045b4ac89c0bd925286127117aebd20991b5288bd2d3095333\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:49:13.168764 containerd[1882]: time="2025-07-06T23:49:13.168751873Z" level=info msg="Container to stop \"545245a6dc8dc382f9ee0b0d0b61888d996e663f53bf62fe9526e494e224ea94\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:49:13.168817 containerd[1882]: time="2025-07-06T23:49:13.168808042Z" level=info msg="Container to stop \"5c49923edab74781a961b78d4c778c64eb5b5c8657c29400ed30790047eb988f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:49:13.168999 containerd[1882]: time="2025-07-06T23:49:13.168849732Z" level=info msg="Container to stop \"c6caec2589e12b5232e308772f37d728e402775ba4c123b87f36e8ff496b1ae2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:49:13.168999 containerd[1882]: time="2025-07-06T23:49:13.168512289Z" level=info msg="StopPodSandbox for \"2daaf30177d0477c00b6c008bcf38829c9f7337c8b3ccbbc13f66b668df4e98a\"" Jul 6 23:49:13.168999 containerd[1882]: time="2025-07-06T23:49:13.168901837Z" level=info msg="Container to stop \"51d4aded514cc283157853d7e0a4217185632617ee2b278c0ea2d482eae0a53e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:49:13.174268 systemd[1]: cri-containerd-1e6f7a7ccf4f6dbae6ac2e336af24a68cf013d791718f000a0e2cd5eeb83cc95.scope: Deactivated successfully. Jul 6 23:49:13.176389 systemd[1]: cri-containerd-2daaf30177d0477c00b6c008bcf38829c9f7337c8b3ccbbc13f66b668df4e98a.scope: Deactivated successfully. Jul 6 23:49:13.176952 containerd[1882]: time="2025-07-06T23:49:13.176891960Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1e6f7a7ccf4f6dbae6ac2e336af24a68cf013d791718f000a0e2cd5eeb83cc95\" id:\"1e6f7a7ccf4f6dbae6ac2e336af24a68cf013d791718f000a0e2cd5eeb83cc95\" pid:3636 exit_status:137 exited_at:{seconds:1751845753 nanos:174886347}" Jul 6 23:49:13.198591 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2daaf30177d0477c00b6c008bcf38829c9f7337c8b3ccbbc13f66b668df4e98a-rootfs.mount: Deactivated successfully. 
Jul 6 23:49:13.203979 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e6f7a7ccf4f6dbae6ac2e336af24a68cf013d791718f000a0e2cd5eeb83cc95-rootfs.mount: Deactivated successfully. Jul 6 23:49:13.217741 containerd[1882]: time="2025-07-06T23:49:13.217508717Z" level=info msg="shim disconnected" id=1e6f7a7ccf4f6dbae6ac2e336af24a68cf013d791718f000a0e2cd5eeb83cc95 namespace=k8s.io Jul 6 23:49:13.217741 containerd[1882]: time="2025-07-06T23:49:13.217537734Z" level=warning msg="cleaning up after shim disconnected" id=1e6f7a7ccf4f6dbae6ac2e336af24a68cf013d791718f000a0e2cd5eeb83cc95 namespace=k8s.io Jul 6 23:49:13.217741 containerd[1882]: time="2025-07-06T23:49:13.217562822Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:49:13.219968 containerd[1882]: time="2025-07-06T23:49:13.219787642Z" level=info msg="shim disconnected" id=2daaf30177d0477c00b6c008bcf38829c9f7337c8b3ccbbc13f66b668df4e98a namespace=k8s.io Jul 6 23:49:13.219968 containerd[1882]: time="2025-07-06T23:49:13.219825979Z" level=warning msg="cleaning up after shim disconnected" id=2daaf30177d0477c00b6c008bcf38829c9f7337c8b3ccbbc13f66b668df4e98a namespace=k8s.io Jul 6 23:49:13.219968 containerd[1882]: time="2025-07-06T23:49:13.219851172Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:49:13.229012 containerd[1882]: time="2025-07-06T23:49:13.228971658Z" level=info msg="received exit event sandbox_id:\"2daaf30177d0477c00b6c008bcf38829c9f7337c8b3ccbbc13f66b668df4e98a\" exit_status:137 exited_at:{seconds:1751845753 nanos:176288126}" Jul 6 23:49:13.229356 containerd[1882]: time="2025-07-06T23:49:13.229154519Z" level=info msg="received exit event sandbox_id:\"1e6f7a7ccf4f6dbae6ac2e336af24a68cf013d791718f000a0e2cd5eeb83cc95\" exit_status:137 exited_at:{seconds:1751845753 nanos:174886347}" Jul 6 23:49:13.231084 containerd[1882]: time="2025-07-06T23:49:13.229577660Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2daaf30177d0477c00b6c008bcf38829c9f7337c8b3ccbbc13f66b668df4e98a\" id:\"2daaf30177d0477c00b6c008bcf38829c9f7337c8b3ccbbc13f66b668df4e98a\" pid:3629 exit_status:137 exited_at:{seconds:1751845753 nanos:176288126}" Jul 6 23:49:13.230893 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2daaf30177d0477c00b6c008bcf38829c9f7337c8b3ccbbc13f66b668df4e98a-shm.mount: Deactivated successfully. 
Jul 6 23:49:13.232358 containerd[1882]: time="2025-07-06T23:49:13.231856969Z" level=info msg="TearDown network for sandbox \"2daaf30177d0477c00b6c008bcf38829c9f7337c8b3ccbbc13f66b668df4e98a\" successfully" Jul 6 23:49:13.232358 containerd[1882]: time="2025-07-06T23:49:13.232239509Z" level=info msg="StopPodSandbox for \"2daaf30177d0477c00b6c008bcf38829c9f7337c8b3ccbbc13f66b668df4e98a\" returns successfully" Jul 6 23:49:13.232358 containerd[1882]: time="2025-07-06T23:49:13.232160131Z" level=info msg="TearDown network for sandbox \"1e6f7a7ccf4f6dbae6ac2e336af24a68cf013d791718f000a0e2cd5eeb83cc95\" successfully" Jul 6 23:49:13.232358 containerd[1882]: time="2025-07-06T23:49:13.232328552Z" level=info msg="StopPodSandbox for \"1e6f7a7ccf4f6dbae6ac2e336af24a68cf013d791718f000a0e2cd5eeb83cc95\" returns successfully" Jul 6 23:49:13.363503 kubelet[3272]: I0706 23:49:13.362805 3272 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-xtables-lock\") pod \"fba76c19-3ff4-44c4-9758-45a547d100b2\" (UID: \"fba76c19-3ff4-44c4-9758-45a547d100b2\") " Jul 6 23:49:13.363503 kubelet[3272]: I0706 23:49:13.362851 3272 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fba76c19-3ff4-44c4-9758-45a547d100b2-clustermesh-secrets\") pod \"fba76c19-3ff4-44c4-9758-45a547d100b2\" (UID: \"fba76c19-3ff4-44c4-9758-45a547d100b2\") " Jul 6 23:49:13.363503 kubelet[3272]: I0706 23:49:13.362861 3272 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-cilium-cgroup\") pod \"fba76c19-3ff4-44c4-9758-45a547d100b2\" (UID: \"fba76c19-3ff4-44c4-9758-45a547d100b2\") " Jul 6 23:49:13.363503 kubelet[3272]: I0706 23:49:13.362870 3272 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-host-proc-sys-net\") pod \"fba76c19-3ff4-44c4-9758-45a547d100b2\" (UID: \"fba76c19-3ff4-44c4-9758-45a547d100b2\") " Jul 6 23:49:13.363503 kubelet[3272]: I0706 23:49:13.362887 3272 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrk7j\" (UniqueName: \"kubernetes.io/projected/fba76c19-3ff4-44c4-9758-45a547d100b2-kube-api-access-hrk7j\") pod \"fba76c19-3ff4-44c4-9758-45a547d100b2\" (UID: \"fba76c19-3ff4-44c4-9758-45a547d100b2\") " Jul 6 23:49:13.363503 kubelet[3272]: I0706 23:49:13.362898 3272 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-lib-modules\") pod \"fba76c19-3ff4-44c4-9758-45a547d100b2\" (UID: \"fba76c19-3ff4-44c4-9758-45a547d100b2\") " Jul 6 23:49:13.363965 kubelet[3272]: I0706 23:49:13.362911 3272 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf88a9f0-831b-4530-9bb8-e8176f95342c-cilium-config-path\") pod \"cf88a9f0-831b-4530-9bb8-e8176f95342c\" (UID: \"cf88a9f0-831b-4530-9bb8-e8176f95342c\") " Jul 6 23:49:13.363965 kubelet[3272]: I0706 23:49:13.362946 3272 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-etc-cni-netd\") pod 
\"fba76c19-3ff4-44c4-9758-45a547d100b2\" (UID: \"fba76c19-3ff4-44c4-9758-45a547d100b2\") " Jul 6 23:49:13.363965 kubelet[3272]: I0706 23:49:13.362956 3272 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-hostproc\") pod \"fba76c19-3ff4-44c4-9758-45a547d100b2\" (UID: \"fba76c19-3ff4-44c4-9758-45a547d100b2\") " Jul 6 23:49:13.363965 kubelet[3272]: I0706 23:49:13.362967 3272 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fba76c19-3ff4-44c4-9758-45a547d100b2-hubble-tls\") pod \"fba76c19-3ff4-44c4-9758-45a547d100b2\" (UID: \"fba76c19-3ff4-44c4-9758-45a547d100b2\") " Jul 6 23:49:13.363965 kubelet[3272]: I0706 23:49:13.362975 3272 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-cni-path\") pod \"fba76c19-3ff4-44c4-9758-45a547d100b2\" (UID: \"fba76c19-3ff4-44c4-9758-45a547d100b2\") " Jul 6 23:49:13.363965 kubelet[3272]: I0706 23:49:13.362985 3272 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-cilium-run\") pod \"fba76c19-3ff4-44c4-9758-45a547d100b2\" (UID: \"fba76c19-3ff4-44c4-9758-45a547d100b2\") " Jul 6 23:49:13.364055 kubelet[3272]: I0706 23:49:13.362993 3272 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-bpf-maps\") pod \"fba76c19-3ff4-44c4-9758-45a547d100b2\" (UID: \"fba76c19-3ff4-44c4-9758-45a547d100b2\") " Jul 6 23:49:13.364055 kubelet[3272]: I0706 23:49:13.363008 3272 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nsdk\" (UniqueName: \"kubernetes.io/projected/cf88a9f0-831b-4530-9bb8-e8176f95342c-kube-api-access-2nsdk\") pod \"cf88a9f0-831b-4530-9bb8-e8176f95342c\" (UID: \"cf88a9f0-831b-4530-9bb8-e8176f95342c\") " Jul 6 23:49:13.364055 kubelet[3272]: I0706 23:49:13.363023 3272 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fba76c19-3ff4-44c4-9758-45a547d100b2-cilium-config-path\") pod \"fba76c19-3ff4-44c4-9758-45a547d100b2\" (UID: \"fba76c19-3ff4-44c4-9758-45a547d100b2\") " Jul 6 23:49:13.364055 kubelet[3272]: I0706 23:49:13.363032 3272 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-host-proc-sys-kernel\") pod \"fba76c19-3ff4-44c4-9758-45a547d100b2\" (UID: \"fba76c19-3ff4-44c4-9758-45a547d100b2\") " Jul 6 23:49:13.364055 kubelet[3272]: I0706 23:49:13.363123 3272 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fba76c19-3ff4-44c4-9758-45a547d100b2" (UID: "fba76c19-3ff4-44c4-9758-45a547d100b2"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:49:13.364126 kubelet[3272]: I0706 23:49:13.363175 3272 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fba76c19-3ff4-44c4-9758-45a547d100b2" (UID: "fba76c19-3ff4-44c4-9758-45a547d100b2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:49:13.364126 kubelet[3272]: I0706 23:49:13.363478 3272 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-hostproc" (OuterVolumeSpecName: "hostproc") pod "fba76c19-3ff4-44c4-9758-45a547d100b2" (UID: "fba76c19-3ff4-44c4-9758-45a547d100b2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:49:13.364126 kubelet[3272]: I0706 23:49:13.363532 3272 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fba76c19-3ff4-44c4-9758-45a547d100b2" (UID: "fba76c19-3ff4-44c4-9758-45a547d100b2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:49:13.364126 kubelet[3272]: I0706 23:49:13.363546 3272 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fba76c19-3ff4-44c4-9758-45a547d100b2" (UID: "fba76c19-3ff4-44c4-9758-45a547d100b2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:49:13.365836 kubelet[3272]: I0706 23:49:13.365761 3272 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fba76c19-3ff4-44c4-9758-45a547d100b2" (UID: "fba76c19-3ff4-44c4-9758-45a547d100b2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:49:13.366491 kubelet[3272]: I0706 23:49:13.366465 3272 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fba76c19-3ff4-44c4-9758-45a547d100b2" (UID: "fba76c19-3ff4-44c4-9758-45a547d100b2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:49:13.367717 kubelet[3272]: I0706 23:49:13.366531 3272 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-cni-path" (OuterVolumeSpecName: "cni-path") pod "fba76c19-3ff4-44c4-9758-45a547d100b2" (UID: "fba76c19-3ff4-44c4-9758-45a547d100b2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:49:13.367717 kubelet[3272]: I0706 23:49:13.366546 3272 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fba76c19-3ff4-44c4-9758-45a547d100b2" (UID: "fba76c19-3ff4-44c4-9758-45a547d100b2"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:49:13.367717 kubelet[3272]: I0706 23:49:13.366554 3272 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fba76c19-3ff4-44c4-9758-45a547d100b2" (UID: "fba76c19-3ff4-44c4-9758-45a547d100b2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:49:13.367717 kubelet[3272]: I0706 23:49:13.366781 3272 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fba76c19-3ff4-44c4-9758-45a547d100b2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fba76c19-3ff4-44c4-9758-45a547d100b2" (UID: "fba76c19-3ff4-44c4-9758-45a547d100b2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 6 23:49:13.368484 kubelet[3272]: I0706 23:49:13.368453 3272 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fba76c19-3ff4-44c4-9758-45a547d100b2-kube-api-access-hrk7j" (OuterVolumeSpecName: "kube-api-access-hrk7j") pod "fba76c19-3ff4-44c4-9758-45a547d100b2" (UID: "fba76c19-3ff4-44c4-9758-45a547d100b2"). InnerVolumeSpecName "kube-api-access-hrk7j". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:49:13.369388 kubelet[3272]: I0706 23:49:13.369357 3272 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fba76c19-3ff4-44c4-9758-45a547d100b2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fba76c19-3ff4-44c4-9758-45a547d100b2" (UID: "fba76c19-3ff4-44c4-9758-45a547d100b2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:49:13.369841 kubelet[3272]: I0706 23:49:13.369782 3272 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf88a9f0-831b-4530-9bb8-e8176f95342c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cf88a9f0-831b-4530-9bb8-e8176f95342c" (UID: "cf88a9f0-831b-4530-9bb8-e8176f95342c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:49:13.370075 kubelet[3272]: I0706 23:49:13.370043 3272 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fba76c19-3ff4-44c4-9758-45a547d100b2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fba76c19-3ff4-44c4-9758-45a547d100b2" (UID: "fba76c19-3ff4-44c4-9758-45a547d100b2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:49:13.370328 kubelet[3272]: I0706 23:49:13.370295 3272 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf88a9f0-831b-4530-9bb8-e8176f95342c-kube-api-access-2nsdk" (OuterVolumeSpecName: "kube-api-access-2nsdk") pod "cf88a9f0-831b-4530-9bb8-e8176f95342c" (UID: "cf88a9f0-831b-4530-9bb8-e8176f95342c"). InnerVolumeSpecName "kube-api-access-2nsdk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:49:13.434351 kubelet[3272]: I0706 23:49:13.434318 3272 scope.go:117] "RemoveContainer" containerID="09f98a2c27cfb185063927d4284d7a1abe7a10eee97d0bc02f51bb6380fb1f0d" Jul 6 23:49:13.439089 containerd[1882]: time="2025-07-06T23:49:13.437850504Z" level=info msg="RemoveContainer for \"09f98a2c27cfb185063927d4284d7a1abe7a10eee97d0bc02f51bb6380fb1f0d\"" Jul 6 23:49:13.440989 systemd[1]: Removed slice kubepods-burstable-podfba76c19_3ff4_44c4_9758_45a547d100b2.slice - libcontainer container kubepods-burstable-podfba76c19_3ff4_44c4_9758_45a547d100b2.slice. Jul 6 23:49:13.442784 systemd[1]: kubepods-burstable-podfba76c19_3ff4_44c4_9758_45a547d100b2.slice: Consumed 4.677s CPU time, 124.7M memory peak, 152K read from disk, 12.9M written to disk. Jul 6 23:49:13.444505 systemd[1]: Removed slice kubepods-besteffort-podcf88a9f0_831b_4530_9bb8_e8176f95342c.slice - libcontainer container kubepods-besteffort-podcf88a9f0_831b_4530_9bb8_e8176f95342c.slice. Jul 6 23:49:13.453735 containerd[1882]: time="2025-07-06T23:49:13.453055471Z" level=info msg="RemoveContainer for \"09f98a2c27cfb185063927d4284d7a1abe7a10eee97d0bc02f51bb6380fb1f0d\" returns successfully" Jul 6 23:49:13.455104 kubelet[3272]: I0706 23:49:13.455076 3272 scope.go:117] "RemoveContainer" containerID="c6caec2589e12b5232e308772f37d728e402775ba4c123b87f36e8ff496b1ae2" Jul 6 23:49:13.459217 containerd[1882]: time="2025-07-06T23:49:13.459187873Z" level=info msg="RemoveContainer for \"c6caec2589e12b5232e308772f37d728e402775ba4c123b87f36e8ff496b1ae2\"" Jul 6 23:49:13.463944 kubelet[3272]: I0706 23:49:13.463912 3272 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-bpf-maps\") on node \"ci-4344.1.1-a-5eeae23dc4\" DevicePath \"\"" Jul 6 23:49:13.464100 kubelet[3272]: I0706 23:49:13.464088 3272 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2nsdk\" (UniqueName: \"kubernetes.io/projected/cf88a9f0-831b-4530-9bb8-e8176f95342c-kube-api-access-2nsdk\") on node \"ci-4344.1.1-a-5eeae23dc4\" DevicePath \"\"" Jul 6 23:49:13.464166 kubelet[3272]: I0706 23:49:13.464158 3272 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fba76c19-3ff4-44c4-9758-45a547d100b2-cilium-config-path\") on node \"ci-4344.1.1-a-5eeae23dc4\" DevicePath \"\"" Jul 6 23:49:13.464277 kubelet[3272]: I0706 23:49:13.464266 3272 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-host-proc-sys-kernel\") on node \"ci-4344.1.1-a-5eeae23dc4\" DevicePath \"\"" Jul 6 23:49:13.464328 kubelet[3272]: I0706 23:49:13.464319 3272 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-xtables-lock\") on node \"ci-4344.1.1-a-5eeae23dc4\" DevicePath \"\"" Jul 6 23:49:13.464401 kubelet[3272]: I0706 23:49:13.464391 3272 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fba76c19-3ff4-44c4-9758-45a547d100b2-clustermesh-secrets\") on node \"ci-4344.1.1-a-5eeae23dc4\" DevicePath \"\"" Jul 6 23:49:13.465122 kubelet[3272]: I0706 23:49:13.465104 3272 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-cilium-cgroup\") on node 
\"ci-4344.1.1-a-5eeae23dc4\" DevicePath \"\"" Jul 6 23:49:13.465413 kubelet[3272]: I0706 23:49:13.465399 3272 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-host-proc-sys-net\") on node \"ci-4344.1.1-a-5eeae23dc4\" DevicePath \"\"" Jul 6 23:49:13.465543 kubelet[3272]: I0706 23:49:13.465481 3272 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hrk7j\" (UniqueName: \"kubernetes.io/projected/fba76c19-3ff4-44c4-9758-45a547d100b2-kube-api-access-hrk7j\") on node \"ci-4344.1.1-a-5eeae23dc4\" DevicePath \"\"" Jul 6 23:49:13.465543 kubelet[3272]: I0706 23:49:13.465494 3272 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-lib-modules\") on node \"ci-4344.1.1-a-5eeae23dc4\" DevicePath \"\"" Jul 6 23:49:13.465543 kubelet[3272]: I0706 23:49:13.465501 3272 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf88a9f0-831b-4530-9bb8-e8176f95342c-cilium-config-path\") on node \"ci-4344.1.1-a-5eeae23dc4\" DevicePath \"\"" Jul 6 23:49:13.465543 kubelet[3272]: I0706 23:49:13.465506 3272 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-etc-cni-netd\") on node \"ci-4344.1.1-a-5eeae23dc4\" DevicePath \"\"" Jul 6 23:49:13.465543 kubelet[3272]: I0706 23:49:13.465513 3272 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-hostproc\") on node \"ci-4344.1.1-a-5eeae23dc4\" DevicePath \"\"" Jul 6 23:49:13.465543 kubelet[3272]: I0706 23:49:13.465519 3272 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fba76c19-3ff4-44c4-9758-45a547d100b2-hubble-tls\") on node \"ci-4344.1.1-a-5eeae23dc4\" DevicePath \"\"" Jul 6 23:49:13.465543 kubelet[3272]: I0706 23:49:13.465524 3272 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-cni-path\") on node \"ci-4344.1.1-a-5eeae23dc4\" DevicePath \"\"" Jul 6 23:49:13.465543 kubelet[3272]: I0706 23:49:13.465529 3272 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fba76c19-3ff4-44c4-9758-45a547d100b2-cilium-run\") on node \"ci-4344.1.1-a-5eeae23dc4\" DevicePath \"\"" Jul 6 23:49:13.474171 containerd[1882]: time="2025-07-06T23:49:13.474063862Z" level=info msg="RemoveContainer for \"c6caec2589e12b5232e308772f37d728e402775ba4c123b87f36e8ff496b1ae2\" returns successfully" Jul 6 23:49:13.474412 kubelet[3272]: I0706 23:49:13.474384 3272 scope.go:117] "RemoveContainer" containerID="545245a6dc8dc382f9ee0b0d0b61888d996e663f53bf62fe9526e494e224ea94" Jul 6 23:49:13.476521 containerd[1882]: time="2025-07-06T23:49:13.476446535Z" level=info msg="RemoveContainer for \"545245a6dc8dc382f9ee0b0d0b61888d996e663f53bf62fe9526e494e224ea94\"" Jul 6 23:49:13.485787 containerd[1882]: time="2025-07-06T23:49:13.485714657Z" level=info msg="RemoveContainer for \"545245a6dc8dc382f9ee0b0d0b61888d996e663f53bf62fe9526e494e224ea94\" returns successfully" Jul 6 23:49:13.486069 kubelet[3272]: I0706 23:49:13.485933 3272 scope.go:117] "RemoveContainer" containerID="5c49923edab74781a961b78d4c778c64eb5b5c8657c29400ed30790047eb988f" Jul 6 23:49:13.487263 
containerd[1882]: time="2025-07-06T23:49:13.487235687Z" level=info msg="RemoveContainer for \"5c49923edab74781a961b78d4c778c64eb5b5c8657c29400ed30790047eb988f\"" Jul 6 23:49:13.497494 containerd[1882]: time="2025-07-06T23:49:13.497454486Z" level=info msg="RemoveContainer for \"5c49923edab74781a961b78d4c778c64eb5b5c8657c29400ed30790047eb988f\" returns successfully" Jul 6 23:49:13.497813 kubelet[3272]: I0706 23:49:13.497787 3272 scope.go:117] "RemoveContainer" containerID="fd6153cdf4c09c045b4ac89c0bd925286127117aebd20991b5288bd2d3095333" Jul 6 23:49:13.499225 containerd[1882]: time="2025-07-06T23:49:13.499163722Z" level=info msg="RemoveContainer for \"fd6153cdf4c09c045b4ac89c0bd925286127117aebd20991b5288bd2d3095333\"" Jul 6 23:49:13.509347 containerd[1882]: time="2025-07-06T23:49:13.509314375Z" level=info msg="RemoveContainer for \"fd6153cdf4c09c045b4ac89c0bd925286127117aebd20991b5288bd2d3095333\" returns successfully" Jul 6 23:49:13.509729 kubelet[3272]: I0706 23:49:13.509691 3272 scope.go:117] "RemoveContainer" containerID="09f98a2c27cfb185063927d4284d7a1abe7a10eee97d0bc02f51bb6380fb1f0d" Jul 6 23:49:13.509986 containerd[1882]: time="2025-07-06T23:49:13.509953658Z" level=error msg="ContainerStatus for \"09f98a2c27cfb185063927d4284d7a1abe7a10eee97d0bc02f51bb6380fb1f0d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"09f98a2c27cfb185063927d4284d7a1abe7a10eee97d0bc02f51bb6380fb1f0d\": not found" Jul 6 23:49:13.510267 kubelet[3272]: E0706 23:49:13.510232 3272 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"09f98a2c27cfb185063927d4284d7a1abe7a10eee97d0bc02f51bb6380fb1f0d\": not found" containerID="09f98a2c27cfb185063927d4284d7a1abe7a10eee97d0bc02f51bb6380fb1f0d" Jul 6 23:49:13.510391 kubelet[3272]: I0706 23:49:13.510352 3272 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"09f98a2c27cfb185063927d4284d7a1abe7a10eee97d0bc02f51bb6380fb1f0d"} err="failed to get container status \"09f98a2c27cfb185063927d4284d7a1abe7a10eee97d0bc02f51bb6380fb1f0d\": rpc error: code = NotFound desc = an error occurred when try to find container \"09f98a2c27cfb185063927d4284d7a1abe7a10eee97d0bc02f51bb6380fb1f0d\": not found" Jul 6 23:49:13.510448 kubelet[3272]: I0706 23:49:13.510440 3272 scope.go:117] "RemoveContainer" containerID="c6caec2589e12b5232e308772f37d728e402775ba4c123b87f36e8ff496b1ae2" Jul 6 23:49:13.510664 containerd[1882]: time="2025-07-06T23:49:13.510640303Z" level=error msg="ContainerStatus for \"c6caec2589e12b5232e308772f37d728e402775ba4c123b87f36e8ff496b1ae2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c6caec2589e12b5232e308772f37d728e402775ba4c123b87f36e8ff496b1ae2\": not found" Jul 6 23:49:13.510955 kubelet[3272]: E0706 23:49:13.510924 3272 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c6caec2589e12b5232e308772f37d728e402775ba4c123b87f36e8ff496b1ae2\": not found" containerID="c6caec2589e12b5232e308772f37d728e402775ba4c123b87f36e8ff496b1ae2" Jul 6 23:49:13.511078 kubelet[3272]: I0706 23:49:13.511052 3272 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c6caec2589e12b5232e308772f37d728e402775ba4c123b87f36e8ff496b1ae2"} err="failed to get container status 
\"c6caec2589e12b5232e308772f37d728e402775ba4c123b87f36e8ff496b1ae2\": rpc error: code = NotFound desc = an error occurred when try to find container \"c6caec2589e12b5232e308772f37d728e402775ba4c123b87f36e8ff496b1ae2\": not found" Jul 6 23:49:13.511204 kubelet[3272]: I0706 23:49:13.511125 3272 scope.go:117] "RemoveContainer" containerID="545245a6dc8dc382f9ee0b0d0b61888d996e663f53bf62fe9526e494e224ea94" Jul 6 23:49:13.511373 containerd[1882]: time="2025-07-06T23:49:13.511351005Z" level=error msg="ContainerStatus for \"545245a6dc8dc382f9ee0b0d0b61888d996e663f53bf62fe9526e494e224ea94\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"545245a6dc8dc382f9ee0b0d0b61888d996e663f53bf62fe9526e494e224ea94\": not found" Jul 6 23:49:13.511620 kubelet[3272]: E0706 23:49:13.511599 3272 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"545245a6dc8dc382f9ee0b0d0b61888d996e663f53bf62fe9526e494e224ea94\": not found" containerID="545245a6dc8dc382f9ee0b0d0b61888d996e663f53bf62fe9526e494e224ea94" Jul 6 23:49:13.511670 kubelet[3272]: I0706 23:49:13.511630 3272 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"545245a6dc8dc382f9ee0b0d0b61888d996e663f53bf62fe9526e494e224ea94"} err="failed to get container status \"545245a6dc8dc382f9ee0b0d0b61888d996e663f53bf62fe9526e494e224ea94\": rpc error: code = NotFound desc = an error occurred when try to find container \"545245a6dc8dc382f9ee0b0d0b61888d996e663f53bf62fe9526e494e224ea94\": not found" Jul 6 23:49:13.511670 kubelet[3272]: I0706 23:49:13.511643 3272 scope.go:117] "RemoveContainer" containerID="5c49923edab74781a961b78d4c778c64eb5b5c8657c29400ed30790047eb988f" Jul 6 23:49:13.511879 containerd[1882]: time="2025-07-06T23:49:13.511853436Z" level=error msg="ContainerStatus for \"5c49923edab74781a961b78d4c778c64eb5b5c8657c29400ed30790047eb988f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5c49923edab74781a961b78d4c778c64eb5b5c8657c29400ed30790047eb988f\": not found" Jul 6 23:49:13.512092 kubelet[3272]: E0706 23:49:13.512078 3272 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5c49923edab74781a961b78d4c778c64eb5b5c8657c29400ed30790047eb988f\": not found" containerID="5c49923edab74781a961b78d4c778c64eb5b5c8657c29400ed30790047eb988f" Jul 6 23:49:13.512275 kubelet[3272]: I0706 23:49:13.512178 3272 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5c49923edab74781a961b78d4c778c64eb5b5c8657c29400ed30790047eb988f"} err="failed to get container status \"5c49923edab74781a961b78d4c778c64eb5b5c8657c29400ed30790047eb988f\": rpc error: code = NotFound desc = an error occurred when try to find container \"5c49923edab74781a961b78d4c778c64eb5b5c8657c29400ed30790047eb988f\": not found" Jul 6 23:49:13.512275 kubelet[3272]: I0706 23:49:13.512198 3272 scope.go:117] "RemoveContainer" containerID="fd6153cdf4c09c045b4ac89c0bd925286127117aebd20991b5288bd2d3095333" Jul 6 23:49:13.512434 containerd[1882]: time="2025-07-06T23:49:13.512391069Z" level=error msg="ContainerStatus for \"fd6153cdf4c09c045b4ac89c0bd925286127117aebd20991b5288bd2d3095333\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fd6153cdf4c09c045b4ac89c0bd925286127117aebd20991b5288bd2d3095333\": not 
found" Jul 6 23:49:13.512542 kubelet[3272]: E0706 23:49:13.512519 3272 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fd6153cdf4c09c045b4ac89c0bd925286127117aebd20991b5288bd2d3095333\": not found" containerID="fd6153cdf4c09c045b4ac89c0bd925286127117aebd20991b5288bd2d3095333" Jul 6 23:49:13.512576 kubelet[3272]: I0706 23:49:13.512542 3272 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fd6153cdf4c09c045b4ac89c0bd925286127117aebd20991b5288bd2d3095333"} err="failed to get container status \"fd6153cdf4c09c045b4ac89c0bd925286127117aebd20991b5288bd2d3095333\": rpc error: code = NotFound desc = an error occurred when try to find container \"fd6153cdf4c09c045b4ac89c0bd925286127117aebd20991b5288bd2d3095333\": not found" Jul 6 23:49:13.512576 kubelet[3272]: I0706 23:49:13.512555 3272 scope.go:117] "RemoveContainer" containerID="51d4aded514cc283157853d7e0a4217185632617ee2b278c0ea2d482eae0a53e" Jul 6 23:49:13.514034 containerd[1882]: time="2025-07-06T23:49:13.513963997Z" level=info msg="RemoveContainer for \"51d4aded514cc283157853d7e0a4217185632617ee2b278c0ea2d482eae0a53e\"" Jul 6 23:49:13.525997 containerd[1882]: time="2025-07-06T23:49:13.525962074Z" level=info msg="RemoveContainer for \"51d4aded514cc283157853d7e0a4217185632617ee2b278c0ea2d482eae0a53e\" returns successfully" Jul 6 23:49:13.526325 kubelet[3272]: I0706 23:49:13.526306 3272 scope.go:117] "RemoveContainer" containerID="51d4aded514cc283157853d7e0a4217185632617ee2b278c0ea2d482eae0a53e" Jul 6 23:49:13.526788 containerd[1882]: time="2025-07-06T23:49:13.526727257Z" level=error msg="ContainerStatus for \"51d4aded514cc283157853d7e0a4217185632617ee2b278c0ea2d482eae0a53e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"51d4aded514cc283157853d7e0a4217185632617ee2b278c0ea2d482eae0a53e\": not found" Jul 6 23:49:13.526866 kubelet[3272]: E0706 23:49:13.526836 3272 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"51d4aded514cc283157853d7e0a4217185632617ee2b278c0ea2d482eae0a53e\": not found" containerID="51d4aded514cc283157853d7e0a4217185632617ee2b278c0ea2d482eae0a53e" Jul 6 23:49:13.526866 kubelet[3272]: I0706 23:49:13.526856 3272 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"51d4aded514cc283157853d7e0a4217185632617ee2b278c0ea2d482eae0a53e"} err="failed to get container status \"51d4aded514cc283157853d7e0a4217185632617ee2b278c0ea2d482eae0a53e\": rpc error: code = NotFound desc = an error occurred when try to find container \"51d4aded514cc283157853d7e0a4217185632617ee2b278c0ea2d482eae0a53e\": not found" Jul 6 23:49:14.091423 kubelet[3272]: I0706 23:49:14.091383 3272 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf88a9f0-831b-4530-9bb8-e8176f95342c" path="/var/lib/kubelet/pods/cf88a9f0-831b-4530-9bb8-e8176f95342c/volumes" Jul 6 23:49:14.091728 kubelet[3272]: I0706 23:49:14.091694 3272 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fba76c19-3ff4-44c4-9758-45a547d100b2" path="/var/lib/kubelet/pods/fba76c19-3ff4-44c4-9758-45a547d100b2/volumes" Jul 6 23:49:14.097881 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1e6f7a7ccf4f6dbae6ac2e336af24a68cf013d791718f000a0e2cd5eeb83cc95-shm.mount: Deactivated successfully. 
Jul 6 23:49:14.097977 systemd[1]: var-lib-kubelet-pods-fba76c19\x2d3ff4\x2d44c4\x2d9758\x2d45a547d100b2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhrk7j.mount: Deactivated successfully. Jul 6 23:49:14.098027 systemd[1]: var-lib-kubelet-pods-cf88a9f0\x2d831b\x2d4530\x2d9bb8\x2de8176f95342c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2nsdk.mount: Deactivated successfully. Jul 6 23:49:14.098065 systemd[1]: var-lib-kubelet-pods-fba76c19\x2d3ff4\x2d44c4\x2d9758\x2d45a547d100b2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 6 23:49:14.098100 systemd[1]: var-lib-kubelet-pods-fba76c19\x2d3ff4\x2d44c4\x2d9758\x2d45a547d100b2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 6 23:49:15.058332 sshd[4960]: Connection closed by 10.200.16.10 port 43796 Jul 6 23:49:15.060114 sshd-session[4958]: pam_unix(sshd:session): session closed for user core Jul 6 23:49:15.063596 systemd[1]: session-23.scope: Deactivated successfully. Jul 6 23:49:15.064289 systemd[1]: sshd@20-10.200.20.39:22-10.200.16.10:43796.service: Deactivated successfully. Jul 6 23:49:15.066357 systemd-logind[1853]: Session 23 logged out. Waiting for processes to exit. Jul 6 23:49:15.067574 systemd-logind[1853]: Removed session 23. Jul 6 23:49:15.151089 systemd[1]: Started sshd@21-10.200.20.39:22-10.200.16.10:43798.service - OpenSSH per-connection server daemon (10.200.16.10:43798). Jul 6 23:49:15.654867 sshd[5112]: Accepted publickey for core from 10.200.16.10 port 43798 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:49:15.656073 sshd-session[5112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:49:15.659740 systemd-logind[1853]: New session 24 of user core. Jul 6 23:49:15.667868 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 6 23:49:16.330288 systemd[1]: Created slice kubepods-burstable-podda9f5dd9_1751_4fc0_8ac8_684e7e2c999d.slice - libcontainer container kubepods-burstable-podda9f5dd9_1751_4fc0_8ac8_684e7e2c999d.slice. Jul 6 23:49:16.374591 sshd[5114]: Connection closed by 10.200.16.10 port 43798 Jul 6 23:49:16.375226 sshd-session[5112]: pam_unix(sshd:session): session closed for user core Jul 6 23:49:16.378395 systemd-logind[1853]: Session 24 logged out. Waiting for processes to exit. Jul 6 23:49:16.378439 systemd[1]: session-24.scope: Deactivated successfully. 
Jul 6 23:49:16.380219 kubelet[3272]: I0706 23:49:16.380180 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/da9f5dd9-1751-4fc0-8ac8-684e7e2c999d-cilium-ipsec-secrets\") pod \"cilium-rwpzj\" (UID: \"da9f5dd9-1751-4fc0-8ac8-684e7e2c999d\") " pod="kube-system/cilium-rwpzj" Jul 6 23:49:16.380219 kubelet[3272]: I0706 23:49:16.380223 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da9f5dd9-1751-4fc0-8ac8-684e7e2c999d-lib-modules\") pod \"cilium-rwpzj\" (UID: \"da9f5dd9-1751-4fc0-8ac8-684e7e2c999d\") " pod="kube-system/cilium-rwpzj" Jul 6 23:49:16.380219 kubelet[3272]: I0706 23:49:16.380242 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/da9f5dd9-1751-4fc0-8ac8-684e7e2c999d-bpf-maps\") pod \"cilium-rwpzj\" (UID: \"da9f5dd9-1751-4fc0-8ac8-684e7e2c999d\") " pod="kube-system/cilium-rwpzj" Jul 6 23:49:16.380219 kubelet[3272]: I0706 23:49:16.380253 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/da9f5dd9-1751-4fc0-8ac8-684e7e2c999d-etc-cni-netd\") pod \"cilium-rwpzj\" (UID: \"da9f5dd9-1751-4fc0-8ac8-684e7e2c999d\") " pod="kube-system/cilium-rwpzj" Jul 6 23:49:16.380219 kubelet[3272]: I0706 23:49:16.380264 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da9f5dd9-1751-4fc0-8ac8-684e7e2c999d-xtables-lock\") pod \"cilium-rwpzj\" (UID: \"da9f5dd9-1751-4fc0-8ac8-684e7e2c999d\") " pod="kube-system/cilium-rwpzj" Jul 6 23:49:16.380219 kubelet[3272]: I0706 23:49:16.380275 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/da9f5dd9-1751-4fc0-8ac8-684e7e2c999d-clustermesh-secrets\") pod \"cilium-rwpzj\" (UID: \"da9f5dd9-1751-4fc0-8ac8-684e7e2c999d\") " pod="kube-system/cilium-rwpzj" Jul 6 23:49:16.380749 kubelet[3272]: I0706 23:49:16.380287 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/da9f5dd9-1751-4fc0-8ac8-684e7e2c999d-host-proc-sys-net\") pod \"cilium-rwpzj\" (UID: \"da9f5dd9-1751-4fc0-8ac8-684e7e2c999d\") " pod="kube-system/cilium-rwpzj" Jul 6 23:49:16.380749 kubelet[3272]: I0706 23:49:16.380295 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/da9f5dd9-1751-4fc0-8ac8-684e7e2c999d-cni-path\") pod \"cilium-rwpzj\" (UID: \"da9f5dd9-1751-4fc0-8ac8-684e7e2c999d\") " pod="kube-system/cilium-rwpzj" Jul 6 23:49:16.380749 kubelet[3272]: I0706 23:49:16.380306 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbb4t\" (UniqueName: \"kubernetes.io/projected/da9f5dd9-1751-4fc0-8ac8-684e7e2c999d-kube-api-access-jbb4t\") pod \"cilium-rwpzj\" (UID: \"da9f5dd9-1751-4fc0-8ac8-684e7e2c999d\") " pod="kube-system/cilium-rwpzj" Jul 6 23:49:16.380749 kubelet[3272]: I0706 23:49:16.380316 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/da9f5dd9-1751-4fc0-8ac8-684e7e2c999d-hostproc\") pod \"cilium-rwpzj\" (UID: \"da9f5dd9-1751-4fc0-8ac8-684e7e2c999d\") " pod="kube-system/cilium-rwpzj" Jul 6 23:49:16.380749 kubelet[3272]: I0706 23:49:16.380328 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/da9f5dd9-1751-4fc0-8ac8-684e7e2c999d-cilium-cgroup\") pod \"cilium-rwpzj\" (UID: \"da9f5dd9-1751-4fc0-8ac8-684e7e2c999d\") " pod="kube-system/cilium-rwpzj" Jul 6 23:49:16.380749 kubelet[3272]: I0706 23:49:16.380339 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/da9f5dd9-1751-4fc0-8ac8-684e7e2c999d-cilium-run\") pod \"cilium-rwpzj\" (UID: \"da9f5dd9-1751-4fc0-8ac8-684e7e2c999d\") " pod="kube-system/cilium-rwpzj" Jul 6 23:49:16.380734 systemd[1]: sshd@21-10.200.20.39:22-10.200.16.10:43798.service: Deactivated successfully. Jul 6 23:49:16.380957 kubelet[3272]: I0706 23:49:16.380352 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/da9f5dd9-1751-4fc0-8ac8-684e7e2c999d-cilium-config-path\") pod \"cilium-rwpzj\" (UID: \"da9f5dd9-1751-4fc0-8ac8-684e7e2c999d\") " pod="kube-system/cilium-rwpzj" Jul 6 23:49:16.380957 kubelet[3272]: I0706 23:49:16.380365 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/da9f5dd9-1751-4fc0-8ac8-684e7e2c999d-host-proc-sys-kernel\") pod \"cilium-rwpzj\" (UID: \"da9f5dd9-1751-4fc0-8ac8-684e7e2c999d\") " pod="kube-system/cilium-rwpzj" Jul 6 23:49:16.380957 kubelet[3272]: I0706 23:49:16.380376 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/da9f5dd9-1751-4fc0-8ac8-684e7e2c999d-hubble-tls\") pod \"cilium-rwpzj\" (UID: \"da9f5dd9-1751-4fc0-8ac8-684e7e2c999d\") " pod="kube-system/cilium-rwpzj" Jul 6 23:49:16.464726 systemd[1]: Started sshd@22-10.200.20.39:22-10.200.16.10:43810.service - OpenSSH per-connection server daemon (10.200.16.10:43810). Jul 6 23:49:16.637294 containerd[1882]: time="2025-07-06T23:49:16.637247867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rwpzj,Uid:da9f5dd9-1751-4fc0-8ac8-684e7e2c999d,Namespace:kube-system,Attempt:0,}" Jul 6 23:49:16.689769 containerd[1882]: time="2025-07-06T23:49:16.689352155Z" level=info msg="connecting to shim 486321eeb29b600c86c1a14da0b44b599ef3711370eb4ed39b215ac1ea37be6c" address="unix:///run/containerd/s/d109a7f2f301c3f8802c360496c06b1707960678de3ad22145878c3610d2132f" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:49:16.705876 systemd[1]: Started cri-containerd-486321eeb29b600c86c1a14da0b44b599ef3711370eb4ed39b215ac1ea37be6c.scope - libcontainer container 486321eeb29b600c86c1a14da0b44b599ef3711370eb4ed39b215ac1ea37be6c. 
Jul 6 23:49:16.729303 containerd[1882]: time="2025-07-06T23:49:16.729190879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rwpzj,Uid:da9f5dd9-1751-4fc0-8ac8-684e7e2c999d,Namespace:kube-system,Attempt:0,} returns sandbox id \"486321eeb29b600c86c1a14da0b44b599ef3711370eb4ed39b215ac1ea37be6c\"" Jul 6 23:49:16.739578 containerd[1882]: time="2025-07-06T23:49:16.739440757Z" level=info msg="CreateContainer within sandbox \"486321eeb29b600c86c1a14da0b44b599ef3711370eb4ed39b215ac1ea37be6c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 6 23:49:16.761598 containerd[1882]: time="2025-07-06T23:49:16.761520914Z" level=info msg="Container ee2358ab04504cb672f7d98ee6ae0a8d881fe2b1907fa0eda9251d62984a641d: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:49:16.779135 containerd[1882]: time="2025-07-06T23:49:16.779086347Z" level=info msg="CreateContainer within sandbox \"486321eeb29b600c86c1a14da0b44b599ef3711370eb4ed39b215ac1ea37be6c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ee2358ab04504cb672f7d98ee6ae0a8d881fe2b1907fa0eda9251d62984a641d\"" Jul 6 23:49:16.779849 containerd[1882]: time="2025-07-06T23:49:16.779632980Z" level=info msg="StartContainer for \"ee2358ab04504cb672f7d98ee6ae0a8d881fe2b1907fa0eda9251d62984a641d\"" Jul 6 23:49:16.781240 containerd[1882]: time="2025-07-06T23:49:16.781195324Z" level=info msg="connecting to shim ee2358ab04504cb672f7d98ee6ae0a8d881fe2b1907fa0eda9251d62984a641d" address="unix:///run/containerd/s/d109a7f2f301c3f8802c360496c06b1707960678de3ad22145878c3610d2132f" protocol=ttrpc version=3 Jul 6 23:49:16.799887 systemd[1]: Started cri-containerd-ee2358ab04504cb672f7d98ee6ae0a8d881fe2b1907fa0eda9251d62984a641d.scope - libcontainer container ee2358ab04504cb672f7d98ee6ae0a8d881fe2b1907fa0eda9251d62984a641d. Jul 6 23:49:16.829738 containerd[1882]: time="2025-07-06T23:49:16.829626131Z" level=info msg="StartContainer for \"ee2358ab04504cb672f7d98ee6ae0a8d881fe2b1907fa0eda9251d62984a641d\" returns successfully" Jul 6 23:49:16.830212 systemd[1]: cri-containerd-ee2358ab04504cb672f7d98ee6ae0a8d881fe2b1907fa0eda9251d62984a641d.scope: Deactivated successfully. Jul 6 23:49:16.832181 containerd[1882]: time="2025-07-06T23:49:16.831684578Z" level=info msg="received exit event container_id:\"ee2358ab04504cb672f7d98ee6ae0a8d881fe2b1907fa0eda9251d62984a641d\" id:\"ee2358ab04504cb672f7d98ee6ae0a8d881fe2b1907fa0eda9251d62984a641d\" pid:5186 exited_at:{seconds:1751845756 nanos:831012846}" Jul 6 23:49:16.833126 containerd[1882]: time="2025-07-06T23:49:16.832433154Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee2358ab04504cb672f7d98ee6ae0a8d881fe2b1907fa0eda9251d62984a641d\" id:\"ee2358ab04504cb672f7d98ee6ae0a8d881fe2b1907fa0eda9251d62984a641d\" pid:5186 exited_at:{seconds:1751845756 nanos:831012846}" Jul 6 23:49:16.958451 sshd[5124]: Accepted publickey for core from 10.200.16.10 port 43810 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:49:16.960102 sshd-session[5124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:49:16.964325 systemd-logind[1853]: New session 25 of user core. Jul 6 23:49:16.970872 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jul 6 23:49:17.205074 kubelet[3272]: E0706 23:49:17.205030 3272 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 6 23:49:17.294012 sshd[5223]: Connection closed by 10.200.16.10 port 43810 Jul 6 23:49:17.294695 sshd-session[5124]: pam_unix(sshd:session): session closed for user core Jul 6 23:49:17.298128 systemd[1]: sshd@22-10.200.20.39:22-10.200.16.10:43810.service: Deactivated successfully. Jul 6 23:49:17.300230 systemd[1]: session-25.scope: Deactivated successfully. Jul 6 23:49:17.301568 systemd-logind[1853]: Session 25 logged out. Waiting for processes to exit. Jul 6 23:49:17.302800 systemd-logind[1853]: Removed session 25. Jul 6 23:49:17.384096 systemd[1]: Started sshd@23-10.200.20.39:22-10.200.16.10:43824.service - OpenSSH per-connection server daemon (10.200.16.10:43824). Jul 6 23:49:17.460798 containerd[1882]: time="2025-07-06T23:49:17.460762470Z" level=info msg="CreateContainer within sandbox \"486321eeb29b600c86c1a14da0b44b599ef3711370eb4ed39b215ac1ea37be6c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 6 23:49:17.481784 containerd[1882]: time="2025-07-06T23:49:17.481584669Z" level=info msg="Container 825b00e288cd2da1ebf61453e6de567354f376f6c9b136cc2f66776e26529233: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:49:17.496017 containerd[1882]: time="2025-07-06T23:49:17.495955058Z" level=info msg="CreateContainer within sandbox \"486321eeb29b600c86c1a14da0b44b599ef3711370eb4ed39b215ac1ea37be6c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"825b00e288cd2da1ebf61453e6de567354f376f6c9b136cc2f66776e26529233\"" Jul 6 23:49:17.496981 containerd[1882]: time="2025-07-06T23:49:17.496757443Z" level=info msg="StartContainer for \"825b00e288cd2da1ebf61453e6de567354f376f6c9b136cc2f66776e26529233\"" Jul 6 23:49:17.497514 containerd[1882]: time="2025-07-06T23:49:17.497489674Z" level=info msg="connecting to shim 825b00e288cd2da1ebf61453e6de567354f376f6c9b136cc2f66776e26529233" address="unix:///run/containerd/s/d109a7f2f301c3f8802c360496c06b1707960678de3ad22145878c3610d2132f" protocol=ttrpc version=3 Jul 6 23:49:17.516873 systemd[1]: Started cri-containerd-825b00e288cd2da1ebf61453e6de567354f376f6c9b136cc2f66776e26529233.scope - libcontainer container 825b00e288cd2da1ebf61453e6de567354f376f6c9b136cc2f66776e26529233. Jul 6 23:49:17.542844 containerd[1882]: time="2025-07-06T23:49:17.542778791Z" level=info msg="StartContainer for \"825b00e288cd2da1ebf61453e6de567354f376f6c9b136cc2f66776e26529233\" returns successfully" Jul 6 23:49:17.546696 systemd[1]: cri-containerd-825b00e288cd2da1ebf61453e6de567354f376f6c9b136cc2f66776e26529233.scope: Deactivated successfully. 
Jul 6 23:49:17.548314 containerd[1882]: time="2025-07-06T23:49:17.548259721Z" level=info msg="received exit event container_id:\"825b00e288cd2da1ebf61453e6de567354f376f6c9b136cc2f66776e26529233\" id:\"825b00e288cd2da1ebf61453e6de567354f376f6c9b136cc2f66776e26529233\" pid:5244 exited_at:{seconds:1751845757 nanos:547973392}" Jul 6 23:49:17.548941 containerd[1882]: time="2025-07-06T23:49:17.548778913Z" level=info msg="TaskExit event in podsandbox handler container_id:\"825b00e288cd2da1ebf61453e6de567354f376f6c9b136cc2f66776e26529233\" id:\"825b00e288cd2da1ebf61453e6de567354f376f6c9b136cc2f66776e26529233\" pid:5244 exited_at:{seconds:1751845757 nanos:547973392}" Jul 6 23:49:17.564498 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-825b00e288cd2da1ebf61453e6de567354f376f6c9b136cc2f66776e26529233-rootfs.mount: Deactivated successfully. Jul 6 23:49:17.868493 sshd[5230]: Accepted publickey for core from 10.200.16.10 port 43824 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:49:17.869470 sshd-session[5230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:49:17.873710 systemd-logind[1853]: New session 26 of user core. Jul 6 23:49:17.880882 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 6 23:49:18.462848 containerd[1882]: time="2025-07-06T23:49:18.462689306Z" level=info msg="CreateContainer within sandbox \"486321eeb29b600c86c1a14da0b44b599ef3711370eb4ed39b215ac1ea37be6c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 6 23:49:18.489155 containerd[1882]: time="2025-07-06T23:49:18.488378919Z" level=info msg="Container c02c2112dd1f686d5524a0cd5890f929f20ad495fe0c660a7447d2b367657436: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:49:18.491350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2447969654.mount: Deactivated successfully. Jul 6 23:49:18.509405 containerd[1882]: time="2025-07-06T23:49:18.509358978Z" level=info msg="CreateContainer within sandbox \"486321eeb29b600c86c1a14da0b44b599ef3711370eb4ed39b215ac1ea37be6c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c02c2112dd1f686d5524a0cd5890f929f20ad495fe0c660a7447d2b367657436\"" Jul 6 23:49:18.510176 containerd[1882]: time="2025-07-06T23:49:18.510152067Z" level=info msg="StartContainer for \"c02c2112dd1f686d5524a0cd5890f929f20ad495fe0c660a7447d2b367657436\"" Jul 6 23:49:18.511785 containerd[1882]: time="2025-07-06T23:49:18.511681234Z" level=info msg="connecting to shim c02c2112dd1f686d5524a0cd5890f929f20ad495fe0c660a7447d2b367657436" address="unix:///run/containerd/s/d109a7f2f301c3f8802c360496c06b1707960678de3ad22145878c3610d2132f" protocol=ttrpc version=3 Jul 6 23:49:18.534869 systemd[1]: Started cri-containerd-c02c2112dd1f686d5524a0cd5890f929f20ad495fe0c660a7447d2b367657436.scope - libcontainer container c02c2112dd1f686d5524a0cd5890f929f20ad495fe0c660a7447d2b367657436. Jul 6 23:49:18.562363 systemd[1]: cri-containerd-c02c2112dd1f686d5524a0cd5890f929f20ad495fe0c660a7447d2b367657436.scope: Deactivated successfully. 
Jul 6 23:49:18.568133 containerd[1882]: time="2025-07-06T23:49:18.567589545Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c02c2112dd1f686d5524a0cd5890f929f20ad495fe0c660a7447d2b367657436\" id:\"c02c2112dd1f686d5524a0cd5890f929f20ad495fe0c660a7447d2b367657436\" pid:5294 exited_at:{seconds:1751845758 nanos:565326666}" Jul 6 23:49:18.568133 containerd[1882]: time="2025-07-06T23:49:18.567884378Z" level=info msg="received exit event container_id:\"c02c2112dd1f686d5524a0cd5890f929f20ad495fe0c660a7447d2b367657436\" id:\"c02c2112dd1f686d5524a0cd5890f929f20ad495fe0c660a7447d2b367657436\" pid:5294 exited_at:{seconds:1751845758 nanos:565326666}" Jul 6 23:49:18.569826 containerd[1882]: time="2025-07-06T23:49:18.569796877Z" level=info msg="StartContainer for \"c02c2112dd1f686d5524a0cd5890f929f20ad495fe0c660a7447d2b367657436\" returns successfully" Jul 6 23:49:18.586598 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c02c2112dd1f686d5524a0cd5890f929f20ad495fe0c660a7447d2b367657436-rootfs.mount: Deactivated successfully. Jul 6 23:49:19.468321 containerd[1882]: time="2025-07-06T23:49:19.468269616Z" level=info msg="CreateContainer within sandbox \"486321eeb29b600c86c1a14da0b44b599ef3711370eb4ed39b215ac1ea37be6c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 6 23:49:19.495640 containerd[1882]: time="2025-07-06T23:49:19.495602727Z" level=info msg="Container 3fa414a41c2361dcc9934f29fbcb3c881986ab55495c1957dc166872c5707464: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:49:19.513987 containerd[1882]: time="2025-07-06T23:49:19.513920632Z" level=info msg="CreateContainer within sandbox \"486321eeb29b600c86c1a14da0b44b599ef3711370eb4ed39b215ac1ea37be6c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3fa414a41c2361dcc9934f29fbcb3c881986ab55495c1957dc166872c5707464\"" Jul 6 23:49:19.514867 containerd[1882]: time="2025-07-06T23:49:19.514745978Z" level=info msg="StartContainer for \"3fa414a41c2361dcc9934f29fbcb3c881986ab55495c1957dc166872c5707464\"" Jul 6 23:49:19.515958 containerd[1882]: time="2025-07-06T23:49:19.515893989Z" level=info msg="connecting to shim 3fa414a41c2361dcc9934f29fbcb3c881986ab55495c1957dc166872c5707464" address="unix:///run/containerd/s/d109a7f2f301c3f8802c360496c06b1707960678de3ad22145878c3610d2132f" protocol=ttrpc version=3 Jul 6 23:49:19.540870 systemd[1]: Started cri-containerd-3fa414a41c2361dcc9934f29fbcb3c881986ab55495c1957dc166872c5707464.scope - libcontainer container 3fa414a41c2361dcc9934f29fbcb3c881986ab55495c1957dc166872c5707464. Jul 6 23:49:19.561210 systemd[1]: cri-containerd-3fa414a41c2361dcc9934f29fbcb3c881986ab55495c1957dc166872c5707464.scope: Deactivated successfully. 
Jul 6 23:49:19.563120 containerd[1882]: time="2025-07-06T23:49:19.562962521Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3fa414a41c2361dcc9934f29fbcb3c881986ab55495c1957dc166872c5707464\" id:\"3fa414a41c2361dcc9934f29fbcb3c881986ab55495c1957dc166872c5707464\" pid:5334 exited_at:{seconds:1751845759 nanos:562481227}" Jul 6 23:49:19.566092 containerd[1882]: time="2025-07-06T23:49:19.565989263Z" level=info msg="received exit event container_id:\"3fa414a41c2361dcc9934f29fbcb3c881986ab55495c1957dc166872c5707464\" id:\"3fa414a41c2361dcc9934f29fbcb3c881986ab55495c1957dc166872c5707464\" pid:5334 exited_at:{seconds:1751845759 nanos:562481227}" Jul 6 23:49:19.567419 containerd[1882]: time="2025-07-06T23:49:19.567316849Z" level=info msg="StartContainer for \"3fa414a41c2361dcc9934f29fbcb3c881986ab55495c1957dc166872c5707464\" returns successfully" Jul 6 23:49:19.583182 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3fa414a41c2361dcc9934f29fbcb3c881986ab55495c1957dc166872c5707464-rootfs.mount: Deactivated successfully. Jul 6 23:49:20.473016 containerd[1882]: time="2025-07-06T23:49:20.472854502Z" level=info msg="CreateContainer within sandbox \"486321eeb29b600c86c1a14da0b44b599ef3711370eb4ed39b215ac1ea37be6c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 6 23:49:20.501084 containerd[1882]: time="2025-07-06T23:49:20.501032200Z" level=info msg="Container f9976bf690717ec24d3ea748d236a6c5d174b838f8c4cb9a241304e10ef8dfcc: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:49:20.516926 containerd[1882]: time="2025-07-06T23:49:20.516813754Z" level=info msg="CreateContainer within sandbox \"486321eeb29b600c86c1a14da0b44b599ef3711370eb4ed39b215ac1ea37be6c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f9976bf690717ec24d3ea748d236a6c5d174b838f8c4cb9a241304e10ef8dfcc\"" Jul 6 23:49:20.518082 containerd[1882]: time="2025-07-06T23:49:20.517804288Z" level=info msg="StartContainer for \"f9976bf690717ec24d3ea748d236a6c5d174b838f8c4cb9a241304e10ef8dfcc\"" Jul 6 23:49:20.520474 containerd[1882]: time="2025-07-06T23:49:20.520211603Z" level=info msg="connecting to shim f9976bf690717ec24d3ea748d236a6c5d174b838f8c4cb9a241304e10ef8dfcc" address="unix:///run/containerd/s/d109a7f2f301c3f8802c360496c06b1707960678de3ad22145878c3610d2132f" protocol=ttrpc version=3 Jul 6 23:49:20.539886 systemd[1]: Started cri-containerd-f9976bf690717ec24d3ea748d236a6c5d174b838f8c4cb9a241304e10ef8dfcc.scope - libcontainer container f9976bf690717ec24d3ea748d236a6c5d174b838f8c4cb9a241304e10ef8dfcc. 
Jul 6 23:49:20.574166 containerd[1882]: time="2025-07-06T23:49:20.573962486Z" level=info msg="StartContainer for \"f9976bf690717ec24d3ea748d236a6c5d174b838f8c4cb9a241304e10ef8dfcc\" returns successfully" Jul 6 23:49:20.638000 containerd[1882]: time="2025-07-06T23:49:20.637932655Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f9976bf690717ec24d3ea748d236a6c5d174b838f8c4cb9a241304e10ef8dfcc\" id:\"8a2179c2522334d68fc32d1b49beef18c6808f7a792377eb2dc80251c8011396\" pid:5401 exited_at:{seconds:1751845760 nanos:636939000}" Jul 6 23:49:20.956731 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jul 6 23:49:21.486586 kubelet[3272]: I0706 23:49:21.486527 3272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rwpzj" podStartSLOduration=5.486506885 podStartE2EDuration="5.486506885s" podCreationTimestamp="2025-07-06 23:49:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:49:21.486309975 +0000 UTC m=+149.664457219" watchObservedRunningTime="2025-07-06 23:49:21.486506885 +0000 UTC m=+149.664654121" Jul 6 23:49:22.282552 containerd[1882]: time="2025-07-06T23:49:22.282504012Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f9976bf690717ec24d3ea748d236a6c5d174b838f8c4cb9a241304e10ef8dfcc\" id:\"e9b6a85285143f87ce8d40ecd67de7b60fcf816770bbedc45104c508a0373f5a\" pid:5478 exit_status:1 exited_at:{seconds:1751845762 nanos:281990428}" Jul 6 23:49:23.404796 systemd-networkd[1583]: lxc_health: Link UP Jul 6 23:49:23.411656 systemd-networkd[1583]: lxc_health: Gained carrier Jul 6 23:49:24.372814 containerd[1882]: time="2025-07-06T23:49:24.372658027Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f9976bf690717ec24d3ea748d236a6c5d174b838f8c4cb9a241304e10ef8dfcc\" id:\"46488e5fe45b85a4f77559185fed2fe16bb4b4fd3e99a9be0ba61f979b1ca49d\" pid:5936 exited_at:{seconds:1751845764 nanos:371902371}" Jul 6 23:49:24.561919 systemd-networkd[1583]: lxc_health: Gained IPv6LL Jul 6 23:49:26.459263 containerd[1882]: time="2025-07-06T23:49:26.459209755Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f9976bf690717ec24d3ea748d236a6c5d174b838f8c4cb9a241304e10ef8dfcc\" id:\"c3e6830ed01c99c2fb008bf954a358d5f0d041e55b7e1abfc647166f3751429c\" pid:5969 exited_at:{seconds:1751845766 nanos:458404690}" Jul 6 23:49:28.536078 containerd[1882]: time="2025-07-06T23:49:28.536031246Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f9976bf690717ec24d3ea748d236a6c5d174b838f8c4cb9a241304e10ef8dfcc\" id:\"ed3dbaa3d7a94f7ac581970098647784c5cf2f10936744e3e5b72308578d856b\" pid:5993 exited_at:{seconds:1751845768 nanos:535511830}" Jul 6 23:49:28.629251 sshd[5275]: Connection closed by 10.200.16.10 port 43824 Jul 6 23:49:28.629938 sshd-session[5230]: pam_unix(sshd:session): session closed for user core Jul 6 23:49:28.632907 systemd-logind[1853]: Session 26 logged out. Waiting for processes to exit. Jul 6 23:49:28.633882 systemd[1]: sshd@23-10.200.20.39:22-10.200.16.10:43824.service: Deactivated successfully. Jul 6 23:49:28.636028 systemd[1]: session-26.scope: Deactivated successfully. Jul 6 23:49:28.637538 systemd-logind[1853]: Removed session 26.