Jun 20 18:39:17.320141 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jun 20 18:39:17.320165 kernel: Linux version 6.6.94-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Fri Jun 20 17:15:00 -00 2025 Jun 20 18:39:17.320173 kernel: KASLR enabled Jun 20 18:39:17.320179 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jun 20 18:39:17.320186 kernel: printk: bootconsole [pl11] enabled Jun 20 18:39:17.320192 kernel: efi: EFI v2.7 by EDK II Jun 20 18:39:17.320199 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20f698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598 Jun 20 18:39:17.320205 kernel: random: crng init done Jun 20 18:39:17.320211 kernel: secureboot: Secure boot disabled Jun 20 18:39:17.320216 kernel: ACPI: Early table checksum verification disabled Jun 20 18:39:17.320222 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Jun 20 18:39:17.320228 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 18:39:17.320234 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 18:39:17.320242 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jun 20 18:39:17.320249 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 18:39:17.320255 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 18:39:17.320261 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 18:39:17.320269 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 18:39:17.320275 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 18:39:17.320287 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 18:39:17.320293 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jun 20 18:39:17.320300 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 18:39:17.320306 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jun 20 18:39:17.320312 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jun 20 18:39:17.320318 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jun 20 18:39:17.320324 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jun 20 18:39:17.320330 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jun 20 18:39:17.320337 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jun 20 18:39:17.320345 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jun 20 18:39:17.320351 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jun 20 18:39:17.320357 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jun 20 18:39:17.320363 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jun 20 18:39:17.320370 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jun 20 18:39:17.320376 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jun 20 18:39:17.320382 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jun 20 18:39:17.320389 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff] Jun 20 18:39:17.320395 kernel: Zone ranges: Jun 20 
18:39:17.320401 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jun 20 18:39:17.320407 kernel: DMA32 empty Jun 20 18:39:17.320414 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jun 20 18:39:17.320424 kernel: Movable zone start for each node Jun 20 18:39:17.320431 kernel: Early memory node ranges Jun 20 18:39:17.320438 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jun 20 18:39:17.320510 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff] Jun 20 18:39:17.320519 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff] Jun 20 18:39:17.320528 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff] Jun 20 18:39:17.320535 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Jun 20 18:39:17.320541 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Jun 20 18:39:17.320547 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Jun 20 18:39:17.320554 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Jun 20 18:39:17.320560 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jun 20 18:39:17.320567 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jun 20 18:39:17.320573 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jun 20 18:39:17.320580 kernel: psci: probing for conduit method from ACPI. Jun 20 18:39:17.320586 kernel: psci: PSCIv1.1 detected in firmware. Jun 20 18:39:17.320592 kernel: psci: Using standard PSCI v0.2 function IDs Jun 20 18:39:17.320599 kernel: psci: MIGRATE_INFO_TYPE not supported. Jun 20 18:39:17.320607 kernel: psci: SMC Calling Convention v1.4 Jun 20 18:39:17.320614 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jun 20 18:39:17.320620 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jun 20 18:39:17.320627 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jun 20 18:39:17.320633 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jun 20 18:39:17.320640 kernel: pcpu-alloc: [0] 0 [0] 1 Jun 20 18:39:17.320646 kernel: Detected PIPT I-cache on CPU0 Jun 20 18:39:17.320653 kernel: CPU features: detected: GIC system register CPU interface Jun 20 18:39:17.320660 kernel: CPU features: detected: Hardware dirty bit management Jun 20 18:39:17.320666 kernel: CPU features: detected: Spectre-BHB Jun 20 18:39:17.320673 kernel: CPU features: kernel page table isolation forced ON by KASLR Jun 20 18:39:17.320680 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jun 20 18:39:17.320687 kernel: CPU features: detected: ARM erratum 1418040 Jun 20 18:39:17.320693 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jun 20 18:39:17.320700 kernel: CPU features: detected: SSBS not fully self-synchronizing Jun 20 18:39:17.320706 kernel: alternatives: applying boot alternatives Jun 20 18:39:17.320714 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=8a081d870e25287d755f6d580d3ffafd8d53f08173c09683922f11f1a622a40e Jun 20 18:39:17.320721 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jun 20 18:39:17.320728 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 20 18:39:17.320734 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 20 18:39:17.320741 kernel: Fallback order for Node 0: 0 Jun 20 18:39:17.320747 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Jun 20 18:39:17.320755 kernel: Policy zone: Normal Jun 20 18:39:17.320762 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 20 18:39:17.320769 kernel: software IO TLB: area num 2. Jun 20 18:39:17.320775 kernel: software IO TLB: mapped [mem 0x0000000036540000-0x000000003a540000] (64MB) Jun 20 18:39:17.320782 kernel: Memory: 3983588K/4194160K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38336K init, 897K bss, 210572K reserved, 0K cma-reserved) Jun 20 18:39:17.320789 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 20 18:39:17.320795 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 20 18:39:17.320802 kernel: rcu: RCU event tracing is enabled. Jun 20 18:39:17.320809 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 20 18:39:17.320815 kernel: Trampoline variant of Tasks RCU enabled. Jun 20 18:39:17.320822 kernel: Tracing variant of Tasks RCU enabled. Jun 20 18:39:17.320830 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 20 18:39:17.320837 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 20 18:39:17.320843 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jun 20 18:39:17.320849 kernel: GICv3: 960 SPIs implemented Jun 20 18:39:17.320856 kernel: GICv3: 0 Extended SPIs implemented Jun 20 18:39:17.320862 kernel: Root IRQ handler: gic_handle_irq Jun 20 18:39:17.320868 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jun 20 18:39:17.320875 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jun 20 18:39:17.320881 kernel: ITS: No ITS available, not enabling LPIs Jun 20 18:39:17.320902 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 20 18:39:17.320908 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 18:39:17.320915 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jun 20 18:39:17.320923 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jun 20 18:39:17.320930 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jun 20 18:39:17.320936 kernel: Console: colour dummy device 80x25 Jun 20 18:39:17.320943 kernel: printk: console [tty1] enabled Jun 20 18:39:17.320950 kernel: ACPI: Core revision 20230628 Jun 20 18:39:17.320957 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jun 20 18:39:17.320964 kernel: pid_max: default: 32768 minimum: 301 Jun 20 18:39:17.320970 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jun 20 18:39:17.320977 kernel: landlock: Up and running. Jun 20 18:39:17.320985 kernel: SELinux: Initializing. Jun 20 18:39:17.320992 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 20 18:39:17.320999 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 20 18:39:17.321005 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jun 20 18:39:17.321012 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 18:39:17.321019 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Jun 20 18:39:17.321026 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Jun 20 18:39:17.321038 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jun 20 18:39:17.321045 kernel: rcu: Hierarchical SRCU implementation. Jun 20 18:39:17.321052 kernel: rcu: Max phase no-delay instances is 400. Jun 20 18:39:17.321059 kernel: Remapping and enabling EFI services. Jun 20 18:39:17.321066 kernel: smp: Bringing up secondary CPUs ... Jun 20 18:39:17.321075 kernel: Detected PIPT I-cache on CPU1 Jun 20 18:39:17.321082 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jun 20 18:39:17.321089 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 18:39:17.321096 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jun 20 18:39:17.321103 kernel: smp: Brought up 1 node, 2 CPUs Jun 20 18:39:17.321112 kernel: SMP: Total of 2 processors activated. Jun 20 18:39:17.321119 kernel: CPU features: detected: 32-bit EL0 Support Jun 20 18:39:17.321126 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jun 20 18:39:17.321133 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jun 20 18:39:17.321140 kernel: CPU features: detected: CRC32 instructions Jun 20 18:39:17.321147 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jun 20 18:39:17.321154 kernel: CPU features: detected: LSE atomic instructions Jun 20 18:39:17.321161 kernel: CPU features: detected: Privileged Access Never Jun 20 18:39:17.321167 kernel: CPU: All CPU(s) started at EL1 Jun 20 18:39:17.321176 kernel: alternatives: applying system-wide alternatives Jun 20 18:39:17.321183 kernel: devtmpfs: initialized Jun 20 18:39:17.321190 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 20 18:39:17.321197 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 20 18:39:17.321204 kernel: pinctrl core: initialized pinctrl subsystem Jun 20 18:39:17.321211 kernel: SMBIOS 3.1.0 present. Jun 20 18:39:17.321218 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jun 20 18:39:17.321225 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 20 18:39:17.321232 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jun 20 18:39:17.321240 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jun 20 18:39:17.321247 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jun 20 18:39:17.321255 kernel: audit: initializing netlink subsys (disabled) Jun 20 18:39:17.321262 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jun 20 18:39:17.321272 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 20 18:39:17.321279 kernel: cpuidle: using governor menu Jun 20 18:39:17.321286 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jun 20 18:39:17.321293 kernel: ASID allocator initialised with 32768 entries Jun 20 18:39:17.321300 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 20 18:39:17.321309 kernel: Serial: AMBA PL011 UART driver Jun 20 18:39:17.321316 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jun 20 18:39:17.321323 kernel: Modules: 0 pages in range for non-PLT usage Jun 20 18:39:17.321330 kernel: Modules: 509264 pages in range for PLT usage Jun 20 18:39:17.321337 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 20 18:39:17.321344 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jun 20 18:39:17.321351 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jun 20 18:39:17.321358 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jun 20 18:39:17.321365 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 20 18:39:17.321373 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jun 20 18:39:17.321380 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jun 20 18:39:17.321387 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jun 20 18:39:17.321394 kernel: ACPI: Added _OSI(Module Device) Jun 20 18:39:17.321401 kernel: ACPI: Added _OSI(Processor Device) Jun 20 18:39:17.321408 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 20 18:39:17.321415 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 20 18:39:17.321422 kernel: ACPI: Interpreter enabled Jun 20 18:39:17.321429 kernel: ACPI: Using GIC for interrupt routing Jun 20 18:39:17.321438 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jun 20 18:39:17.321453 kernel: printk: console [ttyAMA0] enabled Jun 20 18:39:17.321461 kernel: printk: bootconsole [pl11] disabled Jun 20 18:39:17.321468 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jun 20 18:39:17.321475 kernel: iommu: Default domain type: Translated Jun 20 18:39:17.321482 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jun 20 18:39:17.321489 kernel: efivars: Registered efivars operations Jun 20 18:39:17.321497 kernel: vgaarb: loaded Jun 20 18:39:17.321504 kernel: clocksource: Switched to clocksource arch_sys_counter Jun 20 18:39:17.321513 kernel: VFS: Disk quotas dquot_6.6.0 Jun 20 18:39:17.321520 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 20 18:39:17.321526 kernel: pnp: PnP ACPI init Jun 20 18:39:17.321533 kernel: pnp: PnP ACPI: found 0 devices Jun 20 18:39:17.321540 kernel: NET: Registered PF_INET protocol family Jun 20 18:39:17.321547 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 20 18:39:17.321554 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jun 20 18:39:17.321562 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 20 18:39:17.321569 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 20 18:39:17.321577 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jun 20 18:39:17.321584 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jun 20 18:39:17.321591 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 20 18:39:17.321598 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 20 18:39:17.321606 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 20 
18:39:17.321613 kernel: PCI: CLS 0 bytes, default 64 Jun 20 18:39:17.321620 kernel: kvm [1]: HYP mode not available Jun 20 18:39:17.321627 kernel: Initialise system trusted keyrings Jun 20 18:39:17.321634 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jun 20 18:39:17.321642 kernel: Key type asymmetric registered Jun 20 18:39:17.321649 kernel: Asymmetric key parser 'x509' registered Jun 20 18:39:17.321656 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jun 20 18:39:17.321663 kernel: io scheduler mq-deadline registered Jun 20 18:39:17.321670 kernel: io scheduler kyber registered Jun 20 18:39:17.321677 kernel: io scheduler bfq registered Jun 20 18:39:17.321684 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 20 18:39:17.321691 kernel: thunder_xcv, ver 1.0 Jun 20 18:39:17.321698 kernel: thunder_bgx, ver 1.0 Jun 20 18:39:17.321707 kernel: nicpf, ver 1.0 Jun 20 18:39:17.321714 kernel: nicvf, ver 1.0 Jun 20 18:39:17.321848 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jun 20 18:39:17.321917 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-06-20T18:39:16 UTC (1750444756) Jun 20 18:39:17.321927 kernel: efifb: probing for efifb Jun 20 18:39:17.321935 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jun 20 18:39:17.321942 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jun 20 18:39:17.321949 kernel: efifb: scrolling: redraw Jun 20 18:39:17.321958 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jun 20 18:39:17.321965 kernel: Console: switching to colour frame buffer device 128x48 Jun 20 18:39:17.321972 kernel: fb0: EFI VGA frame buffer device Jun 20 18:39:17.321979 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jun 20 18:39:17.321986 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 20 18:39:17.321993 kernel: No ACPI PMU IRQ for CPU0 Jun 20 18:39:17.322000 kernel: No ACPI PMU IRQ for CPU1 Jun 20 18:39:17.322007 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Jun 20 18:39:17.322014 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jun 20 18:39:17.322023 kernel: watchdog: Hard watchdog permanently disabled Jun 20 18:39:17.322030 kernel: NET: Registered PF_INET6 protocol family Jun 20 18:39:17.322037 kernel: Segment Routing with IPv6 Jun 20 18:39:17.322044 kernel: In-situ OAM (IOAM) with IPv6 Jun 20 18:39:17.322051 kernel: NET: Registered PF_PACKET protocol family Jun 20 18:39:17.322058 kernel: Key type dns_resolver registered Jun 20 18:39:17.322065 kernel: registered taskstats version 1 Jun 20 18:39:17.322072 kernel: Loading compiled-in X.509 certificates Jun 20 18:39:17.322079 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.94-flatcar: 8506faa781fda315da94c2790de0e5c860361c93' Jun 20 18:39:17.322088 kernel: Key type .fscrypt registered Jun 20 18:39:17.322095 kernel: Key type fscrypt-provisioning registered Jun 20 18:39:17.322102 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jun 20 18:39:17.322109 kernel: ima: Allocated hash algorithm: sha1 Jun 20 18:39:17.322116 kernel: ima: No architecture policies found Jun 20 18:39:17.322123 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jun 20 18:39:17.322131 kernel: clk: Disabling unused clocks Jun 20 18:39:17.322138 kernel: Freeing unused kernel memory: 38336K Jun 20 18:39:17.322145 kernel: Run /init as init process Jun 20 18:39:17.322153 kernel: with arguments: Jun 20 18:39:17.322160 kernel: /init Jun 20 18:39:17.322168 kernel: with environment: Jun 20 18:39:17.322174 kernel: HOME=/ Jun 20 18:39:17.322181 kernel: TERM=linux Jun 20 18:39:17.322188 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 20 18:39:17.322197 systemd[1]: Successfully made /usr/ read-only. Jun 20 18:39:17.322207 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 18:39:17.322217 systemd[1]: Detected virtualization microsoft. Jun 20 18:39:17.322225 systemd[1]: Detected architecture arm64. Jun 20 18:39:17.322233 systemd[1]: Running in initrd. Jun 20 18:39:17.322240 systemd[1]: No hostname configured, using default hostname. Jun 20 18:39:17.322248 systemd[1]: Hostname set to . Jun 20 18:39:17.322256 systemd[1]: Initializing machine ID from random generator. Jun 20 18:39:17.322263 systemd[1]: Queued start job for default target initrd.target. Jun 20 18:39:17.322271 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 18:39:17.322281 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 18:39:17.322290 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 20 18:39:17.322298 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 18:39:17.322306 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 20 18:39:17.322314 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 20 18:39:17.322323 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 20 18:39:17.322333 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 20 18:39:17.322341 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 18:39:17.322349 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 18:39:17.322357 systemd[1]: Reached target paths.target - Path Units. Jun 20 18:39:17.322365 systemd[1]: Reached target slices.target - Slice Units. Jun 20 18:39:17.322372 systemd[1]: Reached target swap.target - Swaps. Jun 20 18:39:17.322380 systemd[1]: Reached target timers.target - Timer Units. Jun 20 18:39:17.322388 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 18:39:17.322395 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 18:39:17.322405 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 20 18:39:17.322413 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Jun 20 18:39:17.322421 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 18:39:17.322429 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 18:39:17.322437 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 18:39:17.327003 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 18:39:17.327015 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 20 18:39:17.327023 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 18:39:17.327031 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 20 18:39:17.327041 systemd[1]: Starting systemd-fsck-usr.service... Jun 20 18:39:17.327049 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 18:39:17.327056 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 18:39:17.327087 systemd-journald[218]: Collecting audit messages is disabled. Jun 20 18:39:17.327108 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:39:17.327117 systemd-journald[218]: Journal started Jun 20 18:39:17.327135 systemd-journald[218]: Runtime Journal (/run/log/journal/0634e52edca1432891e1d4f1dee31a4c) is 8M, max 78.5M, 70.5M free. Jun 20 18:39:17.327574 systemd-modules-load[220]: Inserted module 'overlay' Jun 20 18:39:17.342732 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 18:39:17.349195 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 20 18:39:17.366299 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 18:39:17.382050 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 20 18:39:17.382076 kernel: Bridge firewalling registered Jun 20 18:39:17.382192 systemd-modules-load[220]: Inserted module 'br_netfilter' Jun 20 18:39:17.389351 systemd[1]: Finished systemd-fsck-usr.service. Jun 20 18:39:17.398142 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 18:39:17.407836 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:39:17.427740 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 18:39:17.436650 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 18:39:17.463672 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 20 18:39:17.476666 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 18:39:17.499080 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 18:39:17.508859 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:39:17.521282 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 20 18:39:17.533283 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 18:39:17.557749 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 20 18:39:17.564630 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 18:39:17.592680 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jun 20 18:39:17.608535 dracut-cmdline[251]: dracut-dracut-053 Jun 20 18:39:17.624136 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=8a081d870e25287d755f6d580d3ffafd8d53f08173c09683922f11f1a622a40e Jun 20 18:39:17.612302 systemd-resolved[253]: Positive Trust Anchors: Jun 20 18:39:17.612312 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 18:39:17.612343 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 18:39:17.616757 systemd-resolved[253]: Defaulting to hostname 'linux'. Jun 20 18:39:17.616884 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 18:39:17.629706 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 18:39:17.670932 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 18:39:17.795472 kernel: SCSI subsystem initialized Jun 20 18:39:17.805462 kernel: Loading iSCSI transport class v2.0-870. Jun 20 18:39:17.814480 kernel: iscsi: registered transport (tcp) Jun 20 18:39:17.832155 kernel: iscsi: registered transport (qla4xxx) Jun 20 18:39:17.832228 kernel: QLogic iSCSI HBA Driver Jun 20 18:39:17.866509 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 20 18:39:17.882607 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 20 18:39:17.920462 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 20 18:39:17.920528 kernel: device-mapper: uevent: version 1.0.3 Jun 20 18:39:17.926594 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jun 20 18:39:17.988468 kernel: raid6: neonx8 gen() 15773 MB/s Jun 20 18:39:17.995458 kernel: raid6: neonx4 gen() 15811 MB/s Jun 20 18:39:18.015459 kernel: raid6: neonx2 gen() 13291 MB/s Jun 20 18:39:18.036456 kernel: raid6: neonx1 gen() 10561 MB/s Jun 20 18:39:18.056455 kernel: raid6: int64x8 gen() 6799 MB/s Jun 20 18:39:18.076464 kernel: raid6: int64x4 gen() 7349 MB/s Jun 20 18:39:18.097461 kernel: raid6: int64x2 gen() 6112 MB/s Jun 20 18:39:18.120812 kernel: raid6: int64x1 gen() 5061 MB/s Jun 20 18:39:18.120828 kernel: raid6: using algorithm neonx4 gen() 15811 MB/s Jun 20 18:39:18.144296 kernel: raid6: .... 
xor() 12292 MB/s, rmw enabled Jun 20 18:39:18.144334 kernel: raid6: using neon recovery algorithm Jun 20 18:39:18.156434 kernel: xor: measuring software checksum speed Jun 20 18:39:18.156472 kernel: 8regs : 21601 MB/sec Jun 20 18:39:18.159802 kernel: 32regs : 21670 MB/sec Jun 20 18:39:18.163102 kernel: arm64_neon : 27204 MB/sec Jun 20 18:39:18.169723 kernel: xor: using function: arm64_neon (27204 MB/sec) Jun 20 18:39:18.220503 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 20 18:39:18.230025 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 20 18:39:18.246623 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 18:39:18.271243 systemd-udevd[438]: Using default interface naming scheme 'v255'. Jun 20 18:39:18.276547 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 18:39:18.296566 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 20 18:39:18.313991 dracut-pre-trigger[454]: rd.md=0: removing MD RAID activation Jun 20 18:39:18.346215 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 18:39:18.359912 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 18:39:18.398817 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 18:39:18.413669 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 20 18:39:18.451737 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 20 18:39:18.469082 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 18:39:18.487092 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 18:39:18.501673 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 18:39:18.519069 kernel: hv_vmbus: Vmbus version:5.3 Jun 20 18:39:18.522745 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 20 18:39:18.549164 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 20 18:39:18.582189 kernel: hv_vmbus: registering driver hyperv_keyboard Jun 20 18:39:18.582214 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 20 18:39:18.582223 kernel: hv_vmbus: registering driver hid_hyperv Jun 20 18:39:18.582233 kernel: hv_vmbus: registering driver hv_netvsc Jun 20 18:39:18.582250 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 20 18:39:18.549391 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 18:39:18.621002 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jun 20 18:39:18.621029 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jun 20 18:39:18.621040 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jun 20 18:39:18.582258 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 18:39:18.654802 kernel: hv_vmbus: registering driver hv_storvsc Jun 20 18:39:18.654828 kernel: PTP clock support registered Jun 20 18:39:18.654838 kernel: scsi host0: storvsc_host_t Jun 20 18:39:18.655008 kernel: scsi host1: storvsc_host_t Jun 20 18:39:18.600572 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jun 20 18:39:18.684003 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jun 20 18:39:18.684059 kernel: hv_utils: Registering HyperV Utility Driver Jun 20 18:39:18.684158 kernel: hv_vmbus: registering driver hv_utils Jun 20 18:39:18.600815 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:39:18.702011 kernel: hv_utils: Heartbeat IC version 3.0 Jun 20 18:39:18.702038 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jun 20 18:39:18.641736 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:39:18.802451 kernel: hv_utils: Shutdown IC version 3.2 Jun 20 18:39:18.802474 kernel: hv_utils: TimeSync IC version 4.0 Jun 20 18:39:18.793398 systemd-resolved[253]: Clock change detected. Flushing caches. Jun 20 18:39:18.844315 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jun 20 18:39:18.844527 kernel: hv_netvsc 002248b5-351b-0022-48b5-351b002248b5 eth0: VF slot 1 added Jun 20 18:39:18.844624 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 20 18:39:18.801509 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:39:18.868050 kernel: hv_vmbus: registering driver hv_pci Jun 20 18:39:18.868076 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jun 20 18:39:18.812714 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 20 18:39:18.878597 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jun 20 18:39:18.841976 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:39:18.901658 kernel: hv_pci 6f09b00f-271d-4313-ab12-4996da8170d8: PCI VMBus probing: Using version 0x10004 Jun 20 18:39:18.901849 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jun 20 18:39:18.902022 kernel: sd 0:0:0:0: [sda] Write Protect is off Jun 20 18:39:18.874392 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 18:39:18.920519 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jun 20 18:39:18.920712 kernel: hv_pci 6f09b00f-271d-4313-ab12-4996da8170d8: PCI host bridge to bus 271d:00 Jun 20 18:39:18.920814 kernel: pci_bus 271d:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jun 20 18:39:18.874628 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:39:18.966515 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jun 20 18:39:18.966668 kernel: pci_bus 271d:00: No busn resource found for root bus, will use [bus 00-ff] Jun 20 18:39:18.966770 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 18:39:18.966781 kernel: pci 271d:00:02.0: [15b3:1018] type 00 class 0x020000 Jun 20 18:39:18.966812 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jun 20 18:39:18.966913 kernel: pci 271d:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jun 20 18:39:18.902339 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:39:18.978424 kernel: pci 271d:00:02.0: enabling Extended Tags Jun 20 18:39:18.980162 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jun 20 18:39:19.008986 kernel: pci 271d:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 271d:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jun 20 18:39:19.023320 kernel: pci_bus 271d:00: busn_res: [bus 00-ff] end is updated to 00 Jun 20 18:39:19.023545 kernel: pci 271d:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jun 20 18:39:19.025568 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:39:19.043130 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 18:39:19.076831 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 18:39:19.101379 kernel: mlx5_core 271d:00:02.0: enabling device (0000 -> 0002) Jun 20 18:39:19.101606 kernel: mlx5_core 271d:00:02.0: firmware version: 16.30.1284 Jun 20 18:39:19.311520 kernel: hv_netvsc 002248b5-351b-0022-48b5-351b002248b5 eth0: VF registering: eth1 Jun 20 18:39:19.311744 kernel: mlx5_core 271d:00:02.0 eth1: joined to eth0 Jun 20 18:39:19.319985 kernel: mlx5_core 271d:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jun 20 18:39:19.330971 kernel: mlx5_core 271d:00:02.0 enP10013s1: renamed from eth1 Jun 20 18:39:19.422161 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jun 20 18:39:19.503185 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (488) Jun 20 18:39:19.518957 kernel: BTRFS: device fsid c1b254aa-fc5c-4606-9f4d-9a81b9ab3a0f devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (490) Jun 20 18:39:19.541792 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jun 20 18:39:19.561751 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jun 20 18:39:19.568639 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jun 20 18:39:19.599127 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 20 18:39:19.793906 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jun 20 18:39:20.630945 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 18:39:20.631386 disk-uuid[608]: The operation has completed successfully. Jun 20 18:39:20.688017 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 20 18:39:20.688114 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 20 18:39:20.740075 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 20 18:39:20.755645 sh[697]: Success Jun 20 18:39:20.784959 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jun 20 18:39:20.959520 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 20 18:39:20.979082 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 20 18:39:20.989187 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jun 20 18:39:21.016884 kernel: BTRFS info (device dm-0): first mount of filesystem c1b254aa-fc5c-4606-9f4d-9a81b9ab3a0f Jun 20 18:39:21.016948 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jun 20 18:39:21.023884 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 20 18:39:21.029046 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 20 18:39:21.033769 kernel: BTRFS info (device dm-0): using free space tree Jun 20 18:39:21.351082 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 20 18:39:21.356648 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 20 18:39:21.377149 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 20 18:39:21.400136 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 20 18:39:21.425775 kernel: BTRFS info (device sda6): first mount of filesystem 068a5250-b7b4-4dc6-8e6c-a1610cec1941 Jun 20 18:39:21.425799 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 20 18:39:21.425809 kernel: BTRFS info (device sda6): using free space tree Jun 20 18:39:21.473973 kernel: BTRFS info (device sda6): auto enabling async discard Jun 20 18:39:21.486006 kernel: BTRFS info (device sda6): last unmount of filesystem 068a5250-b7b4-4dc6-8e6c-a1610cec1941 Jun 20 18:39:21.491367 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 20 18:39:21.509208 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 20 18:39:21.526980 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 18:39:21.547105 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 18:39:21.578131 systemd-networkd[878]: lo: Link UP Jun 20 18:39:21.578140 systemd-networkd[878]: lo: Gained carrier Jun 20 18:39:21.579895 systemd-networkd[878]: Enumeration completed Jun 20 18:39:21.583090 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 18:39:21.588325 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:39:21.588329 systemd-networkd[878]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 18:39:21.594507 systemd[1]: Reached target network.target - Network. Jun 20 18:39:21.683957 kernel: mlx5_core 271d:00:02.0 enP10013s1: Link up Jun 20 18:39:21.720951 kernel: hv_netvsc 002248b5-351b-0022-48b5-351b002248b5 eth0: Data path switched to VF: enP10013s1 Jun 20 18:39:21.721619 systemd-networkd[878]: enP10013s1: Link UP Jun 20 18:39:21.721703 systemd-networkd[878]: eth0: Link UP Jun 20 18:39:21.721819 systemd-networkd[878]: eth0: Gained carrier Jun 20 18:39:21.721828 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:39:21.746525 systemd-networkd[878]: enP10013s1: Gained carrier Jun 20 18:39:21.757990 systemd-networkd[878]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jun 20 18:39:22.205634 ignition[861]: Ignition 2.20.0 Jun 20 18:39:22.205647 ignition[861]: Stage: fetch-offline Jun 20 18:39:22.210246 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jun 20 18:39:22.205684 ignition[861]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:39:22.205693 ignition[861]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 18:39:22.205796 ignition[861]: parsed url from cmdline: "" Jun 20 18:39:22.205800 ignition[861]: no config URL provided Jun 20 18:39:22.205805 ignition[861]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 18:39:22.205812 ignition[861]: no config at "/usr/lib/ignition/user.ign" Jun 20 18:39:22.240226 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 20 18:39:22.205817 ignition[861]: failed to fetch config: resource requires networking Jun 20 18:39:22.206025 ignition[861]: Ignition finished successfully Jun 20 18:39:22.265492 ignition[887]: Ignition 2.20.0 Jun 20 18:39:22.265498 ignition[887]: Stage: fetch Jun 20 18:39:22.265689 ignition[887]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:39:22.265698 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 18:39:22.265799 ignition[887]: parsed url from cmdline: "" Jun 20 18:39:22.265802 ignition[887]: no config URL provided Jun 20 18:39:22.265806 ignition[887]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 18:39:22.265813 ignition[887]: no config at "/usr/lib/ignition/user.ign" Jun 20 18:39:22.265841 ignition[887]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jun 20 18:39:22.386521 ignition[887]: GET result: OK Jun 20 18:39:22.386586 ignition[887]: config has been read from IMDS userdata Jun 20 18:39:22.386629 ignition[887]: parsing config with SHA512: d5da3e95e9d53419c8fd99939d1fca8163238dd8ccfcb1f01d28b262cd1efcb08bdcb3ae689a95571b5b854246fa7d192996cafcb375f43a3cdf3ae8f95eca5c Jun 20 18:39:22.390785 unknown[887]: fetched base config from "system" Jun 20 18:39:22.391190 ignition[887]: fetch: fetch complete Jun 20 18:39:22.390793 unknown[887]: fetched base config from "system" Jun 20 18:39:22.391195 ignition[887]: fetch: fetch passed Jun 20 18:39:22.390799 unknown[887]: fetched user config from "azure" Jun 20 18:39:22.391238 ignition[887]: Ignition finished successfully Jun 20 18:39:22.396290 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 20 18:39:22.416669 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 20 18:39:22.441817 ignition[893]: Ignition 2.20.0 Jun 20 18:39:22.441831 ignition[893]: Stage: kargs Jun 20 18:39:22.442043 ignition[893]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:39:22.449219 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 20 18:39:22.442056 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 18:39:22.443044 ignition[893]: kargs: kargs passed Jun 20 18:39:22.443095 ignition[893]: Ignition finished successfully Jun 20 18:39:22.480204 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 20 18:39:22.492356 ignition[899]: Ignition 2.20.0 Jun 20 18:39:22.492372 ignition[899]: Stage: disks Jun 20 18:39:22.497119 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 20 18:39:22.492542 ignition[899]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:39:22.504006 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 20 18:39:22.492552 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 18:39:22.512613 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Jun 20 18:39:22.493802 ignition[899]: disks: disks passed Jun 20 18:39:22.524646 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 18:39:22.493861 ignition[899]: Ignition finished successfully Jun 20 18:39:22.535061 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 18:39:22.546479 systemd[1]: Reached target basic.target - Basic System. Jun 20 18:39:22.573110 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 20 18:39:22.644222 systemd-fsck[908]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jun 20 18:39:22.654223 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 20 18:39:22.670103 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 20 18:39:22.730262 kernel: EXT4-fs (sda9): mounted filesystem f172a629-efc5-4850-a631-f3c62b46134c r/w with ordered data mode. Quota mode: none. Jun 20 18:39:22.731310 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 20 18:39:22.740559 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 20 18:39:22.765083 systemd-networkd[878]: eth0: Gained IPv6LL Jun 20 18:39:22.785051 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 18:39:22.793097 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 20 18:39:22.815134 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (919) Jun 20 18:39:22.812172 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 20 18:39:22.822094 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 20 18:39:22.853421 kernel: BTRFS info (device sda6): first mount of filesystem 068a5250-b7b4-4dc6-8e6c-a1610cec1941 Jun 20 18:39:22.853453 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 20 18:39:22.822137 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 18:39:22.871228 kernel: BTRFS info (device sda6): using free space tree Jun 20 18:39:22.862633 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 20 18:39:22.888013 kernel: BTRFS info (device sda6): auto enabling async discard Jun 20 18:39:22.893202 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 20 18:39:22.905746 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 20 18:39:23.290343 coreos-metadata[921]: Jun 20 18:39:23.290 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 20 18:39:23.298745 coreos-metadata[921]: Jun 20 18:39:23.298 INFO Fetch successful Jun 20 18:39:23.298745 coreos-metadata[921]: Jun 20 18:39:23.298 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jun 20 18:39:23.316260 coreos-metadata[921]: Jun 20 18:39:23.316 INFO Fetch successful Jun 20 18:39:23.321795 coreos-metadata[921]: Jun 20 18:39:23.316 INFO wrote hostname ci-4230.2.0-a-431835d741 to /sysroot/etc/hostname Jun 20 18:39:23.330870 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Jun 20 18:39:23.400279 initrd-setup-root[949]: cut: /sysroot/etc/passwd: No such file or directory Jun 20 18:39:23.453740 initrd-setup-root[956]: cut: /sysroot/etc/group: No such file or directory Jun 20 18:39:23.463368 initrd-setup-root[963]: cut: /sysroot/etc/shadow: No such file or directory Jun 20 18:39:23.472011 initrd-setup-root[970]: cut: /sysroot/etc/gshadow: No such file or directory Jun 20 18:39:23.725043 systemd-networkd[878]: enP10013s1: Gained IPv6LL Jun 20 18:39:24.038703 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 20 18:39:24.054366 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 20 18:39:24.065172 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 20 18:39:24.085019 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 20 18:39:24.093345 kernel: BTRFS info (device sda6): last unmount of filesystem 068a5250-b7b4-4dc6-8e6c-a1610cec1941 Jun 20 18:39:24.113784 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 20 18:39:24.129993 ignition[1044]: INFO : Ignition 2.20.0 Jun 20 18:39:24.129993 ignition[1044]: INFO : Stage: mount Jun 20 18:39:24.129993 ignition[1044]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 18:39:24.129993 ignition[1044]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 18:39:24.155442 ignition[1044]: INFO : mount: mount passed Jun 20 18:39:24.155442 ignition[1044]: INFO : Ignition finished successfully Jun 20 18:39:24.137768 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 20 18:39:24.156200 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 20 18:39:24.184655 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 18:39:24.204952 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1054) Jun 20 18:39:24.217177 kernel: BTRFS info (device sda6): first mount of filesystem 068a5250-b7b4-4dc6-8e6c-a1610cec1941 Jun 20 18:39:24.217213 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 20 18:39:24.221498 kernel: BTRFS info (device sda6): using free space tree Jun 20 18:39:24.227941 kernel: BTRFS info (device sda6): auto enabling async discard Jun 20 18:39:24.230223 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 20 18:39:24.255413 ignition[1071]: INFO : Ignition 2.20.0 Jun 20 18:39:24.255413 ignition[1071]: INFO : Stage: files Jun 20 18:39:24.255413 ignition[1071]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 18:39:24.255413 ignition[1071]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 18:39:24.255413 ignition[1071]: DEBUG : files: compiled without relabeling support, skipping Jun 20 18:39:24.281000 ignition[1071]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 20 18:39:24.281000 ignition[1071]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 20 18:39:24.324461 ignition[1071]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 20 18:39:24.332089 ignition[1071]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 20 18:39:24.332089 ignition[1071]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 20 18:39:24.324922 unknown[1071]: wrote ssh authorized keys file for user: core Jun 20 18:39:24.379646 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jun 20 18:39:24.390204 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jun 20 18:39:24.432422 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 20 18:39:24.528120 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jun 20 18:39:24.538837 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 20 18:39:24.538837 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jun 20 18:39:25.038153 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 20 18:39:25.118100 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 20 18:39:25.118100 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 20 18:39:25.136639 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 20 18:39:25.136639 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 20 18:39:25.136639 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 20 18:39:25.136639 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 18:39:25.136639 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 18:39:25.136639 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 18:39:25.136639 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 
18:39:25.136639 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 18:39:25.136639 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 18:39:25.136639 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jun 20 18:39:25.136639 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jun 20 18:39:25.136639 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jun 20 18:39:25.136639 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jun 20 18:39:25.802980 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 20 18:39:26.005175 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jun 20 18:39:26.005175 ignition[1071]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jun 20 18:39:26.051965 ignition[1071]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 18:39:26.051965 ignition[1071]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 18:39:26.051965 ignition[1071]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jun 20 18:39:26.051965 ignition[1071]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jun 20 18:39:26.051965 ignition[1071]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jun 20 18:39:26.105623 ignition[1071]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 20 18:39:26.105623 ignition[1071]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 20 18:39:26.105623 ignition[1071]: INFO : files: files passed Jun 20 18:39:26.105623 ignition[1071]: INFO : Ignition finished successfully Jun 20 18:39:26.066993 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 20 18:39:26.115208 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 20 18:39:26.133157 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 20 18:39:26.152138 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 20 18:39:26.190141 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 18:39:26.190141 initrd-setup-root-after-ignition[1099]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 20 18:39:26.153972 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jun 20 18:39:26.219439 initrd-setup-root-after-ignition[1103]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 18:39:26.170351 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 18:39:26.185685 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 20 18:39:26.228192 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 20 18:39:26.267675 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 20 18:39:26.267842 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 20 18:39:26.280860 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 20 18:39:26.293319 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 20 18:39:26.304893 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 20 18:39:26.320228 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 20 18:39:26.341515 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 18:39:26.358177 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 20 18:39:26.379623 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 20 18:39:26.379728 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 20 18:39:26.392121 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 20 18:39:26.405088 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 18:39:26.417672 systemd[1]: Stopped target timers.target - Timer Units. Jun 20 18:39:26.428903 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 20 18:39:26.429002 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 18:39:26.445260 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 20 18:39:26.456902 systemd[1]: Stopped target basic.target - Basic System. Jun 20 18:39:26.467430 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 20 18:39:26.477793 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 18:39:26.489550 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 20 18:39:26.501141 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 20 18:39:26.512104 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 18:39:26.523625 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 20 18:39:26.535293 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 20 18:39:26.545773 systemd[1]: Stopped target swap.target - Swaps. Jun 20 18:39:26.555274 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 20 18:39:26.555361 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 20 18:39:26.569514 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 20 18:39:26.575333 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 18:39:26.589442 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 20 18:39:26.595625 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jun 20 18:39:26.602840 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 20 18:39:26.602947 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 20 18:39:26.620087 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 20 18:39:26.620152 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 18:39:26.626875 systemd[1]: ignition-files.service: Deactivated successfully. Jun 20 18:39:26.626949 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 20 18:39:26.639172 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 20 18:39:26.639234 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 20 18:39:26.714619 ignition[1125]: INFO : Ignition 2.20.0 Jun 20 18:39:26.714619 ignition[1125]: INFO : Stage: umount Jun 20 18:39:26.714619 ignition[1125]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 18:39:26.714619 ignition[1125]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 18:39:26.714619 ignition[1125]: INFO : umount: umount passed Jun 20 18:39:26.714619 ignition[1125]: INFO : Ignition finished successfully Jun 20 18:39:26.671115 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 20 18:39:26.688492 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 20 18:39:26.700612 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 20 18:39:26.700709 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 18:39:26.709385 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 20 18:39:26.709455 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 18:39:26.728186 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 20 18:39:26.730952 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 20 18:39:26.741230 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 20 18:39:26.741974 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 20 18:39:26.742078 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 20 18:39:26.759936 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 20 18:39:26.760021 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 20 18:39:26.770637 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 20 18:39:26.770705 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 20 18:39:26.780608 systemd[1]: Stopped target network.target - Network. Jun 20 18:39:26.790123 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 20 18:39:26.790209 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 18:39:26.802359 systemd[1]: Stopped target paths.target - Path Units. Jun 20 18:39:26.813112 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 20 18:39:26.822960 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 18:39:26.831419 systemd[1]: Stopped target slices.target - Slice Units. Jun 20 18:39:26.840920 systemd[1]: Stopped target sockets.target - Socket Units. Jun 20 18:39:26.853520 systemd[1]: iscsid.socket: Deactivated successfully. Jun 20 18:39:26.853577 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 18:39:26.863529 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Jun 20 18:39:26.863574 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 18:39:26.874219 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 20 18:39:26.874281 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 20 18:39:26.884187 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 20 18:39:26.884234 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 20 18:39:26.894778 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 20 18:39:26.904585 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 20 18:39:26.918217 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 20 18:39:26.918354 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 20 18:39:26.938901 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jun 20 18:39:26.939249 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 20 18:39:26.939355 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 20 18:39:26.955204 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jun 20 18:39:26.955899 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 20 18:39:26.956112 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 20 18:39:26.988638 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 20 18:39:26.994758 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 20 18:39:27.177253 kernel: hv_netvsc 002248b5-351b-0022-48b5-351b002248b5 eth0: Data path switched from VF: enP10013s1 Jun 20 18:39:26.994844 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 18:39:27.006127 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 18:39:27.006188 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:39:27.022491 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 20 18:39:27.022551 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 20 18:39:27.030000 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 20 18:39:27.030058 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 18:39:27.047171 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 18:39:27.071115 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 20 18:39:27.071197 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 20 18:39:27.093275 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 20 18:39:27.094487 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 18:39:27.108922 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 20 18:39:27.109032 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 20 18:39:27.118971 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 20 18:39:27.119021 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 18:39:27.136525 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 20 18:39:27.136600 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Jun 20 18:39:27.160682 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 20 18:39:27.160754 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 20 18:39:27.177331 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 20 18:39:27.177405 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 18:39:27.208117 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 20 18:39:27.222890 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 20 18:39:27.222997 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 18:39:27.241185 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jun 20 18:39:27.241246 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 20 18:39:27.247950 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 20 18:39:27.248004 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 18:39:27.261493 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 18:39:27.261546 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:39:27.281275 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 20 18:39:27.281348 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 20 18:39:27.465782 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). Jun 20 18:39:27.281716 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 20 18:39:27.281830 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 20 18:39:27.290687 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 20 18:39:27.290791 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 20 18:39:27.302450 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 20 18:39:27.302538 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 20 18:39:27.314880 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 20 18:39:27.325513 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 20 18:39:27.325608 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 20 18:39:27.352171 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 20 18:39:27.371045 systemd[1]: Switching root. Jun 20 18:39:27.523795 systemd-journald[218]: Journal stopped Jun 20 18:39:31.563161 kernel: SELinux: policy capability network_peer_controls=1 Jun 20 18:39:31.563204 kernel: SELinux: policy capability open_perms=1 Jun 20 18:39:31.563214 kernel: SELinux: policy capability extended_socket_class=1 Jun 20 18:39:31.563226 kernel: SELinux: policy capability always_check_network=0 Jun 20 18:39:31.563238 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 20 18:39:31.563246 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 20 18:39:31.563256 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 20 18:39:31.563263 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 20 18:39:31.563271 kernel: audit: type=1403 audit(1750444768.345:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 20 18:39:31.563282 systemd[1]: Successfully loaded SELinux policy in 159.618ms. 
Jun 20 18:39:31.563294 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.941ms. Jun 20 18:39:31.563304 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 18:39:31.563313 systemd[1]: Detected virtualization microsoft. Jun 20 18:39:31.563322 systemd[1]: Detected architecture arm64. Jun 20 18:39:31.563331 systemd[1]: Detected first boot. Jun 20 18:39:31.563342 systemd[1]: Hostname set to . Jun 20 18:39:31.563351 systemd[1]: Initializing machine ID from random generator. Jun 20 18:39:31.563360 zram_generator::config[1168]: No configuration found. Jun 20 18:39:31.563369 kernel: NET: Registered PF_VSOCK protocol family Jun 20 18:39:31.563377 systemd[1]: Populated /etc with preset unit settings. Jun 20 18:39:31.563387 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jun 20 18:39:31.563396 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 20 18:39:31.563406 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 20 18:39:31.563415 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 20 18:39:31.563425 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 20 18:39:31.563434 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 20 18:39:31.563444 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 20 18:39:31.563453 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 20 18:39:31.563462 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 20 18:39:31.563473 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 20 18:39:31.563483 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 20 18:39:31.563492 systemd[1]: Created slice user.slice - User and Session Slice. Jun 20 18:39:31.563501 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 18:39:31.563510 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 18:39:31.563519 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 20 18:39:31.563528 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 20 18:39:31.563537 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 20 18:39:31.563548 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 18:39:31.563557 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jun 20 18:39:31.563566 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 18:39:31.563577 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 20 18:39:31.563587 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 20 18:39:31.563596 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. 
Jun 20 18:39:31.563605 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 20 18:39:31.563615 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 18:39:31.563626 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 18:39:31.563635 systemd[1]: Reached target slices.target - Slice Units. Jun 20 18:39:31.563645 systemd[1]: Reached target swap.target - Swaps. Jun 20 18:39:31.563654 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 20 18:39:31.563663 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 20 18:39:31.563672 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jun 20 18:39:31.563684 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 18:39:31.563693 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 18:39:31.563703 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 18:39:31.563712 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 20 18:39:31.563722 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 20 18:39:31.563731 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 20 18:39:31.563740 systemd[1]: Mounting media.mount - External Media Directory... Jun 20 18:39:31.563751 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 20 18:39:31.563761 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 20 18:39:31.563770 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 20 18:39:31.563780 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 20 18:39:31.563790 systemd[1]: Reached target machines.target - Containers. Jun 20 18:39:31.563799 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 20 18:39:31.563808 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 18:39:31.563818 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 18:39:31.563830 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 20 18:39:31.563839 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 18:39:31.563849 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 18:39:31.563859 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 18:39:31.563868 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 20 18:39:31.563877 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 18:39:31.563887 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 20 18:39:31.563897 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 20 18:39:31.563908 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 20 18:39:31.563917 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 20 18:39:31.563940 systemd[1]: Stopped systemd-fsck-usr.service. 
Jun 20 18:39:31.563952 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 18:39:31.563961 kernel: fuse: init (API version 7.39) Jun 20 18:39:31.563970 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 18:39:31.563979 kernel: loop: module loaded Jun 20 18:39:31.563988 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 18:39:31.563997 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 20 18:39:31.564043 systemd-journald[1265]: Collecting audit messages is disabled. Jun 20 18:39:31.564065 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 20 18:39:31.564076 systemd-journald[1265]: Journal started Jun 20 18:39:31.564099 systemd-journald[1265]: Runtime Journal (/run/log/journal/f788974b2dde4e5fa62f3e87cda47461) is 8M, max 78.5M, 70.5M free. Jun 20 18:39:30.664855 systemd[1]: Queued start job for default target multi-user.target. Jun 20 18:39:30.670989 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jun 20 18:39:30.671408 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 20 18:39:30.671813 systemd[1]: systemd-journald.service: Consumed 3.271s CPU time. Jun 20 18:39:31.574529 kernel: ACPI: bus type drm_connector registered Jun 20 18:39:31.574603 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jun 20 18:39:31.616460 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 18:39:31.626950 systemd[1]: verity-setup.service: Deactivated successfully. Jun 20 18:39:31.627025 systemd[1]: Stopped verity-setup.service. Jun 20 18:39:31.643511 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 18:39:31.644448 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 20 18:39:31.650523 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 20 18:39:31.657029 systemd[1]: Mounted media.mount - External Media Directory. Jun 20 18:39:31.662780 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 20 18:39:31.669225 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 20 18:39:31.675537 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 20 18:39:31.682090 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 20 18:39:31.690074 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 18:39:31.697244 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 20 18:39:31.697435 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 20 18:39:31.704258 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 18:39:31.704424 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 18:39:31.710834 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 18:39:31.711043 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 18:39:31.717736 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 18:39:31.719034 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 18:39:31.725894 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Jun 20 18:39:31.726101 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 20 18:39:31.732747 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 18:39:31.732936 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 18:39:31.741287 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 18:39:31.748671 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 18:39:31.757979 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 20 18:39:31.765681 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jun 20 18:39:31.773105 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 18:39:31.791319 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 18:39:31.804028 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 20 18:39:31.811130 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 20 18:39:31.817102 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 20 18:39:31.817148 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 18:39:31.824168 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jun 20 18:39:31.838084 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 20 18:39:31.845753 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 20 18:39:31.851304 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 18:39:31.852785 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 20 18:39:31.861036 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 20 18:39:31.868232 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 18:39:31.869415 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 20 18:39:31.876090 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 18:39:31.878259 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 18:39:31.892300 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 20 18:39:31.911075 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 20 18:39:31.927035 systemd-journald[1265]: Time spent on flushing to /var/log/journal/f788974b2dde4e5fa62f3e87cda47461 is 30.470ms for 914 entries. Jun 20 18:39:31.927035 systemd-journald[1265]: System Journal (/var/log/journal/f788974b2dde4e5fa62f3e87cda47461) is 8M, max 2.6G, 2.6G free. Jun 20 18:39:32.004698 systemd-journald[1265]: Received client request to flush runtime journal. Jun 20 18:39:32.004733 kernel: loop0: detected capacity change from 0 to 123192 Jun 20 18:39:31.925659 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 20 18:39:31.941562 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Jun 20 18:39:31.949794 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 20 18:39:31.963235 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 20 18:39:31.975878 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 20 18:39:31.991413 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:39:32.002988 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 20 18:39:32.018279 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jun 20 18:39:32.025287 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 20 18:39:32.033654 udevadm[1312]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jun 20 18:39:32.066220 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 20 18:39:32.067619 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jun 20 18:39:32.095096 systemd-tmpfiles[1310]: ACLs are not supported, ignoring. Jun 20 18:39:32.095111 systemd-tmpfiles[1310]: ACLs are not supported, ignoring. Jun 20 18:39:32.102064 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 20 18:39:32.118199 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 20 18:39:32.331990 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 20 18:39:32.343403 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 18:39:32.362104 systemd-tmpfiles[1329]: ACLs are not supported, ignoring. Jun 20 18:39:32.362123 systemd-tmpfiles[1329]: ACLs are not supported, ignoring. Jun 20 18:39:32.367174 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 18:39:32.387961 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 20 18:39:32.431948 kernel: loop1: detected capacity change from 0 to 207008 Jun 20 18:39:32.475962 kernel: loop2: detected capacity change from 0 to 28720 Jun 20 18:39:32.841964 kernel: loop3: detected capacity change from 0 to 113512 Jun 20 18:39:33.066847 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 20 18:39:33.079124 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 18:39:33.118651 systemd-udevd[1337]: Using default interface naming scheme 'v255'. Jun 20 18:39:33.119960 kernel: loop4: detected capacity change from 0 to 123192 Jun 20 18:39:33.130986 kernel: loop5: detected capacity change from 0 to 207008 Jun 20 18:39:33.143947 kernel: loop6: detected capacity change from 0 to 28720 Jun 20 18:39:33.156957 kernel: loop7: detected capacity change from 0 to 113512 Jun 20 18:39:33.162338 (sd-merge)[1339]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jun 20 18:39:33.162813 (sd-merge)[1339]: Merged extensions into '/usr'. Jun 20 18:39:33.165977 systemd[1]: Reload requested from client PID 1309 ('systemd-sysext') (unit systemd-sysext.service)... Jun 20 18:39:33.165991 systemd[1]: Reloading... Jun 20 18:39:33.230953 zram_generator::config[1366]: No configuration found. 
Jun 20 18:39:33.415324 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:39:33.451995 kernel: mousedev: PS/2 mouse device common for all mice Jun 20 18:39:33.508958 kernel: hv_vmbus: registering driver hyperv_fb Jun 20 18:39:33.519920 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jun 20 18:39:33.520021 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jun 20 18:39:33.525963 kernel: hv_vmbus: registering driver hv_balloon Jun 20 18:39:33.530097 kernel: Console: switching to colour dummy device 80x25 Jun 20 18:39:33.530965 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jun 20 18:39:33.531016 kernel: hv_balloon: Memory hot add disabled on ARM64 Jun 20 18:39:33.552947 kernel: Console: switching to colour frame buffer device 128x48 Jun 20 18:39:33.565985 systemd[1]: Reloading finished in 399 ms. Jun 20 18:39:33.581758 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 18:39:33.594716 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 20 18:39:33.615700 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jun 20 18:39:33.629199 systemd[1]: Starting ensure-sysext.service... Jun 20 18:39:33.648140 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 18:39:33.657166 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 18:39:33.684690 systemd[1]: Reload requested from client PID 1468 ('systemctl') (unit ensure-sysext.service)... Jun 20 18:39:33.684704 systemd[1]: Reloading... Jun 20 18:39:33.697449 systemd-tmpfiles[1475]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 20 18:39:33.699012 systemd-tmpfiles[1475]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 20 18:39:33.700676 systemd-tmpfiles[1475]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 20 18:39:33.700888 systemd-tmpfiles[1475]: ACLs are not supported, ignoring. Jun 20 18:39:33.700951 systemd-tmpfiles[1475]: ACLs are not supported, ignoring. Jun 20 18:39:33.723536 systemd-tmpfiles[1475]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 18:39:33.723547 systemd-tmpfiles[1475]: Skipping /boot Jun 20 18:39:33.735878 systemd-tmpfiles[1475]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 18:39:33.735898 systemd-tmpfiles[1475]: Skipping /boot Jun 20 18:39:33.758797 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1437) Jun 20 18:39:33.816632 zram_generator::config[1522]: No configuration found. Jun 20 18:39:33.954892 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:39:34.074014 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jun 20 18:39:34.081714 systemd[1]: Reloading finished in 396 ms. Jun 20 18:39:34.117063 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jun 20 18:39:34.167405 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 18:39:34.175028 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 20 18:39:34.181319 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 18:39:34.184259 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 18:39:34.195755 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 18:39:34.202705 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 18:39:34.213307 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 18:39:34.222455 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 18:39:34.223834 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 20 18:39:34.229979 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 18:39:34.232139 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 20 18:39:34.246832 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 18:39:34.256313 systemd[1]: Reached target time-set.target - System Time Set. Jun 20 18:39:34.269505 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 20 18:39:34.279288 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 20 18:39:34.287262 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:39:34.300971 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 20 18:39:34.311292 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 18:39:34.311504 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 18:39:34.321499 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 18:39:34.321672 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 18:39:34.332059 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 18:39:34.333971 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 18:39:34.342319 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 18:39:34.342488 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 18:39:34.350293 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 20 18:39:34.366157 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 20 18:39:34.373809 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 20 18:39:34.400793 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 20 18:39:34.411322 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 18:39:34.411531 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jun 20 18:39:34.415384 augenrules[1656]: No rules Jun 20 18:39:34.418730 systemd[1]: Finished ensure-sysext.service. Jun 20 18:39:34.427763 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 18:39:34.429960 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 18:39:34.441039 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 20 18:39:34.478741 lvm[1657]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 20 18:39:34.507344 systemd-networkd[1474]: lo: Link UP Jun 20 18:39:34.507354 systemd-networkd[1474]: lo: Gained carrier Jun 20 18:39:34.510180 systemd-networkd[1474]: Enumeration completed Jun 20 18:39:34.510606 systemd-networkd[1474]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:39:34.510616 systemd-networkd[1474]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 18:39:34.511252 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 18:39:34.519790 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 20 18:39:34.524707 systemd-resolved[1624]: Positive Trust Anchors: Jun 20 18:39:34.524720 systemd-resolved[1624]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 18:39:34.524756 systemd-resolved[1624]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 18:39:34.527690 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 18:39:34.540207 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 20 18:39:34.546833 systemd-resolved[1624]: Using system hostname 'ci-4230.2.0-a-431835d741'. Jun 20 18:39:34.551853 lvm[1669]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 20 18:39:34.558301 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 20 18:39:34.570246 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 20 18:39:34.580584 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 20 18:39:34.592058 kernel: mlx5_core 271d:00:02.0 enP10013s1: Link up Jun 20 18:39:34.619435 kernel: hv_netvsc 002248b5-351b-0022-48b5-351b002248b5 eth0: Data path switched to VF: enP10013s1 Jun 20 18:39:34.620771 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 18:39:34.627139 systemd[1]: Reached target network.target - Network. Jun 20 18:39:34.628301 systemd-networkd[1474]: enP10013s1: Link UP Jun 20 18:39:34.628949 systemd-networkd[1474]: eth0: Link UP Jun 20 18:39:34.628999 systemd-networkd[1474]: eth0: Gained carrier Jun 20 18:39:34.629059 systemd-networkd[1474]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jun 20 18:39:34.632565 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 18:39:34.640470 systemd-networkd[1474]: enP10013s1: Gained carrier Jun 20 18:39:34.641490 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 20 18:39:34.654048 systemd-networkd[1474]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jun 20 18:39:34.871730 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:39:35.240656 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 20 18:39:35.247789 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 20 18:39:36.269110 systemd-networkd[1474]: enP10013s1: Gained IPv6LL Jun 20 18:39:36.335077 ldconfig[1303]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 20 18:39:36.349442 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 20 18:39:36.362151 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 20 18:39:36.376784 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 20 18:39:36.383152 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 18:39:36.389249 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 20 18:39:36.396025 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 20 18:39:36.403561 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 20 18:39:36.409347 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 20 18:39:36.416053 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 20 18:39:36.423232 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 20 18:39:36.423275 systemd[1]: Reached target paths.target - Path Units. Jun 20 18:39:36.428243 systemd[1]: Reached target timers.target - Timer Units. Jun 20 18:39:36.434099 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 20 18:39:36.441970 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 20 18:39:36.449725 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 20 18:39:36.457153 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 20 18:39:36.462151 systemd-networkd[1474]: eth0: Gained IPv6LL Jun 20 18:39:36.467755 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 20 18:39:36.482736 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 20 18:39:36.488837 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 20 18:39:36.496203 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 20 18:39:36.503772 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 20 18:39:36.510535 systemd[1]: Reached target network-online.target - Network is Online. 
Jun 20 18:39:36.516719 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 18:39:36.521911 systemd[1]: Reached target basic.target - Basic System. Jun 20 18:39:36.527155 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 20 18:39:36.527187 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 20 18:39:36.536047 systemd[1]: Starting chronyd.service - NTP client/server... Jun 20 18:39:36.543790 systemd[1]: Starting containerd.service - containerd container runtime... Jun 20 18:39:36.561074 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 20 18:39:36.570207 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 20 18:39:36.579973 (chronyd)[1684]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jun 20 18:39:36.580965 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 20 18:39:36.589185 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 20 18:39:36.590659 jq[1691]: false Jun 20 18:39:36.596573 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 20 18:39:36.596621 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jun 20 18:39:36.597832 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jun 20 18:39:36.605346 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jun 20 18:39:36.607289 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:39:36.614035 KVP[1693]: KVP starting; pid is:1693 Jun 20 18:39:36.619131 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 20 18:39:36.620793 KVP[1693]: KVP LIC Version: 3.1 Jun 20 18:39:36.621097 kernel: hv_utils: KVP IC version 4.0 Jun 20 18:39:36.626827 chronyd[1697]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jun 20 18:39:36.629678 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 20 18:39:36.637169 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 20 18:39:36.655416 chronyd[1697]: Timezone right/UTC failed leap second check, ignoring Jun 20 18:39:36.655976 chronyd[1697]: Loaded seccomp filter (level 2) Jun 20 18:39:36.656282 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 20 18:39:36.674164 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jun 20 18:39:36.679642 extend-filesystems[1692]: Found loop4 Jun 20 18:39:36.679642 extend-filesystems[1692]: Found loop5 Jun 20 18:39:36.679642 extend-filesystems[1692]: Found loop6 Jun 20 18:39:36.679642 extend-filesystems[1692]: Found loop7 Jun 20 18:39:36.679642 extend-filesystems[1692]: Found sda Jun 20 18:39:36.679642 extend-filesystems[1692]: Found sda1 Jun 20 18:39:36.679642 extend-filesystems[1692]: Found sda2 Jun 20 18:39:36.679642 extend-filesystems[1692]: Found sda3 Jun 20 18:39:36.679642 extend-filesystems[1692]: Found usr Jun 20 18:39:36.679642 extend-filesystems[1692]: Found sda4 Jun 20 18:39:36.679642 extend-filesystems[1692]: Found sda6 Jun 20 18:39:36.679642 extend-filesystems[1692]: Found sda7 Jun 20 18:39:36.679642 extend-filesystems[1692]: Found sda9 Jun 20 18:39:36.679642 extend-filesystems[1692]: Checking size of /dev/sda9 Jun 20 18:39:36.701173 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 20 18:39:36.861329 coreos-metadata[1686]: Jun 20 18:39:36.802 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 20 18:39:36.861329 coreos-metadata[1686]: Jun 20 18:39:36.805 INFO Fetch successful Jun 20 18:39:36.861329 coreos-metadata[1686]: Jun 20 18:39:36.805 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jun 20 18:39:36.861329 coreos-metadata[1686]: Jun 20 18:39:36.811 INFO Fetch successful Jun 20 18:39:36.861329 coreos-metadata[1686]: Jun 20 18:39:36.811 INFO Fetching http://168.63.129.16/machine/14dd86e6-31af-406b-b3f9-39758670f68d/b9ce6b64%2D8187%2D44d0%2D9d0e%2D0360a7457ba7.%5Fci%2D4230.2.0%2Da%2D431835d741?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jun 20 18:39:36.861329 coreos-metadata[1686]: Jun 20 18:39:36.813 INFO Fetch successful Jun 20 18:39:36.861329 coreos-metadata[1686]: Jun 20 18:39:36.813 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jun 20 18:39:36.861329 coreos-metadata[1686]: Jun 20 18:39:36.830 INFO Fetch successful Jun 20 18:39:36.863118 extend-filesystems[1692]: Old size kept for /dev/sda9 Jun 20 18:39:36.863118 extend-filesystems[1692]: Found sr0 Jun 20 18:39:36.705958 dbus-daemon[1687]: [system] SELinux support is enabled Jun 20 18:39:36.712654 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 20 18:39:36.713224 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 20 18:39:36.718136 systemd[1]: Starting update-engine.service - Update Engine... Jun 20 18:39:36.898269 update_engine[1721]: I20250620 18:39:36.811909 1721 main.cc:92] Flatcar Update Engine starting Jun 20 18:39:36.898269 update_engine[1721]: I20250620 18:39:36.827227 1721 update_check_scheduler.cc:74] Next update check in 3m43s Jun 20 18:39:36.757858 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 20 18:39:36.898543 jq[1726]: true Jun 20 18:39:36.771209 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 20 18:39:36.786035 systemd[1]: Started chronyd.service - NTP client/server. Jun 20 18:39:36.806309 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 20 18:39:36.806510 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 20 18:39:36.806782 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Jun 20 18:39:36.806970 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 20 18:39:36.837370 systemd[1]: motdgen.service: Deactivated successfully. Jun 20 18:39:36.837636 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 20 18:39:36.856626 systemd-logind[1715]: New seat seat0. Jun 20 18:39:36.859466 systemd-logind[1715]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 20 18:39:36.863005 systemd[1]: Started systemd-logind.service - User Login Management. Jun 20 18:39:36.883490 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 20 18:39:36.900363 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 20 18:39:36.900599 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 20 18:39:36.931107 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 20 18:39:36.938842 dbus-daemon[1687]: [system] Successfully activated service 'org.freedesktop.systemd1' Jun 20 18:39:36.931165 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 20 18:39:36.942315 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 20 18:39:36.942343 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 20 18:39:36.950359 (ntainerd)[1748]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 20 18:39:36.952691 systemd[1]: Started update-engine.service - Update Engine. Jun 20 18:39:36.961680 jq[1747]: true Jun 20 18:39:36.963951 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 20 18:39:36.984143 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 20 18:39:36.990266 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 20 18:39:37.002906 tar[1745]: linux-arm64/LICENSE Jun 20 18:39:37.002906 tar[1745]: linux-arm64/helm Jun 20 18:39:37.006473 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1735) Jun 20 18:39:37.164587 bash[1801]: Updated "/home/core/.ssh/authorized_keys" Jun 20 18:39:37.168413 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 20 18:39:37.186467 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 20 18:39:37.379837 locksmithd[1770]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 20 18:39:37.474019 containerd[1748]: time="2025-06-20T18:39:37.472506800Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jun 20 18:39:37.534943 containerd[1748]: time="2025-06-20T18:39:37.534879120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 20 18:39:37.546583 containerd[1748]: time="2025-06-20T18:39:37.546525400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.94-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:39:37.547225 containerd[1748]: time="2025-06-20T18:39:37.546730880Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 20 18:39:37.547225 containerd[1748]: time="2025-06-20T18:39:37.546761480Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 20 18:39:37.547225 containerd[1748]: time="2025-06-20T18:39:37.546971040Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 20 18:39:37.547225 containerd[1748]: time="2025-06-20T18:39:37.546995920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 20 18:39:37.547225 containerd[1748]: time="2025-06-20T18:39:37.547075040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:39:37.547225 containerd[1748]: time="2025-06-20T18:39:37.547087240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 20 18:39:37.547544 containerd[1748]: time="2025-06-20T18:39:37.547314200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:39:37.547544 containerd[1748]: time="2025-06-20T18:39:37.547329480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 20 18:39:37.547544 containerd[1748]: time="2025-06-20T18:39:37.547343040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:39:37.547544 containerd[1748]: time="2025-06-20T18:39:37.547354040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 20 18:39:37.547544 containerd[1748]: time="2025-06-20T18:39:37.547426560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 20 18:39:37.547680 containerd[1748]: time="2025-06-20T18:39:37.547634520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 20 18:39:37.547809 containerd[1748]: time="2025-06-20T18:39:37.547766160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:39:37.547809 containerd[1748]: time="2025-06-20T18:39:37.547778880Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 20 18:39:37.547881 containerd[1748]: time="2025-06-20T18:39:37.547857360Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jun 20 18:39:37.549970 containerd[1748]: time="2025-06-20T18:39:37.547915840Z" level=info msg="metadata content store policy set" policy=shared Jun 20 18:39:37.570596 containerd[1748]: time="2025-06-20T18:39:37.570259680Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 20 18:39:37.570596 containerd[1748]: time="2025-06-20T18:39:37.570337360Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 20 18:39:37.570596 containerd[1748]: time="2025-06-20T18:39:37.570354120Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 20 18:39:37.570596 containerd[1748]: time="2025-06-20T18:39:37.570370800Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 20 18:39:37.570596 containerd[1748]: time="2025-06-20T18:39:37.570387640Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 20 18:39:37.570596 containerd[1748]: time="2025-06-20T18:39:37.570575240Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 20 18:39:37.571439 containerd[1748]: time="2025-06-20T18:39:37.570812720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 20 18:39:37.571439 containerd[1748]: time="2025-06-20T18:39:37.570910840Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 20 18:39:37.571439 containerd[1748]: time="2025-06-20T18:39:37.570950720Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 20 18:39:37.571439 containerd[1748]: time="2025-06-20T18:39:37.570968160Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 20 18:39:37.571439 containerd[1748]: time="2025-06-20T18:39:37.570982240Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 20 18:39:37.571439 containerd[1748]: time="2025-06-20T18:39:37.570994760Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 20 18:39:37.571439 containerd[1748]: time="2025-06-20T18:39:37.571007600Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 20 18:39:37.571439 containerd[1748]: time="2025-06-20T18:39:37.571021200Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 20 18:39:37.571439 containerd[1748]: time="2025-06-20T18:39:37.571035560Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 20 18:39:37.571439 containerd[1748]: time="2025-06-20T18:39:37.571069480Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 20 18:39:37.571439 containerd[1748]: time="2025-06-20T18:39:37.571081280Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 20 18:39:37.571439 containerd[1748]: time="2025-06-20T18:39:37.571093600Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jun 20 18:39:37.571439 containerd[1748]: time="2025-06-20T18:39:37.571113040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 20 18:39:37.571439 containerd[1748]: time="2025-06-20T18:39:37.571128400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 20 18:39:37.571741 containerd[1748]: time="2025-06-20T18:39:37.571141160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 20 18:39:37.571741 containerd[1748]: time="2025-06-20T18:39:37.571153880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 20 18:39:37.571741 containerd[1748]: time="2025-06-20T18:39:37.571174280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 20 18:39:37.571741 containerd[1748]: time="2025-06-20T18:39:37.571187680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 20 18:39:37.571741 containerd[1748]: time="2025-06-20T18:39:37.571198840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 20 18:39:37.571741 containerd[1748]: time="2025-06-20T18:39:37.571212120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 20 18:39:37.571741 containerd[1748]: time="2025-06-20T18:39:37.571224480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 20 18:39:37.571741 containerd[1748]: time="2025-06-20T18:39:37.571238280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 20 18:39:37.571741 containerd[1748]: time="2025-06-20T18:39:37.571249600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 20 18:39:37.571741 containerd[1748]: time="2025-06-20T18:39:37.571260720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 20 18:39:37.571741 containerd[1748]: time="2025-06-20T18:39:37.571274360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 20 18:39:37.571741 containerd[1748]: time="2025-06-20T18:39:37.571289800Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 20 18:39:37.571741 containerd[1748]: time="2025-06-20T18:39:37.571312960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 20 18:39:37.571741 containerd[1748]: time="2025-06-20T18:39:37.571328320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 20 18:39:37.571741 containerd[1748]: time="2025-06-20T18:39:37.571340280Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 20 18:39:37.572014 containerd[1748]: time="2025-06-20T18:39:37.571396120Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 20 18:39:37.572014 containerd[1748]: time="2025-06-20T18:39:37.571413920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jun 20 18:39:37.572014 containerd[1748]: time="2025-06-20T18:39:37.571423680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 20 18:39:37.572014 containerd[1748]: time="2025-06-20T18:39:37.571435080Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jun 20 18:39:37.572014 containerd[1748]: time="2025-06-20T18:39:37.571445480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 20 18:39:37.572014 containerd[1748]: time="2025-06-20T18:39:37.571457440Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 20 18:39:37.572014 containerd[1748]: time="2025-06-20T18:39:37.571467200Z" level=info msg="NRI interface is disabled by configuration." Jun 20 18:39:37.572014 containerd[1748]: time="2025-06-20T18:39:37.571476600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jun 20 18:39:37.572152 containerd[1748]: time="2025-06-20T18:39:37.571750160Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 20 18:39:37.572152 containerd[1748]: time="2025-06-20T18:39:37.571797480Z" level=info msg="Connect containerd service" Jun 20 18:39:37.572152 containerd[1748]: time="2025-06-20T18:39:37.571835040Z" level=info msg="using legacy CRI server" Jun 20 18:39:37.572152 containerd[1748]: time="2025-06-20T18:39:37.571842120Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 20 18:39:37.577946 containerd[1748]: time="2025-06-20T18:39:37.574969000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 20 18:39:37.578085 containerd[1748]: time="2025-06-20T18:39:37.578033040Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 18:39:37.578584 containerd[1748]: time="2025-06-20T18:39:37.578247560Z" level=info msg="Start subscribing containerd event" Jun 20 18:39:37.578584 containerd[1748]: time="2025-06-20T18:39:37.578315800Z" level=info msg="Start recovering state" Jun 20 18:39:37.578584 containerd[1748]: time="2025-06-20T18:39:37.578394000Z" level=info msg="Start event monitor" Jun 20 18:39:37.578584 containerd[1748]: time="2025-06-20T18:39:37.578405080Z" level=info msg="Start snapshots syncer" Jun 20 18:39:37.578584 containerd[1748]: time="2025-06-20T18:39:37.578415160Z" level=info msg="Start cni network conf syncer for default" Jun 20 18:39:37.578584 containerd[1748]: time="2025-06-20T18:39:37.578425000Z" level=info msg="Start streaming server" Jun 20 18:39:37.580935 containerd[1748]: time="2025-06-20T18:39:37.579309280Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 20 18:39:37.580935 containerd[1748]: time="2025-06-20T18:39:37.579778880Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 20 18:39:37.586549 systemd[1]: Started containerd.service - containerd container runtime. Jun 20 18:39:37.594424 containerd[1748]: time="2025-06-20T18:39:37.594384000Z" level=info msg="containerd successfully booted in 0.126197s" Jun 20 18:39:37.809183 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:39:37.828393 (kubelet)[1846]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:39:37.868967 tar[1745]: linux-arm64/README.md Jun 20 18:39:37.886348 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 20 18:39:38.190130 kubelet[1846]: E0620 18:39:38.190019 1846 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:39:38.193376 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:39:38.193517 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:39:38.193805 systemd[1]: kubelet.service: Consumed 714ms CPU time, 255.7M memory peak. 
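
Two of the failures above are expected on a node that has not been bootstrapped yet: containerd reports "no network config found in /etc/cni/net.d", and kubelet exits because /var/lib/kubelet/config.yaml does not exist until kubeadm (or an equivalent bootstrapper) writes it. A minimal Python sketch of the same two checks, using the paths quoted in the log (the CNI file patterns are an assumption about what CNI loaders conventionally accept, not something the log states):

    #!/usr/bin/env python3
    """Reproduce the two "missing config" checks reported above; illustrative only."""
    import glob
    import os

    CNI_CONF_DIR = "/etc/cni/net.d"                  # from the containerd error
    KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"  # from the kubelet error

    def cni_config_present(conf_dir: str = CNI_CONF_DIR) -> bool:
        """True if at least one CNI network config file exists in conf_dir."""
        patterns = ("*.conf", "*.conflist", "*.json")  # assumed patterns
        return any(glob.glob(os.path.join(conf_dir, p)) for p in patterns)

    def kubelet_config_present(path: str = KUBELET_CONFIG) -> bool:
        """True if the kubelet config file exists and is non-empty."""
        return os.path.isfile(path) and os.path.getsize(path) > 0

    if __name__ == "__main__":
        print("CNI config present:    ", cni_config_present())
        print("kubelet config present:", kubelet_config_present())

Both checks typically start passing only after a CNI plugin installs its config and a bootstrapper such as kubeadm writes the kubelet configuration, which is why the same kubelet error repeats later in the log.
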
Jun 20 18:39:38.551647 sshd_keygen[1722]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 20 18:39:38.570336 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 20 18:39:38.583330 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 20 18:39:38.590176 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jun 20 18:39:38.597686 systemd[1]: issuegen.service: Deactivated successfully. Jun 20 18:39:38.598164 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 20 18:39:38.613602 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 20 18:39:38.623197 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jun 20 18:39:38.637442 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 20 18:39:38.652296 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 20 18:39:38.663196 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jun 20 18:39:38.670500 systemd[1]: Reached target getty.target - Login Prompts. Jun 20 18:39:38.676567 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 20 18:39:38.684731 systemd[1]: Startup finished in 683ms (kernel) + 11.362s (initrd) + 10.497s (userspace) = 22.542s. Jun 20 18:39:38.934749 login[1878]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:39:38.935571 login[1879]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:39:38.949567 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 20 18:39:38.950009 systemd-logind[1715]: New session 2 of user core. Jun 20 18:39:38.955225 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 20 18:39:38.959044 systemd-logind[1715]: New session 1 of user core. Jun 20 18:39:38.968781 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 20 18:39:38.973311 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 20 18:39:38.978534 (systemd)[1886]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 20 18:39:38.980746 systemd-logind[1715]: New session c1 of user core. Jun 20 18:39:39.137801 systemd[1886]: Queued start job for default target default.target. Jun 20 18:39:39.147992 systemd[1886]: Created slice app.slice - User Application Slice. Jun 20 18:39:39.148176 systemd[1886]: Reached target paths.target - Paths. Jun 20 18:39:39.148223 systemd[1886]: Reached target timers.target - Timers. Jun 20 18:39:39.150269 systemd[1886]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 20 18:39:39.162283 systemd[1886]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 20 18:39:39.162527 systemd[1886]: Reached target sockets.target - Sockets. Jun 20 18:39:39.162587 systemd[1886]: Reached target basic.target - Basic System. Jun 20 18:39:39.162620 systemd[1886]: Reached target default.target - Main User Target. Jun 20 18:39:39.162647 systemd[1886]: Startup finished in 175ms. Jun 20 18:39:39.162672 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 20 18:39:39.164479 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 20 18:39:39.166619 systemd[1]: Started session-2.scope - Session 2 of User core. 
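
The "Startup finished" record above can be cross-checked with simple arithmetic: 0.683 s (kernel) + 11.362 s (initrd) + 10.497 s (userspace) = 22.542 s, matching the printed total. A few lines of Python that parse the record and verify the sum:

    import re

    LINE = "Startup finished in 683ms (kernel) + 11.362s (initrd) + 10.497s (userspace) = 22.542s."

    def to_seconds(token: str) -> float:
        """Convert a token such as '683ms' or '11.362s' to seconds."""
        if token.endswith("ms"):
            return float(token[:-2]) / 1000.0
        return float(token.rstrip("s."))

    parts = re.findall(r"([\d.]+m?s)\s*\(([a-z]+)\)", LINE)
    total = float(re.search(r"=\s*([\d.]+)s", LINE).group(1))
    stages = {name: to_seconds(value) for value, name in parts}
    assert abs(sum(stages.values()) - total) < 1e-6, (stages, total)
    print(stages, "->", total, "s")
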
Jun 20 18:39:40.093716 waagent[1875]: 2025-06-20T18:39:40.093590Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jun 20 18:39:40.099394 waagent[1875]: 2025-06-20T18:39:40.099307Z INFO Daemon Daemon OS: flatcar 4230.2.0 Jun 20 18:39:40.104169 waagent[1875]: 2025-06-20T18:39:40.104101Z INFO Daemon Daemon Python: 3.11.11 Jun 20 18:39:40.108651 waagent[1875]: 2025-06-20T18:39:40.108583Z INFO Daemon Daemon Run daemon Jun 20 18:39:40.112627 waagent[1875]: 2025-06-20T18:39:40.112564Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.2.0' Jun 20 18:39:40.121210 waagent[1875]: 2025-06-20T18:39:40.121137Z INFO Daemon Daemon Using waagent for provisioning Jun 20 18:39:40.126392 waagent[1875]: 2025-06-20T18:39:40.126336Z INFO Daemon Daemon Activate resource disk Jun 20 18:39:40.130846 waagent[1875]: 2025-06-20T18:39:40.130783Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jun 20 18:39:40.143461 waagent[1875]: 2025-06-20T18:39:40.143378Z INFO Daemon Daemon Found device: None Jun 20 18:39:40.147752 waagent[1875]: 2025-06-20T18:39:40.147688Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jun 20 18:39:40.156206 waagent[1875]: 2025-06-20T18:39:40.156140Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jun 20 18:39:40.167361 waagent[1875]: 2025-06-20T18:39:40.167296Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 20 18:39:40.172852 waagent[1875]: 2025-06-20T18:39:40.172791Z INFO Daemon Daemon Running default provisioning handler Jun 20 18:39:40.184293 waagent[1875]: 2025-06-20T18:39:40.184205Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jun 20 18:39:40.197614 waagent[1875]: 2025-06-20T18:39:40.197535Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jun 20 18:39:40.207046 waagent[1875]: 2025-06-20T18:39:40.206973Z INFO Daemon Daemon cloud-init is enabled: False Jun 20 18:39:40.211999 waagent[1875]: 2025-06-20T18:39:40.211921Z INFO Daemon Daemon Copying ovf-env.xml Jun 20 18:39:41.682441 waagent[1875]: 2025-06-20T18:39:41.681515Z INFO Daemon Daemon Successfully mounted dvd Jun 20 18:39:41.710760 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jun 20 18:39:41.713680 waagent[1875]: 2025-06-20T18:39:41.713592Z INFO Daemon Daemon Detect protocol endpoint Jun 20 18:39:41.718812 waagent[1875]: 2025-06-20T18:39:41.718734Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 20 18:39:41.724399 waagent[1875]: 2025-06-20T18:39:41.724335Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jun 20 18:39:41.731343 waagent[1875]: 2025-06-20T18:39:41.731269Z INFO Daemon Daemon Test for route to 168.63.129.16 Jun 20 18:39:41.736604 waagent[1875]: 2025-06-20T18:39:41.736538Z INFO Daemon Daemon Route to 168.63.129.16 exists Jun 20 18:39:41.741629 waagent[1875]: 2025-06-20T18:39:41.741570Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jun 20 18:39:41.770729 waagent[1875]: 2025-06-20T18:39:41.770671Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jun 20 18:39:41.780325 waagent[1875]: 2025-06-20T18:39:41.780289Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jun 20 18:39:41.785632 waagent[1875]: 2025-06-20T18:39:41.785575Z INFO Daemon Daemon Server preferred version:2015-04-05 Jun 20 18:39:42.012971 waagent[1875]: 2025-06-20T18:39:42.012720Z INFO Daemon Daemon Initializing goal state during protocol detection Jun 20 18:39:42.019190 waagent[1875]: 2025-06-20T18:39:42.019112Z INFO Daemon Daemon Forcing an update of the goal state. Jun 20 18:39:42.029325 waagent[1875]: 2025-06-20T18:39:42.029257Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 20 18:39:42.115102 waagent[1875]: 2025-06-20T18:39:42.115044Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jun 20 18:39:42.121768 waagent[1875]: 2025-06-20T18:39:42.121707Z INFO Daemon Jun 20 18:39:42.124851 waagent[1875]: 2025-06-20T18:39:42.124792Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: a79c65d6-2b74-42be-99d7-b350a48a866c eTag: 9473622947583391316 source: Fabric] Jun 20 18:39:42.136442 waagent[1875]: 2025-06-20T18:39:42.136386Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jun 20 18:39:42.143814 waagent[1875]: 2025-06-20T18:39:42.143756Z INFO Daemon Jun 20 18:39:42.146898 waagent[1875]: 2025-06-20T18:39:42.146828Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jun 20 18:39:42.158746 waagent[1875]: 2025-06-20T18:39:42.158698Z INFO Daemon Daemon Downloading artifacts profile blob Jun 20 18:39:42.251975 waagent[1875]: 2025-06-20T18:39:42.251286Z INFO Daemon Downloaded certificate {'thumbprint': 'D34610ECE61A22610B80FD6A06132C2EF727A98D', 'hasPrivateKey': True} Jun 20 18:39:42.262549 waagent[1875]: 2025-06-20T18:39:42.262351Z INFO Daemon Downloaded certificate {'thumbprint': '71C0531C4457D035CE4A9D2D803DCCA6FA373CA3', 'hasPrivateKey': False} Jun 20 18:39:42.272898 waagent[1875]: 2025-06-20T18:39:42.272794Z INFO Daemon Fetch goal state completed Jun 20 18:39:42.287500 waagent[1875]: 2025-06-20T18:39:42.287432Z INFO Daemon Daemon Starting provisioning Jun 20 18:39:42.292451 waagent[1875]: 2025-06-20T18:39:42.292375Z INFO Daemon Daemon Handle ovf-env.xml. Jun 20 18:39:42.297255 waagent[1875]: 2025-06-20T18:39:42.297196Z INFO Daemon Daemon Set hostname [ci-4230.2.0-a-431835d741] Jun 20 18:39:42.375956 waagent[1875]: 2025-06-20T18:39:42.371008Z INFO Daemon Daemon Publish hostname [ci-4230.2.0-a-431835d741] Jun 20 18:39:42.379131 waagent[1875]: 2025-06-20T18:39:42.379058Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jun 20 18:39:42.385522 waagent[1875]: 2025-06-20T18:39:42.385459Z INFO Daemon Daemon Primary interface is [eth0] Jun 20 18:39:42.398615 systemd-networkd[1474]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:39:42.399471 systemd-networkd[1474]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
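
At this point the agent has confirmed a route to the WireServer at 168.63.129.16 and negotiated wire protocol version 2012-11-30 before fetching the goal state. The sketch below issues that style of request from Python; the /machine/?comp=goalstate path and the x-ms-version header are how the wire protocol is commonly exercised, but treat them as assumptions since the log only shows the endpoint and the version numbers:

    import urllib.request

    WIRESERVER = "168.63.129.16"          # endpoint from the log
    WIRE_PROTOCOL_VERSION = "2012-11-30"  # "Wire protocol version" from the log

    def fetch_goal_state(host: str = WIRESERVER) -> str:
        """Fetch the raw goal-state XML; path and header are assumptions, not from the log."""
        req = urllib.request.Request(
            f"http://{host}/machine/?comp=goalstate",
            headers={"x-ms-version": WIRE_PROTOCOL_VERSION},
        )
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.read().decode("utf-8")

    # Only reachable from inside an Azure VM with a route to the WireServer:
    # print(fetch_goal_state()[:200])
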
Jun 20 18:39:42.399509 systemd-networkd[1474]: eth0: DHCP lease lost Jun 20 18:39:42.404867 waagent[1875]: 2025-06-20T18:39:42.399744Z INFO Daemon Daemon Create user account if not exists Jun 20 18:39:42.405403 waagent[1875]: 2025-06-20T18:39:42.405332Z INFO Daemon Daemon User core already exists, skip useradd Jun 20 18:39:42.411563 waagent[1875]: 2025-06-20T18:39:42.411474Z INFO Daemon Daemon Configure sudoer Jun 20 18:39:42.416647 waagent[1875]: 2025-06-20T18:39:42.416569Z INFO Daemon Daemon Configure sshd Jun 20 18:39:42.421440 waagent[1875]: 2025-06-20T18:39:42.421303Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jun 20 18:39:42.433904 waagent[1875]: 2025-06-20T18:39:42.433811Z INFO Daemon Daemon Deploy ssh public key. Jun 20 18:39:42.445016 systemd-networkd[1474]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jun 20 18:39:43.586954 waagent[1875]: 2025-06-20T18:39:43.586197Z INFO Daemon Daemon Provisioning complete Jun 20 18:39:43.603380 waagent[1875]: 2025-06-20T18:39:43.603326Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jun 20 18:39:43.610504 waagent[1875]: 2025-06-20T18:39:43.610442Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jun 20 18:39:43.619592 waagent[1875]: 2025-06-20T18:39:43.619528Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jun 20 18:39:43.758700 waagent[1943]: 2025-06-20T18:39:43.758602Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jun 20 18:39:43.759047 waagent[1943]: 2025-06-20T18:39:43.758769Z INFO ExtHandler ExtHandler OS: flatcar 4230.2.0 Jun 20 18:39:43.759047 waagent[1943]: 2025-06-20T18:39:43.758824Z INFO ExtHandler ExtHandler Python: 3.11.11 Jun 20 18:39:43.789965 waagent[1943]: 2025-06-20T18:39:43.789471Z INFO ExtHandler ExtHandler Distro: flatcar-4230.2.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jun 20 18:39:43.789965 waagent[1943]: 2025-06-20T18:39:43.789739Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 18:39:43.789965 waagent[1943]: 2025-06-20T18:39:43.789802Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 18:39:43.798976 waagent[1943]: 2025-06-20T18:39:43.798858Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 20 18:39:43.808930 waagent[1943]: 2025-06-20T18:39:43.808865Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jun 20 18:39:43.809542 waagent[1943]: 2025-06-20T18:39:43.809491Z INFO ExtHandler Jun 20 18:39:43.809616 waagent[1943]: 2025-06-20T18:39:43.809585Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: ea47f9bf-00bc-4b09-a36b-17fd9eac86c1 eTag: 9473622947583391316 source: Fabric] Jun 20 18:39:43.809920 waagent[1943]: 2025-06-20T18:39:43.809877Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
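
The "Configure sshd" step only records that a snippet was added which disables password-based authentication and enables client keep-alive probing; the actual directives and the file they land in are not shown. The sketch below writes that kind of drop-in with a hypothetical path and illustrative values, purely to make the described effect concrete:

    from pathlib import Path

    # Hypothetical location and contents; the agent's real snippet is not shown in the log.
    DROPIN = Path("/etc/ssh/sshd_config.d/90-disable-passwords-example.conf")
    SNIPPET = (
        "# Disable password-based authentication methods\n"
        "PasswordAuthentication no\n"
        "# Probe idle clients periodically to keep connections alive\n"
        "ClientAliveInterval 180\n"
    )

    def write_dropin(path: Path = DROPIN, text: str = SNIPPET) -> None:
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(text)

    # write_dropin()  # needs root; validate afterwards with: sshd -t
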
Jun 20 18:39:43.810567 waagent[1943]: 2025-06-20T18:39:43.810512Z INFO ExtHandler Jun 20 18:39:43.810636 waagent[1943]: 2025-06-20T18:39:43.810605Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jun 20 18:39:43.817392 waagent[1943]: 2025-06-20T18:39:43.817340Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jun 20 18:39:43.899039 waagent[1943]: 2025-06-20T18:39:43.898838Z INFO ExtHandler Downloaded certificate {'thumbprint': 'D34610ECE61A22610B80FD6A06132C2EF727A98D', 'hasPrivateKey': True} Jun 20 18:39:43.899472 waagent[1943]: 2025-06-20T18:39:43.899419Z INFO ExtHandler Downloaded certificate {'thumbprint': '71C0531C4457D035CE4A9D2D803DCCA6FA373CA3', 'hasPrivateKey': False} Jun 20 18:39:43.899904 waagent[1943]: 2025-06-20T18:39:43.899856Z INFO ExtHandler Fetch goal state completed Jun 20 18:39:43.915741 waagent[1943]: 2025-06-20T18:39:43.915666Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1943 Jun 20 18:39:43.915917 waagent[1943]: 2025-06-20T18:39:43.915877Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jun 20 18:39:43.917762 waagent[1943]: 2025-06-20T18:39:43.917703Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.2.0', '', 'Flatcar Container Linux by Kinvolk'] Jun 20 18:39:43.918192 waagent[1943]: 2025-06-20T18:39:43.918149Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jun 20 18:39:43.947047 waagent[1943]: 2025-06-20T18:39:43.946989Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jun 20 18:39:43.947296 waagent[1943]: 2025-06-20T18:39:43.947246Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jun 20 18:39:43.953780 waagent[1943]: 2025-06-20T18:39:43.953716Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jun 20 18:39:43.960674 systemd[1]: Reload requested from client PID 1958 ('systemctl') (unit waagent.service)... Jun 20 18:39:43.960694 systemd[1]: Reloading... Jun 20 18:39:44.056981 zram_generator::config[1997]: No configuration found. Jun 20 18:39:44.182768 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:39:44.307829 systemd[1]: Reloading finished in 346 ms. Jun 20 18:39:44.331969 waagent[1943]: 2025-06-20T18:39:44.326317Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jun 20 18:39:44.332859 systemd[1]: Reload requested from client PID 2051 ('systemctl') (unit waagent.service)... Jun 20 18:39:44.333081 systemd[1]: Reloading... Jun 20 18:39:44.414059 zram_generator::config[2090]: No configuration found. Jun 20 18:39:44.538201 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:39:44.663165 systemd[1]: Reloading finished in 329 ms. 
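
The goal state carries certificates that the agent identifies only by thumbprint (D34610EC... and 71C0531C... above). A thumbprint in this form is conventionally the uppercase hex SHA-1 digest of the DER-encoded certificate; the log does not state the hashing scheme, so take that as an assumption. A short sketch that computes one from a PEM file:

    import base64
    import hashlib
    import re

    def thumbprint_from_pem(pem_text: str) -> str:
        """Uppercase hex SHA-1 of the DER bytes inside the first PEM certificate block."""
        block = re.search(
            r"-----BEGIN CERTIFICATE-----(.*?)-----END CERTIFICATE-----", pem_text, re.S
        )
        der = base64.b64decode(block.group(1))
        return hashlib.sha1(der).hexdigest().upper()

    # Example with a hypothetical file path:
    # print(thumbprint_from_pem(open("/var/lib/waagent/example.crt").read()))
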
Jun 20 18:39:44.678127 waagent[1943]: 2025-06-20T18:39:44.677281Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jun 20 18:39:44.678127 waagent[1943]: 2025-06-20T18:39:44.677465Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jun 20 18:39:44.923867 waagent[1943]: 2025-06-20T18:39:44.923729Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jun 20 18:39:44.924887 waagent[1943]: 2025-06-20T18:39:44.924803Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jun 20 18:39:44.926164 waagent[1943]: 2025-06-20T18:39:44.926104Z INFO ExtHandler ExtHandler Starting env monitor service. Jun 20 18:39:44.926501 waagent[1943]: 2025-06-20T18:39:44.926439Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 18:39:44.927065 waagent[1943]: 2025-06-20T18:39:44.926871Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jun 20 18:39:44.927065 waagent[1943]: 2025-06-20T18:39:44.926997Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 18:39:44.927159 waagent[1943]: 2025-06-20T18:39:44.927135Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 18:39:44.927270 waagent[1943]: 2025-06-20T18:39:44.927192Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 18:39:44.927635 waagent[1943]: 2025-06-20T18:39:44.927583Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jun 20 18:39:44.927909 waagent[1943]: 2025-06-20T18:39:44.927863Z INFO EnvHandler ExtHandler Configure routes Jun 20 18:39:44.928002 waagent[1943]: 2025-06-20T18:39:44.927968Z INFO EnvHandler ExtHandler Gateway:None Jun 20 18:39:44.928056 waagent[1943]: 2025-06-20T18:39:44.928027Z INFO EnvHandler ExtHandler Routes:None Jun 20 18:39:44.928903 waagent[1943]: 2025-06-20T18:39:44.928842Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jun 20 18:39:44.929732 waagent[1943]: 2025-06-20T18:39:44.929073Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jun 20 18:39:44.929732 waagent[1943]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jun 20 18:39:44.929732 waagent[1943]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jun 20 18:39:44.929732 waagent[1943]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jun 20 18:39:44.929732 waagent[1943]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jun 20 18:39:44.929732 waagent[1943]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 20 18:39:44.929732 waagent[1943]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 20 18:39:44.929732 waagent[1943]: 2025-06-20T18:39:44.928644Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jun 20 18:39:44.930543 waagent[1943]: 2025-06-20T18:39:44.930080Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
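
Addresses in the /proc/net/route dump above are little-endian hex words: 10813FA8 decodes to 168.63.129.16 (the WireServer), FEA9FEA9 to 169.254.169.254, and 0114C80A to the 10.200.20.1 gateway. A short decoder for the destination and gateway columns shown:

    import socket
    import struct

    def hex_to_ip(word: str) -> str:
        """Decode a little-endian hex address word from /proc/net/route."""
        return socket.inet_ntoa(struct.pack("<L", int(word, 16)))

    # (destination, gateway) pairs copied from the dump above
    rows = [
        ("00000000", "0114C80A"),   # default route via 10.200.20.1
        ("0014C80A", "00000000"),   # 10.200.20.0/24, directly connected
        ("10813FA8", "0114C80A"),   # 168.63.129.16 (WireServer) via the gateway
        ("FEA9FEA9", "0114C80A"),   # 169.254.169.254 (IMDS) via the gateway
    ]
    for dest, gw in rows:
        print(f"{hex_to_ip(dest):>15} via {hex_to_ip(gw)}")
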
Jun 20 18:39:44.930616 waagent[1943]: 2025-06-20T18:39:44.930023Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jun 20 18:39:44.930776 waagent[1943]: 2025-06-20T18:39:44.930732Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jun 20 18:39:44.938774 waagent[1943]: 2025-06-20T18:39:44.937444Z INFO ExtHandler ExtHandler Jun 20 18:39:44.938774 waagent[1943]: 2025-06-20T18:39:44.937580Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 8f550c64-d2c4-4feb-8f60-a80d7ed409d2 correlation 56244108-006e-47ae-8858-81a582352751 created: 2025-06-20T18:38:35.933235Z] Jun 20 18:39:44.938774 waagent[1943]: 2025-06-20T18:39:44.938059Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jun 20 18:39:44.938774 waagent[1943]: 2025-06-20T18:39:44.938650Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jun 20 18:39:45.029403 waagent[1943]: 2025-06-20T18:39:45.029330Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: F06DA833-50BB-400D-A07D-4DC64FDFB16D;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jun 20 18:39:45.069884 waagent[1943]: 2025-06-20T18:39:45.069800Z INFO MonitorHandler ExtHandler Network interfaces: Jun 20 18:39:45.069884 waagent[1943]: Executing ['ip', '-a', '-o', 'link']: Jun 20 18:39:45.069884 waagent[1943]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jun 20 18:39:45.069884 waagent[1943]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b5:35:1b brd ff:ff:ff:ff:ff:ff Jun 20 18:39:45.069884 waagent[1943]: 3: enP10013s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b5:35:1b brd ff:ff:ff:ff:ff:ff\ altname enP10013p0s2 Jun 20 18:39:45.069884 waagent[1943]: Executing ['ip', '-4', '-a', '-o', 'address']: Jun 20 18:39:45.069884 waagent[1943]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jun 20 18:39:45.069884 waagent[1943]: 2: eth0 inet 10.200.20.37/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jun 20 18:39:45.069884 waagent[1943]: Executing ['ip', '-6', '-a', '-o', 'address']: Jun 20 18:39:45.069884 waagent[1943]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jun 20 18:39:45.069884 waagent[1943]: 2: eth0 inet6 fe80::222:48ff:feb5:351b/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 20 18:39:45.069884 waagent[1943]: 3: enP10013s1 inet6 fe80::222:48ff:feb5:351b/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 20 18:39:45.428459 waagent[1943]: 2025-06-20T18:39:45.428358Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jun 20 18:39:45.428459 waagent[1943]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:39:45.428459 waagent[1943]: pkts bytes target prot opt in out source destination Jun 20 18:39:45.428459 waagent[1943]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:39:45.428459 waagent[1943]: pkts bytes target prot opt in out source destination Jun 20 18:39:45.428459 waagent[1943]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:39:45.428459 waagent[1943]: pkts bytes target prot opt in out source destination Jun 20 18:39:45.428459 waagent[1943]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 20 18:39:45.428459 waagent[1943]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 20 18:39:45.428459 waagent[1943]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 20 18:39:45.432034 waagent[1943]: 2025-06-20T18:39:45.431932Z INFO EnvHandler ExtHandler Current Firewall rules: Jun 20 18:39:45.432034 waagent[1943]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:39:45.432034 waagent[1943]: pkts bytes target prot opt in out source destination Jun 20 18:39:45.432034 waagent[1943]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:39:45.432034 waagent[1943]: pkts bytes target prot opt in out source destination Jun 20 18:39:45.432034 waagent[1943]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:39:45.432034 waagent[1943]: pkts bytes target prot opt in out source destination Jun 20 18:39:45.432034 waagent[1943]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 20 18:39:45.432034 waagent[1943]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 20 18:39:45.432034 waagent[1943]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 20 18:39:45.432341 waagent[1943]: 2025-06-20T18:39:45.432298Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jun 20 18:39:48.258404 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 20 18:39:48.266290 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:39:48.392774 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:39:48.401258 (kubelet)[2186]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:39:48.493405 kubelet[2186]: E0620 18:39:48.493256 2186 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:39:48.496247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:39:48.496407 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:39:48.498020 systemd[1]: kubelet.service: Consumed 143ms CPU time, 105.2M memory peak. Jun 20 18:39:52.999228 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 20 18:39:53.000885 systemd[1]: Started sshd@0-10.200.20.37:22-10.200.16.10:52792.service - OpenSSH per-connection server daemon (10.200.16.10:52792). 
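
The firewall dump above shows three OUTPUT rules scoped to 168.63.129.16: accept TCP to destination port 53, accept TCP from processes owned by UID 0, and drop TCP packets in the INVALID or NEW conntrack states. The sketch below reconstructs equivalent iptables invocations from those rule specs; the table (filter) and the exact match-module spellings are inferred from the dump, so treat them as assumptions:

    WIRESERVER = "168.63.129.16"

    # Equivalent commands for the three OUTPUT rules listed in the dump above
    # (filter table assumed; the dump does not name the table explicitly).
    rules = [
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]
    for rule in rules:
        print("iptables", " ".join(rule))

Rule order matters here: DNS traffic and root-owned agent traffic to the WireServer are accepted first, and the final DROP then blocks new connections to that address from any other process.
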
Jun 20 18:39:53.550195 sshd[2194]: Accepted publickey for core from 10.200.16.10 port 52792 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:39:53.551494 sshd-session[2194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:39:53.556489 systemd-logind[1715]: New session 3 of user core. Jun 20 18:39:53.564159 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 20 18:39:53.970318 systemd[1]: Started sshd@1-10.200.20.37:22-10.200.16.10:52808.service - OpenSSH per-connection server daemon (10.200.16.10:52808). Jun 20 18:39:54.422797 sshd[2199]: Accepted publickey for core from 10.200.16.10 port 52808 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:39:54.424150 sshd-session[2199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:39:54.428391 systemd-logind[1715]: New session 4 of user core. Jun 20 18:39:54.438132 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 20 18:39:54.758060 sshd[2201]: Connection closed by 10.200.16.10 port 52808 Jun 20 18:39:54.758591 sshd-session[2199]: pam_unix(sshd:session): session closed for user core Jun 20 18:39:54.762428 systemd[1]: sshd@1-10.200.20.37:22-10.200.16.10:52808.service: Deactivated successfully. Jun 20 18:39:54.764265 systemd[1]: session-4.scope: Deactivated successfully. Jun 20 18:39:54.765741 systemd-logind[1715]: Session 4 logged out. Waiting for processes to exit. Jun 20 18:39:54.766873 systemd-logind[1715]: Removed session 4. Jun 20 18:39:54.860222 systemd[1]: Started sshd@2-10.200.20.37:22-10.200.16.10:52818.service - OpenSSH per-connection server daemon (10.200.16.10:52818). Jun 20 18:39:55.393293 sshd[2207]: Accepted publickey for core from 10.200.16.10 port 52818 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:39:55.394627 sshd-session[2207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:39:55.402063 systemd-logind[1715]: New session 5 of user core. Jun 20 18:39:55.408156 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 20 18:39:55.769996 sshd[2209]: Connection closed by 10.200.16.10 port 52818 Jun 20 18:39:55.770611 sshd-session[2207]: pam_unix(sshd:session): session closed for user core Jun 20 18:39:55.774262 systemd[1]: sshd@2-10.200.20.37:22-10.200.16.10:52818.service: Deactivated successfully. Jun 20 18:39:55.776020 systemd[1]: session-5.scope: Deactivated successfully. Jun 20 18:39:55.776802 systemd-logind[1715]: Session 5 logged out. Waiting for processes to exit. Jun 20 18:39:55.777911 systemd-logind[1715]: Removed session 5. Jun 20 18:39:55.864863 systemd[1]: Started sshd@3-10.200.20.37:22-10.200.16.10:52828.service - OpenSSH per-connection server daemon (10.200.16.10:52828). Jun 20 18:39:56.398792 sshd[2215]: Accepted publickey for core from 10.200.16.10 port 52828 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:39:56.400379 sshd-session[2215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:39:56.406094 systemd-logind[1715]: New session 6 of user core. Jun 20 18:39:56.411127 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 20 18:39:56.775728 sshd[2217]: Connection closed by 10.200.16.10 port 52828 Jun 20 18:39:56.776308 sshd-session[2215]: pam_unix(sshd:session): session closed for user core Jun 20 18:39:56.779974 systemd[1]: sshd@3-10.200.20.37:22-10.200.16.10:52828.service: Deactivated successfully. 
Jun 20 18:39:56.782183 systemd[1]: session-6.scope: Deactivated successfully. Jun 20 18:39:56.782855 systemd-logind[1715]: Session 6 logged out. Waiting for processes to exit. Jun 20 18:39:56.783894 systemd-logind[1715]: Removed session 6. Jun 20 18:39:56.870227 systemd[1]: Started sshd@4-10.200.20.37:22-10.200.16.10:52836.service - OpenSSH per-connection server daemon (10.200.16.10:52836). Jun 20 18:39:57.354813 sshd[2223]: Accepted publickey for core from 10.200.16.10 port 52836 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:39:57.356177 sshd-session[2223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:39:57.362002 systemd-logind[1715]: New session 7 of user core. Jun 20 18:39:57.368123 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 20 18:39:57.759024 sudo[2226]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 20 18:39:57.759304 sudo[2226]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:39:57.785349 sudo[2226]: pam_unix(sudo:session): session closed for user root Jun 20 18:39:57.863019 sshd[2225]: Connection closed by 10.200.16.10 port 52836 Jun 20 18:39:57.862855 sshd-session[2223]: pam_unix(sshd:session): session closed for user core Jun 20 18:39:57.866569 systemd-logind[1715]: Session 7 logged out. Waiting for processes to exit. Jun 20 18:39:57.867602 systemd[1]: sshd@4-10.200.20.37:22-10.200.16.10:52836.service: Deactivated successfully. Jun 20 18:39:57.869698 systemd[1]: session-7.scope: Deactivated successfully. Jun 20 18:39:57.870623 systemd-logind[1715]: Removed session 7. Jun 20 18:39:57.948202 systemd[1]: Started sshd@5-10.200.20.37:22-10.200.16.10:52838.service - OpenSSH per-connection server daemon (10.200.16.10:52838). Jun 20 18:39:58.401562 sshd[2232]: Accepted publickey for core from 10.200.16.10 port 52838 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:39:58.402921 sshd-session[2232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:39:58.407993 systemd-logind[1715]: New session 8 of user core. Jun 20 18:39:58.414104 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 20 18:39:58.508337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 20 18:39:58.522575 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:39:58.621916 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:39:58.631258 (kubelet)[2243]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:39:58.658140 sudo[2249]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 20 18:39:58.658438 sudo[2249]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:39:58.662354 sudo[2249]: pam_unix(sudo:session): session closed for user root Jun 20 18:39:58.667714 sudo[2248]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 20 18:39:58.668095 sudo[2248]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:39:58.684298 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 18:39:58.718745 augenrules[2272]: No rules Jun 20 18:39:58.720774 systemd[1]: audit-rules.service: Deactivated successfully. 
Jun 20 18:39:58.720920 kubelet[2243]: E0620 18:39:58.720786 2243 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:39:58.722408 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 18:39:58.723715 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:39:58.723843 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:39:58.724176 systemd[1]: kubelet.service: Consumed 139ms CPU time, 107.1M memory peak. Jun 20 18:39:58.724304 sudo[2248]: pam_unix(sudo:session): session closed for user root Jun 20 18:39:58.808195 sshd[2234]: Connection closed by 10.200.16.10 port 52838 Jun 20 18:39:58.808749 sshd-session[2232]: pam_unix(sshd:session): session closed for user core Jun 20 18:39:58.811977 systemd-logind[1715]: Session 8 logged out. Waiting for processes to exit. Jun 20 18:39:58.812272 systemd[1]: sshd@5-10.200.20.37:22-10.200.16.10:52838.service: Deactivated successfully. Jun 20 18:39:58.816074 systemd[1]: session-8.scope: Deactivated successfully. Jun 20 18:39:58.818682 systemd-logind[1715]: Removed session 8. Jun 20 18:39:58.891822 systemd[1]: Started sshd@6-10.200.20.37:22-10.200.16.10:34020.service - OpenSSH per-connection server daemon (10.200.16.10:34020). Jun 20 18:39:59.346116 sshd[2282]: Accepted publickey for core from 10.200.16.10 port 34020 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:39:59.347431 sshd-session[2282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:39:59.351642 systemd-logind[1715]: New session 9 of user core. Jun 20 18:39:59.358118 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 20 18:39:59.601398 sudo[2285]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 20 18:39:59.601703 sudo[2285]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:40:00.449954 chronyd[1697]: Selected source PHC0 Jun 20 18:40:00.656241 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 20 18:40:00.656320 (dockerd)[2303]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 20 18:40:01.348496 dockerd[2303]: time="2025-06-20T18:40:01.348431351Z" level=info msg="Starting up" Jun 20 18:40:01.636042 dockerd[2303]: time="2025-06-20T18:40:01.635862049Z" level=info msg="Loading containers: start." Jun 20 18:40:01.821958 kernel: Initializing XFRM netlink socket Jun 20 18:40:01.914110 systemd-networkd[1474]: docker0: Link UP Jun 20 18:40:01.957354 dockerd[2303]: time="2025-06-20T18:40:01.957313461Z" level=info msg="Loading containers: done." Jun 20 18:40:01.970250 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2007806839-merged.mount: Deactivated successfully. 
Jun 20 18:40:01.984870 dockerd[2303]: time="2025-06-20T18:40:01.984812946Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 20 18:40:01.985041 dockerd[2303]: time="2025-06-20T18:40:01.984954585Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jun 20 18:40:01.985135 dockerd[2303]: time="2025-06-20T18:40:01.985091785Z" level=info msg="Daemon has completed initialization" Jun 20 18:40:02.058216 dockerd[2303]: time="2025-06-20T18:40:02.057998119Z" level=info msg="API listen on /run/docker.sock" Jun 20 18:40:02.058119 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 20 18:40:03.067642 containerd[1748]: time="2025-06-20T18:40:03.067597483Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jun 20 18:40:04.054464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3432461366.mount: Deactivated successfully. Jun 20 18:40:05.921399 containerd[1748]: time="2025-06-20T18:40:05.921348073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:40:05.924267 containerd[1748]: time="2025-06-20T18:40:05.924215741Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328194" Jun 20 18:40:05.928805 containerd[1748]: time="2025-06-20T18:40:05.928773562Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:40:05.934838 containerd[1748]: time="2025-06-20T18:40:05.934785416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:40:05.936399 containerd[1748]: time="2025-06-20T18:40:05.936037411Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 2.868395408s" Jun 20 18:40:05.936399 containerd[1748]: time="2025-06-20T18:40:05.936072411Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\"" Jun 20 18:40:05.936922 containerd[1748]: time="2025-06-20T18:40:05.936749608Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jun 20 18:40:07.889979 containerd[1748]: time="2025-06-20T18:40:07.889904559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:40:07.894778 containerd[1748]: time="2025-06-20T18:40:07.894407540Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529228" Jun 20 18:40:07.901003 containerd[1748]: time="2025-06-20T18:40:07.900946112Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:40:07.908949 containerd[1748]: time="2025-06-20T18:40:07.908860478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:40:07.910310 containerd[1748]: time="2025-06-20T18:40:07.909887234Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.973105106s" Jun 20 18:40:07.910310 containerd[1748]: time="2025-06-20T18:40:07.909943473Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\"" Jun 20 18:40:07.910863 containerd[1748]: time="2025-06-20T18:40:07.910613790Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jun 20 18:40:08.758365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 20 18:40:08.767159 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:40:08.879863 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:40:08.893258 (kubelet)[2552]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:40:08.965829 kubelet[2552]: E0620 18:40:08.965766 2552 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:40:08.968396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:40:08.968682 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:40:08.970064 systemd[1]: kubelet.service: Consumed 138ms CPU time, 105.1M memory peak. 
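
kubelet keeps exiting on the same missing /var/lib/kubelet/config.yaml, and systemd reschedules it each time: the "Scheduled restart job" records above land at 18:39:48.258, 18:39:58.508 and 18:40:08.758, roughly 10.25 s apart, which is consistent with a unit restarting after a RestartSec of about 10 seconds (the actual unit setting is not shown in the log). The spacing can be checked directly:

    from datetime import datetime

    # Timestamps of the "Scheduled restart job, restart counter is at N" records above.
    restarts = ["18:39:48.258404", "18:39:58.508337", "18:40:08.758365"]
    times = [datetime.strptime(t, "%H:%M:%S.%f") for t in restarts]
    for earlier, later in zip(times, times[1:]):
        print(f"{(later - earlier).total_seconds():.3f} s between scheduled restarts")
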
Jun 20 18:40:09.829570 containerd[1748]: time="2025-06-20T18:40:09.829511912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:40:09.832755 containerd[1748]: time="2025-06-20T18:40:09.832498465Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484141" Jun 20 18:40:09.839462 containerd[1748]: time="2025-06-20T18:40:09.839426848Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:40:09.847395 containerd[1748]: time="2025-06-20T18:40:09.847316029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:40:09.848896 containerd[1748]: time="2025-06-20T18:40:09.848481466Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.937833596s" Jun 20 18:40:09.848896 containerd[1748]: time="2025-06-20T18:40:09.848518986Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\"" Jun 20 18:40:09.849173 containerd[1748]: time="2025-06-20T18:40:09.849141464Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jun 20 18:40:11.016938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount208032318.mount: Deactivated successfully. 
Jun 20 18:40:11.466039 containerd[1748]: time="2025-06-20T18:40:11.465853931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:40:11.469053 containerd[1748]: time="2025-06-20T18:40:11.468980279Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378406" Jun 20 18:40:11.474173 containerd[1748]: time="2025-06-20T18:40:11.474069698Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:40:11.481963 containerd[1748]: time="2025-06-20T18:40:11.480374872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:40:11.483200 containerd[1748]: time="2025-06-20T18:40:11.483155581Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.633973477s" Jun 20 18:40:11.483333 containerd[1748]: time="2025-06-20T18:40:11.483309980Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jun 20 18:40:11.485041 containerd[1748]: time="2025-06-20T18:40:11.485010973Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jun 20 18:40:12.223690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2328208109.mount: Deactivated successfully. 
Jun 20 18:40:14.027989 containerd[1748]: time="2025-06-20T18:40:14.027164541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:40:14.030402 containerd[1748]: time="2025-06-20T18:40:14.030069090Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jun 20 18:40:14.036586 containerd[1748]: time="2025-06-20T18:40:14.036529903Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:40:14.042395 containerd[1748]: time="2025-06-20T18:40:14.042309600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:40:14.043811 containerd[1748]: time="2025-06-20T18:40:14.043566475Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.558399662s" Jun 20 18:40:14.043811 containerd[1748]: time="2025-06-20T18:40:14.043607275Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jun 20 18:40:14.044468 containerd[1748]: time="2025-06-20T18:40:14.044273232Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 20 18:40:14.671802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1817774924.mount: Deactivated successfully. 
Jun 20 18:40:14.705975 containerd[1748]: time="2025-06-20T18:40:14.705738618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:40:14.709301 containerd[1748]: time="2025-06-20T18:40:14.709086805Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jun 20 18:40:14.715245 containerd[1748]: time="2025-06-20T18:40:14.715188940Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:40:14.721357 containerd[1748]: time="2025-06-20T18:40:14.721282795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:40:14.724321 containerd[1748]: time="2025-06-20T18:40:14.722394230Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 678.088998ms" Jun 20 18:40:14.724321 containerd[1748]: time="2025-06-20T18:40:14.722434590Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jun 20 18:40:14.726068 containerd[1748]: time="2025-06-20T18:40:14.726030416Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jun 20 18:40:15.582364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3931591413.mount: Deactivated successfully. Jun 20 18:40:19.008517 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jun 20 18:40:19.017225 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:40:19.136311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:40:19.139474 (kubelet)[2688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:40:19.236309 kubelet[2688]: E0620 18:40:19.236151 2688 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:40:19.238607 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:40:19.238759 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:40:19.241030 systemd[1]: kubelet.service: Consumed 139ms CPU time, 105.5M memory peak. 
Jun 20 18:40:19.722573 containerd[1748]: time="2025-06-20T18:40:19.722490764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:40:19.725490 containerd[1748]: time="2025-06-20T18:40:19.725423796Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469" Jun 20 18:40:19.729042 containerd[1748]: time="2025-06-20T18:40:19.729002785Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:40:19.736196 containerd[1748]: time="2025-06-20T18:40:19.736110405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:40:19.737679 containerd[1748]: time="2025-06-20T18:40:19.737539360Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 5.011465105s" Jun 20 18:40:19.737679 containerd[1748]: time="2025-06-20T18:40:19.737578880Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jun 20 18:40:21.631997 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jun 20 18:40:22.161384 update_engine[1721]: I20250620 18:40:22.161310 1721 update_attempter.cc:509] Updating boot flags... Jun 20 18:40:22.241947 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2733) Jun 20 18:40:25.204001 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:40:25.204667 systemd[1]: kubelet.service: Consumed 139ms CPU time, 105.5M memory peak. Jun 20 18:40:25.214235 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:40:25.243471 systemd[1]: Reload requested from client PID 2788 ('systemctl') (unit session-9.scope)... Jun 20 18:40:25.243497 systemd[1]: Reloading... Jun 20 18:40:25.392978 zram_generator::config[2850]: No configuration found. Jun 20 18:40:25.505589 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:40:25.635420 systemd[1]: Reloading finished in 391 ms. Jun 20 18:40:25.681996 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:40:25.686842 (kubelet)[2892]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 18:40:25.692600 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:40:25.695206 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 18:40:25.695508 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:40:25.695576 systemd[1]: kubelet.service: Consumed 101ms CPU time, 98.6M memory peak. Jun 20 18:40:25.706279 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jun 20 18:40:25.841351 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:40:25.852293 (kubelet)[2909]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 18:40:25.920199 kubelet[2909]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:40:25.920199 kubelet[2909]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 18:40:25.920199 kubelet[2909]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:40:25.920578 kubelet[2909]: I0620 18:40:25.920259 2909 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 18:40:26.729026 kubelet[2909]: I0620 18:40:26.728979 2909 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 20 18:40:26.729210 kubelet[2909]: I0620 18:40:26.729199 2909 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 18:40:26.729591 kubelet[2909]: I0620 18:40:26.729574 2909 server.go:954] "Client rotation is on, will bootstrap in background" Jun 20 18:40:26.746845 kubelet[2909]: E0620 18:40:26.746799 2909 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:40:26.749751 kubelet[2909]: I0620 18:40:26.749708 2909 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 18:40:26.756272 kubelet[2909]: E0620 18:40:26.756220 2909 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jun 20 18:40:26.756272 kubelet[2909]: I0620 18:40:26.756271 2909 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jun 20 18:40:26.760944 kubelet[2909]: I0620 18:40:26.760753 2909 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 18:40:26.761091 kubelet[2909]: I0620 18:40:26.761044 2909 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 18:40:26.761340 kubelet[2909]: I0620 18:40:26.761087 2909 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.0-a-431835d741","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 18:40:26.761437 kubelet[2909]: I0620 18:40:26.761348 2909 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 18:40:26.761437 kubelet[2909]: I0620 18:40:26.761358 2909 container_manager_linux.go:304] "Creating device plugin manager" Jun 20 18:40:26.761516 kubelet[2909]: I0620 18:40:26.761496 2909 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:40:26.765531 kubelet[2909]: I0620 18:40:26.765252 2909 kubelet.go:446] "Attempting to sync node with API server" Jun 20 18:40:26.765531 kubelet[2909]: I0620 18:40:26.765292 2909 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 18:40:26.765531 kubelet[2909]: I0620 18:40:26.765318 2909 kubelet.go:352] "Adding apiserver pod source" Jun 20 18:40:26.765531 kubelet[2909]: I0620 18:40:26.765329 2909 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 18:40:26.767719 kubelet[2909]: W0620 18:40:26.767670 2909 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.0-a-431835d741&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jun 20 18:40:26.767874 kubelet[2909]: E0620 18:40:26.767853 2909 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.0-a-431835d741&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:40:26.768048 
kubelet[2909]: W0620 18:40:26.768019 2909 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jun 20 18:40:26.768133 kubelet[2909]: E0620 18:40:26.768118 2909 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:40:26.769658 kubelet[2909]: I0620 18:40:26.768598 2909 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jun 20 18:40:26.769658 kubelet[2909]: I0620 18:40:26.769099 2909 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 18:40:26.769658 kubelet[2909]: W0620 18:40:26.769153 2909 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 20 18:40:26.770765 kubelet[2909]: I0620 18:40:26.770742 2909 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 18:40:26.770877 kubelet[2909]: I0620 18:40:26.770866 2909 server.go:1287] "Started kubelet" Jun 20 18:40:26.775010 kubelet[2909]: E0620 18:40:26.774831 2909 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.37:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.37:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.0-a-431835d741.184ad44bb2989b02 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.0-a-431835d741,UID:ci-4230.2.0-a-431835d741,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.0-a-431835d741,},FirstTimestamp:2025-06-20 18:40:26.770840322 +0000 UTC m=+0.915372348,LastTimestamp:2025-06-20 18:40:26.770840322 +0000 UTC m=+0.915372348,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.0-a-431835d741,}" Jun 20 18:40:26.775137 kubelet[2909]: I0620 18:40:26.775118 2909 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 18:40:26.775966 kubelet[2909]: I0620 18:40:26.775922 2909 server.go:479] "Adding debug handlers to kubelet server" Jun 20 18:40:26.776219 kubelet[2909]: I0620 18:40:26.776161 2909 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 18:40:26.776623 kubelet[2909]: I0620 18:40:26.776602 2909 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 18:40:26.778494 kubelet[2909]: I0620 18:40:26.778459 2909 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 18:40:26.780145 kubelet[2909]: I0620 18:40:26.780117 2909 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 18:40:26.782180 kubelet[2909]: E0620 18:40:26.782157 2909 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 18:40:26.783061 kubelet[2909]: E0620 18:40:26.783027 2909 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.0-a-431835d741\" not found" Jun 20 18:40:26.783332 kubelet[2909]: I0620 18:40:26.783317 2909 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 18:40:26.783646 kubelet[2909]: I0620 18:40:26.783620 2909 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 18:40:26.783791 kubelet[2909]: I0620 18:40:26.783778 2909 reconciler.go:26] "Reconciler: start to sync state" Jun 20 18:40:26.784998 kubelet[2909]: W0620 18:40:26.784563 2909 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jun 20 18:40:26.784998 kubelet[2909]: E0620 18:40:26.784614 2909 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:40:26.784998 kubelet[2909]: I0620 18:40:26.784775 2909 factory.go:221] Registration of the systemd container factory successfully Jun 20 18:40:26.784998 kubelet[2909]: I0620 18:40:26.784883 2909 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 18:40:26.786328 kubelet[2909]: I0620 18:40:26.786307 2909 factory.go:221] Registration of the containerd container factory successfully Jun 20 18:40:26.792355 kubelet[2909]: E0620 18:40:26.792302 2909 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.0-a-431835d741?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="200ms" Jun 20 18:40:26.809995 kubelet[2909]: I0620 18:40:26.809948 2909 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 18:40:26.811624 kubelet[2909]: I0620 18:40:26.810783 2909 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 18:40:26.811624 kubelet[2909]: I0620 18:40:26.810807 2909 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 18:40:26.811624 kubelet[2909]: I0620 18:40:26.810842 2909 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:40:26.816190 kubelet[2909]: I0620 18:40:26.816047 2909 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 20 18:40:26.816190 kubelet[2909]: I0620 18:40:26.816081 2909 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 20 18:40:26.816524 kubelet[2909]: I0620 18:40:26.816385 2909 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jun 20 18:40:26.816524 kubelet[2909]: I0620 18:40:26.816405 2909 kubelet.go:2382] "Starting kubelet main sync loop" Jun 20 18:40:26.816773 kubelet[2909]: E0620 18:40:26.816636 2909 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 18:40:26.819547 kubelet[2909]: I0620 18:40:26.819514 2909 policy_none.go:49] "None policy: Start" Jun 20 18:40:26.819547 kubelet[2909]: I0620 18:40:26.819541 2909 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 18:40:26.819547 kubelet[2909]: I0620 18:40:26.819555 2909 state_mem.go:35] "Initializing new in-memory state store" Jun 20 18:40:26.820890 kubelet[2909]: W0620 18:40:26.820456 2909 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jun 20 18:40:26.820890 kubelet[2909]: E0620 18:40:26.820519 2909 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:40:26.829545 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 20 18:40:26.839415 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 20 18:40:26.843122 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 20 18:40:26.854168 kubelet[2909]: I0620 18:40:26.853913 2909 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 18:40:26.854168 kubelet[2909]: I0620 18:40:26.854161 2909 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 18:40:26.854320 kubelet[2909]: I0620 18:40:26.854173 2909 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 18:40:26.855108 kubelet[2909]: I0620 18:40:26.854486 2909 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 18:40:26.856284 kubelet[2909]: E0620 18:40:26.856002 2909 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 20 18:40:26.856284 kubelet[2909]: E0620 18:40:26.856050 2909 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.2.0-a-431835d741\" not found" Jun 20 18:40:26.928773 systemd[1]: Created slice kubepods-burstable-podf48880ad39258f2447342bac498ffbb8.slice - libcontainer container kubepods-burstable-podf48880ad39258f2447342bac498ffbb8.slice. 
Jun 20 18:40:26.953809 kubelet[2909]: E0620 18:40:26.953595 2909 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.0-a-431835d741\" not found" node="ci-4230.2.0-a-431835d741" Jun 20 18:40:26.957463 kubelet[2909]: I0620 18:40:26.957033 2909 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.0-a-431835d741" Jun 20 18:40:26.957686 kubelet[2909]: E0620 18:40:26.957651 2909 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4230.2.0-a-431835d741" Jun 20 18:40:26.959026 systemd[1]: Created slice kubepods-burstable-pod8dac460befa80644731225d3ccbfb8f9.slice - libcontainer container kubepods-burstable-pod8dac460befa80644731225d3ccbfb8f9.slice. Jun 20 18:40:26.975428 kubelet[2909]: E0620 18:40:26.975400 2909 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.0-a-431835d741\" not found" node="ci-4230.2.0-a-431835d741" Jun 20 18:40:26.978522 systemd[1]: Created slice kubepods-burstable-podee3b5db6e49861a29e98bcb0c87dcf6c.slice - libcontainer container kubepods-burstable-podee3b5db6e49861a29e98bcb0c87dcf6c.slice. Jun 20 18:40:26.980870 kubelet[2909]: E0620 18:40:26.980737 2909 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.0-a-431835d741\" not found" node="ci-4230.2.0-a-431835d741" Jun 20 18:40:26.985031 kubelet[2909]: I0620 18:40:26.984987 2909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f48880ad39258f2447342bac498ffbb8-k8s-certs\") pod \"kube-apiserver-ci-4230.2.0-a-431835d741\" (UID: \"f48880ad39258f2447342bac498ffbb8\") " pod="kube-system/kube-apiserver-ci-4230.2.0-a-431835d741" Jun 20 18:40:26.985031 kubelet[2909]: I0620 18:40:26.985035 2909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dac460befa80644731225d3ccbfb8f9-ca-certs\") pod \"kube-controller-manager-ci-4230.2.0-a-431835d741\" (UID: \"8dac460befa80644731225d3ccbfb8f9\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-431835d741" Jun 20 18:40:26.985197 kubelet[2909]: I0620 18:40:26.985055 2909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8dac460befa80644731225d3ccbfb8f9-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.0-a-431835d741\" (UID: \"8dac460befa80644731225d3ccbfb8f9\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-431835d741" Jun 20 18:40:26.985197 kubelet[2909]: I0620 18:40:26.985090 2909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dac460befa80644731225d3ccbfb8f9-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.0-a-431835d741\" (UID: \"8dac460befa80644731225d3ccbfb8f9\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-431835d741" Jun 20 18:40:26.985197 kubelet[2909]: I0620 18:40:26.985108 2909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f48880ad39258f2447342bac498ffbb8-ca-certs\") pod 
\"kube-apiserver-ci-4230.2.0-a-431835d741\" (UID: \"f48880ad39258f2447342bac498ffbb8\") " pod="kube-system/kube-apiserver-ci-4230.2.0-a-431835d741" Jun 20 18:40:26.985197 kubelet[2909]: I0620 18:40:26.985127 2909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8dac460befa80644731225d3ccbfb8f9-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.0-a-431835d741\" (UID: \"8dac460befa80644731225d3ccbfb8f9\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-431835d741" Jun 20 18:40:26.985197 kubelet[2909]: I0620 18:40:26.985144 2909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dac460befa80644731225d3ccbfb8f9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.0-a-431835d741\" (UID: \"8dac460befa80644731225d3ccbfb8f9\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-431835d741" Jun 20 18:40:26.985311 kubelet[2909]: I0620 18:40:26.985164 2909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee3b5db6e49861a29e98bcb0c87dcf6c-kubeconfig\") pod \"kube-scheduler-ci-4230.2.0-a-431835d741\" (UID: \"ee3b5db6e49861a29e98bcb0c87dcf6c\") " pod="kube-system/kube-scheduler-ci-4230.2.0-a-431835d741" Jun 20 18:40:26.985311 kubelet[2909]: I0620 18:40:26.985180 2909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f48880ad39258f2447342bac498ffbb8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.0-a-431835d741\" (UID: \"f48880ad39258f2447342bac498ffbb8\") " pod="kube-system/kube-apiserver-ci-4230.2.0-a-431835d741" Jun 20 18:40:26.993467 kubelet[2909]: E0620 18:40:26.993416 2909 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.0-a-431835d741?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="400ms" Jun 20 18:40:27.159691 kubelet[2909]: I0620 18:40:27.159330 2909 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.0-a-431835d741" Jun 20 18:40:27.159918 kubelet[2909]: E0620 18:40:27.159878 2909 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4230.2.0-a-431835d741" Jun 20 18:40:27.255278 containerd[1748]: time="2025-06-20T18:40:27.255148140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.0-a-431835d741,Uid:f48880ad39258f2447342bac498ffbb8,Namespace:kube-system,Attempt:0,}" Jun 20 18:40:27.277294 containerd[1748]: time="2025-06-20T18:40:27.277143621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.0-a-431835d741,Uid:8dac460befa80644731225d3ccbfb8f9,Namespace:kube-system,Attempt:0,}" Jun 20 18:40:27.282464 containerd[1748]: time="2025-06-20T18:40:27.282195803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.0-a-431835d741,Uid:ee3b5db6e49861a29e98bcb0c87dcf6c,Namespace:kube-system,Attempt:0,}" Jun 20 18:40:27.394040 kubelet[2909]: E0620 18:40:27.393992 2909 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.0-a-431835d741?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="800ms" Jun 20 18:40:27.562433 kubelet[2909]: I0620 18:40:27.562028 2909 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.0-a-431835d741" Jun 20 18:40:27.562433 kubelet[2909]: E0620 18:40:27.562358 2909 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4230.2.0-a-431835d741" Jun 20 18:40:27.680742 kubelet[2909]: W0620 18:40:27.680679 2909 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.0-a-431835d741&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jun 20 18:40:27.680885 kubelet[2909]: E0620 18:40:27.680754 2909 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.0-a-431835d741&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:40:27.754091 kubelet[2909]: W0620 18:40:27.754049 2909 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jun 20 18:40:27.754091 kubelet[2909]: E0620 18:40:27.754093 2909 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:40:27.881759 kubelet[2909]: W0620 18:40:27.881571 2909 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jun 20 18:40:27.881759 kubelet[2909]: E0620 18:40:27.881640 2909 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:40:27.968125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1080131600.mount: Deactivated successfully. 
Jun 20 18:40:28.009939 containerd[1748]: time="2025-06-20T18:40:28.009863985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:40:28.030452 containerd[1748]: time="2025-06-20T18:40:28.030394511Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jun 20 18:40:28.038471 containerd[1748]: time="2025-06-20T18:40:28.038422242Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:40:28.045975 containerd[1748]: time="2025-06-20T18:40:28.045175298Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:40:28.056094 containerd[1748]: time="2025-06-20T18:40:28.055822900Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 20 18:40:28.064960 containerd[1748]: time="2025-06-20T18:40:28.062973234Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:40:28.069690 containerd[1748]: time="2025-06-20T18:40:28.069621090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:40:28.070637 containerd[1748]: time="2025-06-20T18:40:28.070600526Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 815.368067ms" Jun 20 18:40:28.074267 containerd[1748]: time="2025-06-20T18:40:28.074197914Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 20 18:40:28.080967 containerd[1748]: time="2025-06-20T18:40:28.080845570Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 803.62455ms" Jun 20 18:40:28.103555 containerd[1748]: time="2025-06-20T18:40:28.103505328Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 821.235726ms" Jun 20 18:40:28.153076 kubelet[2909]: W0620 18:40:28.152900 2909 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jun 20 18:40:28.153076 
kubelet[2909]: E0620 18:40:28.153006 2909 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:40:28.194669 kubelet[2909]: E0620 18:40:28.194614 2909 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.0-a-431835d741?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="1.6s" Jun 20 18:40:28.328762 kubelet[2909]: E0620 18:40:28.328640 2909 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.37:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.37:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.0-a-431835d741.184ad44bb2989b02 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.0-a-431835d741,UID:ci-4230.2.0-a-431835d741,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.0-a-431835d741,},FirstTimestamp:2025-06-20 18:40:26.770840322 +0000 UTC m=+0.915372348,LastTimestamp:2025-06-20 18:40:26.770840322 +0000 UTC m=+0.915372348,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.0-a-431835d741,}" Jun 20 18:40:28.364579 kubelet[2909]: I0620 18:40:28.364539 2909 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.0-a-431835d741" Jun 20 18:40:28.364980 kubelet[2909]: E0620 18:40:28.364912 2909 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4230.2.0-a-431835d741" Jun 20 18:40:28.579344 containerd[1748]: time="2025-06-20T18:40:28.579102617Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:40:28.579344 containerd[1748]: time="2025-06-20T18:40:28.579177417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:40:28.579344 containerd[1748]: time="2025-06-20T18:40:28.579271177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:40:28.580178 containerd[1748]: time="2025-06-20T18:40:28.579512856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:40:28.586364 containerd[1748]: time="2025-06-20T18:40:28.586271871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:40:28.586577 containerd[1748]: time="2025-06-20T18:40:28.586551030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:40:28.586778 containerd[1748]: time="2025-06-20T18:40:28.586710470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:40:28.587191 containerd[1748]: time="2025-06-20T18:40:28.587022749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:40:28.593547 containerd[1748]: time="2025-06-20T18:40:28.593419086Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:40:28.593547 containerd[1748]: time="2025-06-20T18:40:28.593475686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:40:28.593547 containerd[1748]: time="2025-06-20T18:40:28.593486526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:40:28.593913 containerd[1748]: time="2025-06-20T18:40:28.593558165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:40:28.623179 systemd[1]: Started cri-containerd-268633eba27febac2f0da716cf1ee9c866b9a4d8ac01c84a5617756e8624c225.scope - libcontainer container 268633eba27febac2f0da716cf1ee9c866b9a4d8ac01c84a5617756e8624c225. Jun 20 18:40:28.625556 systemd[1]: Started cri-containerd-a2b68fee22f10c3b8e6ec5c1aff4222e30e3919b5c97bc9b895fa5e981fc88e1.scope - libcontainer container a2b68fee22f10c3b8e6ec5c1aff4222e30e3919b5c97bc9b895fa5e981fc88e1. Jun 20 18:40:28.632526 systemd[1]: Started cri-containerd-629008e46c124348d1c79dd005586aad183b6cb066f20a8f3e855d23200cbdeb.scope - libcontainer container 629008e46c124348d1c79dd005586aad183b6cb066f20a8f3e855d23200cbdeb. Jun 20 18:40:28.670712 containerd[1748]: time="2025-06-20T18:40:28.670595488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.0-a-431835d741,Uid:ee3b5db6e49861a29e98bcb0c87dcf6c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2b68fee22f10c3b8e6ec5c1aff4222e30e3919b5c97bc9b895fa5e981fc88e1\"" Jun 20 18:40:28.676852 containerd[1748]: time="2025-06-20T18:40:28.676707186Z" level=info msg="CreateContainer within sandbox \"a2b68fee22f10c3b8e6ec5c1aff4222e30e3919b5c97bc9b895fa5e981fc88e1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 18:40:28.693288 containerd[1748]: time="2025-06-20T18:40:28.693080967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.0-a-431835d741,Uid:f48880ad39258f2447342bac498ffbb8,Namespace:kube-system,Attempt:0,} returns sandbox id \"629008e46c124348d1c79dd005586aad183b6cb066f20a8f3e855d23200cbdeb\"" Jun 20 18:40:28.698281 containerd[1748]: time="2025-06-20T18:40:28.698047309Z" level=info msg="CreateContainer within sandbox \"629008e46c124348d1c79dd005586aad183b6cb066f20a8f3e855d23200cbdeb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 18:40:28.699920 containerd[1748]: time="2025-06-20T18:40:28.699883143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.0-a-431835d741,Uid:8dac460befa80644731225d3ccbfb8f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"268633eba27febac2f0da716cf1ee9c866b9a4d8ac01c84a5617756e8624c225\"" Jun 20 18:40:28.703685 containerd[1748]: time="2025-06-20T18:40:28.703484970Z" level=info msg="CreateContainer within sandbox \"268633eba27febac2f0da716cf1ee9c866b9a4d8ac01c84a5617756e8624c225\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 18:40:28.767595 containerd[1748]: time="2025-06-20T18:40:28.767543899Z" level=info msg="CreateContainer within sandbox \"a2b68fee22f10c3b8e6ec5c1aff4222e30e3919b5c97bc9b895fa5e981fc88e1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"714d584620ca349990c1e41a6d0af1ef392cdb29ea306e74915cad72879d15f5\"" Jun 20 18:40:28.768981 containerd[1748]: time="2025-06-20T18:40:28.768433896Z" level=info msg="StartContainer for \"714d584620ca349990c1e41a6d0af1ef392cdb29ea306e74915cad72879d15f5\"" Jun 20 18:40:28.794110 systemd[1]: Started cri-containerd-714d584620ca349990c1e41a6d0af1ef392cdb29ea306e74915cad72879d15f5.scope - libcontainer container 714d584620ca349990c1e41a6d0af1ef392cdb29ea306e74915cad72879d15f5. Jun 20 18:40:28.796621 containerd[1748]: time="2025-06-20T18:40:28.796326556Z" level=info msg="CreateContainer within sandbox \"629008e46c124348d1c79dd005586aad183b6cb066f20a8f3e855d23200cbdeb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b3d1a696c6f606c519ebc88b578e9e465ed4d7a5cd9d87ab3d69beb1854d969e\"" Jun 20 18:40:28.798009 containerd[1748]: time="2025-06-20T18:40:28.797269232Z" level=info msg="StartContainer for \"b3d1a696c6f606c519ebc88b578e9e465ed4d7a5cd9d87ab3d69beb1854d969e\"" Jun 20 18:40:28.808234 containerd[1748]: time="2025-06-20T18:40:28.808093394Z" level=info msg="CreateContainer within sandbox \"268633eba27febac2f0da716cf1ee9c866b9a4d8ac01c84a5617756e8624c225\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6a7f8f9f2ceb1963f6262e8c923d24dffd5a86e1182298aedf72484793341039\"" Jun 20 18:40:28.809430 containerd[1748]: time="2025-06-20T18:40:28.809401509Z" level=info msg="StartContainer for \"6a7f8f9f2ceb1963f6262e8c923d24dffd5a86e1182298aedf72484793341039\"" Jun 20 18:40:28.845163 systemd[1]: Started cri-containerd-b3d1a696c6f606c519ebc88b578e9e465ed4d7a5cd9d87ab3d69beb1854d969e.scope - libcontainer container b3d1a696c6f606c519ebc88b578e9e465ed4d7a5cd9d87ab3d69beb1854d969e. Jun 20 18:40:28.876463 systemd[1]: Started cri-containerd-6a7f8f9f2ceb1963f6262e8c923d24dffd5a86e1182298aedf72484793341039.scope - libcontainer container 6a7f8f9f2ceb1963f6262e8c923d24dffd5a86e1182298aedf72484793341039. 
Jun 20 18:40:28.877414 kubelet[2909]: E0620 18:40:28.877365 2909 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:40:28.906023 containerd[1748]: time="2025-06-20T18:40:28.905621683Z" level=info msg="StartContainer for \"714d584620ca349990c1e41a6d0af1ef392cdb29ea306e74915cad72879d15f5\" returns successfully" Jun 20 18:40:29.613469 containerd[1748]: time="2025-06-20T18:40:29.613330417Z" level=info msg="StartContainer for \"6a7f8f9f2ceb1963f6262e8c923d24dffd5a86e1182298aedf72484793341039\" returns successfully" Jun 20 18:40:29.613469 containerd[1748]: time="2025-06-20T18:40:29.613330457Z" level=info msg="StartContainer for \"b3d1a696c6f606c519ebc88b578e9e465ed4d7a5cd9d87ab3d69beb1854d969e\" returns successfully" Jun 20 18:40:29.855211 kubelet[2909]: E0620 18:40:29.855094 2909 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.0-a-431835d741\" not found" node="ci-4230.2.0-a-431835d741" Jun 20 18:40:29.864011 kubelet[2909]: E0620 18:40:29.861872 2909 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.0-a-431835d741\" not found" node="ci-4230.2.0-a-431835d741" Jun 20 18:40:29.864918 kubelet[2909]: E0620 18:40:29.864885 2909 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.0-a-431835d741\" not found" node="ci-4230.2.0-a-431835d741" Jun 20 18:40:29.966883 kubelet[2909]: I0620 18:40:29.966829 2909 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.0-a-431835d741" Jun 20 18:40:30.865959 kubelet[2909]: E0620 18:40:30.865908 2909 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.0-a-431835d741\" not found" node="ci-4230.2.0-a-431835d741" Jun 20 18:40:30.866379 kubelet[2909]: E0620 18:40:30.866324 2909 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.0-a-431835d741\" not found" node="ci-4230.2.0-a-431835d741" Jun 20 18:40:30.866699 kubelet[2909]: E0620 18:40:30.866656 2909 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.0-a-431835d741\" not found" node="ci-4230.2.0-a-431835d741" Jun 20 18:40:31.388658 kubelet[2909]: E0620 18:40:31.388560 2909 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.2.0-a-431835d741\" not found" node="ci-4230.2.0-a-431835d741" Jun 20 18:40:31.407422 kubelet[2909]: I0620 18:40:31.407377 2909 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.0-a-431835d741" Jun 20 18:40:31.407422 kubelet[2909]: E0620 18:40:31.407424 2909 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4230.2.0-a-431835d741\": node \"ci-4230.2.0-a-431835d741\" not found" Jun 20 18:40:31.485651 kubelet[2909]: I0620 18:40:31.485596 2909 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.0-a-431835d741" Jun 20 18:40:31.527880 kubelet[2909]: E0620 18:40:31.527821 
2909 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.2.0-a-431835d741\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230.2.0-a-431835d741" Jun 20 18:40:31.527880 kubelet[2909]: I0620 18:40:31.527859 2909 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.0-a-431835d741" Jun 20 18:40:31.530077 kubelet[2909]: E0620 18:40:31.529818 2909 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.0-a-431835d741\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230.2.0-a-431835d741" Jun 20 18:40:31.530077 kubelet[2909]: I0620 18:40:31.529853 2909 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.0-a-431835d741" Jun 20 18:40:31.535251 kubelet[2909]: E0620 18:40:31.535201 2909 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.0-a-431835d741\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230.2.0-a-431835d741" Jun 20 18:40:31.770230 kubelet[2909]: I0620 18:40:31.770120 2909 apiserver.go:52] "Watching apiserver" Jun 20 18:40:31.784703 kubelet[2909]: I0620 18:40:31.784632 2909 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 18:40:31.866156 kubelet[2909]: I0620 18:40:31.865390 2909 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.0-a-431835d741" Jun 20 18:40:31.866156 kubelet[2909]: I0620 18:40:31.865785 2909 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.0-a-431835d741" Jun 20 18:40:31.870556 kubelet[2909]: E0620 18:40:31.870215 2909 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.0-a-431835d741\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230.2.0-a-431835d741" Jun 20 18:40:31.870556 kubelet[2909]: E0620 18:40:31.870451 2909 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.0-a-431835d741\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230.2.0-a-431835d741" Jun 20 18:40:33.855773 systemd[1]: Reload requested from client PID 3183 ('systemctl') (unit session-9.scope)... Jun 20 18:40:33.856193 systemd[1]: Reloading... Jun 20 18:40:33.985055 zram_generator::config[3233]: No configuration found. Jun 20 18:40:34.113446 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:40:34.254472 systemd[1]: Reloading finished in 397 ms. Jun 20 18:40:34.277961 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:40:34.296135 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 18:40:34.296405 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:40:34.296479 systemd[1]: kubelet.service: Consumed 1.287s CPU time, 129.7M memory peak. Jun 20 18:40:34.303358 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:40:34.430143 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 18:40:34.439678 (kubelet)[3294]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 18:40:34.487982 kubelet[3294]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:40:34.487982 kubelet[3294]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 18:40:34.487982 kubelet[3294]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:40:34.487982 kubelet[3294]: I0620 18:40:34.487478 3294 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 18:40:34.494985 kubelet[3294]: I0620 18:40:34.494851 3294 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 20 18:40:34.494985 kubelet[3294]: I0620 18:40:34.494887 3294 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 18:40:34.495276 kubelet[3294]: I0620 18:40:34.495249 3294 server.go:954] "Client rotation is on, will bootstrap in background" Jun 20 18:40:34.500818 kubelet[3294]: I0620 18:40:34.500090 3294 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 20 18:40:34.503525 kubelet[3294]: I0620 18:40:34.503476 3294 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 18:40:34.509155 kubelet[3294]: E0620 18:40:34.509093 3294 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jun 20 18:40:34.509155 kubelet[3294]: I0620 18:40:34.509148 3294 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jun 20 18:40:34.512749 kubelet[3294]: I0620 18:40:34.512695 3294 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 18:40:34.512992 kubelet[3294]: I0620 18:40:34.512956 3294 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 18:40:34.513232 kubelet[3294]: I0620 18:40:34.512991 3294 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.0-a-431835d741","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 18:40:34.513336 kubelet[3294]: I0620 18:40:34.513239 3294 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 18:40:34.513336 kubelet[3294]: I0620 18:40:34.513248 3294 container_manager_linux.go:304] "Creating device plugin manager" Jun 20 18:40:34.513336 kubelet[3294]: I0620 18:40:34.513297 3294 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:40:34.513452 kubelet[3294]: I0620 18:40:34.513433 3294 kubelet.go:446] "Attempting to sync node with API server" Jun 20 18:40:34.513452 kubelet[3294]: I0620 18:40:34.513452 3294 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 18:40:34.513507 kubelet[3294]: I0620 18:40:34.513472 3294 kubelet.go:352] "Adding apiserver pod source" Jun 20 18:40:34.514143 kubelet[3294]: I0620 18:40:34.514114 3294 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 18:40:34.518023 kubelet[3294]: I0620 18:40:34.516670 3294 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jun 20 18:40:34.518023 kubelet[3294]: I0620 18:40:34.517225 3294 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 18:40:34.518023 kubelet[3294]: I0620 18:40:34.517699 3294 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 18:40:34.518023 kubelet[3294]: I0620 18:40:34.517748 3294 server.go:1287] "Started kubelet" Jun 20 18:40:34.521230 kubelet[3294]: I0620 18:40:34.521202 3294 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 18:40:34.524393 kubelet[3294]: I0620 18:40:34.524330 3294 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Jun 20 18:40:34.526132 kubelet[3294]: I0620 18:40:34.526067 3294 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 18:40:34.526486 kubelet[3294]: I0620 18:40:34.526467 3294 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 18:40:34.526832 kubelet[3294]: I0620 18:40:34.526245 3294 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 18:40:34.528513 kubelet[3294]: I0620 18:40:34.528472 3294 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 18:40:34.528778 kubelet[3294]: E0620 18:40:34.528743 3294 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.0-a-431835d741\" not found" Jun 20 18:40:34.530388 kubelet[3294]: I0620 18:40:34.530356 3294 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 18:40:34.530533 kubelet[3294]: I0620 18:40:34.530509 3294 reconciler.go:26] "Reconciler: start to sync state" Jun 20 18:40:34.537795 kubelet[3294]: I0620 18:40:34.537739 3294 factory.go:221] Registration of the systemd container factory successfully Jun 20 18:40:34.537955 kubelet[3294]: I0620 18:40:34.537892 3294 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 18:40:34.541417 kubelet[3294]: I0620 18:40:34.541128 3294 server.go:479] "Adding debug handlers to kubelet server" Jun 20 18:40:34.549942 kubelet[3294]: I0620 18:40:34.548671 3294 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 18:40:34.549942 kubelet[3294]: I0620 18:40:34.549723 3294 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 20 18:40:34.549942 kubelet[3294]: I0620 18:40:34.549752 3294 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 20 18:40:34.549942 kubelet[3294]: I0620 18:40:34.549787 3294 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 20 18:40:34.549942 kubelet[3294]: I0620 18:40:34.549793 3294 kubelet.go:2382] "Starting kubelet main sync loop" Jun 20 18:40:34.549942 kubelet[3294]: E0620 18:40:34.549839 3294 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 18:40:34.556951 kubelet[3294]: I0620 18:40:34.556066 3294 factory.go:221] Registration of the containerd container factory successfully Jun 20 18:40:34.561952 kubelet[3294]: E0620 18:40:34.561377 3294 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 18:40:34.628602 kubelet[3294]: I0620 18:40:34.628574 3294 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 18:40:35.101504 kubelet[3294]: I0620 18:40:34.628691 3294 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 18:40:35.101504 kubelet[3294]: I0620 18:40:34.628715 3294 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:40:35.101504 kubelet[3294]: E0620 18:40:34.650258 3294 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 20 18:40:35.101504 kubelet[3294]: E0620 18:40:34.850626 3294 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 20 18:40:35.105036 kubelet[3294]: I0620 18:40:35.102836 3294 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 20 18:40:35.105036 kubelet[3294]: I0620 18:40:35.102867 3294 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 20 18:40:35.105036 kubelet[3294]: I0620 18:40:35.102888 3294 policy_none.go:49] "None policy: Start" Jun 20 18:40:35.105036 kubelet[3294]: I0620 18:40:35.102899 3294 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 18:40:35.105036 kubelet[3294]: I0620 18:40:35.102910 3294 state_mem.go:35] "Initializing new in-memory state store" Jun 20 18:40:35.105036 kubelet[3294]: I0620 18:40:35.103118 3294 state_mem.go:75] "Updated machine memory state" Jun 20 18:40:35.112130 kubelet[3294]: I0620 18:40:35.111904 3294 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 18:40:35.112130 kubelet[3294]: I0620 18:40:35.112115 3294 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 18:40:35.112625 kubelet[3294]: I0620 18:40:35.112128 3294 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 18:40:35.112625 kubelet[3294]: I0620 18:40:35.112375 3294 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 18:40:35.114729 kubelet[3294]: E0620 18:40:35.114663 3294 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jun 20 18:40:35.220790 kubelet[3294]: I0620 18:40:35.220757 3294 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.0-a-431835d741" Jun 20 18:40:35.240865 kubelet[3294]: I0620 18:40:35.240822 3294 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230.2.0-a-431835d741" Jun 20 18:40:35.241062 kubelet[3294]: I0620 18:40:35.240916 3294 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.0-a-431835d741" Jun 20 18:40:35.251701 kubelet[3294]: I0620 18:40:35.251656 3294 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.0-a-431835d741" Jun 20 18:40:35.252186 kubelet[3294]: I0620 18:40:35.251988 3294 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.0-a-431835d741" Jun 20 18:40:35.254171 kubelet[3294]: I0620 18:40:35.253976 3294 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.0-a-431835d741" Jun 20 18:40:35.266809 kubelet[3294]: W0620 18:40:35.266766 3294 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 18:40:35.274626 kubelet[3294]: W0620 18:40:35.274309 3294 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 18:40:35.274626 kubelet[3294]: W0620 18:40:35.274598 3294 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 18:40:35.321548 sudo[3327]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 20 18:40:35.321829 sudo[3327]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jun 20 18:40:35.344742 kubelet[3294]: I0620 18:40:35.344666 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f48880ad39258f2447342bac498ffbb8-k8s-certs\") pod \"kube-apiserver-ci-4230.2.0-a-431835d741\" (UID: \"f48880ad39258f2447342bac498ffbb8\") " pod="kube-system/kube-apiserver-ci-4230.2.0-a-431835d741" Jun 20 18:40:35.344742 kubelet[3294]: I0620 18:40:35.344711 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f48880ad39258f2447342bac498ffbb8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.0-a-431835d741\" (UID: \"f48880ad39258f2447342bac498ffbb8\") " pod="kube-system/kube-apiserver-ci-4230.2.0-a-431835d741" Jun 20 18:40:35.344742 kubelet[3294]: I0620 18:40:35.344742 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8dac460befa80644731225d3ccbfb8f9-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.0-a-431835d741\" (UID: \"8dac460befa80644731225d3ccbfb8f9\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-431835d741" Jun 20 18:40:35.344742 kubelet[3294]: I0620 18:40:35.344759 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8dac460befa80644731225d3ccbfb8f9-kubeconfig\") pod 
\"kube-controller-manager-ci-4230.2.0-a-431835d741\" (UID: \"8dac460befa80644731225d3ccbfb8f9\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-431835d741" Jun 20 18:40:35.344997 kubelet[3294]: I0620 18:40:35.344788 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dac460befa80644731225d3ccbfb8f9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.0-a-431835d741\" (UID: \"8dac460befa80644731225d3ccbfb8f9\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-431835d741" Jun 20 18:40:35.344997 kubelet[3294]: I0620 18:40:35.344805 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f48880ad39258f2447342bac498ffbb8-ca-certs\") pod \"kube-apiserver-ci-4230.2.0-a-431835d741\" (UID: \"f48880ad39258f2447342bac498ffbb8\") " pod="kube-system/kube-apiserver-ci-4230.2.0-a-431835d741" Jun 20 18:40:35.344997 kubelet[3294]: I0620 18:40:35.344822 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dac460befa80644731225d3ccbfb8f9-ca-certs\") pod \"kube-controller-manager-ci-4230.2.0-a-431835d741\" (UID: \"8dac460befa80644731225d3ccbfb8f9\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-431835d741" Jun 20 18:40:35.344997 kubelet[3294]: I0620 18:40:35.344837 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dac460befa80644731225d3ccbfb8f9-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.0-a-431835d741\" (UID: \"8dac460befa80644731225d3ccbfb8f9\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-431835d741" Jun 20 18:40:35.344997 kubelet[3294]: I0620 18:40:35.344854 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee3b5db6e49861a29e98bcb0c87dcf6c-kubeconfig\") pod \"kube-scheduler-ci-4230.2.0-a-431835d741\" (UID: \"ee3b5db6e49861a29e98bcb0c87dcf6c\") " pod="kube-system/kube-scheduler-ci-4230.2.0-a-431835d741" Jun 20 18:40:35.528400 kubelet[3294]: I0620 18:40:35.528114 3294 apiserver.go:52] "Watching apiserver" Jun 20 18:40:35.530744 kubelet[3294]: I0620 18:40:35.530700 3294 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 18:40:35.635752 kubelet[3294]: I0620 18:40:35.635658 3294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.2.0-a-431835d741" podStartSLOduration=0.635636228 podStartE2EDuration="635.636228ms" podCreationTimestamp="2025-06-20 18:40:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:40:35.609708272 +0000 UTC m=+1.165455427" watchObservedRunningTime="2025-06-20 18:40:35.635636228 +0000 UTC m=+1.191383343" Jun 20 18:40:35.650036 kubelet[3294]: I0620 18:40:35.649961 3294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.2.0-a-431835d741" podStartSLOduration=0.64991804 podStartE2EDuration="649.91804ms" podCreationTimestamp="2025-06-20 18:40:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-06-20 18:40:35.636876022 +0000 UTC m=+1.192623217" watchObservedRunningTime="2025-06-20 18:40:35.64991804 +0000 UTC m=+1.205665195" Jun 20 18:40:35.674552 kubelet[3294]: I0620 18:40:35.674482 3294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.2.0-a-431835d741" podStartSLOduration=0.674461602 podStartE2EDuration="674.461602ms" podCreationTimestamp="2025-06-20 18:40:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:40:35.651370753 +0000 UTC m=+1.207117908" watchObservedRunningTime="2025-06-20 18:40:35.674461602 +0000 UTC m=+1.230208757" Jun 20 18:40:35.816701 sudo[3327]: pam_unix(sudo:session): session closed for user root Jun 20 18:40:37.846030 sudo[2285]: pam_unix(sudo:session): session closed for user root Jun 20 18:40:37.925870 sshd[2284]: Connection closed by 10.200.16.10 port 34020 Jun 20 18:40:37.926478 sshd-session[2282]: pam_unix(sshd:session): session closed for user core Jun 20 18:40:37.930433 systemd[1]: sshd@6-10.200.20.37:22-10.200.16.10:34020.service: Deactivated successfully. Jun 20 18:40:37.932630 systemd[1]: session-9.scope: Deactivated successfully. Jun 20 18:40:37.933400 systemd[1]: session-9.scope: Consumed 7.432s CPU time, 262.8M memory peak. Jun 20 18:40:37.934840 systemd-logind[1715]: Session 9 logged out. Waiting for processes to exit. Jun 20 18:40:37.936131 systemd-logind[1715]: Removed session 9. Jun 20 18:40:38.270422 kubelet[3294]: I0620 18:40:38.270369 3294 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 20 18:40:38.271243 kubelet[3294]: I0620 18:40:38.270977 3294 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 20 18:40:38.271276 containerd[1748]: time="2025-06-20T18:40:38.270743657Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 20 18:40:39.081798 systemd[1]: Created slice kubepods-besteffort-pod7cb8f67b_3428_4d00_b6df_47d00a4dd6ae.slice - libcontainer container kubepods-besteffort-pod7cb8f67b_3428_4d00_b6df_47d00a4dd6ae.slice. Jun 20 18:40:39.098640 systemd[1]: Created slice kubepods-burstable-podb25d997a_e02f_499c_b778_ace863f8a8f9.slice - libcontainer container kubepods-burstable-podb25d997a_e02f_499c_b778_ace863f8a8f9.slice. 
Jun 20 18:40:39.169016 kubelet[3294]: I0620 18:40:39.168971 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7cb8f67b-3428-4d00-b6df-47d00a4dd6ae-kube-proxy\") pod \"kube-proxy-lbfrd\" (UID: \"7cb8f67b-3428-4d00-b6df-47d00a4dd6ae\") " pod="kube-system/kube-proxy-lbfrd" Jun 20 18:40:39.169016 kubelet[3294]: I0620 18:40:39.169014 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7cb8f67b-3428-4d00-b6df-47d00a4dd6ae-lib-modules\") pod \"kube-proxy-lbfrd\" (UID: \"7cb8f67b-3428-4d00-b6df-47d00a4dd6ae\") " pod="kube-system/kube-proxy-lbfrd" Jun 20 18:40:39.169016 kubelet[3294]: I0620 18:40:39.169033 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-host-proc-sys-kernel\") pod \"cilium-wvkxf\" (UID: \"b25d997a-e02f-499c-b778-ace863f8a8f9\") " pod="kube-system/cilium-wvkxf" Jun 20 18:40:39.169016 kubelet[3294]: I0620 18:40:39.169058 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b25d997a-e02f-499c-b778-ace863f8a8f9-hubble-tls\") pod \"cilium-wvkxf\" (UID: \"b25d997a-e02f-499c-b778-ace863f8a8f9\") " pod="kube-system/cilium-wvkxf" Jun 20 18:40:39.169016 kubelet[3294]: I0620 18:40:39.169090 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b25d997a-e02f-499c-b778-ace863f8a8f9-cilium-config-path\") pod \"cilium-wvkxf\" (UID: \"b25d997a-e02f-499c-b778-ace863f8a8f9\") " pod="kube-system/cilium-wvkxf" Jun 20 18:40:39.169345 kubelet[3294]: I0620 18:40:39.169107 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-xtables-lock\") pod \"cilium-wvkxf\" (UID: \"b25d997a-e02f-499c-b778-ace863f8a8f9\") " pod="kube-system/cilium-wvkxf" Jun 20 18:40:39.169345 kubelet[3294]: I0620 18:40:39.169145 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpslv\" (UniqueName: \"kubernetes.io/projected/b25d997a-e02f-499c-b778-ace863f8a8f9-kube-api-access-hpslv\") pod \"cilium-wvkxf\" (UID: \"b25d997a-e02f-499c-b778-ace863f8a8f9\") " pod="kube-system/cilium-wvkxf" Jun 20 18:40:39.169345 kubelet[3294]: I0620 18:40:39.169164 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-hostproc\") pod \"cilium-wvkxf\" (UID: \"b25d997a-e02f-499c-b778-ace863f8a8f9\") " pod="kube-system/cilium-wvkxf" Jun 20 18:40:39.169345 kubelet[3294]: I0620 18:40:39.169178 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-etc-cni-netd\") pod \"cilium-wvkxf\" (UID: \"b25d997a-e02f-499c-b778-ace863f8a8f9\") " pod="kube-system/cilium-wvkxf" Jun 20 18:40:39.169345 kubelet[3294]: I0620 18:40:39.169192 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-cilium-run\") pod \"cilium-wvkxf\" (UID: \"b25d997a-e02f-499c-b778-ace863f8a8f9\") " pod="kube-system/cilium-wvkxf" Jun 20 18:40:39.169345 kubelet[3294]: I0620 18:40:39.169208 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-bpf-maps\") pod \"cilium-wvkxf\" (UID: \"b25d997a-e02f-499c-b778-ace863f8a8f9\") " pod="kube-system/cilium-wvkxf" Jun 20 18:40:39.169471 kubelet[3294]: I0620 18:40:39.169223 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-host-proc-sys-net\") pod \"cilium-wvkxf\" (UID: \"b25d997a-e02f-499c-b778-ace863f8a8f9\") " pod="kube-system/cilium-wvkxf" Jun 20 18:40:39.169471 kubelet[3294]: I0620 18:40:39.169239 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl75h\" (UniqueName: \"kubernetes.io/projected/7cb8f67b-3428-4d00-b6df-47d00a4dd6ae-kube-api-access-rl75h\") pod \"kube-proxy-lbfrd\" (UID: \"7cb8f67b-3428-4d00-b6df-47d00a4dd6ae\") " pod="kube-system/kube-proxy-lbfrd" Jun 20 18:40:39.169471 kubelet[3294]: I0620 18:40:39.169253 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-lib-modules\") pod \"cilium-wvkxf\" (UID: \"b25d997a-e02f-499c-b778-ace863f8a8f9\") " pod="kube-system/cilium-wvkxf" Jun 20 18:40:39.169471 kubelet[3294]: I0620 18:40:39.169270 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b25d997a-e02f-499c-b778-ace863f8a8f9-clustermesh-secrets\") pod \"cilium-wvkxf\" (UID: \"b25d997a-e02f-499c-b778-ace863f8a8f9\") " pod="kube-system/cilium-wvkxf" Jun 20 18:40:39.169471 kubelet[3294]: I0620 18:40:39.169287 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7cb8f67b-3428-4d00-b6df-47d00a4dd6ae-xtables-lock\") pod \"kube-proxy-lbfrd\" (UID: \"7cb8f67b-3428-4d00-b6df-47d00a4dd6ae\") " pod="kube-system/kube-proxy-lbfrd" Jun 20 18:40:39.169471 kubelet[3294]: I0620 18:40:39.169305 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-cilium-cgroup\") pod \"cilium-wvkxf\" (UID: \"b25d997a-e02f-499c-b778-ace863f8a8f9\") " pod="kube-system/cilium-wvkxf" Jun 20 18:40:39.169592 kubelet[3294]: I0620 18:40:39.169321 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-cni-path\") pod \"cilium-wvkxf\" (UID: \"b25d997a-e02f-499c-b778-ace863f8a8f9\") " pod="kube-system/cilium-wvkxf" Jun 20 18:40:39.358900 systemd[1]: Created slice kubepods-besteffort-podb74af2e6_3d50_4ed3_9c1c_011bb28d74ec.slice - libcontainer container kubepods-besteffort-podb74af2e6_3d50_4ed3_9c1c_011bb28d74ec.slice. 
Jun 20 18:40:39.371538 kubelet[3294]: I0620 18:40:39.371492 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b74af2e6-3d50-4ed3-9c1c-011bb28d74ec-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-jvtq6\" (UID: \"b74af2e6-3d50-4ed3-9c1c-011bb28d74ec\") " pod="kube-system/cilium-operator-6c4d7847fc-jvtq6" Jun 20 18:40:39.372057 kubelet[3294]: I0620 18:40:39.372015 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkslr\" (UniqueName: \"kubernetes.io/projected/b74af2e6-3d50-4ed3-9c1c-011bb28d74ec-kube-api-access-mkslr\") pod \"cilium-operator-6c4d7847fc-jvtq6\" (UID: \"b74af2e6-3d50-4ed3-9c1c-011bb28d74ec\") " pod="kube-system/cilium-operator-6c4d7847fc-jvtq6" Jun 20 18:40:39.392918 containerd[1748]: time="2025-06-20T18:40:39.392876437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lbfrd,Uid:7cb8f67b-3428-4d00-b6df-47d00a4dd6ae,Namespace:kube-system,Attempt:0,}" Jun 20 18:40:39.403743 containerd[1748]: time="2025-06-20T18:40:39.403690274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wvkxf,Uid:b25d997a-e02f-499c-b778-ace863f8a8f9,Namespace:kube-system,Attempt:0,}" Jun 20 18:40:39.471630 containerd[1748]: time="2025-06-20T18:40:39.471521087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:40:39.473517 containerd[1748]: time="2025-06-20T18:40:39.473083521Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:40:39.473517 containerd[1748]: time="2025-06-20T18:40:39.473127681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:40:39.473517 containerd[1748]: time="2025-06-20T18:40:39.473258840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:40:39.477543 containerd[1748]: time="2025-06-20T18:40:39.477336704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:40:39.478036 containerd[1748]: time="2025-06-20T18:40:39.477724862Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:40:39.478251 containerd[1748]: time="2025-06-20T18:40:39.478193981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:40:39.478803 containerd[1748]: time="2025-06-20T18:40:39.478708819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:40:39.501325 systemd[1]: Started cri-containerd-64c269bae9d81da1b86d75ac5c645a75dd44054df83a1b5e0f6bd252b7761017.scope - libcontainer container 64c269bae9d81da1b86d75ac5c645a75dd44054df83a1b5e0f6bd252b7761017. Jun 20 18:40:39.505359 systemd[1]: Started cri-containerd-36d752038d34f8e2ac682289d85c1807d9cfc313397f16d2a78603074dc5bdfb.scope - libcontainer container 36d752038d34f8e2ac682289d85c1807d9cfc313397f16d2a78603074dc5bdfb. 
Jun 20 18:40:39.534030 containerd[1748]: time="2025-06-20T18:40:39.533885801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lbfrd,Uid:7cb8f67b-3428-4d00-b6df-47d00a4dd6ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"64c269bae9d81da1b86d75ac5c645a75dd44054df83a1b5e0f6bd252b7761017\"" Jun 20 18:40:39.540016 containerd[1748]: time="2025-06-20T18:40:39.539942497Z" level=info msg="CreateContainer within sandbox \"64c269bae9d81da1b86d75ac5c645a75dd44054df83a1b5e0f6bd252b7761017\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 20 18:40:39.548199 containerd[1748]: time="2025-06-20T18:40:39.548143105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wvkxf,Uid:b25d997a-e02f-499c-b778-ace863f8a8f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"36d752038d34f8e2ac682289d85c1807d9cfc313397f16d2a78603074dc5bdfb\"" Jun 20 18:40:39.551093 containerd[1748]: time="2025-06-20T18:40:39.550970814Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 20 18:40:39.592918 containerd[1748]: time="2025-06-20T18:40:39.592814129Z" level=info msg="CreateContainer within sandbox \"64c269bae9d81da1b86d75ac5c645a75dd44054df83a1b5e0f6bd252b7761017\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"794a504c7a61a225e520c98c12c7a46002684823cd661129f22243d15738a278\"" Jun 20 18:40:39.594040 containerd[1748]: time="2025-06-20T18:40:39.593985805Z" level=info msg="StartContainer for \"794a504c7a61a225e520c98c12c7a46002684823cd661129f22243d15738a278\"" Jun 20 18:40:39.624221 systemd[1]: Started cri-containerd-794a504c7a61a225e520c98c12c7a46002684823cd661129f22243d15738a278.scope - libcontainer container 794a504c7a61a225e520c98c12c7a46002684823cd661129f22243d15738a278. Jun 20 18:40:39.658885 containerd[1748]: time="2025-06-20T18:40:39.658827389Z" level=info msg="StartContainer for \"794a504c7a61a225e520c98c12c7a46002684823cd661129f22243d15738a278\" returns successfully" Jun 20 18:40:39.664532 containerd[1748]: time="2025-06-20T18:40:39.664128008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jvtq6,Uid:b74af2e6-3d50-4ed3-9c1c-011bb28d74ec,Namespace:kube-system,Attempt:0,}" Jun 20 18:40:39.725707 containerd[1748]: time="2025-06-20T18:40:39.723388055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:40:39.725707 containerd[1748]: time="2025-06-20T18:40:39.724498250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:40:39.725707 containerd[1748]: time="2025-06-20T18:40:39.724524330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:40:39.725707 containerd[1748]: time="2025-06-20T18:40:39.725157448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:40:39.744302 systemd[1]: Started cri-containerd-6777b19bbec00ab85d70206bd6960eea386cd5684f7fb403d447056d39534afb.scope - libcontainer container 6777b19bbec00ab85d70206bd6960eea386cd5684f7fb403d447056d39534afb. 
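The RunPodSandbox, CreateContainer and StartContainer messages above are containerd's side of the standard CRI pod lifecycle driven by the kubelet. A compressed, illustrative sketch of that sequence against the same gRPC service follows; the metadata, image reference and socket path are placeholders rather than a reproduction of the kubelet's real requests:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.TODO()

	// 1. RunPodSandbox: create the sandbox whose id the log later reports.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-proxy-example", // placeholder, not the real pod name
			Namespace: "kube-system",
			Uid:       "00000000-0000-0000-0000-000000000000",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer within that sandbox.
	cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.32.4"}, // placeholder image
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer, matching the "StartContainer ... returns successfully" entries.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
```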
Jun 20 18:40:39.788219 containerd[1748]: time="2025-06-20T18:40:39.787830081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jvtq6,Uid:b74af2e6-3d50-4ed3-9c1c-011bb28d74ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"6777b19bbec00ab85d70206bd6960eea386cd5684f7fb403d447056d39534afb\"" Jun 20 18:40:44.461004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3488469108.mount: Deactivated successfully. Jun 20 18:40:45.418095 kubelet[3294]: I0620 18:40:45.417603 3294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lbfrd" podStartSLOduration=6.417583281 podStartE2EDuration="6.417583281s" podCreationTimestamp="2025-06-20 18:40:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:40:40.671113752 +0000 UTC m=+6.226860947" watchObservedRunningTime="2025-06-20 18:40:45.417583281 +0000 UTC m=+10.973330436" Jun 20 18:40:47.111400 containerd[1748]: time="2025-06-20T18:40:47.111349861Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:40:47.116110 containerd[1748]: time="2025-06-20T18:40:47.115853722Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jun 20 18:40:47.121762 containerd[1748]: time="2025-06-20T18:40:47.121677617Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:40:47.123704 containerd[1748]: time="2025-06-20T18:40:47.123599049Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.572563395s" Jun 20 18:40:47.124005 containerd[1748]: time="2025-06-20T18:40:47.123808768Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jun 20 18:40:47.125431 containerd[1748]: time="2025-06-20T18:40:47.125198682Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jun 20 18:40:47.128365 containerd[1748]: time="2025-06-20T18:40:47.128180870Z" level=info msg="CreateContainer within sandbox \"36d752038d34f8e2ac682289d85c1807d9cfc313397f16d2a78603074dc5bdfb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 18:40:47.172729 containerd[1748]: time="2025-06-20T18:40:47.172670320Z" level=info msg="CreateContainer within sandbox \"36d752038d34f8e2ac682289d85c1807d9cfc313397f16d2a78603074dc5bdfb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b02c85b19c1ed3bdcba95fd8d2b44f5fd87c26aadc8b37e5661b7771beb078d9\"" Jun 20 18:40:47.174275 containerd[1748]: time="2025-06-20T18:40:47.174166154Z" level=info msg="StartContainer for 
\"b02c85b19c1ed3bdcba95fd8d2b44f5fd87c26aadc8b37e5661b7771beb078d9\"" Jun 20 18:40:47.207157 systemd[1]: Started cri-containerd-b02c85b19c1ed3bdcba95fd8d2b44f5fd87c26aadc8b37e5661b7771beb078d9.scope - libcontainer container b02c85b19c1ed3bdcba95fd8d2b44f5fd87c26aadc8b37e5661b7771beb078d9. Jun 20 18:40:47.237315 containerd[1748]: time="2025-06-20T18:40:47.237257965Z" level=info msg="StartContainer for \"b02c85b19c1ed3bdcba95fd8d2b44f5fd87c26aadc8b37e5661b7771beb078d9\" returns successfully" Jun 20 18:40:47.244505 systemd[1]: cri-containerd-b02c85b19c1ed3bdcba95fd8d2b44f5fd87c26aadc8b37e5661b7771beb078d9.scope: Deactivated successfully. Jun 20 18:40:48.158721 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b02c85b19c1ed3bdcba95fd8d2b44f5fd87c26aadc8b37e5661b7771beb078d9-rootfs.mount: Deactivated successfully. Jun 20 18:40:48.272470 containerd[1748]: time="2025-06-20T18:40:48.272368609Z" level=info msg="shim disconnected" id=b02c85b19c1ed3bdcba95fd8d2b44f5fd87c26aadc8b37e5661b7771beb078d9 namespace=k8s.io Jun 20 18:40:48.272470 containerd[1748]: time="2025-06-20T18:40:48.272432088Z" level=warning msg="cleaning up after shim disconnected" id=b02c85b19c1ed3bdcba95fd8d2b44f5fd87c26aadc8b37e5661b7771beb078d9 namespace=k8s.io Jun 20 18:40:48.272470 containerd[1748]: time="2025-06-20T18:40:48.272440928Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:40:48.657954 containerd[1748]: time="2025-06-20T18:40:48.657856852Z" level=info msg="CreateContainer within sandbox \"36d752038d34f8e2ac682289d85c1807d9cfc313397f16d2a78603074dc5bdfb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 18:40:48.710235 containerd[1748]: time="2025-06-20T18:40:48.710138941Z" level=info msg="CreateContainer within sandbox \"36d752038d34f8e2ac682289d85c1807d9cfc313397f16d2a78603074dc5bdfb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d9c5f977e95d7125994b01d4a881b28255a894e6d1d5597fd0c359994fcd74ff\"" Jun 20 18:40:48.710956 containerd[1748]: time="2025-06-20T18:40:48.710771378Z" level=info msg="StartContainer for \"d9c5f977e95d7125994b01d4a881b28255a894e6d1d5597fd0c359994fcd74ff\"" Jun 20 18:40:48.740334 systemd[1]: Started cri-containerd-d9c5f977e95d7125994b01d4a881b28255a894e6d1d5597fd0c359994fcd74ff.scope - libcontainer container d9c5f977e95d7125994b01d4a881b28255a894e6d1d5597fd0c359994fcd74ff. Jun 20 18:40:48.769918 containerd[1748]: time="2025-06-20T18:40:48.769845117Z" level=info msg="StartContainer for \"d9c5f977e95d7125994b01d4a881b28255a894e6d1d5597fd0c359994fcd74ff\" returns successfully" Jun 20 18:40:48.783053 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 18:40:48.783491 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:40:48.784000 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jun 20 18:40:48.791513 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 18:40:48.791747 systemd[1]: cri-containerd-d9c5f977e95d7125994b01d4a881b28255a894e6d1d5597fd0c359994fcd74ff.scope: Deactivated successfully. Jun 20 18:40:48.822086 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jun 20 18:40:48.831862 containerd[1748]: time="2025-06-20T18:40:48.831685523Z" level=info msg="shim disconnected" id=d9c5f977e95d7125994b01d4a881b28255a894e6d1d5597fd0c359994fcd74ff namespace=k8s.io Jun 20 18:40:48.831862 containerd[1748]: time="2025-06-20T18:40:48.831742283Z" level=warning msg="cleaning up after shim disconnected" id=d9c5f977e95d7125994b01d4a881b28255a894e6d1d5597fd0c359994fcd74ff namespace=k8s.io Jun 20 18:40:48.831862 containerd[1748]: time="2025-06-20T18:40:48.831753083Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:40:49.159065 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9c5f977e95d7125994b01d4a881b28255a894e6d1d5597fd0c359994fcd74ff-rootfs.mount: Deactivated successfully. Jun 20 18:40:49.644606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2914470303.mount: Deactivated successfully. Jun 20 18:40:49.662472 containerd[1748]: time="2025-06-20T18:40:49.662313128Z" level=info msg="CreateContainer within sandbox \"36d752038d34f8e2ac682289d85c1807d9cfc313397f16d2a78603074dc5bdfb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 20 18:40:49.745969 containerd[1748]: time="2025-06-20T18:40:49.744106886Z" level=info msg="CreateContainer within sandbox \"36d752038d34f8e2ac682289d85c1807d9cfc313397f16d2a78603074dc5bdfb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8a106d9cff51bf4398154bece9cacd91cfe846669c14cf7e610f4c8fb0a58386\"" Jun 20 18:40:49.747599 containerd[1748]: time="2025-06-20T18:40:49.746681275Z" level=info msg="StartContainer for \"8a106d9cff51bf4398154bece9cacd91cfe846669c14cf7e610f4c8fb0a58386\"" Jun 20 18:40:49.777189 systemd[1]: Started cri-containerd-8a106d9cff51bf4398154bece9cacd91cfe846669c14cf7e610f4c8fb0a58386.scope - libcontainer container 8a106d9cff51bf4398154bece9cacd91cfe846669c14cf7e610f4c8fb0a58386. Jun 20 18:40:49.809336 systemd[1]: cri-containerd-8a106d9cff51bf4398154bece9cacd91cfe846669c14cf7e610f4c8fb0a58386.scope: Deactivated successfully. 
Jun 20 18:40:49.814639 containerd[1748]: time="2025-06-20T18:40:49.814540334Z" level=info msg="StartContainer for \"8a106d9cff51bf4398154bece9cacd91cfe846669c14cf7e610f4c8fb0a58386\" returns successfully" Jun 20 18:40:49.875487 containerd[1748]: time="2025-06-20T18:40:49.875418705Z" level=info msg="shim disconnected" id=8a106d9cff51bf4398154bece9cacd91cfe846669c14cf7e610f4c8fb0a58386 namespace=k8s.io Jun 20 18:40:49.875487 containerd[1748]: time="2025-06-20T18:40:49.875479265Z" level=warning msg="cleaning up after shim disconnected" id=8a106d9cff51bf4398154bece9cacd91cfe846669c14cf7e610f4c8fb0a58386 namespace=k8s.io Jun 20 18:40:49.875487 containerd[1748]: time="2025-06-20T18:40:49.875488505Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:40:50.278832 containerd[1748]: time="2025-06-20T18:40:50.278766080Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:40:50.281983 containerd[1748]: time="2025-06-20T18:40:50.281735627Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jun 20 18:40:50.290121 containerd[1748]: time="2025-06-20T18:40:50.290048871Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:40:50.292352 containerd[1748]: time="2025-06-20T18:40:50.291970062Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.16673478s" Jun 20 18:40:50.292352 containerd[1748]: time="2025-06-20T18:40:50.292028622Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jun 20 18:40:50.295577 containerd[1748]: time="2025-06-20T18:40:50.295517566Z" level=info msg="CreateContainer within sandbox \"6777b19bbec00ab85d70206bd6960eea386cd5684f7fb403d447056d39534afb\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jun 20 18:40:50.347658 containerd[1748]: time="2025-06-20T18:40:50.347515256Z" level=info msg="CreateContainer within sandbox \"6777b19bbec00ab85d70206bd6960eea386cd5684f7fb403d447056d39534afb\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2ea7e06aca3b5b508fc1cc5622dafad7cbfc52bcfc8a3b8346580aceddd35b6d\"" Jun 20 18:40:50.348967 containerd[1748]: time="2025-06-20T18:40:50.348470092Z" level=info msg="StartContainer for \"2ea7e06aca3b5b508fc1cc5622dafad7cbfc52bcfc8a3b8346580aceddd35b6d\"" Jun 20 18:40:50.382157 systemd[1]: Started cri-containerd-2ea7e06aca3b5b508fc1cc5622dafad7cbfc52bcfc8a3b8346580aceddd35b6d.scope - libcontainer container 2ea7e06aca3b5b508fc1cc5622dafad7cbfc52bcfc8a3b8346580aceddd35b6d. 
Jun 20 18:40:50.415218 containerd[1748]: time="2025-06-20T18:40:50.414997998Z" level=info msg="StartContainer for \"2ea7e06aca3b5b508fc1cc5622dafad7cbfc52bcfc8a3b8346580aceddd35b6d\" returns successfully" Jun 20 18:40:50.671089 containerd[1748]: time="2025-06-20T18:40:50.670965545Z" level=info msg="CreateContainer within sandbox \"36d752038d34f8e2ac682289d85c1807d9cfc313397f16d2a78603074dc5bdfb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 20 18:40:50.683998 kubelet[3294]: I0620 18:40:50.683512 3294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-jvtq6" podStartSLOduration=1.180705864 podStartE2EDuration="11.68349205s" podCreationTimestamp="2025-06-20 18:40:39 +0000 UTC" firstStartedPulling="2025-06-20 18:40:39.790839589 +0000 UTC m=+5.346586744" lastFinishedPulling="2025-06-20 18:40:50.293625775 +0000 UTC m=+15.849372930" observedRunningTime="2025-06-20 18:40:50.680518223 +0000 UTC m=+16.236265378" watchObservedRunningTime="2025-06-20 18:40:50.68349205 +0000 UTC m=+16.239239205" Jun 20 18:40:50.729451 containerd[1748]: time="2025-06-20T18:40:50.729306327Z" level=info msg="CreateContainer within sandbox \"36d752038d34f8e2ac682289d85c1807d9cfc313397f16d2a78603074dc5bdfb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"17a96300cb9ce8cda1a549ac32f7e532f18b27c99046778996b8f9781d987301\"" Jun 20 18:40:50.731285 containerd[1748]: time="2025-06-20T18:40:50.730155603Z" level=info msg="StartContainer for \"17a96300cb9ce8cda1a549ac32f7e532f18b27c99046778996b8f9781d987301\"" Jun 20 18:40:50.774896 systemd[1]: Started cri-containerd-17a96300cb9ce8cda1a549ac32f7e532f18b27c99046778996b8f9781d987301.scope - libcontainer container 17a96300cb9ce8cda1a549ac32f7e532f18b27c99046778996b8f9781d987301. Jun 20 18:40:50.811178 systemd[1]: cri-containerd-17a96300cb9ce8cda1a549ac32f7e532f18b27c99046778996b8f9781d987301.scope: Deactivated successfully. Jun 20 18:40:50.817945 containerd[1748]: time="2025-06-20T18:40:50.817872295Z" level=info msg="StartContainer for \"17a96300cb9ce8cda1a549ac32f7e532f18b27c99046778996b8f9781d987301\" returns successfully" Jun 20 18:40:51.170837 containerd[1748]: time="2025-06-20T18:40:51.170612174Z" level=info msg="shim disconnected" id=17a96300cb9ce8cda1a549ac32f7e532f18b27c99046778996b8f9781d987301 namespace=k8s.io Jun 20 18:40:51.170837 containerd[1748]: time="2025-06-20T18:40:51.170669734Z" level=warning msg="cleaning up after shim disconnected" id=17a96300cb9ce8cda1a549ac32f7e532f18b27c99046778996b8f9781d987301 namespace=k8s.io Jun 20 18:40:51.170837 containerd[1748]: time="2025-06-20T18:40:51.170680214Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:40:51.675785 containerd[1748]: time="2025-06-20T18:40:51.675570140Z" level=info msg="CreateContainer within sandbox \"36d752038d34f8e2ac682289d85c1807d9cfc313397f16d2a78603074dc5bdfb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 20 18:40:51.721046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3326883918.mount: Deactivated successfully. 
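The pod_startup_latency_tracker entry for cilium-operator above reports both an end-to-end duration (podStartE2EDuration=11.68349205s) and a shorter podStartSLOduration (1.180705864s); the SLO figure excludes the image pull window. The numbers are self-consistent: lastFinishedPulling minus firstStartedPulling is about 10.502786s, and 11.683492s minus that pull window is 1.180706s. A small Go verification of that arithmetic, using the timestamps copied from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the kubelet's printed timestamps (Go's default time.Time format).
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	firstPull := parse("2025-06-20 18:40:39.790839589 +0000 UTC")
	lastPull := parse("2025-06-20 18:40:50.293625775 +0000 UTC")
	e2e := 11683492050 * time.Nanosecond // podStartE2EDuration = 11.68349205s

	pullWindow := lastPull.Sub(firstPull)
	fmt.Println("image pull window:", pullWindow)              // ~10.502786186s
	fmt.Println("SLO duration (e2e - pull):", e2e-pullWindow) // ~1.180705864s
}
```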
Jun 20 18:40:51.743428 containerd[1748]: time="2025-06-20T18:40:51.743377120Z" level=info msg="CreateContainer within sandbox \"36d752038d34f8e2ac682289d85c1807d9cfc313397f16d2a78603074dc5bdfb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0a0811b339f443f4b951fbcee7da7fee65e4eac4602157d4b60d32c6512646c8\"" Jun 20 18:40:51.744213 containerd[1748]: time="2025-06-20T18:40:51.743966318Z" level=info msg="StartContainer for \"0a0811b339f443f4b951fbcee7da7fee65e4eac4602157d4b60d32c6512646c8\"" Jun 20 18:40:51.789154 systemd[1]: Started cri-containerd-0a0811b339f443f4b951fbcee7da7fee65e4eac4602157d4b60d32c6512646c8.scope - libcontainer container 0a0811b339f443f4b951fbcee7da7fee65e4eac4602157d4b60d32c6512646c8. Jun 20 18:40:51.820480 containerd[1748]: time="2025-06-20T18:40:51.819676263Z" level=info msg="StartContainer for \"0a0811b339f443f4b951fbcee7da7fee65e4eac4602157d4b60d32c6512646c8\" returns successfully" Jun 20 18:40:51.960503 kubelet[3294]: I0620 18:40:51.960392 3294 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 20 18:40:52.027267 systemd[1]: Created slice kubepods-burstable-pod67895f1d_e240_4282_b62a_644460b0863e.slice - libcontainer container kubepods-burstable-pod67895f1d_e240_4282_b62a_644460b0863e.slice. Jun 20 18:40:52.037364 systemd[1]: Created slice kubepods-burstable-pod6caacf1e_6a0d_4eb3_b454_e930094ee7a0.slice - libcontainer container kubepods-burstable-pod6caacf1e_6a0d_4eb3_b454_e930094ee7a0.slice. Jun 20 18:40:52.060839 kubelet[3294]: I0620 18:40:52.060583 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6caacf1e-6a0d-4eb3-b454-e930094ee7a0-config-volume\") pod \"coredns-668d6bf9bc-h9gw4\" (UID: \"6caacf1e-6a0d-4eb3-b454-e930094ee7a0\") " pod="kube-system/coredns-668d6bf9bc-h9gw4" Jun 20 18:40:52.060839 kubelet[3294]: I0620 18:40:52.060761 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk2qk\" (UniqueName: \"kubernetes.io/projected/6caacf1e-6a0d-4eb3-b454-e930094ee7a0-kube-api-access-vk2qk\") pod \"coredns-668d6bf9bc-h9gw4\" (UID: \"6caacf1e-6a0d-4eb3-b454-e930094ee7a0\") " pod="kube-system/coredns-668d6bf9bc-h9gw4" Jun 20 18:40:52.060839 kubelet[3294]: I0620 18:40:52.060783 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67895f1d-e240-4282-b62a-644460b0863e-config-volume\") pod \"coredns-668d6bf9bc-2c5k5\" (UID: \"67895f1d-e240-4282-b62a-644460b0863e\") " pod="kube-system/coredns-668d6bf9bc-2c5k5" Jun 20 18:40:52.061205 kubelet[3294]: I0620 18:40:52.061044 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htpzd\" (UniqueName: \"kubernetes.io/projected/67895f1d-e240-4282-b62a-644460b0863e-kube-api-access-htpzd\") pod \"coredns-668d6bf9bc-2c5k5\" (UID: \"67895f1d-e240-4282-b62a-644460b0863e\") " pod="kube-system/coredns-668d6bf9bc-2c5k5" Jun 20 18:40:52.161040 systemd[1]: run-containerd-runc-k8s.io-0a0811b339f443f4b951fbcee7da7fee65e4eac4602157d4b60d32c6512646c8-runc.Epizxm.mount: Deactivated successfully. 
Jun 20 18:40:52.334486 containerd[1748]: time="2025-06-20T18:40:52.334434705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2c5k5,Uid:67895f1d-e240-4282-b62a-644460b0863e,Namespace:kube-system,Attempt:0,}" Jun 20 18:40:52.341828 containerd[1748]: time="2025-06-20T18:40:52.340958916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h9gw4,Uid:6caacf1e-6a0d-4eb3-b454-e930094ee7a0,Namespace:kube-system,Attempt:0,}" Jun 20 18:40:52.700371 kubelet[3294]: I0620 18:40:52.700207 3294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wvkxf" podStartSLOduration=6.125168901 podStartE2EDuration="13.700186527s" podCreationTimestamp="2025-06-20 18:40:39 +0000 UTC" firstStartedPulling="2025-06-20 18:40:39.549901178 +0000 UTC m=+5.105648333" lastFinishedPulling="2025-06-20 18:40:47.124918804 +0000 UTC m=+12.680665959" observedRunningTime="2025-06-20 18:40:52.700029728 +0000 UTC m=+18.255776923" watchObservedRunningTime="2025-06-20 18:40:52.700186527 +0000 UTC m=+18.255933642" Jun 20 18:40:54.064909 systemd-networkd[1474]: cilium_host: Link UP Jun 20 18:40:54.069386 systemd-networkd[1474]: cilium_net: Link UP Jun 20 18:40:54.069620 systemd-networkd[1474]: cilium_net: Gained carrier Jun 20 18:40:54.069735 systemd-networkd[1474]: cilium_host: Gained carrier Jun 20 18:40:54.218568 systemd-networkd[1474]: cilium_vxlan: Link UP Jun 20 18:40:54.219761 systemd-networkd[1474]: cilium_vxlan: Gained carrier Jun 20 18:40:54.564984 kernel: NET: Registered PF_ALG protocol family Jun 20 18:40:54.605080 systemd-networkd[1474]: cilium_net: Gained IPv6LL Jun 20 18:40:54.733161 systemd-networkd[1474]: cilium_host: Gained IPv6LL Jun 20 18:40:55.275670 systemd-networkd[1474]: lxc_health: Link UP Jun 20 18:40:55.286130 systemd-networkd[1474]: lxc_health: Gained carrier Jun 20 18:40:55.465322 systemd-networkd[1474]: lxce4a534771f6d: Link UP Jun 20 18:40:55.474030 kernel: eth0: renamed from tmpbb249 Jun 20 18:40:55.478199 systemd-networkd[1474]: lxce4a534771f6d: Gained carrier Jun 20 18:40:55.499566 systemd-networkd[1474]: lxc1e72d7e6b629: Link UP Jun 20 18:40:55.510291 kernel: eth0: renamed from tmp817d5 Jun 20 18:40:55.517090 systemd-networkd[1474]: lxc1e72d7e6b629: Gained carrier Jun 20 18:40:55.885086 systemd-networkd[1474]: cilium_vxlan: Gained IPv6LL Jun 20 18:40:56.897347 kubelet[3294]: I0620 18:40:56.897297 3294 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 18:40:56.974108 systemd-networkd[1474]: lxc_health: Gained IPv6LL Jun 20 18:40:57.229107 systemd-networkd[1474]: lxc1e72d7e6b629: Gained IPv6LL Jun 20 18:40:57.422122 systemd-networkd[1474]: lxce4a534771f6d: Gained IPv6LL Jun 20 18:40:59.328212 containerd[1748]: time="2025-06-20T18:40:59.328076489Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:40:59.329103 containerd[1748]: time="2025-06-20T18:40:59.328209208Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:40:59.329103 containerd[1748]: time="2025-06-20T18:40:59.328223728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:40:59.329984 containerd[1748]: time="2025-06-20T18:40:59.328382448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:40:59.357677 systemd[1]: Started cri-containerd-817d589a621ef0d37f43d734ca8c7536c0fe9ce4b82682df1234abd779b824aa.scope - libcontainer container 817d589a621ef0d37f43d734ca8c7536c0fe9ce4b82682df1234abd779b824aa. Jun 20 18:40:59.371104 containerd[1748]: time="2025-06-20T18:40:59.370483750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:40:59.371104 containerd[1748]: time="2025-06-20T18:40:59.370549430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:40:59.371104 containerd[1748]: time="2025-06-20T18:40:59.370564510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:40:59.371973 containerd[1748]: time="2025-06-20T18:40:59.370644550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:40:59.406639 systemd[1]: Started cri-containerd-bb2495db9b058188a78b2d90158936e794a3919b8a466634dcf2a09795a3031d.scope - libcontainer container bb2495db9b058188a78b2d90158936e794a3919b8a466634dcf2a09795a3031d. Jun 20 18:40:59.433008 containerd[1748]: time="2025-06-20T18:40:59.432912847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h9gw4,Uid:6caacf1e-6a0d-4eb3-b454-e930094ee7a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"817d589a621ef0d37f43d734ca8c7536c0fe9ce4b82682df1234abd779b824aa\"" Jun 20 18:40:59.444975 containerd[1748]: time="2025-06-20T18:40:59.444803717Z" level=info msg="CreateContainer within sandbox \"817d589a621ef0d37f43d734ca8c7536c0fe9ce4b82682df1234abd779b824aa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 18:40:59.457750 containerd[1748]: time="2025-06-20T18:40:59.457679103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2c5k5,Uid:67895f1d-e240-4282-b62a-644460b0863e,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb2495db9b058188a78b2d90158936e794a3919b8a466634dcf2a09795a3031d\"" Jun 20 18:40:59.465610 containerd[1748]: time="2025-06-20T18:40:59.465065352Z" level=info msg="CreateContainer within sandbox \"bb2495db9b058188a78b2d90158936e794a3919b8a466634dcf2a09795a3031d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 18:40:59.530404 containerd[1748]: time="2025-06-20T18:40:59.530350397Z" level=info msg="CreateContainer within sandbox \"817d589a621ef0d37f43d734ca8c7536c0fe9ce4b82682df1234abd779b824aa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c348b44bcc3ff57f6771e4424b2b4f98e79d471215d057a5540817d1d4ee7911\"" Jun 20 18:40:59.531324 containerd[1748]: time="2025-06-20T18:40:59.531267073Z" level=info msg="StartContainer for \"c348b44bcc3ff57f6771e4424b2b4f98e79d471215d057a5540817d1d4ee7911\"" Jun 20 18:40:59.544649 containerd[1748]: time="2025-06-20T18:40:59.544596537Z" level=info msg="CreateContainer within sandbox \"bb2495db9b058188a78b2d90158936e794a3919b8a466634dcf2a09795a3031d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3a3ac2d46e5b659ea9841bb8c3135244222edea7b2a0c5e487e8dca0ccf552bb\"" Jun 20 18:40:59.547654 containerd[1748]: time="2025-06-20T18:40:59.547494725Z" level=info msg="StartContainer for \"3a3ac2d46e5b659ea9841bb8c3135244222edea7b2a0c5e487e8dca0ccf552bb\"" Jun 20 18:40:59.565141 
systemd[1]: Started cri-containerd-c348b44bcc3ff57f6771e4424b2b4f98e79d471215d057a5540817d1d4ee7911.scope - libcontainer container c348b44bcc3ff57f6771e4424b2b4f98e79d471215d057a5540817d1d4ee7911. Jun 20 18:40:59.590840 systemd[1]: Started cri-containerd-3a3ac2d46e5b659ea9841bb8c3135244222edea7b2a0c5e487e8dca0ccf552bb.scope - libcontainer container 3a3ac2d46e5b659ea9841bb8c3135244222edea7b2a0c5e487e8dca0ccf552bb. Jun 20 18:40:59.620004 containerd[1748]: time="2025-06-20T18:40:59.619890660Z" level=info msg="StartContainer for \"c348b44bcc3ff57f6771e4424b2b4f98e79d471215d057a5540817d1d4ee7911\" returns successfully" Jun 20 18:40:59.637764 containerd[1748]: time="2025-06-20T18:40:59.637272027Z" level=info msg="StartContainer for \"3a3ac2d46e5b659ea9841bb8c3135244222edea7b2a0c5e487e8dca0ccf552bb\" returns successfully" Jun 20 18:40:59.746575 kubelet[3294]: I0620 18:40:59.746097 3294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-h9gw4" podStartSLOduration=20.746076528 podStartE2EDuration="20.746076528s" podCreationTimestamp="2025-06-20 18:40:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:40:59.721410512 +0000 UTC m=+25.277157667" watchObservedRunningTime="2025-06-20 18:40:59.746076528 +0000 UTC m=+25.301823683" Jun 20 18:40:59.746575 kubelet[3294]: I0620 18:40:59.746202 3294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2c5k5" podStartSLOduration=20.746197248 podStartE2EDuration="20.746197248s" podCreationTimestamp="2025-06-20 18:40:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:40:59.745204612 +0000 UTC m=+25.300951727" watchObservedRunningTime="2025-06-20 18:40:59.746197248 +0000 UTC m=+25.301944403" Jun 20 18:42:13.818245 systemd[1]: Started sshd@7-10.200.20.37:22-10.200.16.10:52206.service - OpenSSH per-connection server daemon (10.200.16.10:52206). Jun 20 18:42:14.274799 sshd[4688]: Accepted publickey for core from 10.200.16.10 port 52206 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:42:14.276342 sshd-session[4688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:42:14.281030 systemd-logind[1715]: New session 10 of user core. Jun 20 18:42:14.285130 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 20 18:42:14.700366 sshd[4690]: Connection closed by 10.200.16.10 port 52206 Jun 20 18:42:14.699407 sshd-session[4688]: pam_unix(sshd:session): session closed for user core Jun 20 18:42:14.702845 systemd[1]: sshd@7-10.200.20.37:22-10.200.16.10:52206.service: Deactivated successfully. Jun 20 18:42:14.705677 systemd[1]: session-10.scope: Deactivated successfully. Jun 20 18:42:14.706800 systemd-logind[1715]: Session 10 logged out. Waiting for processes to exit. Jun 20 18:42:14.708457 systemd-logind[1715]: Removed session 10. Jun 20 18:42:19.791215 systemd[1]: Started sshd@8-10.200.20.37:22-10.200.16.10:60830.service - OpenSSH per-connection server daemon (10.200.16.10:60830). 
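[annotation] The pod_startup_latency_tracker entries above report both a podStartE2EDuration and a shorter podStartSLOduration for kube-system/cilium-wvkxf. The reported numbers are consistent with the SLO figure being the end-to-end time (watchObservedRunningTime minus podCreationTimestamp) with the image-pull window (lastFinishedPulling minus firstStartedPulling) subtracted; the coredns pods above, which report no pull window, show identical values for both durations. A minimal Go sketch reproducing that arithmetic from the wall-clock timestamps quoted in the cilium entry (monotonic "m=+..." suffixes dropped); it only illustrates how the figures relate and is not kubelet's actual code:

    package main

    import (
    	"fmt"
    	"time"
    )

    // mustParse reads timestamps in the format kubelet printed above.
    func mustParse(s string) time.Time {
    	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	created := mustParse("2025-06-20 18:40:39 +0000 UTC")                // podCreationTimestamp
    	firstPull := mustParse("2025-06-20 18:40:39.549901178 +0000 UTC")    // firstStartedPulling
    	lastPull := mustParse("2025-06-20 18:40:47.124918804 +0000 UTC")     // lastFinishedPulling
    	observed := mustParse("2025-06-20 18:40:52.700186527 +0000 UTC")     // watchObservedRunningTime

    	e2e := observed.Sub(created)         // podStartE2EDuration -> 13.700186527s
    	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration -> 6.125168901s (pull window excluded)
    	fmt.Println(e2e, slo)
    }

Run against these values it should print 13.700186527s 6.125168901s, matching the entry.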
Jun 20 18:42:20.242223 sshd[4703]: Accepted publickey for core from 10.200.16.10 port 60830 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:42:20.243485 sshd-session[4703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:42:20.249223 systemd-logind[1715]: New session 11 of user core. Jun 20 18:42:20.253125 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 20 18:42:20.643971 sshd[4705]: Connection closed by 10.200.16.10 port 60830 Jun 20 18:42:20.644526 sshd-session[4703]: pam_unix(sshd:session): session closed for user core Jun 20 18:42:20.648099 systemd[1]: sshd@8-10.200.20.37:22-10.200.16.10:60830.service: Deactivated successfully. Jun 20 18:42:20.650388 systemd[1]: session-11.scope: Deactivated successfully. Jun 20 18:42:20.651343 systemd-logind[1715]: Session 11 logged out. Waiting for processes to exit. Jun 20 18:42:20.652523 systemd-logind[1715]: Removed session 11. Jun 20 18:42:25.743243 systemd[1]: Started sshd@9-10.200.20.37:22-10.200.16.10:60846.service - OpenSSH per-connection server daemon (10.200.16.10:60846). Jun 20 18:42:26.202656 sshd[4718]: Accepted publickey for core from 10.200.16.10 port 60846 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:42:26.203958 sshd-session[4718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:42:26.208380 systemd-logind[1715]: New session 12 of user core. Jun 20 18:42:26.217124 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 20 18:42:26.602724 sshd[4720]: Connection closed by 10.200.16.10 port 60846 Jun 20 18:42:26.603432 sshd-session[4718]: pam_unix(sshd:session): session closed for user core Jun 20 18:42:26.607162 systemd[1]: sshd@9-10.200.20.37:22-10.200.16.10:60846.service: Deactivated successfully. Jun 20 18:42:26.609510 systemd[1]: session-12.scope: Deactivated successfully. Jun 20 18:42:26.610884 systemd-logind[1715]: Session 12 logged out. Waiting for processes to exit. Jun 20 18:42:26.612382 systemd-logind[1715]: Removed session 12. Jun 20 18:42:31.702194 systemd[1]: Started sshd@10-10.200.20.37:22-10.200.16.10:50248.service - OpenSSH per-connection server daemon (10.200.16.10:50248). Jun 20 18:42:32.237096 sshd[4733]: Accepted publickey for core from 10.200.16.10 port 50248 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:42:32.238435 sshd-session[4733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:42:32.242825 systemd-logind[1715]: New session 13 of user core. Jun 20 18:42:32.251137 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 20 18:42:32.691832 sshd[4735]: Connection closed by 10.200.16.10 port 50248 Jun 20 18:42:32.692249 sshd-session[4733]: pam_unix(sshd:session): session closed for user core Jun 20 18:42:32.696404 systemd[1]: sshd@10-10.200.20.37:22-10.200.16.10:50248.service: Deactivated successfully. Jun 20 18:42:32.698688 systemd[1]: session-13.scope: Deactivated successfully. Jun 20 18:42:32.699725 systemd-logind[1715]: Session 13 logged out. Waiting for processes to exit. Jun 20 18:42:32.700754 systemd-logind[1715]: Removed session 13. Jun 20 18:42:37.783219 systemd[1]: Started sshd@11-10.200.20.37:22-10.200.16.10:50258.service - OpenSSH per-connection server daemon (10.200.16.10:50258). 
Jun 20 18:42:38.244131 sshd[4749]: Accepted publickey for core from 10.200.16.10 port 50258 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:42:38.245852 sshd-session[4749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:42:38.250730 systemd-logind[1715]: New session 14 of user core. Jun 20 18:42:38.259114 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 20 18:42:38.664902 sshd[4751]: Connection closed by 10.200.16.10 port 50258 Jun 20 18:42:38.665554 sshd-session[4749]: pam_unix(sshd:session): session closed for user core Jun 20 18:42:38.669578 systemd[1]: sshd@11-10.200.20.37:22-10.200.16.10:50258.service: Deactivated successfully. Jun 20 18:42:38.669746 systemd-logind[1715]: Session 14 logged out. Waiting for processes to exit. Jun 20 18:42:38.673151 systemd[1]: session-14.scope: Deactivated successfully. Jun 20 18:42:38.675513 systemd-logind[1715]: Removed session 14. Jun 20 18:42:38.748052 systemd[1]: Started sshd@12-10.200.20.37:22-10.200.16.10:40902.service - OpenSSH per-connection server daemon (10.200.16.10:40902). Jun 20 18:42:39.215165 sshd[4764]: Accepted publickey for core from 10.200.16.10 port 40902 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:42:39.216530 sshd-session[4764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:42:39.220812 systemd-logind[1715]: New session 15 of user core. Jun 20 18:42:39.230261 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 20 18:42:39.670982 sshd[4766]: Connection closed by 10.200.16.10 port 40902 Jun 20 18:42:39.671545 sshd-session[4764]: pam_unix(sshd:session): session closed for user core Jun 20 18:42:39.675396 systemd-logind[1715]: Session 15 logged out. Waiting for processes to exit. Jun 20 18:42:39.676118 systemd[1]: sshd@12-10.200.20.37:22-10.200.16.10:40902.service: Deactivated successfully. Jun 20 18:42:39.680155 systemd[1]: session-15.scope: Deactivated successfully. Jun 20 18:42:39.681461 systemd-logind[1715]: Removed session 15. Jun 20 18:42:39.764296 systemd[1]: Started sshd@13-10.200.20.37:22-10.200.16.10:40916.service - OpenSSH per-connection server daemon (10.200.16.10:40916). Jun 20 18:42:40.224157 sshd[4776]: Accepted publickey for core from 10.200.16.10 port 40916 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:42:40.225547 sshd-session[4776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:42:40.231812 systemd-logind[1715]: New session 16 of user core. Jun 20 18:42:40.241156 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 20 18:42:40.623119 sshd[4780]: Connection closed by 10.200.16.10 port 40916 Jun 20 18:42:40.623713 sshd-session[4776]: pam_unix(sshd:session): session closed for user core Jun 20 18:42:40.628034 systemd[1]: sshd@13-10.200.20.37:22-10.200.16.10:40916.service: Deactivated successfully. Jun 20 18:42:40.631906 systemd[1]: session-16.scope: Deactivated successfully. Jun 20 18:42:40.634262 systemd-logind[1715]: Session 16 logged out. Waiting for processes to exit. Jun 20 18:42:40.635443 systemd-logind[1715]: Removed session 16. Jun 20 18:42:45.711242 systemd[1]: Started sshd@14-10.200.20.37:22-10.200.16.10:40926.service - OpenSSH per-connection server daemon (10.200.16.10:40926). 
Jun 20 18:42:46.170422 sshd[4791]: Accepted publickey for core from 10.200.16.10 port 40926 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:42:46.171392 sshd-session[4791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:42:46.176182 systemd-logind[1715]: New session 17 of user core. Jun 20 18:42:46.182124 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 20 18:42:46.572489 sshd[4793]: Connection closed by 10.200.16.10 port 40926 Jun 20 18:42:46.573619 sshd-session[4791]: pam_unix(sshd:session): session closed for user core Jun 20 18:42:46.583533 systemd[1]: sshd@14-10.200.20.37:22-10.200.16.10:40926.service: Deactivated successfully. Jun 20 18:42:46.586988 systemd[1]: session-17.scope: Deactivated successfully. Jun 20 18:42:46.588569 systemd-logind[1715]: Session 17 logged out. Waiting for processes to exit. Jun 20 18:42:46.589873 systemd-logind[1715]: Removed session 17. Jun 20 18:42:51.670235 systemd[1]: Started sshd@15-10.200.20.37:22-10.200.16.10:55778.service - OpenSSH per-connection server daemon (10.200.16.10:55778). Jun 20 18:42:52.167973 sshd[4805]: Accepted publickey for core from 10.200.16.10 port 55778 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:42:52.168725 sshd-session[4805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:42:52.173283 systemd-logind[1715]: New session 18 of user core. Jun 20 18:42:52.181141 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 20 18:42:52.584481 sshd[4807]: Connection closed by 10.200.16.10 port 55778 Jun 20 18:42:52.585153 sshd-session[4805]: pam_unix(sshd:session): session closed for user core Jun 20 18:42:52.589289 systemd[1]: sshd@15-10.200.20.37:22-10.200.16.10:55778.service: Deactivated successfully. Jun 20 18:42:52.591660 systemd[1]: session-18.scope: Deactivated successfully. Jun 20 18:42:52.593072 systemd-logind[1715]: Session 18 logged out. Waiting for processes to exit. Jun 20 18:42:52.594601 systemd-logind[1715]: Removed session 18. Jun 20 18:42:52.677278 systemd[1]: Started sshd@16-10.200.20.37:22-10.200.16.10:55790.service - OpenSSH per-connection server daemon (10.200.16.10:55790). Jun 20 18:42:53.169806 sshd[4819]: Accepted publickey for core from 10.200.16.10 port 55790 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:42:53.171278 sshd-session[4819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:42:53.175315 systemd-logind[1715]: New session 19 of user core. Jun 20 18:42:53.188300 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 20 18:42:53.607505 sshd[4821]: Connection closed by 10.200.16.10 port 55790 Jun 20 18:42:53.607033 sshd-session[4819]: pam_unix(sshd:session): session closed for user core Jun 20 18:42:53.609997 systemd[1]: sshd@16-10.200.20.37:22-10.200.16.10:55790.service: Deactivated successfully. Jun 20 18:42:53.611982 systemd[1]: session-19.scope: Deactivated successfully. Jun 20 18:42:53.613555 systemd-logind[1715]: Session 19 logged out. Waiting for processes to exit. Jun 20 18:42:53.614873 systemd-logind[1715]: Removed session 19. Jun 20 18:42:53.694201 systemd[1]: Started sshd@17-10.200.20.37:22-10.200.16.10:55792.service - OpenSSH per-connection server daemon (10.200.16.10:55792). 
Jun 20 18:42:54.151692 sshd[4831]: Accepted publickey for core from 10.200.16.10 port 55792 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:42:54.153142 sshd-session[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:42:54.157897 systemd-logind[1715]: New session 20 of user core. Jun 20 18:42:54.165263 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 20 18:42:55.432964 sshd[4833]: Connection closed by 10.200.16.10 port 55792 Jun 20 18:42:55.433590 sshd-session[4831]: pam_unix(sshd:session): session closed for user core Jun 20 18:42:55.437168 systemd-logind[1715]: Session 20 logged out. Waiting for processes to exit. Jun 20 18:42:55.438107 systemd[1]: sshd@17-10.200.20.37:22-10.200.16.10:55792.service: Deactivated successfully. Jun 20 18:42:55.442047 systemd[1]: session-20.scope: Deactivated successfully. Jun 20 18:42:55.443575 systemd-logind[1715]: Removed session 20. Jun 20 18:42:55.520946 systemd[1]: Started sshd@18-10.200.20.37:22-10.200.16.10:55802.service - OpenSSH per-connection server daemon (10.200.16.10:55802). Jun 20 18:42:55.984237 sshd[4850]: Accepted publickey for core from 10.200.16.10 port 55802 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:42:55.985629 sshd-session[4850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:42:55.991026 systemd-logind[1715]: New session 21 of user core. Jun 20 18:42:56.005152 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 20 18:42:56.509353 sshd[4852]: Connection closed by 10.200.16.10 port 55802 Jun 20 18:42:56.510033 sshd-session[4850]: pam_unix(sshd:session): session closed for user core Jun 20 18:42:56.513977 systemd-logind[1715]: Session 21 logged out. Waiting for processes to exit. Jun 20 18:42:56.514965 systemd[1]: sshd@18-10.200.20.37:22-10.200.16.10:55802.service: Deactivated successfully. Jun 20 18:42:56.516798 systemd[1]: session-21.scope: Deactivated successfully. Jun 20 18:42:56.518916 systemd-logind[1715]: Removed session 21. Jun 20 18:42:56.605251 systemd[1]: Started sshd@19-10.200.20.37:22-10.200.16.10:55806.service - OpenSSH per-connection server daemon (10.200.16.10:55806). Jun 20 18:42:57.093898 sshd[4862]: Accepted publickey for core from 10.200.16.10 port 55806 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:42:57.095373 sshd-session[4862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:42:57.099997 systemd-logind[1715]: New session 22 of user core. Jun 20 18:42:57.107151 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 20 18:42:57.505626 sshd[4864]: Connection closed by 10.200.16.10 port 55806 Jun 20 18:42:57.505086 sshd-session[4862]: pam_unix(sshd:session): session closed for user core Jun 20 18:42:57.509050 systemd[1]: sshd@19-10.200.20.37:22-10.200.16.10:55806.service: Deactivated successfully. Jun 20 18:42:57.511679 systemd[1]: session-22.scope: Deactivated successfully. Jun 20 18:42:57.512890 systemd-logind[1715]: Session 22 logged out. Waiting for processes to exit. Jun 20 18:42:57.513867 systemd-logind[1715]: Removed session 22. Jun 20 18:43:02.595262 systemd[1]: Started sshd@20-10.200.20.37:22-10.200.16.10:49066.service - OpenSSH per-connection server daemon (10.200.16.10:49066). 
Jun 20 18:43:03.051437 sshd[4878]: Accepted publickey for core from 10.200.16.10 port 49066 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:43:03.052843 sshd-session[4878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:43:03.057109 systemd-logind[1715]: New session 23 of user core. Jun 20 18:43:03.063164 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 20 18:43:03.456059 sshd[4880]: Connection closed by 10.200.16.10 port 49066 Jun 20 18:43:03.456592 sshd-session[4878]: pam_unix(sshd:session): session closed for user core Jun 20 18:43:03.460733 systemd[1]: sshd@20-10.200.20.37:22-10.200.16.10:49066.service: Deactivated successfully. Jun 20 18:43:03.463247 systemd[1]: session-23.scope: Deactivated successfully. Jun 20 18:43:03.464313 systemd-logind[1715]: Session 23 logged out. Waiting for processes to exit. Jun 20 18:43:03.465719 systemd-logind[1715]: Removed session 23. Jun 20 18:43:08.545289 systemd[1]: Started sshd@21-10.200.20.37:22-10.200.16.10:48876.service - OpenSSH per-connection server daemon (10.200.16.10:48876). Jun 20 18:43:08.999978 sshd[4892]: Accepted publickey for core from 10.200.16.10 port 48876 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:43:09.001222 sshd-session[4892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:43:09.005643 systemd-logind[1715]: New session 24 of user core. Jun 20 18:43:09.017311 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 20 18:43:09.396456 sshd[4894]: Connection closed by 10.200.16.10 port 48876 Jun 20 18:43:09.397093 sshd-session[4892]: pam_unix(sshd:session): session closed for user core Jun 20 18:43:09.400161 systemd-logind[1715]: Session 24 logged out. Waiting for processes to exit. Jun 20 18:43:09.402068 systemd[1]: sshd@21-10.200.20.37:22-10.200.16.10:48876.service: Deactivated successfully. Jun 20 18:43:09.404970 systemd[1]: session-24.scope: Deactivated successfully. Jun 20 18:43:09.407038 systemd-logind[1715]: Removed session 24. Jun 20 18:43:14.485910 systemd[1]: Started sshd@22-10.200.20.37:22-10.200.16.10:48878.service - OpenSSH per-connection server daemon (10.200.16.10:48878). Jun 20 18:43:14.983172 sshd[4908]: Accepted publickey for core from 10.200.16.10 port 48878 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:43:14.984555 sshd-session[4908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:43:14.989630 systemd-logind[1715]: New session 25 of user core. Jun 20 18:43:14.997133 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 20 18:43:15.399042 sshd[4910]: Connection closed by 10.200.16.10 port 48878 Jun 20 18:43:15.399911 sshd-session[4908]: pam_unix(sshd:session): session closed for user core Jun 20 18:43:15.402851 systemd[1]: sshd@22-10.200.20.37:22-10.200.16.10:48878.service: Deactivated successfully. Jun 20 18:43:15.404781 systemd[1]: session-25.scope: Deactivated successfully. Jun 20 18:43:15.406690 systemd-logind[1715]: Session 25 logged out. Waiting for processes to exit. Jun 20 18:43:15.409676 systemd-logind[1715]: Removed session 25. Jun 20 18:43:15.489233 systemd[1]: Started sshd@23-10.200.20.37:22-10.200.16.10:48880.service - OpenSSH per-connection server daemon (10.200.16.10:48880). 
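[annotation] The stretch of entries above is a series of short SSH sessions: sshd accepts a publickey login for core, pam_unix and systemd-logind open session N, a session-N.scope unit runs, and the connection closes again shortly afterwards. A rough, purely illustrative Go sketch for pairing the "New session N" and "Removed session N" lines from journalctl-style output (one entry per line, which this dump does not preserve) to estimate session lengths; the timestamp layout and regular expressions are assumptions:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    	"time"
    )

    func main() {
    	newRe := regexp.MustCompile(`^(\w+ \d+ [\d:.]+) .*New session (\d+) of user`)
    	delRe := regexp.MustCompile(`^(\w+ \d+ [\d:.]+) .*Removed session (\d+)\.`)
    	started := map[string]time.Time{}

    	sc := bufio.NewScanner(os.Stdin)
    	for sc.Scan() {
    		line := sc.Text()
    		if m := newRe.FindStringSubmatch(line); m != nil {
    			// Syslog-style stamps carry no year; that is fine for durations.
    			if t, err := time.Parse("Jan 2 15:04:05.000000", m[1]); err == nil {
    				started[m[2]] = t
    			}
    		} else if m := delRe.FindStringSubmatch(line); m != nil {
    			if t, err := time.Parse("Jan 2 15:04:05.000000", m[1]); err == nil {
    				if s, ok := started[m[2]]; ok {
    					fmt.Printf("session %s lasted %s\n", m[2], t.Sub(s))
    				}
    			}
    		}
    	}
    }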
Jun 20 18:43:15.945468 sshd[4921]: Accepted publickey for core from 10.200.16.10 port 48880 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:43:15.946793 sshd-session[4921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:43:15.952136 systemd-logind[1715]: New session 26 of user core. Jun 20 18:43:15.960267 systemd[1]: Started session-26.scope - Session 26 of User core. Jun 20 18:43:18.185074 containerd[1748]: time="2025-06-20T18:43:18.184794241Z" level=info msg="StopContainer for \"2ea7e06aca3b5b508fc1cc5622dafad7cbfc52bcfc8a3b8346580aceddd35b6d\" with timeout 30 (s)" Jun 20 18:43:18.187768 containerd[1748]: time="2025-06-20T18:43:18.187463074Z" level=info msg="Stop container \"2ea7e06aca3b5b508fc1cc5622dafad7cbfc52bcfc8a3b8346580aceddd35b6d\" with signal terminated" Jun 20 18:43:18.204208 containerd[1748]: time="2025-06-20T18:43:18.204149031Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 18:43:18.208917 systemd[1]: cri-containerd-2ea7e06aca3b5b508fc1cc5622dafad7cbfc52bcfc8a3b8346580aceddd35b6d.scope: Deactivated successfully. Jun 20 18:43:18.214502 containerd[1748]: time="2025-06-20T18:43:18.214376964Z" level=info msg="StopContainer for \"0a0811b339f443f4b951fbcee7da7fee65e4eac4602157d4b60d32c6512646c8\" with timeout 2 (s)" Jun 20 18:43:18.214907 containerd[1748]: time="2025-06-20T18:43:18.214845882Z" level=info msg="Stop container \"0a0811b339f443f4b951fbcee7da7fee65e4eac4602157d4b60d32c6512646c8\" with signal terminated" Jun 20 18:43:18.227055 systemd-networkd[1474]: lxc_health: Link DOWN Jun 20 18:43:18.227064 systemd-networkd[1474]: lxc_health: Lost carrier Jun 20 18:43:18.244503 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ea7e06aca3b5b508fc1cc5622dafad7cbfc52bcfc8a3b8346580aceddd35b6d-rootfs.mount: Deactivated successfully. Jun 20 18:43:18.245364 systemd[1]: cri-containerd-0a0811b339f443f4b951fbcee7da7fee65e4eac4602157d4b60d32c6512646c8.scope: Deactivated successfully. Jun 20 18:43:18.246453 systemd[1]: cri-containerd-0a0811b339f443f4b951fbcee7da7fee65e4eac4602157d4b60d32c6512646c8.scope: Consumed 6.676s CPU time, 123.8M memory peak, 136K read from disk, 12.9M written to disk. Jun 20 18:43:18.268920 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a0811b339f443f4b951fbcee7da7fee65e4eac4602157d4b60d32c6512646c8-rootfs.mount: Deactivated successfully. 
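[annotation] The "StopContainer ... with timeout 30 (s)" and "Stop container ... with signal terminated" entries above show containerd acting on a CRI StopContainer request from the kubelet: the container first receives its stop signal (SIGTERM by default) and is killed if it has not exited when the timeout expires. A hypothetical client-side sketch of the same call against containerd's CRI socket; the socket path is the usual default and the container ID is copied from the log, both assumptions made only for illustration:

    package main

    import (
    	"context"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	client := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    	defer cancel()

    	// Timeout is in seconds; the runtime escalates to SIGKILL once it elapses.
    	_, err = client.StopContainer(ctx, &runtimeapi.StopContainerRequest{
    		ContainerId: "2ea7e06aca3b5b508fc1cc5622dafad7cbfc52bcfc8a3b8346580aceddd35b6d",
    		Timeout:     30,
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    }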
Jun 20 18:43:18.300500 containerd[1748]: time="2025-06-20T18:43:18.300276537Z" level=info msg="shim disconnected" id=2ea7e06aca3b5b508fc1cc5622dafad7cbfc52bcfc8a3b8346580aceddd35b6d namespace=k8s.io Jun 20 18:43:18.300500 containerd[1748]: time="2025-06-20T18:43:18.300334217Z" level=warning msg="cleaning up after shim disconnected" id=2ea7e06aca3b5b508fc1cc5622dafad7cbfc52bcfc8a3b8346580aceddd35b6d namespace=k8s.io Jun 20 18:43:18.300500 containerd[1748]: time="2025-06-20T18:43:18.300341777Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:43:18.301659 containerd[1748]: time="2025-06-20T18:43:18.300974536Z" level=info msg="shim disconnected" id=0a0811b339f443f4b951fbcee7da7fee65e4eac4602157d4b60d32c6512646c8 namespace=k8s.io Jun 20 18:43:18.301659 containerd[1748]: time="2025-06-20T18:43:18.301012816Z" level=warning msg="cleaning up after shim disconnected" id=0a0811b339f443f4b951fbcee7da7fee65e4eac4602157d4b60d32c6512646c8 namespace=k8s.io Jun 20 18:43:18.301659 containerd[1748]: time="2025-06-20T18:43:18.301020056Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:43:18.328256 containerd[1748]: time="2025-06-20T18:43:18.328156184Z" level=info msg="StopContainer for \"2ea7e06aca3b5b508fc1cc5622dafad7cbfc52bcfc8a3b8346580aceddd35b6d\" returns successfully" Jun 20 18:43:18.331000 containerd[1748]: time="2025-06-20T18:43:18.328963302Z" level=info msg="StopPodSandbox for \"6777b19bbec00ab85d70206bd6960eea386cd5684f7fb403d447056d39534afb\"" Jun 20 18:43:18.331000 containerd[1748]: time="2025-06-20T18:43:18.329007702Z" level=info msg="Container to stop \"2ea7e06aca3b5b508fc1cc5622dafad7cbfc52bcfc8a3b8346580aceddd35b6d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:43:18.331772 containerd[1748]: time="2025-06-20T18:43:18.331638695Z" level=info msg="StopContainer for \"0a0811b339f443f4b951fbcee7da7fee65e4eac4602157d4b60d32c6512646c8\" returns successfully" Jun 20 18:43:18.332312 containerd[1748]: time="2025-06-20T18:43:18.332263013Z" level=info msg="StopPodSandbox for \"36d752038d34f8e2ac682289d85c1807d9cfc313397f16d2a78603074dc5bdfb\"" Jun 20 18:43:18.332312 containerd[1748]: time="2025-06-20T18:43:18.332307453Z" level=info msg="Container to stop \"17a96300cb9ce8cda1a549ac32f7e532f18b27c99046778996b8f9781d987301\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:43:18.332407 containerd[1748]: time="2025-06-20T18:43:18.332320213Z" level=info msg="Container to stop \"0a0811b339f443f4b951fbcee7da7fee65e4eac4602157d4b60d32c6512646c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:43:18.332407 containerd[1748]: time="2025-06-20T18:43:18.332329053Z" level=info msg="Container to stop \"b02c85b19c1ed3bdcba95fd8d2b44f5fd87c26aadc8b37e5661b7771beb078d9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:43:18.332407 containerd[1748]: time="2025-06-20T18:43:18.332337933Z" level=info msg="Container to stop \"d9c5f977e95d7125994b01d4a881b28255a894e6d1d5597fd0c359994fcd74ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:43:18.332407 containerd[1748]: time="2025-06-20T18:43:18.332346573Z" level=info msg="Container to stop \"8a106d9cff51bf4398154bece9cacd91cfe846669c14cf7e610f4c8fb0a58386\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:43:18.332547 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-6777b19bbec00ab85d70206bd6960eea386cd5684f7fb403d447056d39534afb-shm.mount: Deactivated successfully. Jun 20 18:43:18.337004 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-36d752038d34f8e2ac682289d85c1807d9cfc313397f16d2a78603074dc5bdfb-shm.mount: Deactivated successfully. Jun 20 18:43:18.342232 systemd[1]: cri-containerd-36d752038d34f8e2ac682289d85c1807d9cfc313397f16d2a78603074dc5bdfb.scope: Deactivated successfully. Jun 20 18:43:18.352828 systemd[1]: cri-containerd-6777b19bbec00ab85d70206bd6960eea386cd5684f7fb403d447056d39534afb.scope: Deactivated successfully. Jun 20 18:43:18.400466 containerd[1748]: time="2025-06-20T18:43:18.400235594Z" level=info msg="shim disconnected" id=36d752038d34f8e2ac682289d85c1807d9cfc313397f16d2a78603074dc5bdfb namespace=k8s.io Jun 20 18:43:18.400466 containerd[1748]: time="2025-06-20T18:43:18.400299514Z" level=warning msg="cleaning up after shim disconnected" id=36d752038d34f8e2ac682289d85c1807d9cfc313397f16d2a78603074dc5bdfb namespace=k8s.io Jun 20 18:43:18.400466 containerd[1748]: time="2025-06-20T18:43:18.400307554Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:43:18.401239 containerd[1748]: time="2025-06-20T18:43:18.401194992Z" level=info msg="shim disconnected" id=6777b19bbec00ab85d70206bd6960eea386cd5684f7fb403d447056d39534afb namespace=k8s.io Jun 20 18:43:18.401326 containerd[1748]: time="2025-06-20T18:43:18.401311952Z" level=warning msg="cleaning up after shim disconnected" id=6777b19bbec00ab85d70206bd6960eea386cd5684f7fb403d447056d39534afb namespace=k8s.io Jun 20 18:43:18.401507 containerd[1748]: time="2025-06-20T18:43:18.401365631Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:43:18.417495 containerd[1748]: time="2025-06-20T18:43:18.417439589Z" level=info msg="TearDown network for sandbox \"6777b19bbec00ab85d70206bd6960eea386cd5684f7fb403d447056d39534afb\" successfully" Jun 20 18:43:18.417495 containerd[1748]: time="2025-06-20T18:43:18.417482469Z" level=info msg="StopPodSandbox for \"6777b19bbec00ab85d70206bd6960eea386cd5684f7fb403d447056d39534afb\" returns successfully" Jun 20 18:43:18.419469 containerd[1748]: time="2025-06-20T18:43:18.418811985Z" level=warning msg="cleanup warnings time=\"2025-06-20T18:43:18Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 20 18:43:18.420584 containerd[1748]: time="2025-06-20T18:43:18.420258142Z" level=info msg="TearDown network for sandbox \"36d752038d34f8e2ac682289d85c1807d9cfc313397f16d2a78603074dc5bdfb\" successfully" Jun 20 18:43:18.420584 containerd[1748]: time="2025-06-20T18:43:18.420291622Z" level=info msg="StopPodSandbox for \"36d752038d34f8e2ac682289d85c1807d9cfc313397f16d2a78603074dc5bdfb\" returns successfully" Jun 20 18:43:18.599330 kubelet[3294]: I0620 18:43:18.598699 3294 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b25d997a-e02f-499c-b778-ace863f8a8f9-cilium-config-path\") pod \"b25d997a-e02f-499c-b778-ace863f8a8f9\" (UID: \"b25d997a-e02f-499c-b778-ace863f8a8f9\") " Jun 20 18:43:18.599330 kubelet[3294]: I0620 18:43:18.598741 3294 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-host-proc-sys-net\") pod \"b25d997a-e02f-499c-b778-ace863f8a8f9\" (UID: 
\"b25d997a-e02f-499c-b778-ace863f8a8f9\") " Jun 20 18:43:18.599330 kubelet[3294]: I0620 18:43:18.598759 3294 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-host-proc-sys-kernel\") pod \"b25d997a-e02f-499c-b778-ace863f8a8f9\" (UID: \"b25d997a-e02f-499c-b778-ace863f8a8f9\") " Jun 20 18:43:18.599330 kubelet[3294]: I0620 18:43:18.598777 3294 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-etc-cni-netd\") pod \"b25d997a-e02f-499c-b778-ace863f8a8f9\" (UID: \"b25d997a-e02f-499c-b778-ace863f8a8f9\") " Jun 20 18:43:18.599330 kubelet[3294]: I0620 18:43:18.598793 3294 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-lib-modules\") pod \"b25d997a-e02f-499c-b778-ace863f8a8f9\" (UID: \"b25d997a-e02f-499c-b778-ace863f8a8f9\") " Jun 20 18:43:18.599330 kubelet[3294]: I0620 18:43:18.598808 3294 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-cni-path\") pod \"b25d997a-e02f-499c-b778-ace863f8a8f9\" (UID: \"b25d997a-e02f-499c-b778-ace863f8a8f9\") " Jun 20 18:43:18.599805 kubelet[3294]: I0620 18:43:18.598830 3294 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpslv\" (UniqueName: \"kubernetes.io/projected/b25d997a-e02f-499c-b778-ace863f8a8f9-kube-api-access-hpslv\") pod \"b25d997a-e02f-499c-b778-ace863f8a8f9\" (UID: \"b25d997a-e02f-499c-b778-ace863f8a8f9\") " Jun 20 18:43:18.599805 kubelet[3294]: I0620 18:43:18.598846 3294 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-bpf-maps\") pod \"b25d997a-e02f-499c-b778-ace863f8a8f9\" (UID: \"b25d997a-e02f-499c-b778-ace863f8a8f9\") " Jun 20 18:43:18.599805 kubelet[3294]: I0620 18:43:18.598863 3294 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b25d997a-e02f-499c-b778-ace863f8a8f9-hubble-tls\") pod \"b25d997a-e02f-499c-b778-ace863f8a8f9\" (UID: \"b25d997a-e02f-499c-b778-ace863f8a8f9\") " Jun 20 18:43:18.599805 kubelet[3294]: I0620 18:43:18.598882 3294 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b25d997a-e02f-499c-b778-ace863f8a8f9-clustermesh-secrets\") pod \"b25d997a-e02f-499c-b778-ace863f8a8f9\" (UID: \"b25d997a-e02f-499c-b778-ace863f8a8f9\") " Jun 20 18:43:18.599805 kubelet[3294]: I0620 18:43:18.598900 3294 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-cilium-run\") pod \"b25d997a-e02f-499c-b778-ace863f8a8f9\" (UID: \"b25d997a-e02f-499c-b778-ace863f8a8f9\") " Jun 20 18:43:18.599805 kubelet[3294]: I0620 18:43:18.598915 3294 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-cilium-cgroup\") pod \"b25d997a-e02f-499c-b778-ace863f8a8f9\" (UID: \"b25d997a-e02f-499c-b778-ace863f8a8f9\") " Jun 20 
18:43:18.599963 kubelet[3294]: I0620 18:43:18.598952 3294 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b74af2e6-3d50-4ed3-9c1c-011bb28d74ec-cilium-config-path\") pod \"b74af2e6-3d50-4ed3-9c1c-011bb28d74ec\" (UID: \"b74af2e6-3d50-4ed3-9c1c-011bb28d74ec\") " Jun 20 18:43:18.599963 kubelet[3294]: I0620 18:43:18.598969 3294 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-xtables-lock\") pod \"b25d997a-e02f-499c-b778-ace863f8a8f9\" (UID: \"b25d997a-e02f-499c-b778-ace863f8a8f9\") " Jun 20 18:43:18.599963 kubelet[3294]: I0620 18:43:18.598985 3294 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-hostproc\") pod \"b25d997a-e02f-499c-b778-ace863f8a8f9\" (UID: \"b25d997a-e02f-499c-b778-ace863f8a8f9\") " Jun 20 18:43:18.599963 kubelet[3294]: I0620 18:43:18.599002 3294 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkslr\" (UniqueName: \"kubernetes.io/projected/b74af2e6-3d50-4ed3-9c1c-011bb28d74ec-kube-api-access-mkslr\") pod \"b74af2e6-3d50-4ed3-9c1c-011bb28d74ec\" (UID: \"b74af2e6-3d50-4ed3-9c1c-011bb28d74ec\") " Jun 20 18:43:18.600947 kubelet[3294]: I0620 18:43:18.600590 3294 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b25d997a-e02f-499c-b778-ace863f8a8f9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b25d997a-e02f-499c-b778-ace863f8a8f9" (UID: "b25d997a-e02f-499c-b778-ace863f8a8f9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 20 18:43:18.600947 kubelet[3294]: I0620 18:43:18.600757 3294 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b25d997a-e02f-499c-b778-ace863f8a8f9" (UID: "b25d997a-e02f-499c-b778-ace863f8a8f9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:43:18.600947 kubelet[3294]: I0620 18:43:18.600780 3294 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b25d997a-e02f-499c-b778-ace863f8a8f9" (UID: "b25d997a-e02f-499c-b778-ace863f8a8f9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:43:18.600947 kubelet[3294]: I0620 18:43:18.600795 3294 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b25d997a-e02f-499c-b778-ace863f8a8f9" (UID: "b25d997a-e02f-499c-b778-ace863f8a8f9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:43:18.600947 kubelet[3294]: I0620 18:43:18.600825 3294 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b25d997a-e02f-499c-b778-ace863f8a8f9" (UID: "b25d997a-e02f-499c-b778-ace863f8a8f9"). 
InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:43:18.601128 kubelet[3294]: I0620 18:43:18.600840 3294 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b25d997a-e02f-499c-b778-ace863f8a8f9" (UID: "b25d997a-e02f-499c-b778-ace863f8a8f9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:43:18.601128 kubelet[3294]: I0620 18:43:18.600854 3294 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-cni-path" (OuterVolumeSpecName: "cni-path") pod "b25d997a-e02f-499c-b778-ace863f8a8f9" (UID: "b25d997a-e02f-499c-b778-ace863f8a8f9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:43:18.603019 kubelet[3294]: I0620 18:43:18.602815 3294 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b74af2e6-3d50-4ed3-9c1c-011bb28d74ec-kube-api-access-mkslr" (OuterVolumeSpecName: "kube-api-access-mkslr") pod "b74af2e6-3d50-4ed3-9c1c-011bb28d74ec" (UID: "b74af2e6-3d50-4ed3-9c1c-011bb28d74ec"). InnerVolumeSpecName "kube-api-access-mkslr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 18:43:18.603182 kubelet[3294]: I0620 18:43:18.603163 3294 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b25d997a-e02f-499c-b778-ace863f8a8f9" (UID: "b25d997a-e02f-499c-b778-ace863f8a8f9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:43:18.603272 kubelet[3294]: I0620 18:43:18.603236 3294 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b25d997a-e02f-499c-b778-ace863f8a8f9" (UID: "b25d997a-e02f-499c-b778-ace863f8a8f9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:43:18.603311 kubelet[3294]: I0620 18:43:18.603216 3294 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b25d997a-e02f-499c-b778-ace863f8a8f9" (UID: "b25d997a-e02f-499c-b778-ace863f8a8f9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:43:18.603368 kubelet[3294]: I0620 18:43:18.603343 3294 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-hostproc" (OuterVolumeSpecName: "hostproc") pod "b25d997a-e02f-499c-b778-ace863f8a8f9" (UID: "b25d997a-e02f-499c-b778-ace863f8a8f9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:43:18.605149 kubelet[3294]: I0620 18:43:18.605102 3294 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b25d997a-e02f-499c-b778-ace863f8a8f9-kube-api-access-hpslv" (OuterVolumeSpecName: "kube-api-access-hpslv") pod "b25d997a-e02f-499c-b778-ace863f8a8f9" (UID: "b25d997a-e02f-499c-b778-ace863f8a8f9"). InnerVolumeSpecName "kube-api-access-hpslv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 18:43:18.608541 kubelet[3294]: I0620 18:43:18.608482 3294 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b74af2e6-3d50-4ed3-9c1c-011bb28d74ec-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b74af2e6-3d50-4ed3-9c1c-011bb28d74ec" (UID: "b74af2e6-3d50-4ed3-9c1c-011bb28d74ec"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 20 18:43:18.609099 kubelet[3294]: I0620 18:43:18.609061 3294 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b25d997a-e02f-499c-b778-ace863f8a8f9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b25d997a-e02f-499c-b778-ace863f8a8f9" (UID: "b25d997a-e02f-499c-b778-ace863f8a8f9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 18:43:18.609460 kubelet[3294]: I0620 18:43:18.609431 3294 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b25d997a-e02f-499c-b778-ace863f8a8f9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b25d997a-e02f-499c-b778-ace863f8a8f9" (UID: "b25d997a-e02f-499c-b778-ace863f8a8f9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 20 18:43:18.699855 kubelet[3294]: I0620 18:43:18.699808 3294 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-etc-cni-netd\") on node \"ci-4230.2.0-a-431835d741\" DevicePath \"\"" Jun 20 18:43:18.699855 kubelet[3294]: I0620 18:43:18.699848 3294 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-lib-modules\") on node \"ci-4230.2.0-a-431835d741\" DevicePath \"\"" Jun 20 18:43:18.699855 kubelet[3294]: I0620 18:43:18.699859 3294 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hpslv\" (UniqueName: \"kubernetes.io/projected/b25d997a-e02f-499c-b778-ace863f8a8f9-kube-api-access-hpslv\") on node \"ci-4230.2.0-a-431835d741\" DevicePath \"\"" Jun 20 18:43:18.700076 kubelet[3294]: I0620 18:43:18.699871 3294 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-cni-path\") on node \"ci-4230.2.0-a-431835d741\" DevicePath \"\"" Jun 20 18:43:18.700076 kubelet[3294]: I0620 18:43:18.699881 3294 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-bpf-maps\") on node \"ci-4230.2.0-a-431835d741\" DevicePath \"\"" Jun 20 18:43:18.700076 kubelet[3294]: I0620 18:43:18.699891 3294 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b25d997a-e02f-499c-b778-ace863f8a8f9-hubble-tls\") on node \"ci-4230.2.0-a-431835d741\" DevicePath \"\"" Jun 20 18:43:18.700076 kubelet[3294]: I0620 18:43:18.699901 3294 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b25d997a-e02f-499c-b778-ace863f8a8f9-clustermesh-secrets\") on node \"ci-4230.2.0-a-431835d741\" DevicePath \"\"" Jun 20 18:43:18.700076 kubelet[3294]: I0620 18:43:18.699911 3294 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-cilium-run\") on node \"ci-4230.2.0-a-431835d741\" DevicePath \"\"" Jun 20 18:43:18.700076 kubelet[3294]: I0620 18:43:18.699921 3294 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-cilium-cgroup\") on node \"ci-4230.2.0-a-431835d741\" DevicePath \"\"" Jun 20 18:43:18.700076 kubelet[3294]: I0620 18:43:18.699958 3294 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b74af2e6-3d50-4ed3-9c1c-011bb28d74ec-cilium-config-path\") on node \"ci-4230.2.0-a-431835d741\" DevicePath \"\"" Jun 20 18:43:18.700076 kubelet[3294]: I0620 18:43:18.699966 3294 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-hostproc\") on node \"ci-4230.2.0-a-431835d741\" DevicePath \"\"" Jun 20 18:43:18.700240 kubelet[3294]: I0620 18:43:18.699975 3294 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mkslr\" (UniqueName: \"kubernetes.io/projected/b74af2e6-3d50-4ed3-9c1c-011bb28d74ec-kube-api-access-mkslr\") on node \"ci-4230.2.0-a-431835d741\" DevicePath \"\"" Jun 20 18:43:18.700240 kubelet[3294]: I0620 18:43:18.699983 3294 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-xtables-lock\") on node \"ci-4230.2.0-a-431835d741\" DevicePath \"\"" Jun 20 18:43:18.700240 kubelet[3294]: I0620 18:43:18.699991 3294 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-host-proc-sys-net\") on node \"ci-4230.2.0-a-431835d741\" DevicePath \"\"" Jun 20 18:43:18.700240 kubelet[3294]: I0620 18:43:18.700002 3294 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b25d997a-e02f-499c-b778-ace863f8a8f9-host-proc-sys-kernel\") on node \"ci-4230.2.0-a-431835d741\" DevicePath \"\"" Jun 20 18:43:18.700240 kubelet[3294]: I0620 18:43:18.700011 3294 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b25d997a-e02f-499c-b778-ace863f8a8f9-cilium-config-path\") on node \"ci-4230.2.0-a-431835d741\" DevicePath \"\"" Jun 20 18:43:18.969859 kubelet[3294]: I0620 18:43:18.969744 3294 scope.go:117] "RemoveContainer" containerID="0a0811b339f443f4b951fbcee7da7fee65e4eac4602157d4b60d32c6512646c8" Jun 20 18:43:18.974138 containerd[1748]: time="2025-06-20T18:43:18.973912404Z" level=info msg="RemoveContainer for \"0a0811b339f443f4b951fbcee7da7fee65e4eac4602157d4b60d32c6512646c8\"" Jun 20 18:43:18.977934 systemd[1]: Removed slice kubepods-burstable-podb25d997a_e02f_499c_b778_ace863f8a8f9.slice - libcontainer container kubepods-burstable-podb25d997a_e02f_499c_b778_ace863f8a8f9.slice. Jun 20 18:43:18.979086 systemd[1]: kubepods-burstable-podb25d997a_e02f_499c_b778_ace863f8a8f9.slice: Consumed 6.755s CPU time, 124.2M memory peak, 136K read from disk, 12.9M written to disk. Jun 20 18:43:18.983474 systemd[1]: Removed slice kubepods-besteffort-podb74af2e6_3d50_4ed3_9c1c_011bb28d74ec.slice - libcontainer container kubepods-besteffort-podb74af2e6_3d50_4ed3_9c1c_011bb28d74ec.slice. 
Jun 20 18:43:18.988554 containerd[1748]: time="2025-06-20T18:43:18.988399846Z" level=info msg="RemoveContainer for \"0a0811b339f443f4b951fbcee7da7fee65e4eac4602157d4b60d32c6512646c8\" returns successfully" Jun 20 18:43:18.988866 kubelet[3294]: I0620 18:43:18.988713 3294 scope.go:117] "RemoveContainer" containerID="17a96300cb9ce8cda1a549ac32f7e532f18b27c99046778996b8f9781d987301" Jun 20 18:43:18.991201 containerd[1748]: time="2025-06-20T18:43:18.991171279Z" level=info msg="RemoveContainer for \"17a96300cb9ce8cda1a549ac32f7e532f18b27c99046778996b8f9781d987301\"" Jun 20 18:43:19.009466 containerd[1748]: time="2025-06-20T18:43:19.008603073Z" level=info msg="RemoveContainer for \"17a96300cb9ce8cda1a549ac32f7e532f18b27c99046778996b8f9781d987301\" returns successfully" Jun 20 18:43:19.009956 kubelet[3294]: I0620 18:43:19.009037 3294 scope.go:117] "RemoveContainer" containerID="8a106d9cff51bf4398154bece9cacd91cfe846669c14cf7e610f4c8fb0a58386" Jun 20 18:43:19.013568 containerd[1748]: time="2025-06-20T18:43:19.013352820Z" level=info msg="RemoveContainer for \"8a106d9cff51bf4398154bece9cacd91cfe846669c14cf7e610f4c8fb0a58386\"" Jun 20 18:43:19.028030 containerd[1748]: time="2025-06-20T18:43:19.027872742Z" level=info msg="RemoveContainer for \"8a106d9cff51bf4398154bece9cacd91cfe846669c14cf7e610f4c8fb0a58386\" returns successfully" Jun 20 18:43:19.028458 kubelet[3294]: I0620 18:43:19.028279 3294 scope.go:117] "RemoveContainer" containerID="d9c5f977e95d7125994b01d4a881b28255a894e6d1d5597fd0c359994fcd74ff" Jun 20 18:43:19.029662 containerd[1748]: time="2025-06-20T18:43:19.029544978Z" level=info msg="RemoveContainer for \"d9c5f977e95d7125994b01d4a881b28255a894e6d1d5597fd0c359994fcd74ff\"" Jun 20 18:43:19.042134 containerd[1748]: time="2025-06-20T18:43:19.042055385Z" level=info msg="RemoveContainer for \"d9c5f977e95d7125994b01d4a881b28255a894e6d1d5597fd0c359994fcd74ff\" returns successfully" Jun 20 18:43:19.042318 kubelet[3294]: I0620 18:43:19.042283 3294 scope.go:117] "RemoveContainer" containerID="b02c85b19c1ed3bdcba95fd8d2b44f5fd87c26aadc8b37e5661b7771beb078d9" Jun 20 18:43:19.043732 containerd[1748]: time="2025-06-20T18:43:19.043475861Z" level=info msg="RemoveContainer for \"b02c85b19c1ed3bdcba95fd8d2b44f5fd87c26aadc8b37e5661b7771beb078d9\"" Jun 20 18:43:19.055976 containerd[1748]: time="2025-06-20T18:43:19.055860628Z" level=info msg="RemoveContainer for \"b02c85b19c1ed3bdcba95fd8d2b44f5fd87c26aadc8b37e5661b7771beb078d9\" returns successfully" Jun 20 18:43:19.056138 kubelet[3294]: I0620 18:43:19.056107 3294 scope.go:117] "RemoveContainer" containerID="0a0811b339f443f4b951fbcee7da7fee65e4eac4602157d4b60d32c6512646c8" Jun 20 18:43:19.056416 containerd[1748]: time="2025-06-20T18:43:19.056380347Z" level=error msg="ContainerStatus for \"0a0811b339f443f4b951fbcee7da7fee65e4eac4602157d4b60d32c6512646c8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0a0811b339f443f4b951fbcee7da7fee65e4eac4602157d4b60d32c6512646c8\": not found" Jun 20 18:43:19.057123 kubelet[3294]: E0620 18:43:19.056744 3294 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0a0811b339f443f4b951fbcee7da7fee65e4eac4602157d4b60d32c6512646c8\": not found" containerID="0a0811b339f443f4b951fbcee7da7fee65e4eac4602157d4b60d32c6512646c8" Jun 20 18:43:19.057123 kubelet[3294]: I0620 18:43:19.056775 3294 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"0a0811b339f443f4b951fbcee7da7fee65e4eac4602157d4b60d32c6512646c8"} err="failed to get container status \"0a0811b339f443f4b951fbcee7da7fee65e4eac4602157d4b60d32c6512646c8\": rpc error: code = NotFound desc = an error occurred when try to find container \"0a0811b339f443f4b951fbcee7da7fee65e4eac4602157d4b60d32c6512646c8\": not found" Jun 20 18:43:19.057123 kubelet[3294]: I0620 18:43:19.056853 3294 scope.go:117] "RemoveContainer" containerID="17a96300cb9ce8cda1a549ac32f7e532f18b27c99046778996b8f9781d987301" Jun 20 18:43:19.057258 containerd[1748]: time="2025-06-20T18:43:19.057074505Z" level=error msg="ContainerStatus for \"17a96300cb9ce8cda1a549ac32f7e532f18b27c99046778996b8f9781d987301\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"17a96300cb9ce8cda1a549ac32f7e532f18b27c99046778996b8f9781d987301\": not found" Jun 20 18:43:19.057420 kubelet[3294]: E0620 18:43:19.057377 3294 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"17a96300cb9ce8cda1a549ac32f7e532f18b27c99046778996b8f9781d987301\": not found" containerID="17a96300cb9ce8cda1a549ac32f7e532f18b27c99046778996b8f9781d987301" Jun 20 18:43:19.057420 kubelet[3294]: I0620 18:43:19.057406 3294 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"17a96300cb9ce8cda1a549ac32f7e532f18b27c99046778996b8f9781d987301"} err="failed to get container status \"17a96300cb9ce8cda1a549ac32f7e532f18b27c99046778996b8f9781d987301\": rpc error: code = NotFound desc = an error occurred when try to find container \"17a96300cb9ce8cda1a549ac32f7e532f18b27c99046778996b8f9781d987301\": not found" Jun 20 18:43:19.057486 kubelet[3294]: I0620 18:43:19.057426 3294 scope.go:117] "RemoveContainer" containerID="8a106d9cff51bf4398154bece9cacd91cfe846669c14cf7e610f4c8fb0a58386" Jun 20 18:43:19.057655 containerd[1748]: time="2025-06-20T18:43:19.057619464Z" level=error msg="ContainerStatus for \"8a106d9cff51bf4398154bece9cacd91cfe846669c14cf7e610f4c8fb0a58386\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8a106d9cff51bf4398154bece9cacd91cfe846669c14cf7e610f4c8fb0a58386\": not found" Jun 20 18:43:19.057820 kubelet[3294]: E0620 18:43:19.057768 3294 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8a106d9cff51bf4398154bece9cacd91cfe846669c14cf7e610f4c8fb0a58386\": not found" containerID="8a106d9cff51bf4398154bece9cacd91cfe846669c14cf7e610f4c8fb0a58386" Jun 20 18:43:19.057820 kubelet[3294]: I0620 18:43:19.057794 3294 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8a106d9cff51bf4398154bece9cacd91cfe846669c14cf7e610f4c8fb0a58386"} err="failed to get container status \"8a106d9cff51bf4398154bece9cacd91cfe846669c14cf7e610f4c8fb0a58386\": rpc error: code = NotFound desc = an error occurred when try to find container \"8a106d9cff51bf4398154bece9cacd91cfe846669c14cf7e610f4c8fb0a58386\": not found" Jun 20 18:43:19.057820 kubelet[3294]: I0620 18:43:19.057810 3294 scope.go:117] "RemoveContainer" containerID="d9c5f977e95d7125994b01d4a881b28255a894e6d1d5597fd0c359994fcd74ff" Jun 20 18:43:19.058179 containerd[1748]: time="2025-06-20T18:43:19.058084663Z" level=error msg="ContainerStatus for \"d9c5f977e95d7125994b01d4a881b28255a894e6d1d5597fd0c359994fcd74ff\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"d9c5f977e95d7125994b01d4a881b28255a894e6d1d5597fd0c359994fcd74ff\": not found" Jun 20 18:43:19.058275 kubelet[3294]: E0620 18:43:19.058195 3294 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d9c5f977e95d7125994b01d4a881b28255a894e6d1d5597fd0c359994fcd74ff\": not found" containerID="d9c5f977e95d7125994b01d4a881b28255a894e6d1d5597fd0c359994fcd74ff" Jun 20 18:43:19.058275 kubelet[3294]: I0620 18:43:19.058214 3294 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d9c5f977e95d7125994b01d4a881b28255a894e6d1d5597fd0c359994fcd74ff"} err="failed to get container status \"d9c5f977e95d7125994b01d4a881b28255a894e6d1d5597fd0c359994fcd74ff\": rpc error: code = NotFound desc = an error occurred when try to find container \"d9c5f977e95d7125994b01d4a881b28255a894e6d1d5597fd0c359994fcd74ff\": not found" Jun 20 18:43:19.058275 kubelet[3294]: I0620 18:43:19.058228 3294 scope.go:117] "RemoveContainer" containerID="b02c85b19c1ed3bdcba95fd8d2b44f5fd87c26aadc8b37e5661b7771beb078d9" Jun 20 18:43:19.058699 containerd[1748]: time="2025-06-20T18:43:19.058606861Z" level=error msg="ContainerStatus for \"b02c85b19c1ed3bdcba95fd8d2b44f5fd87c26aadc8b37e5661b7771beb078d9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b02c85b19c1ed3bdcba95fd8d2b44f5fd87c26aadc8b37e5661b7771beb078d9\": not found" Jun 20 18:43:19.058877 kubelet[3294]: E0620 18:43:19.058853 3294 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b02c85b19c1ed3bdcba95fd8d2b44f5fd87c26aadc8b37e5661b7771beb078d9\": not found" containerID="b02c85b19c1ed3bdcba95fd8d2b44f5fd87c26aadc8b37e5661b7771beb078d9" Jun 20 18:43:19.058941 kubelet[3294]: I0620 18:43:19.058879 3294 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b02c85b19c1ed3bdcba95fd8d2b44f5fd87c26aadc8b37e5661b7771beb078d9"} err="failed to get container status \"b02c85b19c1ed3bdcba95fd8d2b44f5fd87c26aadc8b37e5661b7771beb078d9\": rpc error: code = NotFound desc = an error occurred when try to find container \"b02c85b19c1ed3bdcba95fd8d2b44f5fd87c26aadc8b37e5661b7771beb078d9\": not found" Jun 20 18:43:19.058941 kubelet[3294]: I0620 18:43:19.058911 3294 scope.go:117] "RemoveContainer" containerID="2ea7e06aca3b5b508fc1cc5622dafad7cbfc52bcfc8a3b8346580aceddd35b6d" Jun 20 18:43:19.060501 containerd[1748]: time="2025-06-20T18:43:19.060196057Z" level=info msg="RemoveContainer for \"2ea7e06aca3b5b508fc1cc5622dafad7cbfc52bcfc8a3b8346580aceddd35b6d\"" Jun 20 18:43:19.070294 containerd[1748]: time="2025-06-20T18:43:19.070143391Z" level=info msg="RemoveContainer for \"2ea7e06aca3b5b508fc1cc5622dafad7cbfc52bcfc8a3b8346580aceddd35b6d\" returns successfully" Jun 20 18:43:19.070584 kubelet[3294]: I0620 18:43:19.070548 3294 scope.go:117] "RemoveContainer" containerID="2ea7e06aca3b5b508fc1cc5622dafad7cbfc52bcfc8a3b8346580aceddd35b6d" Jun 20 18:43:19.070859 containerd[1748]: time="2025-06-20T18:43:19.070829789Z" level=error msg="ContainerStatus for \"2ea7e06aca3b5b508fc1cc5622dafad7cbfc52bcfc8a3b8346580aceddd35b6d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2ea7e06aca3b5b508fc1cc5622dafad7cbfc52bcfc8a3b8346580aceddd35b6d\": not found" Jun 20 18:43:19.071136 
kubelet[3294]: E0620 18:43:19.071072 3294 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2ea7e06aca3b5b508fc1cc5622dafad7cbfc52bcfc8a3b8346580aceddd35b6d\": not found" containerID="2ea7e06aca3b5b508fc1cc5622dafad7cbfc52bcfc8a3b8346580aceddd35b6d" Jun 20 18:43:19.071136 kubelet[3294]: I0620 18:43:19.071100 3294 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2ea7e06aca3b5b508fc1cc5622dafad7cbfc52bcfc8a3b8346580aceddd35b6d"} err="failed to get container status \"2ea7e06aca3b5b508fc1cc5622dafad7cbfc52bcfc8a3b8346580aceddd35b6d\": rpc error: code = NotFound desc = an error occurred when try to find container \"2ea7e06aca3b5b508fc1cc5622dafad7cbfc52bcfc8a3b8346580aceddd35b6d\": not found" Jun 20 18:43:19.191754 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6777b19bbec00ab85d70206bd6960eea386cd5684f7fb403d447056d39534afb-rootfs.mount: Deactivated successfully. Jun 20 18:43:19.191850 systemd[1]: var-lib-kubelet-pods-b74af2e6\x2d3d50\x2d4ed3\x2d9c1c\x2d011bb28d74ec-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmkslr.mount: Deactivated successfully. Jun 20 18:43:19.191908 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36d752038d34f8e2ac682289d85c1807d9cfc313397f16d2a78603074dc5bdfb-rootfs.mount: Deactivated successfully. Jun 20 18:43:19.191981 systemd[1]: var-lib-kubelet-pods-b25d997a\x2de02f\x2d499c\x2db778\x2dace863f8a8f9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhpslv.mount: Deactivated successfully. Jun 20 18:43:19.192035 systemd[1]: var-lib-kubelet-pods-b25d997a\x2de02f\x2d499c\x2db778\x2dace863f8a8f9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jun 20 18:43:19.192091 systemd[1]: var-lib-kubelet-pods-b25d997a\x2de02f\x2d499c\x2db778\x2dace863f8a8f9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jun 20 18:43:20.144536 update_engine[1721]: I20250620 18:43:20.144464 1721 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jun 20 18:43:20.144536 update_engine[1721]: I20250620 18:43:20.144520 1721 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jun 20 18:43:20.148072 update_engine[1721]: I20250620 18:43:20.144774 1721 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jun 20 18:43:20.148072 update_engine[1721]: I20250620 18:43:20.145457 1721 omaha_request_params.cc:62] Current group set to stable Jun 20 18:43:20.148072 update_engine[1721]: I20250620 18:43:20.145607 1721 update_attempter.cc:499] Already updated boot flags. Skipping. Jun 20 18:43:20.148072 update_engine[1721]: I20250620 18:43:20.145619 1721 update_attempter.cc:643] Scheduling an action processor start. 
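The kubelet/containerd exchanges above all follow one pattern: after deleting the cilium pods, the kubelet asks the runtime for the status of each recorded container ID, containerd answers with a gRPC NotFound, and the kubelet logs "DeleteContainer returned error" while treating the container as already removed, so the E-level lines are expected cleanup noise rather than a fault. Below is a minimal sketch of that status check over the CRI runtime service; the socket path and the helper wrapper are assumptions for illustration, not kubelet source.

```go
// Hedged sketch: treating NotFound from ContainerStatus as "already removed",
// mirroring the kubelet behaviour visible in the log above. The socket path
// and helper structure are assumptions, not kubelet code.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// containerGone reports whether the runtime no longer knows the container,
// which the kubelet logs as "DeleteContainer returned error ... not found".
func containerGone(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) (bool, error) {
	_, err := rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
	if err == nil {
		return false, nil // container still exists
	}
	if status.Code(err) == codes.NotFound {
		return true, nil // already removed; nothing left to do
	}
	return false, err // some other RPC failure
}

func main() {
	// containerd's default CRI socket path is an assumption for this sketch.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	gone, err := containerGone(context.Background(),
		runtimeapi.NewRuntimeServiceClient(conn),
		"0a0811b339f443f4b951fbcee7da7fee65e4eac4602157d4b60d32c6512646c8")
	fmt.Println(gone, err)
}
```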
Jun 20 18:43:20.148072 update_engine[1721]: I20250620 18:43:20.145635 1721 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jun 20 18:43:20.148072 update_engine[1721]: I20250620 18:43:20.145699 1721 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jun 20 18:43:20.148072 update_engine[1721]: I20250620 18:43:20.145783 1721 omaha_request_action.cc:271] Posting an Omaha request to disabled Jun 20 18:43:20.148072 update_engine[1721]: I20250620 18:43:20.145794 1721 omaha_request_action.cc:272] Request: Jun 20 18:43:20.148072 update_engine[1721]: [Omaha request XML body not preserved in this capture] Jun 20 18:43:20.148072 update_engine[1721]: I20250620 18:43:20.145800 1721 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 18:43:20.148072 update_engine[1721]: I20250620 18:43:20.147301 1721 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 18:43:20.148072 update_engine[1721]: I20250620 18:43:20.147671 1721 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 20 18:43:20.148600 locksmithd[1770]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jun 20 18:43:20.157881 kubelet[3294]: E0620 18:43:20.157829 3294 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 20 18:43:20.166387 update_engine[1721]: E20250620 18:43:20.166321 1721 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 18:43:20.166487 update_engine[1721]: I20250620 18:43:20.166440 1721 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jun 20 18:43:20.185785 sshd[4923]: Connection closed by 10.200.16.10 port 48880 Jun 20 18:43:20.186917 sshd-session[4921]: pam_unix(sshd:session): session closed for user core Jun 20 18:43:20.192006 systemd-logind[1715]: Session 26 logged out. Waiting for processes to exit. Jun 20 18:43:20.192710 systemd[1]: sshd@23-10.200.20.37:22-10.200.16.10:48880.service: Deactivated successfully. Jun 20 18:43:20.196876 systemd[1]: session-26.scope: Deactivated successfully. Jun 20 18:43:20.198065 systemd[1]: session-26.scope: Consumed 1.330s CPU time, 23.7M memory peak. Jun 20 18:43:20.199126 systemd-logind[1715]: Removed session 26. Jun 20 18:43:20.286239 systemd[1]: Started sshd@24-10.200.20.37:22-10.200.16.10:35578.service - OpenSSH per-connection server daemon (10.200.16.10:35578).
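The update_engine entries above show an Omaha update check being posted to the literal endpoint "disabled", so name resolution fails and libcurl_http_fetcher schedules retry 1 with a one-second timeout source. A rough Go sketch of that post-and-retry shape, purely illustrative and not update_engine's actual (C++) implementation:

```go
// Sketch only: an HTTP POST retried on failure with a short delay, similar in
// spirit to the "No HTTP response, retry N" lines above. The host "disabled"
// is taken from the log; it is not resolvable, so every attempt fails at DNS.
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"
)

func postWithRetries(url string, body []byte, attempts int) error {
	client := &http.Client{Timeout: 10 * time.Second}
	var lastErr error
	for i := 1; i <= attempts; i++ {
		resp, err := client.Post(url, "text/xml", bytes.NewReader(body))
		if err == nil {
			resp.Body.Close()
			return nil
		}
		lastErr = err
		fmt.Printf("no HTTP response, retry %d: %v\n", i, err)
		time.Sleep(1 * time.Second) // the log shows a 1-second timeout source between attempts
	}
	return lastErr
}

func main() {
	// "disabled" mirrors the update endpoint placeholder seen in the log;
	// the URL path is an assumption for this sketch.
	_ = postWithRetries("https://disabled/v1/update/", nil, 3)
}
```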
Jun 20 18:43:20.553328 kubelet[3294]: I0620 18:43:20.553270 3294 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b25d997a-e02f-499c-b778-ace863f8a8f9" path="/var/lib/kubelet/pods/b25d997a-e02f-499c-b778-ace863f8a8f9/volumes" Jun 20 18:43:20.553870 kubelet[3294]: I0620 18:43:20.553841 3294 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b74af2e6-3d50-4ed3-9c1c-011bb28d74ec" path="/var/lib/kubelet/pods/b74af2e6-3d50-4ed3-9c1c-011bb28d74ec/volumes" Jun 20 18:43:20.820791 sshd[5083]: Accepted publickey for core from 10.200.16.10 port 35578 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:43:20.822256 sshd-session[5083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:43:20.828106 systemd-logind[1715]: New session 27 of user core. Jun 20 18:43:20.837150 systemd[1]: Started session-27.scope - Session 27 of User core. Jun 20 18:43:22.246707 kubelet[3294]: I0620 18:43:22.246487 3294 memory_manager.go:355] "RemoveStaleState removing state" podUID="b25d997a-e02f-499c-b778-ace863f8a8f9" containerName="cilium-agent" Jun 20 18:43:22.246707 kubelet[3294]: I0620 18:43:22.246530 3294 memory_manager.go:355] "RemoveStaleState removing state" podUID="b74af2e6-3d50-4ed3-9c1c-011bb28d74ec" containerName="cilium-operator" Jun 20 18:43:22.260011 systemd[1]: Created slice kubepods-burstable-pod97ecf286_1aca_4373_9277_bf264d483b26.slice - libcontainer container kubepods-burstable-pod97ecf286_1aca_4373_9277_bf264d483b26.slice. Jun 20 18:43:22.265850 sshd[5085]: Connection closed by 10.200.16.10 port 35578 Jun 20 18:43:22.265648 sshd-session[5083]: pam_unix(sshd:session): session closed for user core Jun 20 18:43:22.271875 systemd[1]: sshd@24-10.200.20.37:22-10.200.16.10:35578.service: Deactivated successfully. Jun 20 18:43:22.276579 systemd[1]: session-27.scope: Deactivated successfully. Jun 20 18:43:22.278217 systemd-logind[1715]: Session 27 logged out. Waiting for processes to exit. Jun 20 18:43:22.280996 systemd-logind[1715]: Removed session 27. Jun 20 18:43:22.365579 systemd[1]: Started sshd@25-10.200.20.37:22-10.200.16.10:35594.service - OpenSSH per-connection server daemon (10.200.16.10:35594). 
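"Cleaned up orphaned pod volumes dir" means the kubelet removed /var/lib/kubelet/pods/&lt;podUID&gt;/volumes for the two deleted pods once their projected and secret volume mounts (the var-lib-kubelet-pods-*.mount units deactivated earlier) were gone. A small sketch of what such a directory contains before cleanup, assuming the default kubelet root directory:

```go
// Sketch: the on-disk layout behind "Cleaned up orphaned pod volumes dir".
// Assumes the default kubelet root /var/lib/kubelet; run as root on the node.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func listPodVolumeDirs(podUID string) error {
	dir := filepath.Join("/var/lib/kubelet/pods", podUID, "volumes")
	plugins, err := os.ReadDir(dir)
	if err != nil {
		return err // already removed, exactly what the kubelet just logged
	}
	for _, p := range plugins {
		// e.g. kubernetes.io~projected, kubernetes.io~secret, matching the
		// mount units deactivated earlier in the log
		fmt.Println(filepath.Join(dir, p.Name()))
	}
	return nil
}

func main() {
	for _, uid := range []string{
		"b25d997a-e02f-499c-b778-ace863f8a8f9",
		"b74af2e6-3d50-4ed3-9c1c-011bb28d74ec",
	} {
		if err := listPodVolumeDirs(uid); err != nil {
			fmt.Printf("%s: %v\n", uid, err)
		}
	}
}
```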
Jun 20 18:43:22.421790 kubelet[3294]: I0620 18:43:22.421710 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/97ecf286-1aca-4373-9277-bf264d483b26-cilium-cgroup\") pod \"cilium-vvrmz\" (UID: \"97ecf286-1aca-4373-9277-bf264d483b26\") " pod="kube-system/cilium-vvrmz" Jun 20 18:43:22.421790 kubelet[3294]: I0620 18:43:22.421763 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsjfg\" (UniqueName: \"kubernetes.io/projected/97ecf286-1aca-4373-9277-bf264d483b26-kube-api-access-qsjfg\") pod \"cilium-vvrmz\" (UID: \"97ecf286-1aca-4373-9277-bf264d483b26\") " pod="kube-system/cilium-vvrmz" Jun 20 18:43:22.421790 kubelet[3294]: I0620 18:43:22.421786 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/97ecf286-1aca-4373-9277-bf264d483b26-clustermesh-secrets\") pod \"cilium-vvrmz\" (UID: \"97ecf286-1aca-4373-9277-bf264d483b26\") " pod="kube-system/cilium-vvrmz" Jun 20 18:43:22.421790 kubelet[3294]: I0620 18:43:22.421806 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/97ecf286-1aca-4373-9277-bf264d483b26-cilium-config-path\") pod \"cilium-vvrmz\" (UID: \"97ecf286-1aca-4373-9277-bf264d483b26\") " pod="kube-system/cilium-vvrmz" Jun 20 18:43:22.422045 kubelet[3294]: I0620 18:43:22.421824 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97ecf286-1aca-4373-9277-bf264d483b26-lib-modules\") pod \"cilium-vvrmz\" (UID: \"97ecf286-1aca-4373-9277-bf264d483b26\") " pod="kube-system/cilium-vvrmz" Jun 20 18:43:22.422045 kubelet[3294]: I0620 18:43:22.421841 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/97ecf286-1aca-4373-9277-bf264d483b26-cilium-run\") pod \"cilium-vvrmz\" (UID: \"97ecf286-1aca-4373-9277-bf264d483b26\") " pod="kube-system/cilium-vvrmz" Jun 20 18:43:22.422045 kubelet[3294]: I0620 18:43:22.421859 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/97ecf286-1aca-4373-9277-bf264d483b26-bpf-maps\") pod \"cilium-vvrmz\" (UID: \"97ecf286-1aca-4373-9277-bf264d483b26\") " pod="kube-system/cilium-vvrmz" Jun 20 18:43:22.422045 kubelet[3294]: I0620 18:43:22.421875 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/97ecf286-1aca-4373-9277-bf264d483b26-host-proc-sys-kernel\") pod \"cilium-vvrmz\" (UID: \"97ecf286-1aca-4373-9277-bf264d483b26\") " pod="kube-system/cilium-vvrmz" Jun 20 18:43:22.422045 kubelet[3294]: I0620 18:43:22.421891 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/97ecf286-1aca-4373-9277-bf264d483b26-hostproc\") pod \"cilium-vvrmz\" (UID: \"97ecf286-1aca-4373-9277-bf264d483b26\") " pod="kube-system/cilium-vvrmz" Jun 20 18:43:22.422045 kubelet[3294]: I0620 18:43:22.421905 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/97ecf286-1aca-4373-9277-bf264d483b26-etc-cni-netd\") pod \"cilium-vvrmz\" (UID: \"97ecf286-1aca-4373-9277-bf264d483b26\") " pod="kube-system/cilium-vvrmz" Jun 20 18:43:22.422182 kubelet[3294]: I0620 18:43:22.421947 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/97ecf286-1aca-4373-9277-bf264d483b26-xtables-lock\") pod \"cilium-vvrmz\" (UID: \"97ecf286-1aca-4373-9277-bf264d483b26\") " pod="kube-system/cilium-vvrmz" Jun 20 18:43:22.422182 kubelet[3294]: I0620 18:43:22.421966 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/97ecf286-1aca-4373-9277-bf264d483b26-host-proc-sys-net\") pod \"cilium-vvrmz\" (UID: \"97ecf286-1aca-4373-9277-bf264d483b26\") " pod="kube-system/cilium-vvrmz" Jun 20 18:43:22.422182 kubelet[3294]: I0620 18:43:22.421982 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/97ecf286-1aca-4373-9277-bf264d483b26-hubble-tls\") pod \"cilium-vvrmz\" (UID: \"97ecf286-1aca-4373-9277-bf264d483b26\") " pod="kube-system/cilium-vvrmz" Jun 20 18:43:22.422182 kubelet[3294]: I0620 18:43:22.422000 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/97ecf286-1aca-4373-9277-bf264d483b26-cni-path\") pod \"cilium-vvrmz\" (UID: \"97ecf286-1aca-4373-9277-bf264d483b26\") " pod="kube-system/cilium-vvrmz" Jun 20 18:43:22.422182 kubelet[3294]: I0620 18:43:22.422021 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/97ecf286-1aca-4373-9277-bf264d483b26-cilium-ipsec-secrets\") pod \"cilium-vvrmz\" (UID: \"97ecf286-1aca-4373-9277-bf264d483b26\") " pod="kube-system/cilium-vvrmz" Jun 20 18:43:22.565616 containerd[1748]: time="2025-06-20T18:43:22.565573226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vvrmz,Uid:97ecf286-1aca-4373-9277-bf264d483b26,Namespace:kube-system,Attempt:0,}" Jun 20 18:43:22.623792 containerd[1748]: time="2025-06-20T18:43:22.623644717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:43:22.624381 containerd[1748]: time="2025-06-20T18:43:22.624068876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:43:22.624381 containerd[1748]: time="2025-06-20T18:43:22.624118476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:43:22.624381 containerd[1748]: time="2025-06-20T18:43:22.624221716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:43:22.645237 systemd[1]: Started cri-containerd-c00f803ce53f632d09df798ded206313d31c411b564b0c75ab2d12d1339920e3.scope - libcontainer container c00f803ce53f632d09df798ded206313d31c411b564b0c75ab2d12d1339920e3. 
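The reconciler entries above enumerate the volumes attached to the new cilium-vvrmz pod: hostPath mounts (cilium-run, cilium-cgroup, bpf-maps, hostproc, cni-path, lib-modules, xtables-lock, etc-cni-netd and the two host-proc-sys paths), the clustermesh-secrets and cilium-ipsec-secrets Secrets, the cilium-config-path ConfigMap, the projected hubble-tls volume, and the kube-api-access token. A hedged sketch of how a few of the hostPath entries might be declared with the Kubernetes client types; the concrete host paths are typical Cilium DaemonSet values assumed for illustration, not values read from this log:

```go
// Sketch only: typical hostPath volume declarations behind the volume names
// seen in the reconciler log. Paths are assumed, not taken from this node.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func hostPathVolume(name, path string) corev1.Volume {
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: path},
		},
	}
}

func main() {
	volumes := []corev1.Volume{
		hostPathVolume("cilium-run", "/var/run/cilium"),         // assumed path
		hostPathVolume("bpf-maps", "/sys/fs/bpf"),               // assumed path
		hostPathVolume("cilium-cgroup", "/run/cilium/cgroupv2"), // assumed path
		hostPathVolume("cni-path", "/opt/cni/bin"),              // assumed path
	}
	for _, v := range volumes {
		fmt.Printf("%s -> %s\n", v.Name, v.VolumeSource.HostPath.Path)
	}
}
```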
Jun 20 18:43:22.670408 containerd[1748]: time="2025-06-20T18:43:22.670026119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vvrmz,Uid:97ecf286-1aca-4373-9277-bf264d483b26,Namespace:kube-system,Attempt:0,} returns sandbox id \"c00f803ce53f632d09df798ded206313d31c411b564b0c75ab2d12d1339920e3\"" Jun 20 18:43:22.675001 containerd[1748]: time="2025-06-20T18:43:22.674854106Z" level=info msg="CreateContainer within sandbox \"c00f803ce53f632d09df798ded206313d31c411b564b0c75ab2d12d1339920e3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 18:43:22.724349 containerd[1748]: time="2025-06-20T18:43:22.724293220Z" level=info msg="CreateContainer within sandbox \"c00f803ce53f632d09df798ded206313d31c411b564b0c75ab2d12d1339920e3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d30be423efa3c6c449b5a25db2a14732bc5ca9bbc47f739e9468a40f1ca539c4\"" Jun 20 18:43:22.725152 containerd[1748]: time="2025-06-20T18:43:22.725122618Z" level=info msg="StartContainer for \"d30be423efa3c6c449b5a25db2a14732bc5ca9bbc47f739e9468a40f1ca539c4\"" Jun 20 18:43:22.752139 systemd[1]: Started cri-containerd-d30be423efa3c6c449b5a25db2a14732bc5ca9bbc47f739e9468a40f1ca539c4.scope - libcontainer container d30be423efa3c6c449b5a25db2a14732bc5ca9bbc47f739e9468a40f1ca539c4. Jun 20 18:43:22.784953 containerd[1748]: time="2025-06-20T18:43:22.784858585Z" level=info msg="StartContainer for \"d30be423efa3c6c449b5a25db2a14732bc5ca9bbc47f739e9468a40f1ca539c4\" returns successfully" Jun 20 18:43:22.790831 systemd[1]: cri-containerd-d30be423efa3c6c449b5a25db2a14732bc5ca9bbc47f739e9468a40f1ca539c4.scope: Deactivated successfully. Jun 20 18:43:22.862903 sshd[5096]: Accepted publickey for core from 10.200.16.10 port 35594 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:43:22.864140 sshd-session[5096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:43:22.869320 containerd[1748]: time="2025-06-20T18:43:22.868497971Z" level=info msg="shim disconnected" id=d30be423efa3c6c449b5a25db2a14732bc5ca9bbc47f739e9468a40f1ca539c4 namespace=k8s.io Jun 20 18:43:22.869320 containerd[1748]: time="2025-06-20T18:43:22.868554611Z" level=warning msg="cleaning up after shim disconnected" id=d30be423efa3c6c449b5a25db2a14732bc5ca9bbc47f739e9468a40f1ca539c4 namespace=k8s.io Jun 20 18:43:22.869320 containerd[1748]: time="2025-06-20T18:43:22.868562291Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:43:22.870955 systemd-logind[1715]: New session 28 of user core. Jun 20 18:43:22.876139 systemd[1]: Started session-28.scope - Session 28 of User core. 
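With sandbox c00f803ce5… running, the kubelet drives Cilium's init containers in sequence: create the container inside the sandbox, start it, let it run to completion (the "scope: Deactivated successfully" and "shim disconnected" lines), then continue with the next one (mount-cgroup here; apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state follow below). A minimal sketch of that create/start step over CRI, with the container and sandbox configuration reduced to assumptions:

```go
// Hedged sketch of the CreateContainer/StartContainer step visible above.
// A real kubelet builds much richer ContainerConfig/PodSandboxConfig objects.
package main

import (
	"context"
	"log"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func runInitContainer(ctx context.Context, rt runtimeapi.RuntimeServiceClient,
	sandboxID, name, image string, cmd []string) (string, error) {

	create, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: name, Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: image},
			Command:  cmd,
		},
		// SandboxConfig is omitted in this sketch; the kubelet always supplies it.
	})
	if err != nil {
		return "", err
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: create.ContainerId,
	}); err != nil {
		return "", err
	}
	return create.ContainerId, nil
}

func main() {
	// Client wiring is the same as in the earlier ContainerStatus sketch.
	log.Println("sketch only: see runInitContainer above")
}
```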
Jun 20 18:43:22.991235 containerd[1748]: time="2025-06-20T18:43:22.991186017Z" level=info msg="CreateContainer within sandbox \"c00f803ce53f632d09df798ded206313d31c411b564b0c75ab2d12d1339920e3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 18:43:23.042316 containerd[1748]: time="2025-06-20T18:43:23.042256686Z" level=info msg="CreateContainer within sandbox \"c00f803ce53f632d09df798ded206313d31c411b564b0c75ab2d12d1339920e3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6708b7d8403ee179f9b0660cd4e3bef6fc9956b7d22abae9fd132eb0d5bd5365\"" Jun 20 18:43:23.043836 containerd[1748]: time="2025-06-20T18:43:23.042905084Z" level=info msg="StartContainer for \"6708b7d8403ee179f9b0660cd4e3bef6fc9956b7d22abae9fd132eb0d5bd5365\"" Jun 20 18:43:23.071173 systemd[1]: Started cri-containerd-6708b7d8403ee179f9b0660cd4e3bef6fc9956b7d22abae9fd132eb0d5bd5365.scope - libcontainer container 6708b7d8403ee179f9b0660cd4e3bef6fc9956b7d22abae9fd132eb0d5bd5365. Jun 20 18:43:23.101206 containerd[1748]: time="2025-06-20T18:43:23.100994496Z" level=info msg="StartContainer for \"6708b7d8403ee179f9b0660cd4e3bef6fc9956b7d22abae9fd132eb0d5bd5365\" returns successfully" Jun 20 18:43:23.104133 systemd[1]: cri-containerd-6708b7d8403ee179f9b0660cd4e3bef6fc9956b7d22abae9fd132eb0d5bd5365.scope: Deactivated successfully. Jun 20 18:43:23.140838 containerd[1748]: time="2025-06-20T18:43:23.140660234Z" level=info msg="shim disconnected" id=6708b7d8403ee179f9b0660cd4e3bef6fc9956b7d22abae9fd132eb0d5bd5365 namespace=k8s.io Jun 20 18:43:23.140838 containerd[1748]: time="2025-06-20T18:43:23.140720114Z" level=warning msg="cleaning up after shim disconnected" id=6708b7d8403ee179f9b0660cd4e3bef6fc9956b7d22abae9fd132eb0d5bd5365 namespace=k8s.io Jun 20 18:43:23.140838 containerd[1748]: time="2025-06-20T18:43:23.140729994Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:43:23.210482 sshd[5206]: Connection closed by 10.200.16.10 port 35594 Jun 20 18:43:23.211052 sshd-session[5096]: pam_unix(sshd:session): session closed for user core Jun 20 18:43:23.214669 systemd[1]: sshd@25-10.200.20.37:22-10.200.16.10:35594.service: Deactivated successfully. Jun 20 18:43:23.214690 systemd-logind[1715]: Session 28 logged out. Waiting for processes to exit. Jun 20 18:43:23.216639 systemd[1]: session-28.scope: Deactivated successfully. Jun 20 18:43:23.218239 systemd-logind[1715]: Removed session 28. Jun 20 18:43:23.304311 systemd[1]: Started sshd@26-10.200.20.37:22-10.200.16.10:35600.service - OpenSSH per-connection server daemon (10.200.16.10:35600). Jun 20 18:43:23.763936 sshd[5273]: Accepted publickey for core from 10.200.16.10 port 35600 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:43:23.765342 sshd-session[5273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:43:23.770394 systemd-logind[1715]: New session 29 of user core. Jun 20 18:43:23.775117 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jun 20 18:43:23.996084 containerd[1748]: time="2025-06-20T18:43:23.996031525Z" level=info msg="CreateContainer within sandbox \"c00f803ce53f632d09df798ded206313d31c411b564b0c75ab2d12d1339920e3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 20 18:43:24.047674 containerd[1748]: time="2025-06-20T18:43:24.047485473Z" level=info msg="CreateContainer within sandbox \"c00f803ce53f632d09df798ded206313d31c411b564b0c75ab2d12d1339920e3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c5ba190343dda77e2dde06c21795ed12c3a715fca4c1d66d6f7c39804a9b7b48\"" Jun 20 18:43:24.050079 containerd[1748]: time="2025-06-20T18:43:24.048388151Z" level=info msg="StartContainer for \"c5ba190343dda77e2dde06c21795ed12c3a715fca4c1d66d6f7c39804a9b7b48\"" Jun 20 18:43:24.096207 systemd[1]: Started cri-containerd-c5ba190343dda77e2dde06c21795ed12c3a715fca4c1d66d6f7c39804a9b7b48.scope - libcontainer container c5ba190343dda77e2dde06c21795ed12c3a715fca4c1d66d6f7c39804a9b7b48. Jun 20 18:43:24.135308 systemd[1]: cri-containerd-c5ba190343dda77e2dde06c21795ed12c3a715fca4c1d66d6f7c39804a9b7b48.scope: Deactivated successfully. Jun 20 18:43:24.141670 containerd[1748]: time="2025-06-20T18:43:24.140225436Z" level=info msg="StartContainer for \"c5ba190343dda77e2dde06c21795ed12c3a715fca4c1d66d6f7c39804a9b7b48\" returns successfully" Jun 20 18:43:24.184813 containerd[1748]: time="2025-06-20T18:43:24.184752762Z" level=info msg="shim disconnected" id=c5ba190343dda77e2dde06c21795ed12c3a715fca4c1d66d6f7c39804a9b7b48 namespace=k8s.io Jun 20 18:43:24.185223 containerd[1748]: time="2025-06-20T18:43:24.185043481Z" level=warning msg="cleaning up after shim disconnected" id=c5ba190343dda77e2dde06c21795ed12c3a715fca4c1d66d6f7c39804a9b7b48 namespace=k8s.io Jun 20 18:43:24.185223 containerd[1748]: time="2025-06-20T18:43:24.185060761Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:43:24.528049 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5ba190343dda77e2dde06c21795ed12c3a715fca4c1d66d6f7c39804a9b7b48-rootfs.mount: Deactivated successfully. Jun 20 18:43:25.001344 containerd[1748]: time="2025-06-20T18:43:25.001187352Z" level=info msg="CreateContainer within sandbox \"c00f803ce53f632d09df798ded206313d31c411b564b0c75ab2d12d1339920e3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 20 18:43:25.030213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3152976771.mount: Deactivated successfully. Jun 20 18:43:25.042783 containerd[1748]: time="2025-06-20T18:43:25.042691606Z" level=info msg="CreateContainer within sandbox \"c00f803ce53f632d09df798ded206313d31c411b564b0c75ab2d12d1339920e3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"844becee9d1f49b85e5591763a8b5f6dee933cf555ac09ff1248552a47e33959\"" Jun 20 18:43:25.043852 containerd[1748]: time="2025-06-20T18:43:25.043712443Z" level=info msg="StartContainer for \"844becee9d1f49b85e5591763a8b5f6dee933cf555ac09ff1248552a47e33959\"" Jun 20 18:43:25.077143 systemd[1]: Started cri-containerd-844becee9d1f49b85e5591763a8b5f6dee933cf555ac09ff1248552a47e33959.scope - libcontainer container 844becee9d1f49b85e5591763a8b5f6dee933cf555ac09ff1248552a47e33959. Jun 20 18:43:25.100221 systemd[1]: cri-containerd-844becee9d1f49b85e5591763a8b5f6dee933cf555ac09ff1248552a47e33959.scope: Deactivated successfully. 
Jun 20 18:43:25.110360 containerd[1748]: time="2025-06-20T18:43:25.110076874Z" level=info msg="StartContainer for \"844becee9d1f49b85e5591763a8b5f6dee933cf555ac09ff1248552a47e33959\" returns successfully" Jun 20 18:43:25.142725 containerd[1748]: time="2025-06-20T18:43:25.142573750Z" level=info msg="shim disconnected" id=844becee9d1f49b85e5591763a8b5f6dee933cf555ac09ff1248552a47e33959 namespace=k8s.io Jun 20 18:43:25.142725 containerd[1748]: time="2025-06-20T18:43:25.142719790Z" level=warning msg="cleaning up after shim disconnected" id=844becee9d1f49b85e5591763a8b5f6dee933cf555ac09ff1248552a47e33959 namespace=k8s.io Jun 20 18:43:25.142725 containerd[1748]: time="2025-06-20T18:43:25.142729310Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:43:25.159474 kubelet[3294]: E0620 18:43:25.159434 3294 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 20 18:43:25.528031 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-844becee9d1f49b85e5591763a8b5f6dee933cf555ac09ff1248552a47e33959-rootfs.mount: Deactivated successfully. Jun 20 18:43:26.006063 containerd[1748]: time="2025-06-20T18:43:26.004567104Z" level=info msg="CreateContainer within sandbox \"c00f803ce53f632d09df798ded206313d31c411b564b0c75ab2d12d1339920e3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 20 18:43:26.052023 containerd[1748]: time="2025-06-20T18:43:26.051967303Z" level=info msg="CreateContainer within sandbox \"c00f803ce53f632d09df798ded206313d31c411b564b0c75ab2d12d1339920e3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fbfbb0ef3e3e7add60f0b0063a970a0ed538cdd9d33e6e7bf66ac944116b2a5d\"" Jun 20 18:43:26.053661 containerd[1748]: time="2025-06-20T18:43:26.053616939Z" level=info msg="StartContainer for \"fbfbb0ef3e3e7add60f0b0063a970a0ed538cdd9d33e6e7bf66ac944116b2a5d\"" Jun 20 18:43:26.084124 systemd[1]: Started cri-containerd-fbfbb0ef3e3e7add60f0b0063a970a0ed538cdd9d33e6e7bf66ac944116b2a5d.scope - libcontainer container fbfbb0ef3e3e7add60f0b0063a970a0ed538cdd9d33e6e7bf66ac944116b2a5d. Jun 20 18:43:26.118728 containerd[1748]: time="2025-06-20T18:43:26.118671772Z" level=info msg="StartContainer for \"fbfbb0ef3e3e7add60f0b0063a970a0ed538cdd9d33e6e7bf66ac944116b2a5d\" returns successfully" Jun 20 18:43:26.617037 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jun 20 18:43:28.274301 systemd[1]: run-containerd-runc-k8s.io-fbfbb0ef3e3e7add60f0b0063a970a0ed538cdd9d33e6e7bf66ac944116b2a5d-runc.Xjhyxx.mount: Deactivated successfully. 
Jun 20 18:43:28.933871 kubelet[3294]: I0620 18:43:28.933660 3294 setters.go:602] "Node became not ready" node="ci-4230.2.0-a-431835d741" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-06-20T18:43:28Z","lastTransitionTime":"2025-06-20T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jun 20 18:43:29.436108 systemd-networkd[1474]: lxc_health: Link UP Jun 20 18:43:29.439125 systemd-networkd[1474]: lxc_health: Gained carrier Jun 20 18:43:30.147971 update_engine[1721]: I20250620 18:43:30.146963 1721 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 18:43:30.147971 update_engine[1721]: I20250620 18:43:30.147204 1721 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 18:43:30.147971 update_engine[1721]: I20250620 18:43:30.147472 1721 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 20 18:43:30.228501 update_engine[1721]: E20250620 18:43:30.228320 1721 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 18:43:30.228501 update_engine[1721]: I20250620 18:43:30.228416 1721 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jun 20 18:43:30.618056 kubelet[3294]: I0620 18:43:30.617989 3294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vvrmz" podStartSLOduration=8.61796918 podStartE2EDuration="8.61796918s" podCreationTimestamp="2025-06-20 18:43:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:43:27.031184677 +0000 UTC m=+172.586931832" watchObservedRunningTime="2025-06-20 18:43:30.61796918 +0000 UTC m=+176.173716335" Jun 20 18:43:30.766081 systemd-networkd[1474]: lxc_health: Gained IPv6LL Jun 20 18:43:32.587296 systemd[1]: run-containerd-runc-k8s.io-fbfbb0ef3e3e7add60f0b0063a970a0ed538cdd9d33e6e7bf66ac944116b2a5d-runc.IX24lq.mount: Deactivated successfully. 
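The podStartSLOduration of 8.61796918s reported above is simply observedRunningTime minus podCreationTimestamp (the pull timestamps are zero because no image pull was needed): 18:43:30.617969 − 18:43:22 ≈ 8.618 s. A quick recomputation from the two timestamps in the log entry:

```go
// Recomputing the pod startup SLO duration from the timestamps in the log.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Parse errors are ignored here only because the inputs are fixed literals.
	created, _ := time.Parse(time.RFC3339, "2025-06-20T18:43:22Z")
	running, _ := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST",
		"2025-06-20 18:43:30.61796918 +0000 UTC")
	fmt.Println(running.Sub(created)) // ≈ 8.61796918s, matching podStartSLOduration
}
```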
Jun 20 18:43:34.553736 containerd[1748]: time="2025-06-20T18:43:34.553680436Z" level=info msg="StopPodSandbox for \"6777b19bbec00ab85d70206bd6960eea386cd5684f7fb403d447056d39534afb\"" Jun 20 18:43:34.554857 containerd[1748]: time="2025-06-20T18:43:34.554064875Z" level=info msg="TearDown network for sandbox \"6777b19bbec00ab85d70206bd6960eea386cd5684f7fb403d447056d39534afb\" successfully" Jun 20 18:43:34.554857 containerd[1748]: time="2025-06-20T18:43:34.554083515Z" level=info msg="StopPodSandbox for \"6777b19bbec00ab85d70206bd6960eea386cd5684f7fb403d447056d39534afb\" returns successfully" Jun 20 18:43:34.555967 containerd[1748]: time="2025-06-20T18:43:34.555197552Z" level=info msg="RemovePodSandbox for \"6777b19bbec00ab85d70206bd6960eea386cd5684f7fb403d447056d39534afb\"" Jun 20 18:43:34.555967 containerd[1748]: time="2025-06-20T18:43:34.555229712Z" level=info msg="Forcibly stopping sandbox \"6777b19bbec00ab85d70206bd6960eea386cd5684f7fb403d447056d39534afb\"" Jun 20 18:43:34.555967 containerd[1748]: time="2025-06-20T18:43:34.555311672Z" level=info msg="TearDown network for sandbox \"6777b19bbec00ab85d70206bd6960eea386cd5684f7fb403d447056d39534afb\" successfully" Jun 20 18:43:34.569389 containerd[1748]: time="2025-06-20T18:43:34.569339713Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6777b19bbec00ab85d70206bd6960eea386cd5684f7fb403d447056d39534afb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 20 18:43:34.569650 containerd[1748]: time="2025-06-20T18:43:34.569626473Z" level=info msg="RemovePodSandbox \"6777b19bbec00ab85d70206bd6960eea386cd5684f7fb403d447056d39534afb\" returns successfully" Jun 20 18:43:34.570280 containerd[1748]: time="2025-06-20T18:43:34.570243671Z" level=info msg="StopPodSandbox for \"36d752038d34f8e2ac682289d85c1807d9cfc313397f16d2a78603074dc5bdfb\"" Jun 20 18:43:34.570393 containerd[1748]: time="2025-06-20T18:43:34.570335591Z" level=info msg="TearDown network for sandbox \"36d752038d34f8e2ac682289d85c1807d9cfc313397f16d2a78603074dc5bdfb\" successfully" Jun 20 18:43:34.570393 containerd[1748]: time="2025-06-20T18:43:34.570346271Z" level=info msg="StopPodSandbox for \"36d752038d34f8e2ac682289d85c1807d9cfc313397f16d2a78603074dc5bdfb\" returns successfully" Jun 20 18:43:34.572269 containerd[1748]: time="2025-06-20T18:43:34.571060269Z" level=info msg="RemovePodSandbox for \"36d752038d34f8e2ac682289d85c1807d9cfc313397f16d2a78603074dc5bdfb\"" Jun 20 18:43:34.572269 containerd[1748]: time="2025-06-20T18:43:34.571090269Z" level=info msg="Forcibly stopping sandbox \"36d752038d34f8e2ac682289d85c1807d9cfc313397f16d2a78603074dc5bdfb\"" Jun 20 18:43:34.572269 containerd[1748]: time="2025-06-20T18:43:34.571143708Z" level=info msg="TearDown network for sandbox \"36d752038d34f8e2ac682289d85c1807d9cfc313397f16d2a78603074dc5bdfb\" successfully" Jun 20 18:43:34.585885 containerd[1748]: time="2025-06-20T18:43:34.585827388Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"36d752038d34f8e2ac682289d85c1807d9cfc313397f16d2a78603074dc5bdfb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
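At 18:43:34 the kubelet garbage-collects the two stale pod sandboxes: StopPodSandbox (the network was already torn down, so TearDown returns immediately), then a forcible RemovePodSandbox, with the "not found" sandbox-status warning tolerated just like the container case earlier. A hedged sketch of that stop-then-remove sequence over CRI, reusing the client wiring from the first sketch:

```go
// Hedged sketch of forcible sandbox removal as seen in the log: stop first,
// then remove, tolerating NotFound because the sandbox may already be gone.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func removeSandbox(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) error {
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: id}); err != nil &&
		status.Code(err) != codes.NotFound {
		return err
	}
	if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id}); err != nil &&
		status.Code(err) != codes.NotFound {
		return err
	}
	return nil
}

func main() {
	// Client construction is identical to the earlier ContainerStatus sketch.
	log.Println("sketch only: see removeSandbox above")
}
```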
Jun 20 18:43:34.586044 containerd[1748]: time="2025-06-20T18:43:34.585902828Z" level=info msg="RemovePodSandbox \"36d752038d34f8e2ac682289d85c1807d9cfc313397f16d2a78603074dc5bdfb\" returns successfully" Jun 20 18:43:34.881336 sshd[5275]: Connection closed by 10.200.16.10 port 35600 Jun 20 18:43:34.881692 sshd-session[5273]: pam_unix(sshd:session): session closed for user core Jun 20 18:43:34.885489 systemd[1]: sshd@26-10.200.20.37:22-10.200.16.10:35600.service: Deactivated successfully. Jun 20 18:43:34.889248 systemd[1]: session-29.scope: Deactivated successfully. Jun 20 18:43:34.890373 systemd-logind[1715]: Session 29 logged out. Waiting for processes to exit. Jun 20 18:43:34.891380 systemd-logind[1715]: Removed session 29.