Jul 10 23:34:11.352574 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 10 23:34:11.352597 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Jul 10 22:12:17 -00 2025
Jul 10 23:34:11.352606 kernel: KASLR enabled
Jul 10 23:34:11.352612 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jul 10 23:34:11.352620 kernel: printk: bootconsole [pl11] enabled
Jul 10 23:34:11.352626 kernel: efi: EFI v2.7 by EDK II
Jul 10 23:34:11.352634 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Jul 10 23:34:11.352640 kernel: random: crng init done
Jul 10 23:34:11.352646 kernel: secureboot: Secure boot disabled
Jul 10 23:34:11.352652 kernel: ACPI: Early table checksum verification disabled
Jul 10 23:34:11.352659 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jul 10 23:34:11.352665 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 23:34:11.352671 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 23:34:11.352679 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jul 10 23:34:11.352686 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 23:34:11.352693 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 23:34:11.352699 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 23:34:11.352707 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 23:34:11.352714 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 23:34:11.352720 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 23:34:11.352727 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jul 10 23:34:11.352733 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 23:34:11.352739 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jul 10 23:34:11.352746 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jul 10 23:34:11.352752 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jul 10 23:34:11.352759 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jul 10 23:34:11.352765 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jul 10 23:34:11.352771 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jul 10 23:34:11.352779 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jul 10 23:34:11.352786 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jul 10 23:34:11.352792 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jul 10 23:34:11.352798 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jul 10 23:34:11.352805 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jul 10 23:34:11.352812 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jul 10 23:34:11.352818 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jul 10 23:34:11.352825 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Jul 10 23:34:11.352832 kernel: Zone ranges:
Jul 10 23:34:11.352838 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jul 10 23:34:11.352845 kernel: DMA32 empty
Jul 10 23:34:11.352851 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jul 10 23:34:11.352876 kernel: Movable zone start for each node
Jul 10 23:34:11.352883 kernel: Early memory node ranges
Jul 10 23:34:11.352890 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jul 10 23:34:11.352897 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Jul 10 23:34:11.352904 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Jul 10 23:34:11.352912 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Jul 10 23:34:11.352919 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jul 10 23:34:11.352925 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jul 10 23:34:11.352932 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jul 10 23:34:11.352939 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jul 10 23:34:11.352946 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jul 10 23:34:11.352953 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jul 10 23:34:11.352960 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jul 10 23:34:11.355017 kernel: psci: probing for conduit method from ACPI.
Jul 10 23:34:11.355034 kernel: psci: PSCIv1.1 detected in firmware.
Jul 10 23:34:11.355042 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 10 23:34:11.355049 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jul 10 23:34:11.355062 kernel: psci: SMC Calling Convention v1.4
Jul 10 23:34:11.355069 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jul 10 23:34:11.355076 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jul 10 23:34:11.355083 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 10 23:34:11.355089 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 10 23:34:11.355097 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 10 23:34:11.355103 kernel: Detected PIPT I-cache on CPU0
Jul 10 23:34:11.355110 kernel: CPU features: detected: GIC system register CPU interface
Jul 10 23:34:11.355117 kernel: CPU features: detected: Hardware dirty bit management
Jul 10 23:34:11.355124 kernel: CPU features: detected: Spectre-BHB
Jul 10 23:34:11.355131 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 10 23:34:11.355139 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 10 23:34:11.355146 kernel: CPU features: detected: ARM erratum 1418040
Jul 10 23:34:11.355153 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jul 10 23:34:11.355160 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 10 23:34:11.355167 kernel: alternatives: applying boot alternatives
Jul 10 23:34:11.355176 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=7d7ae41c578f00376368863b7a3cf53d899e76a854273f3187550259460980dc
Jul 10 23:34:11.355183 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 10 23:34:11.355190 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 10 23:34:11.355197 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 10 23:34:11.355204 kernel: Fallback order for Node 0: 0
Jul 10 23:34:11.355211 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jul 10 23:34:11.355219 kernel: Policy zone: Normal
Jul 10 23:34:11.355226 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 10 23:34:11.355233 kernel: software IO TLB: area num 2.
Jul 10 23:34:11.355240 kernel: software IO TLB: mapped [mem 0x0000000036540000-0x000000003a540000] (64MB)
Jul 10 23:34:11.355247 kernel: Memory: 3983592K/4194160K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38336K init, 897K bss, 210568K reserved, 0K cma-reserved)
Jul 10 23:34:11.355254 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 10 23:34:11.355260 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 10 23:34:11.355268 kernel: rcu: RCU event tracing is enabled.
Jul 10 23:34:11.355275 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 10 23:34:11.355282 kernel: Trampoline variant of Tasks RCU enabled.
Jul 10 23:34:11.355289 kernel: Tracing variant of Tasks RCU enabled.
Jul 10 23:34:11.355297 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 10 23:34:11.355304 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 10 23:34:11.355311 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 10 23:34:11.355318 kernel: GICv3: 960 SPIs implemented
Jul 10 23:34:11.355324 kernel: GICv3: 0 Extended SPIs implemented
Jul 10 23:34:11.355331 kernel: Root IRQ handler: gic_handle_irq
Jul 10 23:34:11.355338 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 10 23:34:11.355345 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jul 10 23:34:11.355351 kernel: ITS: No ITS available, not enabling LPIs
Jul 10 23:34:11.355358 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 10 23:34:11.355365 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 23:34:11.355372 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 10 23:34:11.355381 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 10 23:34:11.355388 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 10 23:34:11.355395 kernel: Console: colour dummy device 80x25
Jul 10 23:34:11.355402 kernel: printk: console [tty1] enabled
Jul 10 23:34:11.355410 kernel: ACPI: Core revision 20230628
Jul 10 23:34:11.355417 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 10 23:34:11.355424 kernel: pid_max: default: 32768 minimum: 301
Jul 10 23:34:11.355431 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 10 23:34:11.355439 kernel: landlock: Up and running.
Jul 10 23:34:11.355447 kernel: SELinux: Initializing.
Jul 10 23:34:11.355454 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 23:34:11.355461 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 23:34:11.355469 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 10 23:34:11.355476 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 10 23:34:11.355483 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Jul 10 23:34:11.355491 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Jul 10 23:34:11.355505 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 10 23:34:11.355512 kernel: rcu: Hierarchical SRCU implementation.
Jul 10 23:34:11.355520 kernel: rcu: Max phase no-delay instances is 400.
Jul 10 23:34:11.355527 kernel: Remapping and enabling EFI services.
Jul 10 23:34:11.355535 kernel: smp: Bringing up secondary CPUs ...
Jul 10 23:34:11.355543 kernel: Detected PIPT I-cache on CPU1
Jul 10 23:34:11.355551 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jul 10 23:34:11.355558 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 23:34:11.355566 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 10 23:34:11.355573 kernel: smp: Brought up 1 node, 2 CPUs
Jul 10 23:34:11.355582 kernel: SMP: Total of 2 processors activated.
Jul 10 23:34:11.355589 kernel: CPU features: detected: 32-bit EL0 Support
Jul 10 23:34:11.355597 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jul 10 23:34:11.355604 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 10 23:34:11.355611 kernel: CPU features: detected: CRC32 instructions
Jul 10 23:34:11.355619 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 10 23:34:11.355626 kernel: CPU features: detected: LSE atomic instructions
Jul 10 23:34:11.355634 kernel: CPU features: detected: Privileged Access Never
Jul 10 23:34:11.355641 kernel: CPU: All CPU(s) started at EL1
Jul 10 23:34:11.355650 kernel: alternatives: applying system-wide alternatives
Jul 10 23:34:11.355657 kernel: devtmpfs: initialized
Jul 10 23:34:11.355665 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 10 23:34:11.355672 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 10 23:34:11.355679 kernel: pinctrl core: initialized pinctrl subsystem
Jul 10 23:34:11.355687 kernel: SMBIOS 3.1.0 present.
Jul 10 23:34:11.355694 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jul 10 23:34:11.355702 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 10 23:34:11.355709 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 10 23:34:11.355718 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 10 23:34:11.355726 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 10 23:34:11.355733 kernel: audit: initializing netlink subsys (disabled)
Jul 10 23:34:11.355741 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jul 10 23:34:11.355748 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 10 23:34:11.355756 kernel: cpuidle: using governor menu
Jul 10 23:34:11.355763 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 10 23:34:11.355770 kernel: ASID allocator initialised with 32768 entries
Jul 10 23:34:11.355778 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 10 23:34:11.355787 kernel: Serial: AMBA PL011 UART driver
Jul 10 23:34:11.355794 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 10 23:34:11.355802 kernel: Modules: 0 pages in range for non-PLT usage
Jul 10 23:34:11.355809 kernel: Modules: 509264 pages in range for PLT usage
Jul 10 23:34:11.355817 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 10 23:34:11.355824 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 10 23:34:11.355831 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 10 23:34:11.355839 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 10 23:34:11.355846 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 10 23:34:11.355855 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 10 23:34:11.355863 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 10 23:34:11.355870 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 10 23:34:11.355878 kernel: ACPI: Added _OSI(Module Device)
Jul 10 23:34:11.355885 kernel: ACPI: Added _OSI(Processor Device)
Jul 10 23:34:11.355892 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 10 23:34:11.355900 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 10 23:34:11.355907 kernel: ACPI: Interpreter enabled
Jul 10 23:34:11.355915 kernel: ACPI: Using GIC for interrupt routing
Jul 10 23:34:11.355924 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jul 10 23:34:11.355931 kernel: printk: console [ttyAMA0] enabled
Jul 10 23:34:11.355939 kernel: printk: bootconsole [pl11] disabled
Jul 10 23:34:11.355946 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jul 10 23:34:11.355954 kernel: iommu: Default domain type: Translated
Jul 10 23:34:11.355961 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 10 23:34:11.355980 kernel: efivars: Registered efivars operations
Jul 10 23:34:11.355988 kernel: vgaarb: loaded
Jul 10 23:34:11.355996 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 10 23:34:11.356006 kernel: VFS: Disk quotas dquot_6.6.0
Jul 10 23:34:11.356013 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 10 23:34:11.356020 kernel: pnp: PnP ACPI init
Jul 10 23:34:11.356028 kernel: pnp: PnP ACPI: found 0 devices
Jul 10 23:34:11.356035 kernel: NET: Registered PF_INET protocol family
Jul 10 23:34:11.356043 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 10 23:34:11.356051 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 10 23:34:11.356058 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 10 23:34:11.356066 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 10 23:34:11.356074 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 10 23:34:11.356082 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 10 23:34:11.356089 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 23:34:11.356097 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 23:34:11.356105 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 10 23:34:11.356112 kernel: PCI: CLS 0 bytes, default 64
Jul 10 23:34:11.356119 kernel: kvm [1]: HYP mode not available
Jul 10 23:34:11.356127 kernel: Initialise system trusted keyrings
Jul 10 23:34:11.356134 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 10 23:34:11.356143 kernel: Key type asymmetric registered
Jul 10 23:34:11.356150 kernel: Asymmetric key parser 'x509' registered
Jul 10 23:34:11.356158 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 10 23:34:11.356165 kernel: io scheduler mq-deadline registered
Jul 10 23:34:11.356173 kernel: io scheduler kyber registered
Jul 10 23:34:11.356180 kernel: io scheduler bfq registered
Jul 10 23:34:11.356188 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 10 23:34:11.356195 kernel: thunder_xcv, ver 1.0
Jul 10 23:34:11.356202 kernel: thunder_bgx, ver 1.0
Jul 10 23:34:11.356211 kernel: nicpf, ver 1.0
Jul 10 23:34:11.356218 kernel: nicvf, ver 1.0
Jul 10 23:34:11.356390 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 10 23:34:11.356467 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-10T23:34:10 UTC (1752190450)
Jul 10 23:34:11.356477 kernel: efifb: probing for efifb
Jul 10 23:34:11.356485 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 10 23:34:11.356492 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 10 23:34:11.356500 kernel: efifb: scrolling: redraw
Jul 10 23:34:11.356510 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 10 23:34:11.356518 kernel: Console: switching to colour frame buffer device 128x48
Jul 10 23:34:11.356525 kernel: fb0: EFI VGA frame buffer device
Jul 10 23:34:11.356533 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jul 10 23:34:11.356540 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 10 23:34:11.356548 kernel: No ACPI PMU IRQ for CPU0
Jul 10 23:34:11.356555 kernel: No ACPI PMU IRQ for CPU1
Jul 10 23:34:11.356563 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Jul 10 23:34:11.356570 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 10 23:34:11.356579 kernel: watchdog: Hard watchdog permanently disabled
Jul 10 23:34:11.356587 kernel: NET: Registered PF_INET6 protocol family
Jul 10 23:34:11.356594 kernel: Segment Routing with IPv6
Jul 10 23:34:11.356601 kernel: In-situ OAM (IOAM) with IPv6
Jul 10 23:34:11.356609 kernel: NET: Registered PF_PACKET protocol family
Jul 10 23:34:11.356616 kernel: Key type dns_resolver registered
Jul 10 23:34:11.356624 kernel: registered taskstats version 1
Jul 10 23:34:11.356631 kernel: Loading compiled-in X.509 certificates
Jul 10 23:34:11.356639 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: 31389229b1c1b066a3aecee2ec344e038e2f2cc0'
Jul 10 23:34:11.356648 kernel: Key type .fscrypt registered
Jul 10 23:34:11.356655 kernel: Key type fscrypt-provisioning registered
Jul 10 23:34:11.356663 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 10 23:34:11.356670 kernel: ima: Allocated hash algorithm: sha1
Jul 10 23:34:11.356678 kernel: ima: No architecture policies found
Jul 10 23:34:11.356685 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 10 23:34:11.356693 kernel: clk: Disabling unused clocks
Jul 10 23:34:11.356700 kernel: Freeing unused kernel memory: 38336K
Jul 10 23:34:11.356707 kernel: Run /init as init process
Jul 10 23:34:11.356716 kernel: with arguments:
Jul 10 23:34:11.356723 kernel: /init
Jul 10 23:34:11.356731 kernel: with environment:
Jul 10 23:34:11.356738 kernel: HOME=/
Jul 10 23:34:11.356745 kernel: TERM=linux
Jul 10 23:34:11.356752 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 10 23:34:11.356761 systemd[1]: Successfully made /usr/ read-only.
Jul 10 23:34:11.356771 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 10 23:34:11.356781 systemd[1]: Detected virtualization microsoft.
Jul 10 23:34:11.356789 systemd[1]: Detected architecture arm64.
Jul 10 23:34:11.356796 systemd[1]: Running in initrd.
Jul 10 23:34:11.356804 systemd[1]: No hostname configured, using default hostname.
Jul 10 23:34:11.356812 systemd[1]: Hostname set to .
Jul 10 23:34:11.356820 systemd[1]: Initializing machine ID from random generator.
Jul 10 23:34:11.356828 systemd[1]: Queued start job for default target initrd.target.
Jul 10 23:34:11.356836 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 23:34:11.356845 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 23:34:11.356854 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 10 23:34:11.356862 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 23:34:11.356869 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 10 23:34:11.356878 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 10 23:34:11.356887 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 10 23:34:11.356896 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 10 23:34:11.356904 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 23:34:11.356912 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 10 23:34:11.356920 systemd[1]: Reached target paths.target - Path Units.
Jul 10 23:34:11.356928 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 23:34:11.356935 systemd[1]: Reached target swap.target - Swaps.
Jul 10 23:34:11.356943 systemd[1]: Reached target timers.target - Timer Units.
Jul 10 23:34:11.356951 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 10 23:34:11.356959 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 10 23:34:11.359021 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 10 23:34:11.359034 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 10 23:34:11.359043 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 23:34:11.359052 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 23:34:11.359060 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 23:34:11.359068 systemd[1]: Reached target sockets.target - Socket Units.
Jul 10 23:34:11.359077 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 10 23:34:11.359085 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 10 23:34:11.359099 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 10 23:34:11.359107 systemd[1]: Starting systemd-fsck-usr.service...
Jul 10 23:34:11.359115 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 10 23:34:11.359123 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 10 23:34:11.359161 systemd-journald[218]: Collecting audit messages is disabled.
Jul 10 23:34:11.359185 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 23:34:11.359195 systemd-journald[218]: Journal started
Jul 10 23:34:11.359214 systemd-journald[218]: Runtime Journal (/run/log/journal/e2db728cbe4240d0b930ae5fc4439c39) is 8M, max 78.5M, 70.5M free.
Jul 10 23:34:11.368814 systemd-modules-load[220]: Inserted module 'overlay'
Jul 10 23:34:11.394215 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 10 23:34:11.394240 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 10 23:34:11.404706 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 10 23:34:11.411748 kernel: Bridge firewalling registered
Jul 10 23:34:11.405345 systemd-modules-load[220]: Inserted module 'br_netfilter'
Jul 10 23:34:11.424736 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 23:34:11.432660 systemd[1]: Finished systemd-fsck-usr.service.
Jul 10 23:34:11.445651 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 10 23:34:11.456619 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 23:34:11.480235 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 10 23:34:11.496858 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 10 23:34:11.513145 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 10 23:34:11.527180 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 10 23:34:11.555643 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 23:34:11.571574 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 10 23:34:11.581041 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 10 23:34:11.593103 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 23:34:11.618250 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 10 23:34:11.628203 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 10 23:34:11.647156 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 10 23:34:11.686998 dracut-cmdline[252]: dracut-dracut-053
Jul 10 23:34:11.686998 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=7d7ae41c578f00376368863b7a3cf53d899e76a854273f3187550259460980dc
Jul 10 23:34:11.690035 systemd-resolved[253]: Positive Trust Anchors:
Jul 10 23:34:11.690060 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 10 23:34:11.690093 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 10 23:34:11.692417 systemd-resolved[253]: Defaulting to hostname 'linux'.
Jul 10 23:34:11.694090 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 10 23:34:11.752482 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 23:34:11.833899 kernel: SCSI subsystem initialized
Jul 10 23:34:11.833920 kernel: Loading iSCSI transport class v2.0-870.
Jul 10 23:34:11.799300 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 10 23:34:11.851984 kernel: iscsi: registered transport (tcp)
Jul 10 23:34:11.871220 kernel: iscsi: registered transport (qla4xxx)
Jul 10 23:34:11.871275 kernel: QLogic iSCSI HBA Driver
Jul 10 23:34:11.910737 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 10 23:34:11.925177 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 10 23:34:11.959677 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 10 23:34:11.959737 kernel: device-mapper: uevent: version 1.0.3
Jul 10 23:34:11.966250 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 10 23:34:12.014992 kernel: raid6: neonx8 gen() 15766 MB/s
Jul 10 23:34:12.034978 kernel: raid6: neonx4 gen() 15820 MB/s
Jul 10 23:34:12.054974 kernel: raid6: neonx2 gen() 13223 MB/s
Jul 10 23:34:12.075975 kernel: raid6: neonx1 gen() 10516 MB/s
Jul 10 23:34:12.095974 kernel: raid6: int64x8 gen() 6785 MB/s
Jul 10 23:34:12.115974 kernel: raid6: int64x4 gen() 7352 MB/s
Jul 10 23:34:12.136975 kernel: raid6: int64x2 gen() 6114 MB/s
Jul 10 23:34:12.160926 kernel: raid6: int64x1 gen() 5055 MB/s
Jul 10 23:34:12.160942 kernel: raid6: using algorithm neonx4 gen() 15820 MB/s
Jul 10 23:34:12.186053 kernel: raid6: .... xor() 12469 MB/s, rmw enabled
Jul 10 23:34:12.186067 kernel: raid6: using neon recovery algorithm
Jul 10 23:34:12.199310 kernel: xor: measuring software checksum speed
Jul 10 23:34:12.199325 kernel: 8regs : 21636 MB/sec
Jul 10 23:34:12.203388 kernel: 32regs : 21596 MB/sec
Jul 10 23:34:12.207276 kernel: arm64_neon : 27813 MB/sec
Jul 10 23:34:12.211767 kernel: xor: using function: arm64_neon (27813 MB/sec)
Jul 10 23:34:12.261981 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 10 23:34:12.272134 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 10 23:34:12.289105 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 23:34:12.315207 systemd-udevd[439]: Using default interface naming scheme 'v255'.
Jul 10 23:34:12.320930 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 23:34:12.338235 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 10 23:34:12.372000 dracut-pre-trigger[452]: rd.md=0: removing MD RAID activation
Jul 10 23:34:12.401764 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 10 23:34:12.418239 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 10 23:34:12.460142 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 23:34:12.485266 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 10 23:34:12.509148 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 10 23:34:12.522451 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 10 23:34:12.530363 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 23:34:12.552284 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 10 23:34:12.584583 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 10 23:34:12.600686 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 10 23:34:12.620988 kernel: hv_vmbus: Vmbus version:5.3
Jul 10 23:34:12.624215 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 10 23:34:12.624467 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 23:34:12.656050 kernel: hv_vmbus: registering driver hid_hyperv
Jul 10 23:34:12.656071 kernel: hv_vmbus: registering driver hyperv_keyboard
Jul 10 23:34:12.656082 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 10 23:34:12.656474 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 10 23:34:12.711453 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jul 10 23:34:12.711477 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jul 10 23:34:12.711497 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 10 23:34:12.711506 kernel: hv_vmbus: registering driver hv_netvsc
Jul 10 23:34:12.711515 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jul 10 23:34:12.681789 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 10 23:34:12.736104 kernel: hv_vmbus: registering driver hv_storvsc
Jul 10 23:34:12.736131 kernel: PTP clock support registered
Jul 10 23:34:12.682156 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 23:34:12.771328 kernel: hv_utils: Registering HyperV Utility Driver
Jul 10 23:34:12.771349 kernel: hv_vmbus: registering driver hv_utils
Jul 10 23:34:12.771359 kernel: hv_utils: Heartbeat IC version 3.0
Jul 10 23:34:12.771370 kernel: hv_utils: Shutdown IC version 3.2
Jul 10 23:34:12.771379 kernel: hv_utils: TimeSync IC version 4.0
Jul 10 23:34:12.734672 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 23:34:12.754652 kernel: scsi host0: storvsc_host_t
Jul 10 23:34:12.782560 kernel: scsi host1: storvsc_host_t
Jul 10 23:34:12.790526 systemd-journald[218]: Time jumped backwards, rotating.
Jul 10 23:34:12.790575 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jul 10 23:34:12.790695 kernel: hv_netvsc 0022487b-bdbb-0022-487b-bdbb0022487b eth0: VF slot 1 added
Jul 10 23:34:12.790800 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jul 10 23:34:12.744168 systemd-resolved[253]: Clock change detected. Flushing caches.
Jul 10 23:34:12.810051 kernel: hv_vmbus: registering driver hv_pci
Jul 10 23:34:12.748606 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 23:34:12.846030 kernel: hv_pci dc806827-ef5f-4708-95e8-c459810c6eb9: PCI VMBus probing: Using version 0x10004
Jul 10 23:34:12.846201 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jul 10 23:34:12.846320 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 10 23:34:12.846330 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jul 10 23:34:12.846467 kernel: hv_pci dc806827-ef5f-4708-95e8-c459810c6eb9: PCI host bridge to bus ef5f:00
Jul 10 23:34:12.800969 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 23:34:12.867750 kernel: pci_bus ef5f:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jul 10 23:34:12.867894 kernel: pci_bus ef5f:00: No busn resource found for root bus, will use [bus 00-ff]
Jul 10 23:34:12.845923 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 10 23:34:12.890755 kernel: pci ef5f:00:02.0: [15b3:1018] type 00 class 0x020000
Jul 10 23:34:12.890797 kernel: pci ef5f:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 10 23:34:12.901390 kernel: pci ef5f:00:02.0: enabling Extended Tags
Jul 10 23:34:12.904907 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jul 10 23:34:12.918457 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jul 10 23:34:12.918669 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 10 23:34:12.932248 kernel: pci ef5f:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at ef5f:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jul 10 23:34:12.932302 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jul 10 23:34:12.933428 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jul 10 23:34:12.941039 kernel: pci_bus ef5f:00: busn_res: [bus 00-ff] end is updated to 00
Jul 10 23:34:12.952072 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 10 23:34:12.952111 kernel: pci ef5f:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 10 23:34:12.966408 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 10 23:34:12.980652 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 23:34:13.026983 kernel: mlx5_core ef5f:00:02.0: enabling device (0000 -> 0002)
Jul 10 23:34:13.034387 kernel: mlx5_core ef5f:00:02.0: firmware version: 16.31.2424
Jul 10 23:34:13.310867 kernel: hv_netvsc 0022487b-bdbb-0022-487b-bdbb0022487b eth0: VF registering: eth1
Jul 10 23:34:13.311057 kernel: mlx5_core ef5f:00:02.0 eth1: joined to eth0
Jul 10 23:34:13.321396 kernel: mlx5_core ef5f:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jul 10 23:34:13.337408 kernel: mlx5_core ef5f:00:02.0 enP61279s1: renamed from eth1
Jul 10 23:34:13.475392 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (505)
Jul 10 23:34:13.494561 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 10 23:34:13.510045 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jul 10 23:34:13.538176 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jul 10 23:34:13.562632 kernel: BTRFS: device fsid 28ea517e-145c-4223-93e8-6347aefbc032 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (499)
Jul 10 23:34:13.565715 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jul 10 23:34:13.573739 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jul 10 23:34:13.603578 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 10 23:34:13.629394 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 10 23:34:13.637388 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 10 23:34:14.646392 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 10 23:34:14.646729 disk-uuid[604]: The operation has completed successfully.
Jul 10 23:34:14.703128 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 10 23:34:14.705308 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 10 23:34:14.758511 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 10 23:34:14.773132 sh[690]: Success
Jul 10 23:34:14.800405 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 10 23:34:14.966941 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 10 23:34:14.987532 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 10 23:34:14.999499 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 10 23:34:15.038932 kernel: BTRFS info (device dm-0): first mount of filesystem 28ea517e-145c-4223-93e8-6347aefbc032
Jul 10 23:34:15.038978 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 10 23:34:15.046874 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 10 23:34:15.052722 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 10 23:34:15.057988 kernel: BTRFS info (device dm-0): using free space tree
Jul 10 23:34:15.283743 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 10 23:34:15.289510 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 10 23:34:15.312551 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 10 23:34:15.323331 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 10 23:34:15.365417 kernel: BTRFS info (device sda6): first mount of filesystem e248a549-ad9c-46e4-9226-90e819becc10
Jul 10 23:34:15.365468 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 10 23:34:15.370136 kernel: BTRFS info (device sda6): using free space tree
Jul 10 23:34:15.400992 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 10 23:34:15.411397 kernel: BTRFS info (device sda6): last unmount of filesystem e248a549-ad9c-46e4-9226-90e819becc10
Jul 10 23:34:15.419167 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 10 23:34:15.431750 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 10 23:34:15.471001 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 10 23:34:15.495529 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 10 23:34:15.528154 systemd-networkd[871]: lo: Link UP
Jul 10 23:34:15.528167 systemd-networkd[871]: lo: Gained carrier
Jul 10 23:34:15.530404 systemd-networkd[871]: Enumeration completed
Jul 10 23:34:15.532117 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 10 23:34:15.538901 systemd[1]: Reached target network.target - Network.
Jul 10 23:34:15.541804 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 23:34:15.541808 systemd-networkd[871]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 10 23:34:15.637515 kernel: mlx5_core ef5f:00:02.0 enP61279s1: Link up
Jul 10 23:34:15.710476 kernel: hv_netvsc 0022487b-bdbb-0022-487b-bdbb0022487b eth0: Data path switched to VF: enP61279s1
Jul 10 23:34:15.710156 systemd-networkd[871]: enP61279s1: Link UP
Jul 10 23:34:15.710234 systemd-networkd[871]: eth0: Link UP
Jul 10 23:34:15.710352 systemd-networkd[871]: eth0: Gained carrier
Jul 10 23:34:15.710360 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 23:34:15.738662 systemd-networkd[871]: enP61279s1: Gained carrier
Jul 10 23:34:15.753440 systemd-networkd[871]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 10 23:34:16.078749 ignition[831]: Ignition 2.20.0
Jul 10 23:34:16.082607 ignition[831]: Stage: fetch-offline
Jul 10 23:34:16.082660 ignition[831]: no configs at "/usr/lib/ignition/base.d"
Jul 10 23:34:16.087658 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 10 23:34:16.082670 ignition[831]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 10 23:34:16.082790 ignition[831]: parsed url from cmdline: ""
Jul 10 23:34:16.082794 ignition[831]: no config URL provided
Jul 10 23:34:16.082799 ignition[831]: reading system config file "/usr/lib/ignition/user.ign"
Jul 10 23:34:16.082807 ignition[831]: no config at "/usr/lib/ignition/user.ign"
Jul 10 23:34:16.120611 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 10 23:34:16.082812 ignition[831]: failed to fetch config: resource requires networking
Jul 10 23:34:16.083010 ignition[831]: Ignition finished successfully
Jul 10 23:34:16.154193 ignition[884]: Ignition 2.20.0
Jul 10 23:34:16.154200 ignition[884]: Stage: fetch
Jul 10 23:34:16.154897 ignition[884]: no configs at "/usr/lib/ignition/base.d"
Jul 10 23:34:16.154909 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 10 23:34:16.155043 ignition[884]: parsed url from cmdline: ""
Jul 10 23:34:16.155047 ignition[884]: no config URL provided
Jul 10 23:34:16.155052 ignition[884]: reading system config file "/usr/lib/ignition/user.ign"
Jul 10 23:34:16.155059 ignition[884]: no config at "/usr/lib/ignition/user.ign"
Jul 10 23:34:16.155089 ignition[884]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jul 10 23:34:16.250832 ignition[884]: GET result: OK
Jul 10 23:34:16.250921 ignition[884]: config has been read from IMDS userdata
Jul 10 23:34:16.250967 ignition[884]: parsing config with SHA512: 28b85dd1c9045dc4df90236ff3d1284784bd96cb275bab0315c248cc08cd7882dc5a40516fbc005b330698638a82a765caefc052f84d358095990b2e1fccf99c
Jul 10 23:34:16.255404 unknown[884]: fetched base config from "system"
Jul 10 23:34:16.255798 ignition[884]: fetch: fetch complete
Jul 10 23:34:16.255412 unknown[884]: fetched base config from "system"
Jul 10 23:34:16.255804 ignition[884]: fetch: fetch passed
Jul 10 23:34:16.255417 unknown[884]: fetched user config from "azure"
Jul 10 23:34:16.255853 ignition[884]: Ignition finished successfully
Jul 10 23:34:16.260914 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 10 23:34:16.281639 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 10 23:34:16.308354 ignition[891]: Ignition 2.20.0
Jul 10 23:34:16.313158 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 10 23:34:16.308377 ignition[891]: Stage: kargs
Jul 10 23:34:16.308569 ignition[891]: no configs at "/usr/lib/ignition/base.d"
Jul 10 23:34:16.308579 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 10 23:34:16.309620 ignition[891]: kargs: kargs passed
Jul 10 23:34:16.309665 ignition[891]: Ignition finished successfully
Jul 10 23:34:16.341655 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 10 23:34:16.360099 ignition[898]: Ignition 2.20.0
Jul 10 23:34:16.360107 ignition[898]: Stage: disks
Jul 10 23:34:16.363296 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 10 23:34:16.360317 ignition[898]: no configs at "/usr/lib/ignition/base.d"
Jul 10 23:34:16.372225 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 10 23:34:16.360327 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 10 23:34:16.381943 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 10 23:34:16.361502 ignition[898]: disks: disks passed
Jul 10 23:34:16.395142 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 10 23:34:16.361554 ignition[898]: Ignition finished successfully
Jul 10 23:34:16.406100 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 10 23:34:16.418293 systemd[1]: Reached target basic.target - Basic System.
Jul 10 23:34:16.445631 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 10 23:34:16.510179 systemd-fsck[907]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jul 10 23:34:16.520294 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 10 23:34:16.536582 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 10 23:34:16.593582 kernel: EXT4-fs (sda9): mounted filesystem ef1c88fa-d23e-4a16-bbbf-07c92f8585ec r/w with ordered data mode. Quota mode: none.
Jul 10 23:34:16.594227 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 10 23:34:16.599447 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 10 23:34:16.643467 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 10 23:34:16.655828 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 10 23:34:16.668192 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 10 23:34:16.691391 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (918)
Jul 10 23:34:16.700219 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 10 23:34:16.732199 kernel: BTRFS info (device sda6): first mount of filesystem e248a549-ad9c-46e4-9226-90e819becc10
Jul 10 23:34:16.732221 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 10 23:34:16.732230 kernel: BTRFS info (device sda6): using free space tree
Jul 10 23:34:16.700277 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 10 23:34:16.715478 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 10 23:34:16.757394 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 10 23:34:16.762627 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 10 23:34:16.776875 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 10 23:34:16.835572 systemd-networkd[871]: eth0: Gained IPv6LL
Jul 10 23:34:17.105537 coreos-metadata[920]: Jul 10 23:34:17.105 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 10 23:34:17.114604 coreos-metadata[920]: Jul 10 23:34:17.114 INFO Fetch successful
Jul 10 23:34:17.114604 coreos-metadata[920]: Jul 10 23:34:17.114 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jul 10 23:34:17.131913 coreos-metadata[920]: Jul 10 23:34:17.126 INFO Fetch successful
Jul 10 23:34:17.138833 coreos-metadata[920]: Jul 10 23:34:17.138 INFO wrote hostname ci-4230.2.1-n-d24186f489 to /sysroot/etc/hostname
Jul 10 23:34:17.150396 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 10 23:34:17.157325 systemd-networkd[871]: enP61279s1: Gained IPv6LL
Jul 10 23:34:17.306286 initrd-setup-root[948]: cut: /sysroot/etc/passwd: No such file or directory
Jul 10 23:34:17.360430 initrd-setup-root[955]: cut: /sysroot/etc/group: No such file or directory
Jul 10 23:34:17.366994 initrd-setup-root[962]: cut: /sysroot/etc/shadow: No such file or directory
Jul 10 23:34:17.373443 initrd-setup-root[969]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 10 23:34:18.071915 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 10 23:34:18.089813 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 10 23:34:18.100560 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 10 23:34:18.123256 kernel: BTRFS info (device sda6): last unmount of filesystem e248a549-ad9c-46e4-9226-90e819becc10
Jul 10 23:34:18.117150 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 10 23:34:18.148460 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 10 23:34:18.160882 ignition[1037]: INFO : Ignition 2.20.0
Jul 10 23:34:18.160882 ignition[1037]: INFO : Stage: mount
Jul 10 23:34:18.160882 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 23:34:18.160882 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 10 23:34:18.189071 ignition[1037]: INFO : mount: mount passed
Jul 10 23:34:18.189071 ignition[1037]: INFO : Ignition finished successfully
Jul 10 23:34:18.164841 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 10 23:34:18.193534 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 10 23:34:18.215797 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 10 23:34:18.248933 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (1048)
Jul 10 23:34:18.248989 kernel: BTRFS info (device sda6): first mount of filesystem e248a549-ad9c-46e4-9226-90e819becc10
Jul 10 23:34:18.255026 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 10 23:34:18.259822 kernel: BTRFS info (device sda6): using free space tree
Jul 10 23:34:18.268387 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 10 23:34:18.270003 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 10 23:34:18.298388 ignition[1065]: INFO : Ignition 2.20.0
Jul 10 23:34:18.298388 ignition[1065]: INFO : Stage: files
Jul 10 23:34:18.298388 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 23:34:18.298388 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 10 23:34:18.298388 ignition[1065]: DEBUG : files: compiled without relabeling support, skipping
Jul 10 23:34:18.325565 ignition[1065]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 10 23:34:18.325565 ignition[1065]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 10 23:34:18.381538 ignition[1065]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 10 23:34:18.389750 ignition[1065]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 10 23:34:18.389750 ignition[1065]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 10 23:34:18.382024 unknown[1065]: wrote ssh authorized keys file for user: core
Jul 10 23:34:18.410559 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 10 23:34:18.410559 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jul 10 23:34:18.459616 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 10 23:34:18.586645 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 10 23:34:18.586645 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 10 23:34:18.608220 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 10 23:34:18.829892 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 10 23:34:18.908844 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 10 23:34:18.908844 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 10 23:34:18.929255 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 10 23:34:18.929255 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 23:34:18.929255 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 23:34:18.929255 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 23:34:18.929255 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 23:34:18.929255 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 23:34:18.929255 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 23:34:18.929255 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 23:34:18.929255 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 23:34:18.929255 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 10 23:34:18.929255 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 10 23:34:18.929255 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 10 23:34:18.929255 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jul 10 23:34:19.636786 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 10 23:34:19.856265 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 10 23:34:19.856265 ignition[1065]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 10 23:34:19.880576 ignition[1065]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 23:34:19.891791 ignition[1065]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 23:34:19.891791 ignition[1065]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 10 23:34:19.891791 ignition[1065]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jul 10 23:34:19.891791 ignition[1065]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jul 10 23:34:19.891791 ignition[1065]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 23:34:19.891791 ignition[1065]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 23:34:19.891791 ignition[1065]: INFO : files: files passed
Jul 10 23:34:19.891791 ignition[1065]: INFO : Ignition finished successfully
Jul 10 23:34:19.903571 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 10 23:34:19.944658 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 10 23:34:19.963596 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 10 23:34:19.977210 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 10 23:34:19.980020 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 10 23:34:20.021102 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 23:34:20.021102 initrd-setup-root-after-ignition[1094]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 23:34:20.039589 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 23:34:20.041607 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 23:34:20.062197 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 10 23:34:20.088616 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 10 23:34:20.114460 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 10 23:34:20.114595 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 10 23:34:20.128193 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 10 23:34:20.140167 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 10 23:34:20.151173 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 10 23:34:20.169608 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 10 23:34:20.189752 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 23:34:20.205622 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 10 23:34:20.224014 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 10 23:34:20.231006 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 23:34:20.243864 systemd[1]: Stopped target timers.target - Timer Units.
Jul 10 23:34:20.255681 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 10 23:34:20.255808 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 23:34:20.272197 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 10 23:34:20.278125 systemd[1]: Stopped target basic.target - Basic System.
Jul 10 23:34:20.289604 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 10 23:34:20.301158 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 10 23:34:20.312610 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 10 23:34:20.324744 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 10 23:34:20.337817 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 10 23:34:20.351360 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 10 23:34:20.362983 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 10 23:34:20.375762 systemd[1]: Stopped target swap.target - Swaps.
Jul 10 23:34:20.386640 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 10 23:34:20.386765 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 10 23:34:20.403384 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 10 23:34:20.410117 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 23:34:20.422362 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 10 23:34:20.427866 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 23:34:20.435328 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 10 23:34:20.435464 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 10 23:34:20.453817 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 10 23:34:20.453952 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 23:34:20.461233 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 10 23:34:20.461334 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 10 23:34:20.538427 ignition[1118]: INFO : Ignition 2.20.0
Jul 10 23:34:20.538427 ignition[1118]: INFO : Stage: umount
Jul 10 23:34:20.538427 ignition[1118]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 23:34:20.538427 ignition[1118]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 10 23:34:20.538427 ignition[1118]: INFO : umount: umount passed
Jul 10 23:34:20.538427 ignition[1118]: INFO : Ignition finished successfully
Jul 10 23:34:20.472006 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 10 23:34:20.472102 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 10 23:34:20.504656 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 10 23:34:20.523265 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 10 23:34:20.523468 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 23:34:20.534645 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 10 23:34:20.543592 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 10 23:34:20.543857 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 23:34:20.553883 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 10 23:34:20.554042 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 10 23:34:20.572864 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 10 23:34:20.573411 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 10 23:34:20.591920 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 10 23:34:20.595294 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 10 23:34:20.597404 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 10 23:34:20.609555 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 10 23:34:20.609658 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 10 23:34:20.623644 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 10 23:34:20.623722 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 10 23:34:20.633995 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 10 23:34:20.634058 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 10 23:34:20.645462 systemd[1]: Stopped target network.target - Network.
Jul 10 23:34:20.657251 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 10 23:34:20.657331 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 10 23:34:20.668629 systemd[1]: Stopped target paths.target - Path Units.
Jul 10 23:34:20.678879 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 10 23:34:20.682399 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 23:34:20.692282 systemd[1]: Stopped target slices.target - Slice Units. Jul 10 23:34:20.703385 systemd[1]: Stopped target sockets.target - Socket Units. Jul 10 23:34:20.714831 systemd[1]: iscsid.socket: Deactivated successfully. Jul 10 23:34:20.714893 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 10 23:34:20.725627 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 10 23:34:20.725669 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 10 23:34:20.736304 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 10 23:34:20.736376 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 10 23:34:20.746562 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 10 23:34:20.746611 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 10 23:34:20.757816 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 10 23:34:20.769932 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 10 23:34:20.781123 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 10 23:34:20.781218 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 10 23:34:20.787182 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 10 23:34:20.787294 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 10 23:34:20.800187 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 10 23:34:20.800338 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 10 23:34:20.822664 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 10 23:34:20.822979 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Jul 10 23:34:21.070978 kernel: hv_netvsc 0022487b-bdbb-0022-487b-bdbb0022487b eth0: Data path switched from VF: enP61279s1 Jul 10 23:34:20.823112 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 10 23:34:20.837935 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 10 23:34:20.838593 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 10 23:34:20.838651 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 10 23:34:20.866573 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 10 23:34:20.879525 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 10 23:34:20.879645 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 23:34:20.891082 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 23:34:20.891140 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 10 23:34:20.905947 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 10 23:34:20.906008 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 10 23:34:20.912347 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 10 23:34:20.912404 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 23:34:20.928975 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 23:34:20.939842 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 10 23:34:20.939916 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 10 23:34:20.962540 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 10 23:34:20.962713 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jul 10 23:34:20.975279 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 10 23:34:20.975331 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 10 23:34:20.986677 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 10 23:34:20.986709 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 23:34:20.998748 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 10 23:34:20.998805 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 10 23:34:21.274604 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). Jul 10 23:34:21.014135 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 10 23:34:21.014184 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 10 23:34:21.030897 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 10 23:34:21.030961 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 23:34:21.075487 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 10 23:34:21.089721 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 10 23:34:21.089790 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 23:34:21.107810 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 23:34:21.107867 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 23:34:21.121741 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 10 23:34:21.121874 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 10 23:34:21.122181 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 10 23:34:21.122276 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Jul 10 23:34:21.132508 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 10 23:34:21.132601 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 10 23:34:21.147009 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 10 23:34:21.176599 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 10 23:34:21.204477 systemd[1]: Switching root. Jul 10 23:34:21.384437 systemd-journald[218]: Journal stopped Jul 10 23:34:26.151244 kernel: SELinux: policy capability network_peer_controls=1 Jul 10 23:34:26.151267 kernel: SELinux: policy capability open_perms=1 Jul 10 23:34:26.151278 kernel: SELinux: policy capability extended_socket_class=1 Jul 10 23:34:26.151285 kernel: SELinux: policy capability always_check_network=0 Jul 10 23:34:26.151295 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 10 23:34:26.151302 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 10 23:34:26.151311 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 10 23:34:26.151318 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 10 23:34:26.151326 kernel: audit: type=1403 audit(1752190462.755:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 10 23:34:26.151336 systemd[1]: Successfully loaded SELinux policy in 124.811ms. Jul 10 23:34:26.151347 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.003ms. Jul 10 23:34:26.151357 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 10 23:34:26.151392 systemd[1]: Detected virtualization microsoft. Jul 10 23:34:26.151405 systemd[1]: Detected architecture arm64. Jul 10 23:34:26.151417 systemd[1]: Detected first boot. 
Jul 10 23:34:26.151429 systemd[1]: Hostname set to . Jul 10 23:34:26.151437 systemd[1]: Initializing machine ID from random generator. Jul 10 23:34:26.151446 zram_generator::config[1162]: No configuration found. Jul 10 23:34:26.151455 kernel: NET: Registered PF_VSOCK protocol family Jul 10 23:34:26.151463 systemd[1]: Populated /etc with preset unit settings. Jul 10 23:34:26.151473 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 10 23:34:26.151482 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 10 23:34:26.151492 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 10 23:34:26.151500 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 10 23:34:26.151509 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 10 23:34:26.151519 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 10 23:34:26.151527 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 10 23:34:26.151536 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 10 23:34:26.151545 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 10 23:34:26.151556 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 10 23:34:26.151565 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 10 23:34:26.151574 systemd[1]: Created slice user.slice - User and Session Slice. Jul 10 23:34:26.151583 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 23:34:26.151595 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 23:34:26.151605 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Jul 10 23:34:26.151615 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 10 23:34:26.151624 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 10 23:34:26.151635 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 10 23:34:26.151644 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 10 23:34:26.151653 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 23:34:26.151664 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 10 23:34:26.151673 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 10 23:34:26.151683 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 10 23:34:26.151691 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 10 23:34:26.151700 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 23:34:26.151711 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 23:34:26.151720 systemd[1]: Reached target slices.target - Slice Units. Jul 10 23:34:26.151729 systemd[1]: Reached target swap.target - Swaps. Jul 10 23:34:26.151738 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 10 23:34:26.151747 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 10 23:34:26.151756 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 10 23:34:26.151767 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 10 23:34:26.151776 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 10 23:34:26.151786 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jul 10 23:34:26.151799 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 10 23:34:26.151809 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 10 23:34:26.151819 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 10 23:34:26.151828 systemd[1]: Mounting media.mount - External Media Directory... Jul 10 23:34:26.151839 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 10 23:34:26.151848 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 10 23:34:26.151858 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 10 23:34:26.151867 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 10 23:34:26.151877 systemd[1]: Reached target machines.target - Containers. Jul 10 23:34:26.151886 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 10 23:34:26.151895 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 23:34:26.151904 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 10 23:34:26.151915 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 10 23:34:26.151924 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 23:34:26.151933 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 23:34:26.151943 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 23:34:26.151952 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 10 23:34:26.151961 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jul 10 23:34:26.151971 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 10 23:34:26.151980 kernel: fuse: init (API version 7.39) Jul 10 23:34:26.151989 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 10 23:34:26.151999 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 10 23:34:26.152007 kernel: loop: module loaded Jul 10 23:34:26.152017 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 10 23:34:26.152026 systemd[1]: Stopped systemd-fsck-usr.service. Jul 10 23:34:26.152036 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 23:34:26.152045 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 10 23:34:26.152054 kernel: ACPI: bus type drm_connector registered Jul 10 23:34:26.152062 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 10 23:34:26.152073 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 10 23:34:26.152100 systemd-journald[1266]: Collecting audit messages is disabled. Jul 10 23:34:26.152120 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 10 23:34:26.152130 systemd-journald[1266]: Journal started Jul 10 23:34:26.152151 systemd-journald[1266]: Runtime Journal (/run/log/journal/2721d510906f42658053b1e7ee18bf75) is 8M, max 78.5M, 70.5M free. Jul 10 23:34:25.216348 systemd[1]: Queued start job for default target multi-user.target. Jul 10 23:34:25.227317 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 10 23:34:25.227808 systemd[1]: systemd-journald.service: Deactivated successfully. 
Jul 10 23:34:25.229619 systemd[1]: systemd-journald.service: Consumed 3.417s CPU time. Jul 10 23:34:26.190348 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 10 23:34:26.204582 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 23:34:26.217410 systemd[1]: verity-setup.service: Deactivated successfully. Jul 10 23:34:26.217488 systemd[1]: Stopped verity-setup.service. Jul 10 23:34:26.234401 systemd[1]: Started systemd-journald.service - Journal Service. Jul 10 23:34:26.233845 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 10 23:34:26.240060 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 10 23:34:26.246741 systemd[1]: Mounted media.mount - External Media Directory. Jul 10 23:34:26.252671 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 10 23:34:26.259029 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 10 23:34:26.265847 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 10 23:34:26.273426 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 10 23:34:26.280320 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 23:34:26.288464 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 10 23:34:26.288632 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 10 23:34:26.295785 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 23:34:26.295957 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 23:34:26.303972 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 23:34:26.304133 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 23:34:26.310676 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jul 10 23:34:26.310835 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 23:34:26.318194 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 10 23:34:26.318360 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 10 23:34:26.324885 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 23:34:26.325038 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 23:34:26.331428 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 10 23:34:26.337825 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 23:34:26.345545 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 10 23:34:26.353028 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 10 23:34:26.360265 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 23:34:26.378122 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 10 23:34:26.389456 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 10 23:34:26.397140 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 10 23:34:26.403637 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 10 23:34:26.403680 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 23:34:26.410748 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 10 23:34:26.419139 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 10 23:34:26.426919 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Jul 10 23:34:26.432783 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 23:34:26.433978 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 10 23:34:26.441544 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 10 23:34:26.448284 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 23:34:26.451562 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 10 23:34:26.458964 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 23:34:26.460606 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 23:34:26.476562 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 10 23:34:26.484536 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 10 23:34:26.493158 systemd-journald[1266]: Time spent on flushing to /var/log/journal/2721d510906f42658053b1e7ee18bf75 is 71.004ms for 910 entries. Jul 10 23:34:26.493158 systemd-journald[1266]: System Journal (/var/log/journal/2721d510906f42658053b1e7ee18bf75) is 11.8M, max 2.6G, 2.6G free. Jul 10 23:34:26.627427 systemd-journald[1266]: Received client request to flush runtime journal. Jul 10 23:34:26.627529 kernel: loop0: detected capacity change from 0 to 28720 Jul 10 23:34:26.627555 systemd-journald[1266]: /var/log/journal/2721d510906f42658053b1e7ee18bf75/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Jul 10 23:34:26.627580 systemd-journald[1266]: Rotating system journal. Jul 10 23:34:26.494644 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
Jul 10 23:34:26.514242 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 10 23:34:26.525734 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 10 23:34:26.552461 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 10 23:34:26.561361 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 10 23:34:26.570039 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 23:34:26.584791 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 10 23:34:26.601141 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 10 23:34:26.608495 udevadm[1305]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 10 23:34:26.629481 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 10 23:34:26.671922 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 10 23:34:26.672642 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 10 23:34:26.761816 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 10 23:34:26.776538 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 23:34:26.815305 systemd-tmpfiles[1319]: ACLs are not supported, ignoring. Jul 10 23:34:26.815324 systemd-tmpfiles[1319]: ACLs are not supported, ignoring. Jul 10 23:34:26.820127 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 10 23:34:26.883401 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 10 23:34:27.002407 kernel: loop1: detected capacity change from 0 to 207008 Jul 10 23:34:27.046400 kernel: loop2: detected capacity change from 0 to 113512 Jul 10 23:34:27.351608 kernel: loop3: detected capacity change from 0 to 123192 Jul 10 23:34:27.580402 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 10 23:34:27.592570 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 23:34:27.618578 systemd-udevd[1328]: Using default interface naming scheme 'v255'. Jul 10 23:34:27.641598 kernel: loop4: detected capacity change from 0 to 28720 Jul 10 23:34:27.650431 kernel: loop5: detected capacity change from 0 to 207008 Jul 10 23:34:27.661393 kernel: loop6: detected capacity change from 0 to 113512 Jul 10 23:34:27.671404 kernel: loop7: detected capacity change from 0 to 123192 Jul 10 23:34:27.675409 (sd-merge)[1330]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jul 10 23:34:27.675848 (sd-merge)[1330]: Merged extensions into '/usr'. Jul 10 23:34:27.680522 systemd[1]: Reload requested from client PID 1302 ('systemd-sysext') (unit systemd-sysext.service)... Jul 10 23:34:27.680539 systemd[1]: Reloading... Jul 10 23:34:27.751547 zram_generator::config[1360]: No configuration found. Jul 10 23:34:27.901544 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 23:34:28.008459 systemd[1]: Reloading finished in 327 ms. Jul 10 23:34:28.020378 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 23:34:28.037062 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Jul 10 23:34:28.061269 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 10 23:34:28.074394 kernel: mousedev: PS/2 mouse device common for all mice Jul 10 23:34:28.077607 systemd[1]: Starting ensure-sysext.service... Jul 10 23:34:28.093659 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 23:34:28.105282 kernel: hv_vmbus: registering driver hv_balloon Jul 10 23:34:28.105390 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jul 10 23:34:28.108619 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 23:34:28.121829 kernel: hv_balloon: Memory hot add disabled on ARM64 Jul 10 23:34:28.131415 kernel: hv_vmbus: registering driver hyperv_fb Jul 10 23:34:28.146074 systemd[1]: Reload requested from client PID 1447 ('systemctl') (unit ensure-sysext.service)... Jul 10 23:34:28.146090 systemd[1]: Reloading... Jul 10 23:34:28.195236 systemd-tmpfiles[1450]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 10 23:34:28.195961 systemd-tmpfiles[1450]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 10 23:34:28.197910 systemd-tmpfiles[1450]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 10 23:34:28.202779 systemd-tmpfiles[1450]: ACLs are not supported, ignoring. Jul 10 23:34:28.202835 systemd-tmpfiles[1450]: ACLs are not supported, ignoring. Jul 10 23:34:28.214073 systemd-tmpfiles[1450]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 23:34:28.214084 systemd-tmpfiles[1450]: Skipping /boot Jul 10 23:34:28.248756 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jul 10 23:34:28.248850 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jul 10 23:34:28.252801 systemd-tmpfiles[1450]: Detected autofs mount point /boot during canonicalization of boot. 
Jul 10 23:34:28.252947 systemd-tmpfiles[1450]: Skipping /boot Jul 10 23:34:28.278598 kernel: Console: switching to colour dummy device 80x25 Jul 10 23:34:28.288653 kernel: Console: switching to colour frame buffer device 128x48 Jul 10 23:34:28.288758 zram_generator::config[1488]: No configuration found. Jul 10 23:34:28.335398 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1430) Jul 10 23:34:28.471359 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 23:34:28.570739 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 10 23:34:28.578307 systemd[1]: Reloading finished in 430 ms. Jul 10 23:34:28.601717 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 23:34:28.635412 systemd[1]: Finished ensure-sysext.service. Jul 10 23:34:28.650199 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 10 23:34:28.678723 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 23:34:28.705964 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 10 23:34:28.712884 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 23:34:28.714152 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 10 23:34:28.721714 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 23:34:28.729601 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 23:34:28.739278 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 23:34:28.749638 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jul 10 23:34:28.756523 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 23:34:28.758593 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 10 23:34:28.766185 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 23:34:28.770605 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 10 23:34:28.790665 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 23:34:28.797047 systemd[1]: Reached target time-set.target - System Time Set. Jul 10 23:34:28.812617 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 10 23:34:28.822579 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 10 23:34:28.825563 lvm[1610]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 23:34:28.841804 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 23:34:28.852477 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 23:34:28.852698 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 23:34:28.863438 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 23:34:28.865404 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 23:34:28.873214 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 10 23:34:28.882784 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 23:34:28.882969 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jul 10 23:34:28.890763 augenrules[1636]: No rules Jul 10 23:34:28.891414 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 23:34:28.893430 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 23:34:28.899890 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 23:34:28.900067 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 23:34:28.907647 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 10 23:34:28.925251 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 10 23:34:28.938854 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 10 23:34:28.945729 lvm[1649]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 23:34:28.946817 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 23:34:28.947188 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 23:34:28.948273 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 10 23:34:28.956033 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 10 23:34:28.974106 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 10 23:34:28.985751 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 10 23:34:29.068337 systemd-resolved[1624]: Positive Trust Anchors: Jul 10 23:34:29.068696 systemd-resolved[1624]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 23:34:29.068787 systemd-resolved[1624]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 10 23:34:29.088697 systemd-resolved[1624]: Using system hostname 'ci-4230.2.1-n-d24186f489'. Jul 10 23:34:29.088923 systemd-networkd[1448]: lo: Link UP Jul 10 23:34:29.089171 systemd-networkd[1448]: lo: Gained carrier Jul 10 23:34:29.090261 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 23:34:29.091594 systemd-networkd[1448]: Enumeration completed Jul 10 23:34:29.092038 systemd-networkd[1448]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 23:34:29.092048 systemd-networkd[1448]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 23:34:29.098137 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 23:34:29.101508 systemd[1]: Reached target network.target - Network. Jul 10 23:34:29.101720 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 23:34:29.105542 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 10 23:34:29.107564 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 10 23:34:29.426576 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 10 23:34:29.483405 kernel: mlx5_core ef5f:00:02.0 enP61279s1: Link up Jul 10 23:34:29.509423 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 10 23:34:29.517685 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 23:34:29.529440 kernel: hv_netvsc 0022487b-bdbb-0022-487b-bdbb0022487b eth0: Data path switched to VF: enP61279s1 Jul 10 23:34:29.531906 systemd-networkd[1448]: enP61279s1: Link UP Jul 10 23:34:29.532280 systemd-networkd[1448]: eth0: Link UP Jul 10 23:34:29.532283 systemd-networkd[1448]: eth0: Gained carrier Jul 10 23:34:29.532299 systemd-networkd[1448]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 23:34:29.534469 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 10 23:34:29.545136 systemd-networkd[1448]: enP61279s1: Gained carrier Jul 10 23:34:29.551457 systemd-networkd[1448]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 10 23:34:30.991710 ldconfig[1297]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 10 23:34:31.004188 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 10 23:34:31.018789 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 10 23:34:31.033960 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 10 23:34:31.041031 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 23:34:31.047012 systemd-networkd[1448]: enP61279s1: Gained IPv6LL Jul 10 23:34:31.047919 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Jul 10 23:34:31.055509 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 10 23:34:31.063581 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 10 23:34:31.072743 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 10 23:34:31.080739 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 10 23:34:31.089606 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 10 23:34:31.089643 systemd[1]: Reached target paths.target - Path Units. Jul 10 23:34:31.095489 systemd[1]: Reached target timers.target - Timer Units. Jul 10 23:34:31.114416 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 10 23:34:31.122816 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 10 23:34:31.130723 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 10 23:34:31.138587 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 10 23:34:31.145843 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 10 23:34:31.154423 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 10 23:34:31.160964 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 10 23:34:31.168442 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 10 23:34:31.174634 systemd[1]: Reached target sockets.target - Socket Units. Jul 10 23:34:31.180282 systemd[1]: Reached target basic.target - Basic System. Jul 10 23:34:31.185732 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Jul 10 23:34:31.185766 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 10 23:34:31.205499 systemd[1]: Starting chronyd.service - NTP client/server... Jul 10 23:34:31.223601 systemd[1]: Starting containerd.service - containerd container runtime... Jul 10 23:34:31.236556 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 10 23:34:31.248485 (chronyd)[1673]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jul 10 23:34:31.249536 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 10 23:34:31.259328 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 10 23:34:31.269508 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 10 23:34:31.271544 jq[1680]: false Jul 10 23:34:31.277470 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 10 23:34:31.277512 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jul 10 23:34:31.278867 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jul 10 23:34:31.287889 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jul 10 23:34:31.296002 KVP[1682]: KVP starting; pid is:1682 Jul 10 23:34:31.289609 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Jul 10 23:34:31.301073 KVP[1682]: KVP LIC Version: 3.1 Jul 10 23:34:31.303844 kernel: hv_utils: KVP IC version 4.0 Jul 10 23:34:31.303293 chronyd[1685]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jul 10 23:34:31.310550 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 10 23:34:31.319298 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 10 23:34:31.327601 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 10 23:34:31.338511 chronyd[1685]: Timezone right/UTC failed leap second check, ignoring Jul 10 23:34:31.338686 chronyd[1685]: Loaded seccomp filter (level 2) Jul 10 23:34:31.342677 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 10 23:34:31.353644 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 10 23:34:31.354244 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 10 23:34:31.357435 systemd[1]: Starting update-engine.service - Update Engine... Jul 10 23:34:31.368486 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 10 23:34:31.379627 systemd[1]: Started chronyd.service - NTP client/server. 
Jul 10 23:34:31.391170 dbus-daemon[1676]: [system] SELinux support is enabled Jul 10 23:34:31.395959 extend-filesystems[1681]: Found loop4 Jul 10 23:34:31.414780 extend-filesystems[1681]: Found loop5 Jul 10 23:34:31.414780 extend-filesystems[1681]: Found loop6 Jul 10 23:34:31.414780 extend-filesystems[1681]: Found loop7 Jul 10 23:34:31.414780 extend-filesystems[1681]: Found sda Jul 10 23:34:31.414780 extend-filesystems[1681]: Found sda1 Jul 10 23:34:31.414780 extend-filesystems[1681]: Found sda2 Jul 10 23:34:31.414780 extend-filesystems[1681]: Found sda3 Jul 10 23:34:31.414780 extend-filesystems[1681]: Found usr Jul 10 23:34:31.414780 extend-filesystems[1681]: Found sda4 Jul 10 23:34:31.414780 extend-filesystems[1681]: Found sda6 Jul 10 23:34:31.414780 extend-filesystems[1681]: Found sda7 Jul 10 23:34:31.414780 extend-filesystems[1681]: Found sda9 Jul 10 23:34:31.414780 extend-filesystems[1681]: Checking size of /dev/sda9 Jul 10 23:34:31.396733 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jul 10 23:34:31.671025 coreos-metadata[1675]: Jul 10 23:34:31.483 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 10 23:34:31.671025 coreos-metadata[1675]: Jul 10 23:34:31.505 INFO Fetch successful Jul 10 23:34:31.671025 coreos-metadata[1675]: Jul 10 23:34:31.505 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jul 10 23:34:31.671025 coreos-metadata[1675]: Jul 10 23:34:31.505 INFO Fetch successful Jul 10 23:34:31.671025 coreos-metadata[1675]: Jul 10 23:34:31.505 INFO Fetching http://168.63.129.16/machine/151e773d-f43d-4307-986f-68b098b6def6/d2569cc3%2D51d0%2D4162%2D8055%2D61b4787e4734.%5Fci%2D4230.2.1%2Dn%2Dd24186f489?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jul 10 23:34:31.671025 coreos-metadata[1675]: Jul 10 23:34:31.505 INFO Fetch successful Jul 10 23:34:31.671025 coreos-metadata[1675]: Jul 10 23:34:31.505 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jul 10 23:34:31.671025 coreos-metadata[1675]: Jul 10 23:34:31.523 INFO Fetch successful Jul 10 23:34:31.671272 extend-filesystems[1681]: Old size kept for /dev/sda9 Jul 10 23:34:31.671272 extend-filesystems[1681]: Found sr0 Jul 10 23:34:31.721847 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1739) Jul 10 23:34:31.410949 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 10 23:34:31.721969 update_engine[1691]: I20250710 23:34:31.450989 1691 main.cc:92] Flatcar Update Engine starting Jul 10 23:34:31.721969 update_engine[1691]: I20250710 23:34:31.462917 1691 update_check_scheduler.cc:74] Next update check in 6m39s Jul 10 23:34:31.411155 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 10 23:34:31.420857 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jul 10 23:34:31.730714 jq[1693]: true Jul 10 23:34:31.421686 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 10 23:34:31.730926 tar[1700]: linux-arm64/LICENSE Jul 10 23:34:31.730926 tar[1700]: linux-arm64/helm Jul 10 23:34:31.456422 (ntainerd)[1709]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 10 23:34:31.741726 jq[1708]: true Jul 10 23:34:31.461696 systemd[1]: motdgen.service: Deactivated successfully. Jul 10 23:34:31.461905 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 10 23:34:31.499698 systemd-networkd[1448]: eth0: Gained IPv6LL Jul 10 23:34:31.742206 bash[1733]: Updated "/home/core/.ssh/authorized_keys" Jul 10 23:34:31.511018 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 10 23:34:31.511064 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 10 23:34:31.556701 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 10 23:34:31.556728 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 10 23:34:31.561565 systemd-logind[1690]: New seat seat0. Jul 10 23:34:31.564673 systemd-logind[1690]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 10 23:34:31.592024 systemd[1]: Started systemd-logind.service - User Login Management. Jul 10 23:34:31.608613 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 10 23:34:31.632871 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Jul 10 23:34:31.633512 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 10 23:34:31.661534 systemd[1]: Started update-engine.service - Update Engine. Jul 10 23:34:31.685832 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 10 23:34:31.714486 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 10 23:34:31.734103 systemd[1]: Reached target network-online.target - Network is Online. Jul 10 23:34:31.758747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:34:31.767074 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 10 23:34:31.775260 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 10 23:34:31.775631 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 10 23:34:31.785828 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 10 23:34:31.889490 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 10 23:34:32.113974 containerd[1709]: time="2025-07-10T23:34:32.113832460Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jul 10 23:34:32.128560 locksmithd[1770]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 10 23:34:32.177451 sshd_keygen[1707]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 10 23:34:32.202945 containerd[1709]: time="2025-07-10T23:34:32.202896900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 10 23:34:32.210489 containerd[1709]: time="2025-07-10T23:34:32.209579940Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 10 23:34:32.210489 containerd[1709]: time="2025-07-10T23:34:32.209627420Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 10 23:34:32.210489 containerd[1709]: time="2025-07-10T23:34:32.209645820Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 10 23:34:32.210489 containerd[1709]: time="2025-07-10T23:34:32.209805300Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 10 23:34:32.210489 containerd[1709]: time="2025-07-10T23:34:32.209822980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 10 23:34:32.210489 containerd[1709]: time="2025-07-10T23:34:32.209886620Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 23:34:32.210489 containerd[1709]: time="2025-07-10T23:34:32.209898500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 10 23:34:32.210489 containerd[1709]: time="2025-07-10T23:34:32.210095620Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 23:34:32.210489 containerd[1709]: time="2025-07-10T23:34:32.210111580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jul 10 23:34:32.210489 containerd[1709]: time="2025-07-10T23:34:32.210124980Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 23:34:32.210489 containerd[1709]: time="2025-07-10T23:34:32.210134980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 10 23:34:32.210764 containerd[1709]: time="2025-07-10T23:34:32.210210380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 10 23:34:32.210855 containerd[1709]: time="2025-07-10T23:34:32.210835700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 10 23:34:32.211041 containerd[1709]: time="2025-07-10T23:34:32.211023180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 23:34:32.211097 containerd[1709]: time="2025-07-10T23:34:32.211083940Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 10 23:34:32.211232 containerd[1709]: time="2025-07-10T23:34:32.211216300Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 10 23:34:32.211340 containerd[1709]: time="2025-07-10T23:34:32.211324700Z" level=info msg="metadata content store policy set" policy=shared Jul 10 23:34:32.215134 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 10 23:34:32.229637 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 10 23:34:32.241580 containerd[1709]: time="2025-07-10T23:34:32.241465420Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Jul 10 23:34:32.241580 containerd[1709]: time="2025-07-10T23:34:32.241581100Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 10 23:34:32.241717 containerd[1709]: time="2025-07-10T23:34:32.241603860Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 10 23:34:32.241717 containerd[1709]: time="2025-07-10T23:34:32.241621180Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 10 23:34:32.241717 containerd[1709]: time="2025-07-10T23:34:32.241637580Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 10 23:34:32.247430 containerd[1709]: time="2025-07-10T23:34:32.241797020Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 10 23:34:32.247430 containerd[1709]: time="2025-07-10T23:34:32.242045180Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 10 23:34:32.247430 containerd[1709]: time="2025-07-10T23:34:32.242160900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 10 23:34:32.247430 containerd[1709]: time="2025-07-10T23:34:32.242179580Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 10 23:34:32.247430 containerd[1709]: time="2025-07-10T23:34:32.242201020Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 10 23:34:32.247430 containerd[1709]: time="2025-07-10T23:34:32.242224300Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Jul 10 23:34:32.247430 containerd[1709]: time="2025-07-10T23:34:32.242238180Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 10 23:34:32.247430 containerd[1709]: time="2025-07-10T23:34:32.242250300Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 10 23:34:32.247430 containerd[1709]: time="2025-07-10T23:34:32.242264580Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 10 23:34:32.247430 containerd[1709]: time="2025-07-10T23:34:32.242279380Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 10 23:34:32.247430 containerd[1709]: time="2025-07-10T23:34:32.242293620Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 10 23:34:32.247430 containerd[1709]: time="2025-07-10T23:34:32.242307100Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 10 23:34:32.247430 containerd[1709]: time="2025-07-10T23:34:32.242318820Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 10 23:34:32.247430 containerd[1709]: time="2025-07-10T23:34:32.242339020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 10 23:34:32.241915 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jul 10 23:34:32.247817 containerd[1709]: time="2025-07-10T23:34:32.242354060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 10 23:34:32.251767 containerd[1709]: time="2025-07-10T23:34:32.250812300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jul 10 23:34:32.251767 containerd[1709]: time="2025-07-10T23:34:32.250910980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 10 23:34:32.251767 containerd[1709]: time="2025-07-10T23:34:32.250929980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 10 23:34:32.251767 containerd[1709]: time="2025-07-10T23:34:32.250947820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 10 23:34:32.251767 containerd[1709]: time="2025-07-10T23:34:32.250960620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 10 23:34:32.251767 containerd[1709]: time="2025-07-10T23:34:32.251013380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 10 23:34:32.251767 containerd[1709]: time="2025-07-10T23:34:32.251029980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 10 23:34:32.251767 containerd[1709]: time="2025-07-10T23:34:32.251048380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 10 23:34:32.251767 containerd[1709]: time="2025-07-10T23:34:32.251061940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 10 23:34:32.251767 containerd[1709]: time="2025-07-10T23:34:32.251165340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 10 23:34:32.251767 containerd[1709]: time="2025-07-10T23:34:32.251184260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 10 23:34:32.251767 containerd[1709]: time="2025-07-10T23:34:32.251201140Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jul 10 23:34:32.251767 containerd[1709]: time="2025-07-10T23:34:32.251274620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 10 23:34:32.251767 containerd[1709]: time="2025-07-10T23:34:32.251294460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 10 23:34:32.253817 containerd[1709]: time="2025-07-10T23:34:32.251305900Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 10 23:34:32.253817 containerd[1709]: time="2025-07-10T23:34:32.252906100Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 10 23:34:32.253817 containerd[1709]: time="2025-07-10T23:34:32.252936100Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 10 23:34:32.253817 containerd[1709]: time="2025-07-10T23:34:32.252947220Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 10 23:34:32.253817 containerd[1709]: time="2025-07-10T23:34:32.252999620Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 10 23:34:32.253817 containerd[1709]: time="2025-07-10T23:34:32.253012380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 10 23:34:32.253817 containerd[1709]: time="2025-07-10T23:34:32.253025300Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 10 23:34:32.253817 containerd[1709]: time="2025-07-10T23:34:32.253035900Z" level=info msg="NRI interface is disabled by configuration." 
Jul 10 23:34:32.253817 containerd[1709]: time="2025-07-10T23:34:32.253045980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 10 23:34:32.253310 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 23:34:32.254113 containerd[1709]: time="2025-07-10T23:34:32.253641980Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 
SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 10 23:34:32.254113 containerd[1709]: time="2025-07-10T23:34:32.253730420Z" level=info msg="Connect containerd service" Jul 10 23:34:32.254113 containerd[1709]: time="2025-07-10T23:34:32.253772980Z" level=info msg="using legacy CRI server" Jul 10 23:34:32.254113 containerd[1709]: time="2025-07-10T23:34:32.253780180Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 10 23:34:32.254329 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 10 23:34:32.262973 containerd[1709]: time="2025-07-10T23:34:32.260259860Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 10 23:34:32.266907 containerd[1709]: time="2025-07-10T23:34:32.266864780Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 23:34:32.270733 containerd[1709]: time="2025-07-10T23:34:32.268359300Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jul 10 23:34:32.270733 containerd[1709]: time="2025-07-10T23:34:32.269481940Z" level=info msg="Start subscribing containerd event" Jul 10 23:34:32.270733 containerd[1709]: time="2025-07-10T23:34:32.269598260Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 23:34:32.270733 containerd[1709]: time="2025-07-10T23:34:32.269658340Z" level=info msg="Start recovering state" Jul 10 23:34:32.270733 containerd[1709]: time="2025-07-10T23:34:32.269735980Z" level=info msg="Start event monitor" Jul 10 23:34:32.270733 containerd[1709]: time="2025-07-10T23:34:32.269749260Z" level=info msg="Start snapshots syncer" Jul 10 23:34:32.270733 containerd[1709]: time="2025-07-10T23:34:32.269759540Z" level=info msg="Start cni network conf syncer for default" Jul 10 23:34:32.270733 containerd[1709]: time="2025-07-10T23:34:32.269766460Z" level=info msg="Start streaming server" Jul 10 23:34:32.270733 containerd[1709]: time="2025-07-10T23:34:32.269840180Z" level=info msg="containerd successfully booted in 0.158190s" Jul 10 23:34:32.271994 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 10 23:34:32.290264 systemd[1]: Started containerd.service - containerd container runtime. Jul 10 23:34:32.326495 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 10 23:34:32.342314 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jul 10 23:34:32.363035 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 10 23:34:32.378732 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 10 23:34:32.387002 systemd[1]: Reached target getty.target - Login Prompts. Jul 10 23:34:32.527628 tar[1700]: linux-arm64/README.md Jul 10 23:34:32.542560 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 10 23:34:32.807334 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 10 23:34:32.814719 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 10 23:34:32.825492 systemd[1]: Startup finished in 707ms (kernel) + 11.904s (initrd) + 10.193s (userspace) = 22.805s.
Jul 10 23:34:32.835826 (kubelet)[1860]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 10 23:34:33.141664 login[1850]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:34:33.149242 login[1851]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:34:33.152122 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 10 23:34:33.158934 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 10 23:34:33.175637 systemd-logind[1690]: New session 1 of user core.
Jul 10 23:34:33.187100 systemd-logind[1690]: New session 2 of user core.
Jul 10 23:34:33.195784 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 10 23:34:33.204889 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 10 23:34:33.211546 (systemd)[1872]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 10 23:34:33.214693 systemd-logind[1690]: New session c1 of user core.
Jul 10 23:34:33.324097 kubelet[1860]: E0710 23:34:33.323952 1860 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 23:34:33.326314 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 23:34:33.326810 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 23:34:33.327192 systemd[1]: kubelet.service: Consumed 727ms CPU time, 256.4M memory peak.
Jul 10 23:34:33.385570 systemd[1872]: Queued start job for default target default.target.
Jul 10 23:34:33.393764 systemd[1872]: Created slice app.slice - User Application Slice.
Jul 10 23:34:33.393796 systemd[1872]: Reached target paths.target - Paths.
Jul 10 23:34:33.393836 systemd[1872]: Reached target timers.target - Timers.
Jul 10 23:34:33.396058 systemd[1872]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 10 23:34:33.407103 systemd[1872]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 10 23:34:33.407351 systemd[1872]: Reached target sockets.target - Sockets.
Jul 10 23:34:33.407522 systemd[1872]: Reached target basic.target - Basic System.
Jul 10 23:34:33.407688 systemd[1872]: Reached target default.target - Main User Target.
Jul 10 23:34:33.407816 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 10 23:34:33.407818 systemd[1872]: Startup finished in 186ms.
Jul 10 23:34:33.413623 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 10 23:34:33.414487 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 10 23:34:34.032527 waagent[1848]: 2025-07-10T23:34:34.032337Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Jul 10 23:34:34.038487 waagent[1848]: 2025-07-10T23:34:34.038408Z INFO Daemon Daemon OS: flatcar 4230.2.1
Jul 10 23:34:34.043244 waagent[1848]: 2025-07-10T23:34:34.043182Z INFO Daemon Daemon Python: 3.11.11
Jul 10 23:34:34.048826 waagent[1848]: 2025-07-10T23:34:34.048758Z INFO Daemon Daemon Run daemon
Jul 10 23:34:34.053528 waagent[1848]: 2025-07-10T23:34:34.053466Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.2.1'
Jul 10 23:34:34.063463 waagent[1848]: 2025-07-10T23:34:34.063350Z INFO Daemon Daemon Using waagent for provisioning
Jul 10 23:34:34.069657 waagent[1848]: 2025-07-10T23:34:34.069605Z INFO Daemon Daemon Activate resource disk
Jul 10 23:34:34.074471 waagent[1848]: 2025-07-10T23:34:34.074419Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Jul 10 23:34:34.087433 waagent[1848]: 2025-07-10T23:34:34.087326Z INFO Daemon Daemon Found device: None
Jul 10 23:34:34.091935 waagent[1848]: 2025-07-10T23:34:34.091884Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Jul 10 23:34:34.101458 waagent[1848]: 2025-07-10T23:34:34.101353Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Jul 10 23:34:34.113901 waagent[1848]: 2025-07-10T23:34:34.113847Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jul 10 23:34:34.119991 waagent[1848]: 2025-07-10T23:34:34.119935Z INFO Daemon Daemon Running default provisioning handler
Jul 10 23:34:34.132609 waagent[1848]: 2025-07-10T23:34:34.132504Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Jul 10 23:34:34.148602 waagent[1848]: 2025-07-10T23:34:34.148531Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jul 10 23:34:34.159238 waagent[1848]: 2025-07-10T23:34:34.159175Z INFO Daemon Daemon cloud-init is enabled: False
Jul 10 23:34:34.165060 waagent[1848]: 2025-07-10T23:34:34.165001Z INFO Daemon Daemon Copying ovf-env.xml
Jul 10 23:34:34.253591 waagent[1848]: 2025-07-10T23:34:34.253105Z INFO Daemon Daemon Successfully mounted dvd
Jul 10 23:34:34.346626 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Jul 10 23:34:34.348978 waagent[1848]: 2025-07-10T23:34:34.348888Z INFO Daemon Daemon Detect protocol endpoint
Jul 10 23:34:34.354597 waagent[1848]: 2025-07-10T23:34:34.354533Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jul 10 23:34:34.360927 waagent[1848]: 2025-07-10T23:34:34.360872Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Jul 10 23:34:34.368062 waagent[1848]: 2025-07-10T23:34:34.368004Z INFO Daemon Daemon Test for route to 168.63.129.16
Jul 10 23:34:34.373967 waagent[1848]: 2025-07-10T23:34:34.373904Z INFO Daemon Daemon Route to 168.63.129.16 exists
Jul 10 23:34:34.379945 waagent[1848]: 2025-07-10T23:34:34.379890Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Jul 10 23:34:34.408487 waagent[1848]: 2025-07-10T23:34:34.408440Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Jul 10 23:34:34.415623 waagent[1848]: 2025-07-10T23:34:34.415596Z INFO Daemon Daemon Wire protocol version:2012-11-30
Jul 10 23:34:34.421458 waagent[1848]: 2025-07-10T23:34:34.421405Z INFO Daemon Daemon Server preferred version:2015-04-05
Jul 10 23:34:34.893640 waagent[1848]: 2025-07-10T23:34:34.893540Z INFO Daemon Daemon Initializing goal state during protocol detection
Jul 10 23:34:34.900507 waagent[1848]: 2025-07-10T23:34:34.900439Z INFO Daemon Daemon Forcing an update of the goal state.
Jul 10 23:34:34.910026 waagent[1848]: 2025-07-10T23:34:34.909973Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jul 10 23:34:35.493289 waagent[1848]: 2025-07-10T23:34:35.493240Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175
Jul 10 23:34:35.500114 waagent[1848]: 2025-07-10T23:34:35.500059Z INFO Daemon
Jul 10 23:34:35.503474 waagent[1848]: 2025-07-10T23:34:35.503419Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 7505712a-4ef3-4951-9dfb-ce4055779f3a eTag: 8547799303806111424 source: Fabric]
Jul 10 23:34:35.516317 waagent[1848]: 2025-07-10T23:34:35.516268Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Jul 10 23:34:35.524378 waagent[1848]: 2025-07-10T23:34:35.524318Z INFO Daemon
Jul 10 23:34:35.527522 waagent[1848]: 2025-07-10T23:34:35.527473Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Jul 10 23:34:35.539876 waagent[1848]: 2025-07-10T23:34:35.539836Z INFO Daemon Daemon Downloading artifacts profile blob
Jul 10 23:34:35.628641 waagent[1848]: 2025-07-10T23:34:35.628546Z INFO Daemon Downloaded certificate {'thumbprint': '89F7D233668D51ECFC3CA187FD7BC7F05164B399', 'hasPrivateKey': True}
Jul 10 23:34:35.639276 waagent[1848]: 2025-07-10T23:34:35.639218Z INFO Daemon Downloaded certificate {'thumbprint': '1ADEEB8E16D2F1B1D8E253748DF7FC82F298184B', 'hasPrivateKey': False}
Jul 10 23:34:35.652194 waagent[1848]: 2025-07-10T23:34:35.652136Z INFO Daemon Fetch goal state completed
Jul 10 23:34:35.664860 waagent[1848]: 2025-07-10T23:34:35.664806Z INFO Daemon Daemon Starting provisioning
Jul 10 23:34:35.670625 waagent[1848]: 2025-07-10T23:34:35.670554Z INFO Daemon Daemon Handle ovf-env.xml.
Jul 10 23:34:35.675497 waagent[1848]: 2025-07-10T23:34:35.675440Z INFO Daemon Daemon Set hostname [ci-4230.2.1-n-d24186f489]
Jul 10 23:34:35.695301 waagent[1848]: 2025-07-10T23:34:35.695211Z INFO Daemon Daemon Publish hostname [ci-4230.2.1-n-d24186f489]
Jul 10 23:34:35.702553 waagent[1848]: 2025-07-10T23:34:35.702478Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Jul 10 23:34:35.711086 waagent[1848]: 2025-07-10T23:34:35.711017Z INFO Daemon Daemon Primary interface is [eth0]
Jul 10 23:34:35.725973 systemd-networkd[1448]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 23:34:35.725989 systemd-networkd[1448]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 10 23:34:35.726104 systemd-networkd[1448]: eth0: DHCP lease lost
Jul 10 23:34:35.727498 waagent[1848]: 2025-07-10T23:34:35.727352Z INFO Daemon Daemon Create user account if not exists
Jul 10 23:34:35.733519 waagent[1848]: 2025-07-10T23:34:35.733448Z INFO Daemon Daemon User core already exists, skip useradd
Jul 10 23:34:35.740318 waagent[1848]: 2025-07-10T23:34:35.740236Z INFO Daemon Daemon Configure sudoer
Jul 10 23:34:35.745523 waagent[1848]: 2025-07-10T23:34:35.745351Z INFO Daemon Daemon Configure sshd
Jul 10 23:34:35.750267 waagent[1848]: 2025-07-10T23:34:35.750200Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Jul 10 23:34:35.763775 waagent[1848]: 2025-07-10T23:34:35.763644Z INFO Daemon Daemon Deploy ssh public key.
Jul 10 23:34:35.772442 systemd-networkd[1448]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 10 23:34:36.900649 waagent[1848]: 2025-07-10T23:34:36.900589Z INFO Daemon Daemon Provisioning complete
Jul 10 23:34:36.919971 waagent[1848]: 2025-07-10T23:34:36.919920Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Jul 10 23:34:36.926773 waagent[1848]: 2025-07-10T23:34:36.926706Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Jul 10 23:34:36.938022 waagent[1848]: 2025-07-10T23:34:36.937962Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Jul 10 23:34:37.081125 waagent[1929]: 2025-07-10T23:34:37.081036Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Jul 10 23:34:37.081478 waagent[1929]: 2025-07-10T23:34:37.081196Z INFO ExtHandler ExtHandler OS: flatcar 4230.2.1
Jul 10 23:34:37.081478 waagent[1929]: 2025-07-10T23:34:37.081250Z INFO ExtHandler ExtHandler Python: 3.11.11
Jul 10 23:34:37.115402 waagent[1929]: 2025-07-10T23:34:37.113985Z INFO ExtHandler ExtHandler Distro: flatcar-4230.2.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Jul 10 23:34:37.115402 waagent[1929]: 2025-07-10T23:34:37.114250Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul 10 23:34:37.115402 waagent[1929]: 2025-07-10T23:34:37.114312Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Jul 10 23:34:37.123201 waagent[1929]: 2025-07-10T23:34:37.123120Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jul 10 23:34:37.943384 waagent[1929]: 2025-07-10T23:34:37.267216Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175
Jul 10 23:34:37.943384 waagent[1929]: 2025-07-10T23:34:37.942221Z INFO ExtHandler
Jul 10 23:34:37.943384 waagent[1929]: 2025-07-10T23:34:37.942336Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 599a6068-eaca-4ff3-a38f-30b8867d82db eTag: 8547799303806111424 source: Fabric]
Jul 10 23:34:37.943384 waagent[1929]: 2025-07-10T23:34:37.942726Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Jul 10 23:34:37.948323 waagent[1929]: 2025-07-10T23:34:37.948230Z INFO ExtHandler
Jul 10 23:34:37.948476 waagent[1929]: 2025-07-10T23:34:37.948440Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Jul 10 23:34:37.953671 waagent[1929]: 2025-07-10T23:34:37.953623Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Jul 10 23:34:38.030850 waagent[1929]: 2025-07-10T23:34:38.030752Z INFO ExtHandler Downloaded certificate {'thumbprint': '89F7D233668D51ECFC3CA187FD7BC7F05164B399', 'hasPrivateKey': True}
Jul 10 23:34:38.031265 waagent[1929]: 2025-07-10T23:34:38.031220Z INFO ExtHandler Downloaded certificate {'thumbprint': '1ADEEB8E16D2F1B1D8E253748DF7FC82F298184B', 'hasPrivateKey': False}
Jul 10 23:34:38.031749 waagent[1929]: 2025-07-10T23:34:38.031705Z INFO ExtHandler Fetch goal state completed
Jul 10 23:34:38.047783 waagent[1929]: 2025-07-10T23:34:38.047720Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1929
Jul 10 23:34:38.047946 waagent[1929]: 2025-07-10T23:34:38.047907Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Jul 10 23:34:38.049739 waagent[1929]: 2025-07-10T23:34:38.049690Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.2.1', '', 'Flatcar Container Linux by Kinvolk']
Jul 10 23:34:38.050134 waagent[1929]: 2025-07-10T23:34:38.050089Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Jul 10 23:34:38.090237 waagent[1929]: 2025-07-10T23:34:38.090191Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Jul 10 23:34:38.090569 waagent[1929]: 2025-07-10T23:34:38.090477Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Jul 10 23:34:38.096428 waagent[1929]: 2025-07-10T23:34:38.096243Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Jul 10 23:34:38.102572 systemd[1]: Reload requested from client PID 1946 ('systemctl') (unit waagent.service)...
Jul 10 23:34:38.102871 systemd[1]: Reloading...
Jul 10 23:34:38.193455 zram_generator::config[1985]: No configuration found.
Jul 10 23:34:38.308071 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 23:34:38.412003 systemd[1]: Reloading finished in 308 ms.
Jul 10 23:34:38.429441 waagent[1929]: 2025-07-10T23:34:38.422744Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Jul 10 23:34:38.431120 systemd[1]: Reload requested from client PID 2039 ('systemctl') (unit waagent.service)...
Jul 10 23:34:38.431238 systemd[1]: Reloading...
Jul 10 23:34:38.532439 zram_generator::config[2090]: No configuration found.
Jul 10 23:34:38.628901 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 23:34:38.733615 systemd[1]: Reloading finished in 301 ms.
Jul 10 23:34:38.751438 waagent[1929]: 2025-07-10T23:34:38.744951Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Jul 10 23:34:38.751438 waagent[1929]: 2025-07-10T23:34:38.745144Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Jul 10 23:34:38.959307 waagent[1929]: 2025-07-10T23:34:38.959138Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Jul 10 23:34:38.959912 waagent[1929]: 2025-07-10T23:34:38.959805Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Jul 10 23:34:38.960772 waagent[1929]: 2025-07-10T23:34:38.960656Z INFO ExtHandler ExtHandler Starting env monitor service.
Jul 10 23:34:38.961208 waagent[1929]: 2025-07-10T23:34:38.961058Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Jul 10 23:34:38.961471 waagent[1929]: 2025-07-10T23:34:38.961364Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Jul 10 23:34:38.961727 waagent[1929]: 2025-07-10T23:34:38.961584Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Jul 10 23:34:38.962470 waagent[1929]: 2025-07-10T23:34:38.962025Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul 10 23:34:38.962470 waagent[1929]: 2025-07-10T23:34:38.962108Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Jul 10 23:34:38.962470 waagent[1929]: 2025-07-10T23:34:38.962246Z INFO EnvHandler ExtHandler Configure routes
Jul 10 23:34:38.962470 waagent[1929]: 2025-07-10T23:34:38.962303Z INFO EnvHandler ExtHandler Gateway:None
Jul 10 23:34:38.962470 waagent[1929]: 2025-07-10T23:34:38.962345Z INFO EnvHandler ExtHandler Routes:None
Jul 10 23:34:38.962766 waagent[1929]: 2025-07-10T23:34:38.962712Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Jul 10 23:34:38.962814 waagent[1929]: 2025-07-10T23:34:38.962773Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Jul 10 23:34:38.963074 waagent[1929]: 2025-07-10T23:34:38.962998Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul 10 23:34:38.963112 waagent[1929]: 2025-07-10T23:34:38.963077Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Jul 10 23:34:38.963388 waagent[1929]: 2025-07-10T23:34:38.963308Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Jul 10 23:34:38.963908 waagent[1929]: 2025-07-10T23:34:38.963852Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Jul 10 23:34:38.964311 waagent[1929]: 2025-07-10T23:34:38.964219Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Jul 10 23:34:38.964311 waagent[1929]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Jul 10 23:34:38.964311 waagent[1929]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Jul 10 23:34:38.964311 waagent[1929]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Jul 10 23:34:38.964311 waagent[1929]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Jul 10 23:34:38.964311 waagent[1929]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jul 10 23:34:38.964311 waagent[1929]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jul 10 23:34:39.018206 waagent[1929]: 2025-07-10T23:34:39.018061Z INFO MonitorHandler ExtHandler Network interfaces:
Jul 10 23:34:39.018206 waagent[1929]: Executing ['ip', '-a', '-o', 'link']:
Jul 10 23:34:39.018206 waagent[1929]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jul 10 23:34:39.018206 waagent[1929]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7b:bd:bb brd ff:ff:ff:ff:ff:ff
Jul 10 23:34:39.018206 waagent[1929]: 3: enP61279s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7b:bd:bb brd ff:ff:ff:ff:ff:ff\ altname enP61279p0s2
Jul 10 23:34:39.018206 waagent[1929]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jul 10 23:34:39.018206 waagent[1929]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Jul 10 23:34:39.018206 waagent[1929]: 2: eth0 inet 10.200.20.37/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Jul 10 23:34:39.018206 waagent[1929]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jul 10 23:34:39.018206 waagent[1929]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Jul 10 23:34:39.018206 waagent[1929]: 2: eth0 inet6 fe80::222:48ff:fe7b:bdbb/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jul 10 23:34:39.018206 waagent[1929]: 3: enP61279s1 inet6 fe80::222:48ff:fe7b:bdbb/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jul 10 23:34:39.066763 waagent[1929]: 2025-07-10T23:34:39.066643Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Jul 10 23:34:39.066763 waagent[1929]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 10 23:34:39.066763 waagent[1929]: pkts bytes target prot opt in out source destination
Jul 10 23:34:39.066763 waagent[1929]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jul 10 23:34:39.066763 waagent[1929]: pkts bytes target prot opt in out source destination
Jul 10 23:34:39.066763 waagent[1929]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 10 23:34:39.066763 waagent[1929]: pkts bytes target prot opt in out source destination
Jul 10 23:34:39.066763 waagent[1929]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jul 10 23:34:39.066763 waagent[1929]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jul 10 23:34:39.066763 waagent[1929]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jul 10 23:34:39.069947 waagent[1929]: 2025-07-10T23:34:39.069830Z INFO EnvHandler ExtHandler Current Firewall rules:
Jul 10 23:34:39.069947 waagent[1929]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 10 23:34:39.069947 waagent[1929]: pkts bytes target prot opt in out source destination
Jul 10 23:34:39.069947 waagent[1929]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jul 10 23:34:39.069947 waagent[1929]: pkts bytes target prot opt in out source destination
Jul 10 23:34:39.069947 waagent[1929]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 10 23:34:39.069947 waagent[1929]: pkts bytes target prot opt in out source destination
Jul 10 23:34:39.069947 waagent[1929]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jul 10 23:34:39.069947 waagent[1929]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jul 10 23:34:39.069947 waagent[1929]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jul 10 23:34:39.070197 waagent[1929]: 2025-07-10T23:34:39.070131Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Jul 10 23:34:39.650405 waagent[1929]: 2025-07-10T23:34:39.650139Z INFO ExtHandler ExtHandler
Jul 10 23:34:39.650405 waagent[1929]: 2025-07-10T23:34:39.650292Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 50ea00e1-946d-4a4e-944b-a310b8ea1abe correlation b98730ef-8f96-44ff-99e1-5cc41b961441 created: 2025-07-10T23:33:34.164711Z]
Jul 10 23:34:39.651116 waagent[1929]: 2025-07-10T23:34:39.651043Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Jul 10 23:34:39.651866 waagent[1929]: 2025-07-10T23:34:39.651816Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms]
Jul 10 23:34:39.689694 waagent[1929]: 2025-07-10T23:34:39.689626Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 52C5A6E1-1E99-48A1-AB7F-30244476911C;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Jul 10 23:34:43.525779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 10 23:34:43.539602 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 23:34:43.652613 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 23:34:43.656775 (kubelet)[2171]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 23:34:43.810720 kubelet[2171]: E0710 23:34:43.810588 2171 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 23:34:43.813269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 23:34:43.813460 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 23:34:43.813734 systemd[1]: kubelet.service: Consumed 130ms CPU time, 106.6M memory peak. Jul 10 23:34:51.238596 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 10 23:34:51.239884 systemd[1]: Started sshd@0-10.200.20.37:22-10.200.16.10:44386.service - OpenSSH per-connection server daemon (10.200.16.10:44386). Jul 10 23:34:51.800589 sshd[2179]: Accepted publickey for core from 10.200.16.10 port 44386 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo Jul 10 23:34:51.802104 sshd-session[2179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:34:51.806624 systemd-logind[1690]: New session 3 of user core. Jul 10 23:34:51.818611 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 10 23:34:52.259716 systemd[1]: Started sshd@1-10.200.20.37:22-10.200.16.10:44396.service - OpenSSH per-connection server daemon (10.200.16.10:44396). 
Jul 10 23:34:52.753830 sshd[2184]: Accepted publickey for core from 10.200.16.10 port 44396 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo Jul 10 23:34:52.755679 sshd-session[2184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:34:52.761214 systemd-logind[1690]: New session 4 of user core. Jul 10 23:34:52.768544 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 10 23:34:53.115519 sshd[2186]: Connection closed by 10.200.16.10 port 44396 Jul 10 23:34:53.114838 sshd-session[2184]: pam_unix(sshd:session): session closed for user core Jul 10 23:34:53.118896 systemd[1]: sshd@1-10.200.20.37:22-10.200.16.10:44396.service: Deactivated successfully. Jul 10 23:34:53.120765 systemd[1]: session-4.scope: Deactivated successfully. Jul 10 23:34:53.121612 systemd-logind[1690]: Session 4 logged out. Waiting for processes to exit. Jul 10 23:34:53.123024 systemd-logind[1690]: Removed session 4. Jul 10 23:34:53.201011 systemd[1]: Started sshd@2-10.200.20.37:22-10.200.16.10:44408.service - OpenSSH per-connection server daemon (10.200.16.10:44408). Jul 10 23:34:53.682999 sshd[2192]: Accepted publickey for core from 10.200.16.10 port 44408 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo Jul 10 23:34:53.684803 sshd-session[2192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:34:53.689742 systemd-logind[1690]: New session 5 of user core. Jul 10 23:34:53.700631 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 10 23:34:54.025670 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 10 23:34:54.025924 sshd[2194]: Connection closed by 10.200.16.10 port 44408 Jul 10 23:34:54.026524 sshd-session[2192]: pam_unix(sshd:session): session closed for user core Jul 10 23:34:54.040609 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 10 23:34:54.040975 systemd[1]: sshd@2-10.200.20.37:22-10.200.16.10:44408.service: Deactivated successfully. Jul 10 23:34:54.044046 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 23:34:54.047122 systemd-logind[1690]: Session 5 logged out. Waiting for processes to exit. Jul 10 23:34:54.049001 systemd-logind[1690]: Removed session 5. Jul 10 23:34:54.115556 systemd[1]: Started sshd@3-10.200.20.37:22-10.200.16.10:44414.service - OpenSSH per-connection server daemon (10.200.16.10:44414). Jul 10 23:34:54.150483 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:34:54.154828 (kubelet)[2210]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 23:34:54.195224 kubelet[2210]: E0710 23:34:54.195180 2210 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 23:34:54.197200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 23:34:54.197335 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 23:34:54.197662 systemd[1]: kubelet.service: Consumed 130ms CPU time, 106.9M memory peak. Jul 10 23:34:54.603922 sshd[2203]: Accepted publickey for core from 10.200.16.10 port 44414 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo Jul 10 23:34:54.605771 sshd-session[2203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:34:54.610664 systemd-logind[1690]: New session 6 of user core. Jul 10 23:34:54.616560 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 10 23:34:54.950520 sshd[2217]: Connection closed by 10.200.16.10 port 44414 Jul 10 23:34:54.951212 sshd-session[2203]: pam_unix(sshd:session): session closed for user core Jul 10 23:34:54.954508 systemd-logind[1690]: Session 6 logged out. Waiting for processes to exit. Jul 10 23:34:54.955744 systemd[1]: sshd@3-10.200.20.37:22-10.200.16.10:44414.service: Deactivated successfully. Jul 10 23:34:54.957299 systemd[1]: session-6.scope: Deactivated successfully. Jul 10 23:34:54.959618 systemd-logind[1690]: Removed session 6. Jul 10 23:34:55.046687 systemd[1]: Started sshd@4-10.200.20.37:22-10.200.16.10:44430.service - OpenSSH per-connection server daemon (10.200.16.10:44430). Jul 10 23:34:55.146873 chronyd[1685]: Selected source PHC0 Jul 10 23:34:55.526964 sshd[2223]: Accepted publickey for core from 10.200.16.10 port 44430 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo Jul 10 23:34:55.528523 sshd-session[2223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:34:55.534453 systemd-logind[1690]: New session 7 of user core. Jul 10 23:34:55.540574 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 10 23:34:55.841289 sudo[2226]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 10 23:34:55.841651 sudo[2226]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 23:34:55.869737 sudo[2226]: pam_unix(sudo:session): session closed for user root Jul 10 23:34:55.945248 sshd[2225]: Connection closed by 10.200.16.10 port 44430 Jul 10 23:34:55.946026 sshd-session[2223]: pam_unix(sshd:session): session closed for user core Jul 10 23:34:55.950295 systemd[1]: sshd@4-10.200.20.37:22-10.200.16.10:44430.service: Deactivated successfully. Jul 10 23:34:55.952066 systemd[1]: session-7.scope: Deactivated successfully. Jul 10 23:34:55.952866 systemd-logind[1690]: Session 7 logged out. Waiting for processes to exit. 
Jul 10 23:34:55.954100 systemd-logind[1690]: Removed session 7. Jul 10 23:34:56.041679 systemd[1]: Started sshd@5-10.200.20.37:22-10.200.16.10:44440.service - OpenSSH per-connection server daemon (10.200.16.10:44440). Jul 10 23:34:56.520690 sshd[2232]: Accepted publickey for core from 10.200.16.10 port 44440 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo Jul 10 23:34:56.522677 sshd-session[2232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:34:56.527855 systemd-logind[1690]: New session 8 of user core. Jul 10 23:34:56.538530 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 10 23:34:56.790179 sudo[2236]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 10 23:34:56.790586 sudo[2236]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 23:34:56.794262 sudo[2236]: pam_unix(sudo:session): session closed for user root Jul 10 23:34:56.799472 sudo[2235]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 10 23:34:56.799746 sudo[2235]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 23:34:56.818691 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 23:34:56.842978 augenrules[2258]: No rules Jul 10 23:34:56.844686 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 23:34:56.845005 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 23:34:56.848485 sudo[2235]: pam_unix(sudo:session): session closed for user root Jul 10 23:34:56.926105 sshd[2234]: Connection closed by 10.200.16.10 port 44440 Jul 10 23:34:56.926509 sshd-session[2232]: pam_unix(sshd:session): session closed for user core Jul 10 23:34:56.929637 systemd[1]: sshd@5-10.200.20.37:22-10.200.16.10:44440.service: Deactivated successfully. 
Jul 10 23:34:56.931608 systemd[1]: session-8.scope: Deactivated successfully.
Jul 10 23:34:56.933331 systemd-logind[1690]: Session 8 logged out. Waiting for processes to exit.
Jul 10 23:34:56.934200 systemd-logind[1690]: Removed session 8.
Jul 10 23:34:57.020675 systemd[1]: Started sshd@6-10.200.20.37:22-10.200.16.10:44442.service - OpenSSH per-connection server daemon (10.200.16.10:44442).
Jul 10 23:34:57.513827 sshd[2267]: Accepted publickey for core from 10.200.16.10 port 44442 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo
Jul 10 23:34:57.515260 sshd-session[2267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:34:57.520711 systemd-logind[1690]: New session 9 of user core.
Jul 10 23:34:57.523578 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 10 23:34:57.790669 sudo[2270]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 10 23:34:57.790957 sudo[2270]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 10 23:34:58.725688 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 10 23:34:58.725769 (dockerd)[2287]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 10 23:34:59.662409 dockerd[2287]: time="2025-07-10T23:34:59.660686058Z" level=info msg="Starting up"
Jul 10 23:34:59.925139 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2341873882-merged.mount: Deactivated successfully.
Jul 10 23:34:59.973177 dockerd[2287]: time="2025-07-10T23:34:59.973094218Z" level=info msg="Loading containers: start."
Jul 10 23:35:00.307456 kernel: Initializing XFRM netlink socket
Jul 10 23:35:00.380009 systemd-networkd[1448]: docker0: Link UP
Jul 10 23:35:00.434841 dockerd[2287]: time="2025-07-10T23:35:00.434792178Z" level=info msg="Loading containers: done."
Jul 10 23:35:00.467509 dockerd[2287]: time="2025-07-10T23:35:00.467167258Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 10 23:35:00.467509 dockerd[2287]: time="2025-07-10T23:35:00.467272378Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Jul 10 23:35:00.467509 dockerd[2287]: time="2025-07-10T23:35:00.467431618Z" level=info msg="Daemon has completed initialization"
Jul 10 23:35:00.539971 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 10 23:35:00.540313 dockerd[2287]: time="2025-07-10T23:35:00.539665418Z" level=info msg="API listen on /run/docker.sock"
Jul 10 23:35:01.749546 containerd[1709]: time="2025-07-10T23:35:01.749394698Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\""
Jul 10 23:35:02.820702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2629161396.mount: Deactivated successfully.
Jul 10 23:35:04.275723 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 10 23:35:04.284680 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 23:35:04.409648 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 23:35:04.409781 (kubelet)[2533]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 10 23:35:04.521002 kubelet[2533]: E0710 23:35:04.520865 2533 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 23:35:04.523287 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 23:35:04.523509 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 23:35:04.524001 systemd[1]: kubelet.service: Consumed 129ms CPU time, 106.7M memory peak.
Jul 10 23:35:04.841477 containerd[1709]: time="2025-07-10T23:35:04.840420399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 23:35:04.845691 containerd[1709]: time="2025-07-10T23:35:04.845625176Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328194"
Jul 10 23:35:04.849505 containerd[1709]: time="2025-07-10T23:35:04.849449509Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 23:35:04.858935 containerd[1709]: time="2025-07-10T23:35:04.858853660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 23:35:04.860074 containerd[1709]: time="2025-07-10T23:35:04.859900503Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 3.110466285s"
Jul 10 23:35:04.860074 containerd[1709]: time="2025-07-10T23:35:04.859938223Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\""
Jul 10 23:35:04.860814 containerd[1709]: time="2025-07-10T23:35:04.860639906Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\""
Jul 10 23:35:06.384493 containerd[1709]: time="2025-07-10T23:35:06.384355233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 23:35:06.388475 containerd[1709]: time="2025-07-10T23:35:06.388359167Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529228"
Jul 10 23:35:06.392229 containerd[1709]: time="2025-07-10T23:35:06.392160019Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 23:35:06.400838 containerd[1709]: time="2025-07-10T23:35:06.400764287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 23:35:06.402055 containerd[1709]: time="2025-07-10T23:35:06.401851491Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.541178825s"
Jul 10 23:35:06.402055 containerd[1709]: time="2025-07-10T23:35:06.401890331Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\""
Jul 10 23:35:06.402479 containerd[1709]: time="2025-07-10T23:35:06.402294812Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\""
Jul 10 23:35:08.009444 containerd[1709]: time="2025-07-10T23:35:08.009359334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 23:35:08.012906 containerd[1709]: time="2025-07-10T23:35:08.012838665Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484141"
Jul 10 23:35:08.018034 containerd[1709]: time="2025-07-10T23:35:08.017966082Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 23:35:08.024119 containerd[1709]: time="2025-07-10T23:35:08.024057582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 23:35:08.025242 containerd[1709]: time="2025-07-10T23:35:08.025050226Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.622730454s"
Jul 10 23:35:08.025242 containerd[1709]: time="2025-07-10T23:35:08.025082946Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\""
Jul 10 23:35:08.025590 containerd[1709]: time="2025-07-10T23:35:08.025494947Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\""
Jul 10 23:35:09.348641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3091062664.mount: Deactivated successfully.
Jul 10 23:35:09.801996 containerd[1709]: time="2025-07-10T23:35:09.801836105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 23:35:09.809779 containerd[1709]: time="2025-07-10T23:35:09.809674331Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378406"
Jul 10 23:35:09.816232 containerd[1709]: time="2025-07-10T23:35:09.816153552Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 23:35:09.823277 containerd[1709]: time="2025-07-10T23:35:09.823108655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 23:35:09.823924 containerd[1709]: time="2025-07-10T23:35:09.823700937Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.79817967s"
Jul 10 23:35:09.823924 containerd[1709]: time="2025-07-10T23:35:09.823737417Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\""
Jul 10 23:35:09.824788 containerd[1709]: time="2025-07-10T23:35:09.824172058Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 10 23:35:10.474874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1171063031.mount: Deactivated successfully.
Jul 10 23:35:12.605449 containerd[1709]: time="2025-07-10T23:35:12.605180609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 23:35:12.608820 containerd[1709]: time="2025-07-10T23:35:12.608739581Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
Jul 10 23:35:12.612593 containerd[1709]: time="2025-07-10T23:35:12.612518713Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 23:35:12.620759 containerd[1709]: time="2025-07-10T23:35:12.620688980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 23:35:12.621934 containerd[1709]: time="2025-07-10T23:35:12.621861704Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.797663445s"
Jul 10 23:35:12.621934 containerd[1709]: time="2025-07-10T23:35:12.621899584Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jul 10 23:35:12.623256 containerd[1709]: time="2025-07-10T23:35:12.622334866Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 10 23:35:13.278889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3735004243.mount: Deactivated successfully.
Jul 10 23:35:13.319820 containerd[1709]: time="2025-07-10T23:35:13.319753403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 23:35:13.324875 containerd[1709]: time="2025-07-10T23:35:13.324789259Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Jul 10 23:35:13.331025 containerd[1709]: time="2025-07-10T23:35:13.330946440Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 23:35:13.340071 containerd[1709]: time="2025-07-10T23:35:13.339999310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 23:35:13.340934 containerd[1709]: time="2025-07-10T23:35:13.340726592Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 718.366646ms"
Jul 10 23:35:13.340934 containerd[1709]: time="2025-07-10T23:35:13.340764992Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 10 23:35:13.341270 containerd[1709]: time="2025-07-10T23:35:13.341217634Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jul 10 23:35:14.119145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1422038185.mount: Deactivated successfully.
Jul 10 23:35:14.529462 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 10 23:35:14.539981 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 23:35:14.672517 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 23:35:14.677335 (kubelet)[2631]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 10 23:35:14.777811 kubelet[2631]: E0710 23:35:14.777686 2631 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 23:35:14.782474 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 23:35:14.782629 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 23:35:14.784740 systemd[1]: kubelet.service: Consumed 141ms CPU time, 109.8M memory peak.
Jul 10 23:35:16.278401 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Jul 10 23:35:16.928275 containerd[1709]: time="2025-07-10T23:35:16.928208049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 23:35:16.932792 containerd[1709]: time="2025-07-10T23:35:16.932716904Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469"
Jul 10 23:35:16.938360 containerd[1709]: time="2025-07-10T23:35:16.938294562Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 23:35:17.039017 update_engine[1691]: I20250710 23:35:17.038409 1691 update_attempter.cc:509] Updating boot flags...
Jul 10 23:35:17.052485 containerd[1709]: time="2025-07-10T23:35:17.050735573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 23:35:17.052485 containerd[1709]: time="2025-07-10T23:35:17.052167377Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.710923983s"
Jul 10 23:35:17.052485 containerd[1709]: time="2025-07-10T23:35:17.052200218Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Jul 10 23:35:17.118496 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2703)
Jul 10 23:35:21.398824 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 23:35:21.399625 systemd[1]: kubelet.service: Consumed 141ms CPU time, 109.8M memory peak.
Jul 10 23:35:21.405695 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 23:35:21.440592 systemd[1]: Reload requested from client PID 2771 ('systemctl') (unit session-9.scope)...
Jul 10 23:35:21.440619 systemd[1]: Reloading...
Jul 10 23:35:21.564427 zram_generator::config[2818]: No configuration found.
Jul 10 23:35:21.681705 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 23:35:21.790487 systemd[1]: Reloading finished in 349 ms.
Jul 10 23:35:21.843982 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 23:35:21.849551 (kubelet)[2875]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 10 23:35:21.853837 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 23:35:21.855095 systemd[1]: kubelet.service: Deactivated successfully.
Jul 10 23:35:21.855547 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 23:35:21.855669 systemd[1]: kubelet.service: Consumed 90ms CPU time, 96M memory peak.
Jul 10 23:35:21.859735 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 23:35:22.237463 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 23:35:22.241908 (kubelet)[2888]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 10 23:35:22.287959 kubelet[2888]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 23:35:22.287959 kubelet[2888]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 10 23:35:22.287959 kubelet[2888]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 23:35:22.288625 kubelet[2888]: I0710 23:35:22.288120 2888 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 10 23:35:23.117748 kubelet[2888]: I0710 23:35:23.117706 2888 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 10 23:35:23.118542 kubelet[2888]: I0710 23:35:23.117951 2888 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 10 23:35:23.118542 kubelet[2888]: I0710 23:35:23.118253 2888 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 10 23:35:23.452613 kubelet[2888]: E0710 23:35:23.452496 2888 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError"
Jul 10 23:35:23.457541 kubelet[2888]: I0710 23:35:23.457202 2888 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 10 23:35:23.465156 kubelet[2888]: E0710 23:35:23.465085 2888 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 10 23:35:23.465156 kubelet[2888]: I0710 23:35:23.465122 2888 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 10 23:35:23.468614 kubelet[2888]: I0710 23:35:23.468487 2888 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 10 23:35:23.470176 kubelet[2888]: I0710 23:35:23.470079 2888 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 10 23:35:23.470348 kubelet[2888]: I0710 23:35:23.470134 2888 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.1-n-d24186f489","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 10 23:35:23.470348 kubelet[2888]: I0710 23:35:23.470327 2888 topology_manager.go:138] "Creating topology manager with none policy"
Jul 10 23:35:23.470348 kubelet[2888]: I0710 23:35:23.470338 2888 container_manager_linux.go:304] "Creating device plugin manager"
Jul 10 23:35:23.470541 kubelet[2888]: I0710 23:35:23.470504 2888 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 23:35:23.473026 kubelet[2888]: I0710 23:35:23.472989 2888 kubelet.go:446] "Attempting to sync node with API server"
Jul 10 23:35:23.473026 kubelet[2888]: I0710 23:35:23.473021 2888 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 10 23:35:23.473511 kubelet[2888]: I0710 23:35:23.473043 2888 kubelet.go:352] "Adding apiserver pod source"
Jul 10 23:35:23.473511 kubelet[2888]: I0710 23:35:23.473053 2888 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 10 23:35:23.479233 kubelet[2888]: W0710 23:35:23.479123 2888 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused
Jul 10 23:35:23.479233 kubelet[2888]: E0710 23:35:23.479204 2888 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError"
Jul 10 23:35:23.480608 kubelet[2888]: W0710 23:35:23.479517 2888 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.1-n-d24186f489&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused
Jul 10 23:35:23.480608 kubelet[2888]: E0710 23:35:23.479561 2888 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.1-n-d24186f489&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError"
Jul 10 23:35:23.480608 kubelet[2888]: I0710 23:35:23.479861 2888 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jul 10 23:35:23.480608 kubelet[2888]: I0710 23:35:23.480328 2888 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 10 23:35:23.480608 kubelet[2888]: W0710 23:35:23.480407 2888 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 10 23:35:23.481861 kubelet[2888]: I0710 23:35:23.481841 2888 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 10 23:35:23.481982 kubelet[2888]: I0710 23:35:23.481971 2888 server.go:1287] "Started kubelet"
Jul 10 23:35:23.482206 kubelet[2888]: I0710 23:35:23.482104 2888 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 10 23:35:23.483029 kubelet[2888]: I0710 23:35:23.482963 2888 server.go:479] "Adding debug handlers to kubelet server"
Jul 10 23:35:23.485779 kubelet[2888]: I0710 23:35:23.485655 2888 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 10 23:35:23.485974 kubelet[2888]: I0710 23:35:23.485906 2888 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 10 23:35:23.486269 kubelet[2888]: I0710 23:35:23.486250 2888 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 10 23:35:23.487177 kubelet[2888]: I0710 23:35:23.487145 2888 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 10 23:35:23.493106 kubelet[2888]: I0710 23:35:23.490958 2888 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 10 23:35:23.493106 kubelet[2888]: E0710 23:35:23.491180 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-d24186f489\" not found"
Jul 10 23:35:23.493106 kubelet[2888]: I0710 23:35:23.492249 2888 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 10 23:35:23.493106 kubelet[2888]: E0710 23:35:23.487967 2888 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.37:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.37:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.1-n-d24186f489.185107ff62f2720f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.1-n-d24186f489,UID:ci-4230.2.1-n-d24186f489,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.1-n-d24186f489,},FirstTimestamp:2025-07-10 23:35:23.481944591 +0000 UTC m=+1.236321215,LastTimestamp:2025-07-10 23:35:23.481944591 +0000 UTC m=+1.236321215,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.1-n-d24186f489,}"
Jul 10 23:35:23.493106 kubelet[2888]: I0710 23:35:23.492361 2888 reconciler.go:26] "Reconciler: start to sync state"
Jul 10 23:35:23.493865 kubelet[2888]: I0710 23:35:23.493836 2888 factory.go:221] Registration of the systemd container factory successfully
Jul 10 23:35:23.494666 kubelet[2888]: I0710 23:35:23.494636 2888 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 10 23:35:23.498952 kubelet[2888]: E0710 23:35:23.496350 2888 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.1-n-d24186f489?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="200ms"
Jul 10 23:35:23.498952 kubelet[2888]: W0710 23:35:23.496188 2888 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused
Jul 10 23:35:23.499751 kubelet[2888]: E0710 23:35:23.499724 2888 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 10 23:35:23.499872 kubelet[2888]: E0710 23:35:23.499853 2888 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError"
Jul 10 23:35:23.501126 kubelet[2888]: I0710 23:35:23.501105 2888 factory.go:221] Registration of the containerd container factory successfully
Jul 10 23:35:23.518825 kubelet[2888]: I0710 23:35:23.518799 2888 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 10 23:35:23.518970 kubelet[2888]: I0710 23:35:23.518957 2888 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 10 23:35:23.519030 kubelet[2888]: I0710 23:35:23.519022 2888 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 23:35:23.519983 kubelet[2888]: I0710 23:35:23.519909 2888 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 10 23:35:23.522012 kubelet[2888]: I0710 23:35:23.521922 2888 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 10 23:35:23.522012 kubelet[2888]: I0710 23:35:23.521967 2888 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 10 23:35:23.522012 kubelet[2888]: I0710 23:35:23.521986 2888 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 10 23:35:23.522012 kubelet[2888]: I0710 23:35:23.522000 2888 kubelet.go:2382] "Starting kubelet main sync loop" Jul 10 23:35:23.522165 kubelet[2888]: E0710 23:35:23.522054 2888 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 23:35:23.524630 kubelet[2888]: W0710 23:35:23.524519 2888 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jul 10 23:35:23.524630 kubelet[2888]: E0710 23:35:23.524581 2888 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jul 10 23:35:23.591363 kubelet[2888]: E0710 23:35:23.591324 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-d24186f489\" not found" Jul 10 23:35:23.622726 kubelet[2888]: E0710 23:35:23.622680 2888 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 10 23:35:23.691909 kubelet[2888]: E0710 23:35:23.691837 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-d24186f489\" not found" Jul 10 23:35:23.699734 kubelet[2888]: E0710 23:35:23.699671 2888 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.1-n-d24186f489?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="400ms" Jul 10 23:35:23.792767 kubelet[2888]: E0710 
23:35:23.792700 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-d24186f489\" not found" Jul 10 23:35:23.822992 kubelet[2888]: E0710 23:35:23.822961 2888 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 10 23:35:23.893160 kubelet[2888]: E0710 23:35:23.893128 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-d24186f489\" not found" Jul 10 23:35:23.993835 kubelet[2888]: E0710 23:35:23.993800 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-d24186f489\" not found" Jul 10 23:35:24.305991 kubelet[2888]: E0710 23:35:24.094168 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-d24186f489\" not found" Jul 10 23:35:24.305991 kubelet[2888]: E0710 23:35:24.100709 2888 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.1-n-d24186f489?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="800ms" Jul 10 23:35:24.305991 kubelet[2888]: E0710 23:35:24.195088 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-d24186f489\" not found" Jul 10 23:35:24.305991 kubelet[2888]: E0710 23:35:24.223324 2888 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 10 23:35:24.305991 kubelet[2888]: E0710 23:35:24.295615 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-d24186f489\" not found" Jul 10 23:35:24.308408 kubelet[2888]: I0710 23:35:24.308095 2888 policy_none.go:49] "None policy: Start" Jul 10 23:35:24.308408 kubelet[2888]: I0710 23:35:24.308126 2888 memory_manager.go:186] "Starting memorymanager" 
policy="None" Jul 10 23:35:24.308408 kubelet[2888]: I0710 23:35:24.308138 2888 state_mem.go:35] "Initializing new in-memory state store" Jul 10 23:35:24.320457 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 10 23:35:24.329004 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 10 23:35:24.332401 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 10 23:35:24.340581 kubelet[2888]: I0710 23:35:24.340298 2888 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 23:35:24.340581 kubelet[2888]: I0710 23:35:24.340576 2888 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 23:35:24.340739 kubelet[2888]: I0710 23:35:24.340589 2888 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 23:35:24.341123 kubelet[2888]: I0710 23:35:24.341000 2888 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 23:35:24.342664 kubelet[2888]: E0710 23:35:24.342631 2888 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 10 23:35:24.342739 kubelet[2888]: E0710 23:35:24.342675 2888 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.2.1-n-d24186f489\" not found" Jul 10 23:35:24.364055 kubelet[2888]: W0710 23:35:24.363989 2888 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.1-n-d24186f489&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jul 10 23:35:24.364190 kubelet[2888]: E0710 23:35:24.364065 2888 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.1-n-d24186f489&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jul 10 23:35:24.424723 kubelet[2888]: W0710 23:35:24.424639 2888 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jul 10 23:35:24.424782 kubelet[2888]: E0710 23:35:24.424738 2888 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jul 10 23:35:24.444839 kubelet[2888]: I0710 23:35:24.444808 2888 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-n-d24186f489" Jul 10 23:35:24.445269 kubelet[2888]: E0710 23:35:24.445238 2888 kubelet_node_status.go:107] "Unable to register node with API 
server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4230.2.1-n-d24186f489" Jul 10 23:35:24.605242 kubelet[2888]: W0710 23:35:24.605115 2888 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jul 10 23:35:24.605242 kubelet[2888]: E0710 23:35:24.605195 2888 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jul 10 23:35:24.647095 kubelet[2888]: I0710 23:35:24.647056 2888 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-n-d24186f489" Jul 10 23:35:24.647454 kubelet[2888]: E0710 23:35:24.647426 2888 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4230.2.1-n-d24186f489" Jul 10 23:35:24.710560 kubelet[2888]: W0710 23:35:24.710498 2888 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jul 10 23:35:24.710659 kubelet[2888]: E0710 23:35:24.710570 2888 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" 
logger="UnhandledError" Jul 10 23:35:24.902342 kubelet[2888]: E0710 23:35:24.902205 2888 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.1-n-d24186f489?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="1.6s" Jul 10 23:35:25.034042 systemd[1]: Created slice kubepods-burstable-pod3b9c48bd6fff2b56c81f07250e512ee4.slice - libcontainer container kubepods-burstable-pod3b9c48bd6fff2b56c81f07250e512ee4.slice. Jul 10 23:35:25.051487 kubelet[2888]: I0710 23:35:25.050943 2888 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-n-d24186f489" Jul 10 23:35:25.051487 kubelet[2888]: E0710 23:35:25.051289 2888 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4230.2.1-n-d24186f489" Jul 10 23:35:25.052254 kubelet[2888]: E0710 23:35:25.052203 2888 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-n-d24186f489\" not found" node="ci-4230.2.1-n-d24186f489" Jul 10 23:35:25.056337 systemd[1]: Created slice kubepods-burstable-pod38d24faf9fc19f4e6b4c2cb992fa0110.slice - libcontainer container kubepods-burstable-pod38d24faf9fc19f4e6b4c2cb992fa0110.slice. Jul 10 23:35:25.059342 kubelet[2888]: E0710 23:35:25.059135 2888 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-n-d24186f489\" not found" node="ci-4230.2.1-n-d24186f489" Jul 10 23:35:25.062148 systemd[1]: Created slice kubepods-burstable-pod23652ab776bb9cc09ad53e9c705e5d12.slice - libcontainer container kubepods-burstable-pod23652ab776bb9cc09ad53e9c705e5d12.slice. 
Jul 10 23:35:25.064524 kubelet[2888]: E0710 23:35:25.064462 2888 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-n-d24186f489\" not found" node="ci-4230.2.1-n-d24186f489" Jul 10 23:35:25.100878 kubelet[2888]: I0710 23:35:25.100779 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/38d24faf9fc19f4e6b4c2cb992fa0110-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.1-n-d24186f489\" (UID: \"38d24faf9fc19f4e6b4c2cb992fa0110\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-n-d24186f489" Jul 10 23:35:25.100878 kubelet[2888]: I0710 23:35:25.100883 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/38d24faf9fc19f4e6b4c2cb992fa0110-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.1-n-d24186f489\" (UID: \"38d24faf9fc19f4e6b4c2cb992fa0110\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-n-d24186f489" Jul 10 23:35:25.101056 kubelet[2888]: I0710 23:35:25.100921 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/38d24faf9fc19f4e6b4c2cb992fa0110-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.1-n-d24186f489\" (UID: \"38d24faf9fc19f4e6b4c2cb992fa0110\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-n-d24186f489" Jul 10 23:35:25.101056 kubelet[2888]: I0710 23:35:25.100939 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23652ab776bb9cc09ad53e9c705e5d12-kubeconfig\") pod \"kube-scheduler-ci-4230.2.1-n-d24186f489\" (UID: \"23652ab776bb9cc09ad53e9c705e5d12\") " pod="kube-system/kube-scheduler-ci-4230.2.1-n-d24186f489" Jul 10 23:35:25.101056 kubelet[2888]: 
I0710 23:35:25.100956 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b9c48bd6fff2b56c81f07250e512ee4-ca-certs\") pod \"kube-apiserver-ci-4230.2.1-n-d24186f489\" (UID: \"3b9c48bd6fff2b56c81f07250e512ee4\") " pod="kube-system/kube-apiserver-ci-4230.2.1-n-d24186f489" Jul 10 23:35:25.101056 kubelet[2888]: I0710 23:35:25.100972 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3b9c48bd6fff2b56c81f07250e512ee4-k8s-certs\") pod \"kube-apiserver-ci-4230.2.1-n-d24186f489\" (UID: \"3b9c48bd6fff2b56c81f07250e512ee4\") " pod="kube-system/kube-apiserver-ci-4230.2.1-n-d24186f489" Jul 10 23:35:25.101056 kubelet[2888]: I0710 23:35:25.100987 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/38d24faf9fc19f4e6b4c2cb992fa0110-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.1-n-d24186f489\" (UID: \"38d24faf9fc19f4e6b4c2cb992fa0110\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-n-d24186f489" Jul 10 23:35:25.101170 kubelet[2888]: I0710 23:35:25.101005 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3b9c48bd6fff2b56c81f07250e512ee4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.1-n-d24186f489\" (UID: \"3b9c48bd6fff2b56c81f07250e512ee4\") " pod="kube-system/kube-apiserver-ci-4230.2.1-n-d24186f489" Jul 10 23:35:25.101170 kubelet[2888]: I0710 23:35:25.101021 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/38d24faf9fc19f4e6b4c2cb992fa0110-ca-certs\") pod \"kube-controller-manager-ci-4230.2.1-n-d24186f489\" 
(UID: \"38d24faf9fc19f4e6b4c2cb992fa0110\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-n-d24186f489" Jul 10 23:35:25.354157 containerd[1709]: time="2025-07-10T23:35:25.354104712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.1-n-d24186f489,Uid:3b9c48bd6fff2b56c81f07250e512ee4,Namespace:kube-system,Attempt:0,}" Jul 10 23:35:25.360539 containerd[1709]: time="2025-07-10T23:35:25.360493334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.1-n-d24186f489,Uid:38d24faf9fc19f4e6b4c2cb992fa0110,Namespace:kube-system,Attempt:0,}" Jul 10 23:35:25.365657 containerd[1709]: time="2025-07-10T23:35:25.365476831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.1-n-d24186f489,Uid:23652ab776bb9cc09ad53e9c705e5d12,Namespace:kube-system,Attempt:0,}" Jul 10 23:35:25.599765 kubelet[2888]: E0710 23:35:25.599716 2888 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jul 10 23:35:25.854193 kubelet[2888]: I0710 23:35:25.854148 2888 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-n-d24186f489" Jul 10 23:35:25.854712 kubelet[2888]: E0710 23:35:25.854504 2888 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4230.2.1-n-d24186f489" Jul 10 23:35:26.120946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount490929996.mount: Deactivated successfully. 
Jul 10 23:35:26.156598 containerd[1709]: time="2025-07-10T23:35:26.156487381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 23:35:26.166899 containerd[1709]: time="2025-07-10T23:35:26.166806296Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jul 10 23:35:26.190826 containerd[1709]: time="2025-07-10T23:35:26.190732017Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 23:35:26.196029 containerd[1709]: time="2025-07-10T23:35:26.195975035Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 23:35:26.201229 containerd[1709]: time="2025-07-10T23:35:26.201096212Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 23:35:26.207362 containerd[1709]: time="2025-07-10T23:35:26.207309433Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 10 23:35:26.212319 containerd[1709]: time="2025-07-10T23:35:26.212264850Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 10 23:35:26.223405 containerd[1709]: time="2025-07-10T23:35:26.222331444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 23:35:26.223405 
containerd[1709]: time="2025-07-10T23:35:26.223251127Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 862.669593ms" Jul 10 23:35:26.231548 containerd[1709]: time="2025-07-10T23:35:26.231499355Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 865.943404ms" Jul 10 23:35:26.253313 containerd[1709]: time="2025-07-10T23:35:26.253173588Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 898.983635ms" Jul 10 23:35:26.327806 kubelet[2888]: W0710 23:35:26.327726 2888 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jul 10 23:35:26.327806 kubelet[2888]: E0710 23:35:26.327776 2888 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jul 10 23:35:26.503276 kubelet[2888]: E0710 23:35:26.503134 2888 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.1-n-d24186f489?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="3.2s" Jul 10 23:35:26.570183 kubelet[2888]: W0710 23:35:26.570096 2888 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jul 10 23:35:26.570183 kubelet[2888]: E0710 23:35:26.570149 2888 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jul 10 23:35:27.070174 containerd[1709]: time="2025-07-10T23:35:27.069849585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 23:35:27.070174 containerd[1709]: time="2025-07-10T23:35:27.069928466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 23:35:27.070174 containerd[1709]: time="2025-07-10T23:35:27.069945626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:35:27.070174 containerd[1709]: time="2025-07-10T23:35:27.070119706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:35:27.080955 containerd[1709]: time="2025-07-10T23:35:27.079870819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 23:35:27.080955 containerd[1709]: time="2025-07-10T23:35:27.079932819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 23:35:27.080955 containerd[1709]: time="2025-07-10T23:35:27.079943659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:35:27.080955 containerd[1709]: time="2025-07-10T23:35:27.080020300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:35:27.087250 containerd[1709]: time="2025-07-10T23:35:27.086966603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 23:35:27.087250 containerd[1709]: time="2025-07-10T23:35:27.087057403Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 23:35:27.087250 containerd[1709]: time="2025-07-10T23:35:27.087080243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:35:27.087250 containerd[1709]: time="2025-07-10T23:35:27.087178524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:35:27.106614 systemd[1]: Started cri-containerd-12257318d1cbd7284191c13b72d9e205fb3da9211dabec954d4d56db7321786a.scope - libcontainer container 12257318d1cbd7284191c13b72d9e205fb3da9211dabec954d4d56db7321786a. Jul 10 23:35:27.112724 systemd[1]: Started cri-containerd-8f86e03db64b81f18b613cc1ebbc1c9c8626cd971ca03a093b3816b02e534a79.scope - libcontainer container 8f86e03db64b81f18b613cc1ebbc1c9c8626cd971ca03a093b3816b02e534a79. 
Jul 10 23:35:27.115547 kubelet[2888]: W0710 23:35:27.113868 2888 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.1-n-d24186f489&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jul 10 23:35:27.115547 kubelet[2888]: E0710 23:35:27.113927 2888 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.1-n-d24186f489&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jul 10 23:35:27.136662 systemd[1]: Started cri-containerd-ed723c1c88f7329cccf2806724225c6c675b95e30f9e502161cec39b08c01343.scope - libcontainer container ed723c1c88f7329cccf2806724225c6c675b95e30f9e502161cec39b08c01343. Jul 10 23:35:27.190900 containerd[1709]: time="2025-07-10T23:35:27.189203788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.1-n-d24186f489,Uid:3b9c48bd6fff2b56c81f07250e512ee4,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f86e03db64b81f18b613cc1ebbc1c9c8626cd971ca03a093b3816b02e534a79\"" Jul 10 23:35:27.201393 containerd[1709]: time="2025-07-10T23:35:27.201312789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.1-n-d24186f489,Uid:23652ab776bb9cc09ad53e9c705e5d12,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed723c1c88f7329cccf2806724225c6c675b95e30f9e502161cec39b08c01343\"" Jul 10 23:35:27.201997 containerd[1709]: time="2025-07-10T23:35:27.201973031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.1-n-d24186f489,Uid:38d24faf9fc19f4e6b4c2cb992fa0110,Namespace:kube-system,Attempt:0,} returns sandbox id \"12257318d1cbd7284191c13b72d9e205fb3da9211dabec954d4d56db7321786a\"" Jul 10 
23:35:27.202863 containerd[1709]: time="2025-07-10T23:35:27.202779794Z" level=info msg="CreateContainer within sandbox \"8f86e03db64b81f18b613cc1ebbc1c9c8626cd971ca03a093b3816b02e534a79\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 10 23:35:27.205843 containerd[1709]: time="2025-07-10T23:35:27.205806564Z" level=info msg="CreateContainer within sandbox \"12257318d1cbd7284191c13b72d9e205fb3da9211dabec954d4d56db7321786a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 10 23:35:27.206027 containerd[1709]: time="2025-07-10T23:35:27.205952004Z" level=info msg="CreateContainer within sandbox \"ed723c1c88f7329cccf2806724225c6c675b95e30f9e502161cec39b08c01343\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 10 23:35:27.300184 containerd[1709]: time="2025-07-10T23:35:27.300126437Z" level=info msg="CreateContainer within sandbox \"ed723c1c88f7329cccf2806724225c6c675b95e30f9e502161cec39b08c01343\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c242356c9ac01997909d1411a62852078f933872f1f786bbdaada8c06108264f\"" Jul 10 23:35:27.301404 containerd[1709]: time="2025-07-10T23:35:27.300931080Z" level=info msg="StartContainer for \"c242356c9ac01997909d1411a62852078f933872f1f786bbdaada8c06108264f\"" Jul 10 23:35:27.312294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4068249649.mount: Deactivated successfully. Jul 10 23:35:27.340628 systemd[1]: Started cri-containerd-c242356c9ac01997909d1411a62852078f933872f1f786bbdaada8c06108264f.scope - libcontainer container c242356c9ac01997909d1411a62852078f933872f1f786bbdaada8c06108264f. 
Jul 10 23:35:27.354448 containerd[1709]: time="2025-07-10T23:35:27.354306697Z" level=info msg="CreateContainer within sandbox \"12257318d1cbd7284191c13b72d9e205fb3da9211dabec954d4d56db7321786a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a9c2b45ed0b9d5bbf82b1bad623930d8cd0fcfb9bdfbf65cb2084a3307f024d0\"" Jul 10 23:35:27.358437 containerd[1709]: time="2025-07-10T23:35:27.357571308Z" level=info msg="StartContainer for \"a9c2b45ed0b9d5bbf82b1bad623930d8cd0fcfb9bdfbf65cb2084a3307f024d0\"" Jul 10 23:35:27.381689 containerd[1709]: time="2025-07-10T23:35:27.381591228Z" level=info msg="CreateContainer within sandbox \"8f86e03db64b81f18b613cc1ebbc1c9c8626cd971ca03a093b3816b02e534a79\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6d93b438865e2d30e404ee2dde9d887497d03e7ef50f9ead9c22315de07bff49\"" Jul 10 23:35:27.382540 containerd[1709]: time="2025-07-10T23:35:27.382455511Z" level=info msg="StartContainer for \"6d93b438865e2d30e404ee2dde9d887497d03e7ef50f9ead9c22315de07bff49\"" Jul 10 23:35:27.394831 containerd[1709]: time="2025-07-10T23:35:27.394757592Z" level=info msg="StartContainer for \"c242356c9ac01997909d1411a62852078f933872f1f786bbdaada8c06108264f\" returns successfully" Jul 10 23:35:27.398317 systemd[1]: Started cri-containerd-a9c2b45ed0b9d5bbf82b1bad623930d8cd0fcfb9bdfbf65cb2084a3307f024d0.scope - libcontainer container a9c2b45ed0b9d5bbf82b1bad623930d8cd0fcfb9bdfbf65cb2084a3307f024d0. Jul 10 23:35:27.430558 systemd[1]: Started cri-containerd-6d93b438865e2d30e404ee2dde9d887497d03e7ef50f9ead9c22315de07bff49.scope - libcontainer container 6d93b438865e2d30e404ee2dde9d887497d03e7ef50f9ead9c22315de07bff49. 
Jul 10 23:35:27.459644 kubelet[2888]: I0710 23:35:27.459254 2888 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-n-d24186f489"
Jul 10 23:35:27.459788 kubelet[2888]: E0710 23:35:27.459710 2888 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4230.2.1-n-d24186f489"
Jul 10 23:35:27.475615 containerd[1709]: time="2025-07-10T23:35:27.475532620Z" level=info msg="StartContainer for \"a9c2b45ed0b9d5bbf82b1bad623930d8cd0fcfb9bdfbf65cb2084a3307f024d0\" returns successfully"
Jul 10 23:35:27.507795 containerd[1709]: time="2025-07-10T23:35:27.507704247Z" level=info msg="StartContainer for \"6d93b438865e2d30e404ee2dde9d887497d03e7ef50f9ead9c22315de07bff49\" returns successfully"
Jul 10 23:35:27.537420 kubelet[2888]: E0710 23:35:27.537071 2888 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-n-d24186f489\" not found" node="ci-4230.2.1-n-d24186f489"
Jul 10 23:35:27.542092 kubelet[2888]: E0710 23:35:27.541888 2888 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-n-d24186f489\" not found" node="ci-4230.2.1-n-d24186f489"
Jul 10 23:35:27.545668 kubelet[2888]: E0710 23:35:27.545452 2888 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-n-d24186f489\" not found" node="ci-4230.2.1-n-d24186f489"
Jul 10 23:35:28.548412 kubelet[2888]: E0710 23:35:28.547109 2888 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-n-d24186f489\" not found" node="ci-4230.2.1-n-d24186f489"
Jul 10 23:35:28.548412 kubelet[2888]: E0710 23:35:28.547145 2888 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-n-d24186f489\" not found" node="ci-4230.2.1-n-d24186f489"
Jul 10 23:35:28.548412 kubelet[2888]: E0710 23:35:28.547792 2888 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-n-d24186f489\" not found" node="ci-4230.2.1-n-d24186f489"
Jul 10 23:35:30.336036 kubelet[2888]: E0710 23:35:30.335976 2888 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.2.1-n-d24186f489\" not found" node="ci-4230.2.1-n-d24186f489"
Jul 10 23:35:30.662918 kubelet[2888]: I0710 23:35:30.662104 2888 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-n-d24186f489"
Jul 10 23:35:30.674627 kubelet[2888]: I0710 23:35:30.674562 2888 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.1-n-d24186f489"
Jul 10 23:35:30.674627 kubelet[2888]: E0710 23:35:30.674610 2888 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4230.2.1-n-d24186f489\": node \"ci-4230.2.1-n-d24186f489\" not found"
Jul 10 23:35:30.685432 kubelet[2888]: E0710 23:35:30.685358 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-d24186f489\" not found"
Jul 10 23:35:30.786521 kubelet[2888]: E0710 23:35:30.786442 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-d24186f489\" not found"
Jul 10 23:35:30.887156 kubelet[2888]: E0710 23:35:30.887043 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-d24186f489\" not found"
Jul 10 23:35:30.988142 kubelet[2888]: E0710 23:35:30.988000 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-d24186f489\" not found"
Jul 10 23:35:31.089094 kubelet[2888]: E0710 23:35:31.089027 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-d24186f489\" not found"
Jul 10 23:35:31.189660 kubelet[2888]: E0710 23:35:31.189608 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-d24186f489\" not found"
Jul 10 23:35:31.289814 kubelet[2888]: E0710 23:35:31.289767 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-d24186f489\" not found"
Jul 10 23:35:31.390863 kubelet[2888]: E0710 23:35:31.390824 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-d24186f489\" not found"
Jul 10 23:35:31.491104 kubelet[2888]: E0710 23:35:31.491049 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-d24186f489\" not found"
Jul 10 23:35:31.592160 kubelet[2888]: E0710 23:35:31.592044 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-d24186f489\" not found"
Jul 10 23:35:31.692669 kubelet[2888]: E0710 23:35:31.692626 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-d24186f489\" not found"
Jul 10 23:35:31.793076 kubelet[2888]: E0710 23:35:31.793033 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-d24186f489\" not found"
Jul 10 23:35:31.893641 kubelet[2888]: E0710 23:35:31.893526 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-d24186f489\" not found"
Jul 10 23:35:31.993828 kubelet[2888]: E0710 23:35:31.993779 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-d24186f489\" not found"
Jul 10 23:35:32.094100 kubelet[2888]: E0710 23:35:32.094057 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-d24186f489\" not found"
Jul 10 23:35:32.194718 kubelet[2888]: E0710 23:35:32.194575 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-d24186f489\" not found"
Jul 10 23:35:32.294757 kubelet[2888]: E0710 23:35:32.294688 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-d24186f489\" not found"
Jul 10 23:35:32.395133 kubelet[2888]: E0710 23:35:32.395069 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-d24186f489\" not found"
Jul 10 23:35:32.396408 kubelet[2888]: E0710 23:35:32.396354 2888 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-n-d24186f489\" not found" node="ci-4230.2.1-n-d24186f489"
Jul 10 23:35:32.495276 kubelet[2888]: E0710 23:35:32.495132 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-d24186f489\" not found"
Jul 10 23:35:32.596063 kubelet[2888]: E0710 23:35:32.596022 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-d24186f489\" not found"
Jul 10 23:35:32.687784 systemd[1]: Reload requested from client PID 3161 ('systemctl') (unit session-9.scope)...
Jul 10 23:35:32.687804 systemd[1]: Reloading...
Jul 10 23:35:32.696844 kubelet[2888]: E0710 23:35:32.696729 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-d24186f489\" not found"
Jul 10 23:35:32.797410 kubelet[2888]: E0710 23:35:32.797351 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-d24186f489\" not found"
Jul 10 23:35:32.849405 zram_generator::config[3212]: No configuration found.
Jul 10 23:35:32.898776 kubelet[2888]: E0710 23:35:32.898478 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-d24186f489\" not found"
Jul 10 23:35:32.989346 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 23:35:32.992692 kubelet[2888]: I0710 23:35:32.992636 2888 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.1-n-d24186f489"
Jul 10 23:35:33.010808 kubelet[2888]: W0710 23:35:33.010474 2888 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 10 23:35:33.010808 kubelet[2888]: I0710 23:35:33.010612 2888 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.1-n-d24186f489"
Jul 10 23:35:33.023664 kubelet[2888]: W0710 23:35:33.023560 2888 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 10 23:35:33.025649 kubelet[2888]: I0710 23:35:33.023673 2888 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.1-n-d24186f489"
Jul 10 23:35:33.032227 kubelet[2888]: W0710 23:35:33.032161 2888 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 10 23:35:33.122678 systemd[1]: Reloading finished in 434 ms.
Jul 10 23:35:33.151586 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 23:35:33.166576 systemd[1]: kubelet.service: Deactivated successfully.
Jul 10 23:35:33.166884 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 23:35:33.166971 systemd[1]: kubelet.service: Consumed 1.240s CPU time, 127.7M memory peak.
Jul 10 23:35:33.172728 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 23:35:33.392699 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 23:35:33.406793 (kubelet)[3273]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 10 23:35:33.464771 kubelet[3273]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 23:35:33.464771 kubelet[3273]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 10 23:35:33.464771 kubelet[3273]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 23:35:33.465179 kubelet[3273]: I0710 23:35:33.464834 3273 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 10 23:35:33.470966 kubelet[3273]: I0710 23:35:33.470901 3273 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 10 23:35:33.470966 kubelet[3273]: I0710 23:35:33.470931 3273 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 10 23:35:33.471256 kubelet[3273]: I0710 23:35:33.471201 3273 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 10 23:35:33.472779 kubelet[3273]: I0710 23:35:33.472721 3273 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 10 23:35:33.476145 kubelet[3273]: I0710 23:35:33.475148 3273 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 10 23:35:33.481429 kubelet[3273]: E0710 23:35:33.481035 3273 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 10 23:35:33.481429 kubelet[3273]: I0710 23:35:33.481065 3273 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 10 23:35:33.494170 kubelet[3273]: I0710 23:35:33.494101 3273 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 10 23:35:33.495522 kubelet[3273]: I0710 23:35:33.495316 3273 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 10 23:35:33.496153 kubelet[3273]: I0710 23:35:33.495467 3273 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.1-n-d24186f489","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 10 23:35:33.496153 kubelet[3273]: I0710 23:35:33.496138 3273 topology_manager.go:138] "Creating topology manager with none policy"
Jul 10 23:35:33.496153 kubelet[3273]: I0710 23:35:33.496150 3273 container_manager_linux.go:304] "Creating device plugin manager"
Jul 10 23:35:33.496449 kubelet[3273]: I0710 23:35:33.496201 3273 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 23:35:33.496449 kubelet[3273]: I0710 23:35:33.496321 3273 kubelet.go:446] "Attempting to sync node with API server"
Jul 10 23:35:33.496449 kubelet[3273]: I0710 23:35:33.496333 3273 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 10 23:35:33.496449 kubelet[3273]: I0710 23:35:33.496350 3273 kubelet.go:352] "Adding apiserver pod source"
Jul 10 23:35:33.496449 kubelet[3273]: I0710 23:35:33.496359 3273 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 10 23:35:33.500189 kubelet[3273]: I0710 23:35:33.498878 3273 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jul 10 23:35:33.500189 kubelet[3273]: I0710 23:35:33.499338 3273 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 10 23:35:33.501910 kubelet[3273]: I0710 23:35:33.501524 3273 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 10 23:35:33.502157 kubelet[3273]: I0710 23:35:33.502142 3273 server.go:1287] "Started kubelet"
Jul 10 23:35:33.505768 kubelet[3273]: I0710 23:35:33.504940 3273 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 10 23:35:33.508521 kubelet[3273]: I0710 23:35:33.506993 3273 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 10 23:35:33.508521 kubelet[3273]: I0710 23:35:33.505659 3273 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 10 23:35:33.511234 kubelet[3273]: I0710 23:35:33.511171 3273 server.go:479] "Adding debug handlers to kubelet server"
Jul 10 23:35:33.512190 kubelet[3273]: I0710 23:35:33.512125 3273 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 10 23:35:33.512190 kubelet[3273]: I0710 23:35:33.505707 3273 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 10 23:35:33.512408 kubelet[3273]: I0710 23:35:33.512337 3273 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 10 23:35:33.515407 kubelet[3273]: I0710 23:35:33.514545 3273 reconciler.go:26] "Reconciler: start to sync state"
Jul 10 23:35:33.515407 kubelet[3273]: I0710 23:35:33.505897 3273 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 10 23:35:33.524903 kubelet[3273]: E0710 23:35:33.521664 3273 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-d24186f489\" not found"
Jul 10 23:35:33.531155 kubelet[3273]: I0710 23:35:33.531074 3273 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 10 23:35:33.532433 kubelet[3273]: I0710 23:35:33.531972 3273 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 10 23:35:33.532433 kubelet[3273]: I0710 23:35:33.532001 3273 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 10 23:35:33.532433 kubelet[3273]: I0710 23:35:33.532026 3273 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 10 23:35:33.532433 kubelet[3273]: I0710 23:35:33.532034 3273 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 10 23:35:33.532433 kubelet[3273]: E0710 23:35:33.532074 3273 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 10 23:35:33.549166 kubelet[3273]: E0710 23:35:33.549096 3273 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 10 23:35:33.549340 kubelet[3273]: I0710 23:35:33.549322 3273 factory.go:221] Registration of the systemd container factory successfully
Jul 10 23:35:33.549738 kubelet[3273]: I0710 23:35:33.549715 3273 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 10 23:35:33.552922 kubelet[3273]: I0710 23:35:33.552864 3273 factory.go:221] Registration of the containerd container factory successfully
Jul 10 23:35:33.613134 kubelet[3273]: I0710 23:35:33.613111 3273 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 10 23:35:33.613315 kubelet[3273]: I0710 23:35:33.613303 3273 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 10 23:35:33.613490 kubelet[3273]: I0710 23:35:33.613445 3273 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 23:35:33.613681 kubelet[3273]: I0710 23:35:33.613633 3273 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 10 23:35:33.613681 kubelet[3273]: I0710 23:35:33.613651 3273 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 10 23:35:33.613681 kubelet[3273]: I0710 23:35:33.613670 3273 policy_none.go:49] "None policy: Start"
Jul 10 23:35:33.613681 kubelet[3273]: I0710 23:35:33.613679 3273 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 10 23:35:33.613779 kubelet[3273]: I0710 23:35:33.613691 3273 state_mem.go:35] "Initializing new in-memory state store"
Jul 10 23:35:33.613801 kubelet[3273]: I0710 23:35:33.613786 3273 state_mem.go:75] "Updated machine memory state"
Jul 10 23:35:33.618058 kubelet[3273]: I0710 23:35:33.617994 3273 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 10 23:35:33.618210 kubelet[3273]: I0710 23:35:33.618160 3273 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 10 23:35:33.618210 kubelet[3273]: I0710 23:35:33.618177 3273 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 10 23:35:33.619346 kubelet[3273]: I0710 23:35:33.619216 3273 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 10 23:35:33.620115 kubelet[3273]: E0710 23:35:33.620055 3273 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 10 23:35:33.633244 kubelet[3273]: I0710 23:35:33.633176 3273 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.1-n-d24186f489"
Jul 10 23:35:33.634269 kubelet[3273]: I0710 23:35:33.634163 3273 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.1-n-d24186f489"
Jul 10 23:35:33.634357 kubelet[3273]: I0710 23:35:33.634331 3273 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.1-n-d24186f489"
Jul 10 23:35:33.648284 kubelet[3273]: W0710 23:35:33.647811 3273 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 10 23:35:33.648284 kubelet[3273]: E0710 23:35:33.647876 3273 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.2.1-n-d24186f489\" already exists" pod="kube-system/kube-controller-manager-ci-4230.2.1-n-d24186f489"
Jul 10 23:35:33.649894 kubelet[3273]: W0710 23:35:33.649838 3273 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 10 23:35:33.649894 kubelet[3273]: E0710 23:35:33.649886 3273 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.1-n-d24186f489\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.1-n-d24186f489"
Jul 10 23:35:33.650031 kubelet[3273]: W0710 23:35:33.649941 3273 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 10 23:35:33.650031 kubelet[3273]: E0710 23:35:33.649959 3273 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.1-n-d24186f489\" already exists" pod="kube-system/kube-scheduler-ci-4230.2.1-n-d24186f489"
Jul 10 23:35:33.710080 sudo[3305]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jul 10 23:35:33.710683 sudo[3305]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jul 10 23:35:33.715875 kubelet[3273]: I0710 23:35:33.715846 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/38d24faf9fc19f4e6b4c2cb992fa0110-ca-certs\") pod \"kube-controller-manager-ci-4230.2.1-n-d24186f489\" (UID: \"38d24faf9fc19f4e6b4c2cb992fa0110\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-n-d24186f489"
Jul 10 23:35:33.716244 kubelet[3273]: I0710 23:35:33.716048 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/38d24faf9fc19f4e6b4c2cb992fa0110-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.1-n-d24186f489\" (UID: \"38d24faf9fc19f4e6b4c2cb992fa0110\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-n-d24186f489"
Jul 10 23:35:33.716244 kubelet[3273]: I0710 23:35:33.716074 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/38d24faf9fc19f4e6b4c2cb992fa0110-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.1-n-d24186f489\" (UID: \"38d24faf9fc19f4e6b4c2cb992fa0110\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-n-d24186f489"
Jul 10 23:35:33.716244 kubelet[3273]: I0710 23:35:33.716094 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b9c48bd6fff2b56c81f07250e512ee4-ca-certs\") pod \"kube-apiserver-ci-4230.2.1-n-d24186f489\" (UID: \"3b9c48bd6fff2b56c81f07250e512ee4\") " pod="kube-system/kube-apiserver-ci-4230.2.1-n-d24186f489"
Jul 10 23:35:33.716244 kubelet[3273]: I0710 23:35:33.716110 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3b9c48bd6fff2b56c81f07250e512ee4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.1-n-d24186f489\" (UID: \"3b9c48bd6fff2b56c81f07250e512ee4\") " pod="kube-system/kube-apiserver-ci-4230.2.1-n-d24186f489"
Jul 10 23:35:33.716244 kubelet[3273]: I0710 23:35:33.716128 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/38d24faf9fc19f4e6b4c2cb992fa0110-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.1-n-d24186f489\" (UID: \"38d24faf9fc19f4e6b4c2cb992fa0110\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-n-d24186f489"
Jul 10 23:35:33.716417 kubelet[3273]: I0710 23:35:33.716146 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/38d24faf9fc19f4e6b4c2cb992fa0110-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.1-n-d24186f489\" (UID: \"38d24faf9fc19f4e6b4c2cb992fa0110\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-n-d24186f489"
Jul 10 23:35:33.716417 kubelet[3273]: I0710 23:35:33.716169 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23652ab776bb9cc09ad53e9c705e5d12-kubeconfig\") pod \"kube-scheduler-ci-4230.2.1-n-d24186f489\" (UID: \"23652ab776bb9cc09ad53e9c705e5d12\") " pod="kube-system/kube-scheduler-ci-4230.2.1-n-d24186f489"
Jul 10 23:35:33.716417 kubelet[3273]: I0710 23:35:33.716184 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3b9c48bd6fff2b56c81f07250e512ee4-k8s-certs\") pod \"kube-apiserver-ci-4230.2.1-n-d24186f489\" (UID: \"3b9c48bd6fff2b56c81f07250e512ee4\") " pod="kube-system/kube-apiserver-ci-4230.2.1-n-d24186f489"
Jul 10 23:35:33.722299 kubelet[3273]: I0710 23:35:33.722265 3273 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-n-d24186f489"
Jul 10 23:35:33.738537 kubelet[3273]: I0710 23:35:33.738450 3273 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230.2.1-n-d24186f489"
Jul 10 23:35:33.738664 kubelet[3273]: I0710 23:35:33.738589 3273 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.1-n-d24186f489"
Jul 10 23:35:34.202798 sudo[3305]: pam_unix(sudo:session): session closed for user root
Jul 10 23:35:34.502184 kubelet[3273]: I0710 23:35:34.501918 3273 apiserver.go:52] "Watching apiserver"
Jul 10 23:35:34.513289 kubelet[3273]: I0710 23:35:34.513176 3273 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 10 23:35:34.591885 kubelet[3273]: I0710 23:35:34.591612 3273 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.1-n-d24186f489"
Jul 10 23:35:34.593274 kubelet[3273]: I0710 23:35:34.592462 3273 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.1-n-d24186f489"
Jul 10 23:35:34.612009 kubelet[3273]: W0710 23:35:34.611466 3273 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 10 23:35:34.612009 kubelet[3273]: E0710 23:35:34.611574 3273 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.1-n-d24186f489\" already exists" pod="kube-system/kube-scheduler-ci-4230.2.1-n-d24186f489"
Jul 10 23:35:34.612009 kubelet[3273]: W0710 23:35:34.611988 3273 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 10 23:35:34.612670 kubelet[3273]: E0710 23:35:34.612028 3273 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.1-n-d24186f489\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.1-n-d24186f489"
Jul 10 23:35:34.655977 kubelet[3273]: I0710 23:35:34.655866 3273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.2.1-n-d24186f489" podStartSLOduration=1.655846325 podStartE2EDuration="1.655846325s" podCreationTimestamp="2025-07-10 23:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:35:34.63354161 +0000 UTC m=+1.222261023" watchObservedRunningTime="2025-07-10 23:35:34.655846325 +0000 UTC m=+1.244565738"
Jul 10 23:35:34.673423 kubelet[3273]: I0710 23:35:34.673320 3273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.2.1-n-d24186f489" podStartSLOduration=1.6733022229999999 podStartE2EDuration="1.673302223s" podCreationTimestamp="2025-07-10 23:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:35:34.656463087 +0000 UTC m=+1.245182500" watchObservedRunningTime="2025-07-10 23:35:34.673302223 +0000 UTC m=+1.262021636"
Jul 10 23:35:35.993541 sudo[2270]: pam_unix(sudo:session): session closed for user root
Jul 10 23:35:36.083957 sshd[2269]: Connection closed by 10.200.16.10 port 44442
Jul 10 23:35:36.084540 sshd-session[2267]: pam_unix(sshd:session): session closed for user core
Jul 10 23:35:36.089353 systemd[1]: sshd@6-10.200.20.37:22-10.200.16.10:44442.service: Deactivated successfully.
Jul 10 23:35:36.092319 systemd[1]: session-9.scope: Deactivated successfully.
Jul 10 23:35:36.092670 systemd[1]: session-9.scope: Consumed 5.851s CPU time, 261.8M memory peak.
Jul 10 23:35:36.095059 systemd-logind[1690]: Session 9 logged out. Waiting for processes to exit.
Jul 10 23:35:36.096418 systemd-logind[1690]: Removed session 9.
Jul 10 23:35:37.035214 kubelet[3273]: I0710 23:35:37.034845 3273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.2.1-n-d24186f489" podStartSLOduration=4.034825408 podStartE2EDuration="4.034825408s" podCreationTimestamp="2025-07-10 23:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:35:34.673471823 +0000 UTC m=+1.262191236" watchObservedRunningTime="2025-07-10 23:35:37.034825408 +0000 UTC m=+3.623544821"
Jul 10 23:35:37.377283 kubelet[3273]: I0710 23:35:37.376681 3273 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 10 23:35:37.377786 containerd[1709]: time="2025-07-10T23:35:37.377043013Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 10 23:35:37.378919 kubelet[3273]: I0710 23:35:37.377602 3273 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 10 23:35:38.245632 kubelet[3273]: I0710 23:35:38.245591 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/06d6e237-aeec-47b8-bbb3-c6aba28c875e-xtables-lock\") pod \"kube-proxy-7pqbx\" (UID: \"06d6e237-aeec-47b8-bbb3-c6aba28c875e\") " pod="kube-system/kube-proxy-7pqbx"
Jul 10 23:35:38.245632 kubelet[3273]: I0710 23:35:38.245629 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06d6e237-aeec-47b8-bbb3-c6aba28c875e-lib-modules\") pod \"kube-proxy-7pqbx\" (UID: \"06d6e237-aeec-47b8-bbb3-c6aba28c875e\") " pod="kube-system/kube-proxy-7pqbx"
Jul 10 23:35:38.245632 kubelet[3273]: I0710 23:35:38.245650 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/06d6e237-aeec-47b8-bbb3-c6aba28c875e-kube-proxy\") pod \"kube-proxy-7pqbx\" (UID: \"06d6e237-aeec-47b8-bbb3-c6aba28c875e\") " pod="kube-system/kube-proxy-7pqbx"
Jul 10 23:35:38.245632 kubelet[3273]: I0710 23:35:38.245669 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcrmf\" (UniqueName: \"kubernetes.io/projected/06d6e237-aeec-47b8-bbb3-c6aba28c875e-kube-api-access-jcrmf\") pod \"kube-proxy-7pqbx\" (UID: \"06d6e237-aeec-47b8-bbb3-c6aba28c875e\") " pod="kube-system/kube-proxy-7pqbx"
Jul 10 23:35:38.248449 systemd[1]: Created slice kubepods-besteffort-pod06d6e237_aeec_47b8_bbb3_c6aba28c875e.slice - libcontainer container kubepods-besteffort-pod06d6e237_aeec_47b8_bbb3_c6aba28c875e.slice.
Jul 10 23:35:38.269100 systemd[1]: Created slice kubepods-burstable-pod849dc1f6_0a16_43de_abfe_42d0ab3043fa.slice - libcontainer container kubepods-burstable-pod849dc1f6_0a16_43de_abfe_42d0ab3043fa.slice.
Jul 10 23:35:38.346557 kubelet[3273]: I0710 23:35:38.346511 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/849dc1f6-0a16-43de-abfe-42d0ab3043fa-hubble-tls\") pod \"cilium-wpbg2\" (UID: \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\") " pod="kube-system/cilium-wpbg2"
Jul 10 23:35:38.346702 kubelet[3273]: I0710 23:35:38.346571 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-bpf-maps\") pod \"cilium-wpbg2\" (UID: \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\") " pod="kube-system/cilium-wpbg2"
Jul 10 23:35:38.346702 kubelet[3273]: I0710 23:35:38.346588 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-xtables-lock\") pod \"cilium-wpbg2\" (UID: \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\") " pod="kube-system/cilium-wpbg2"
Jul 10 23:35:38.346702 kubelet[3273]: I0710 23:35:38.346624 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-cilium-run\") pod \"cilium-wpbg2\" (UID: \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\") " pod="kube-system/cilium-wpbg2"
Jul 10 23:35:38.346702 kubelet[3273]: I0710 23:35:38.346640 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-lib-modules\") pod \"cilium-wpbg2\" (UID: \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\") " pod="kube-system/cilium-wpbg2"
Jul 10 23:35:38.346702 kubelet[3273]: I0710 23:35:38.346671 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-host-proc-sys-net\") pod \"cilium-wpbg2\" (UID: \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\") " pod="kube-system/cilium-wpbg2"
Jul 10 23:35:38.346702 kubelet[3273]: I0710 23:35:38.346687 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-cni-path\") pod \"cilium-wpbg2\" (UID: \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\") " pod="kube-system/cilium-wpbg2"
Jul 10 23:35:38.346895 kubelet[3273]: I0710 23:35:38.346708 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-hostproc\") pod \"cilium-wpbg2\" (UID: \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\") " pod="kube-system/cilium-wpbg2"
Jul 10 23:35:38.346895 kubelet[3273]: I0710 23:35:38.346736 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-cilium-cgroup\") pod \"cilium-wpbg2\" (UID: \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\") " pod="kube-system/cilium-wpbg2"
Jul 10 23:35:38.346895 kubelet[3273]: I0710 23:35:38.346751 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-etc-cni-netd\") pod \"cilium-wpbg2\" (UID: \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\") " pod="kube-system/cilium-wpbg2"
Jul 10 23:35:38.346895 kubelet[3273]: I0710 23:35:38.346777 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-host-proc-sys-kernel\") pod \"cilium-wpbg2\" (UID: \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\") " pod="kube-system/cilium-wpbg2"
Jul 10 23:35:38.346895 kubelet[3273]: I0710 23:35:38.346794 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/849dc1f6-0a16-43de-abfe-42d0ab3043fa-clustermesh-secrets\") pod \"cilium-wpbg2\" (UID: \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\") " pod="kube-system/cilium-wpbg2"
Jul 10 23:35:38.346895 kubelet[3273]: I0710 23:35:38.346811 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st5qq\" (UniqueName: \"kubernetes.io/projected/849dc1f6-0a16-43de-abfe-42d0ab3043fa-kube-api-access-st5qq\") pod \"cilium-wpbg2\" (UID: \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\") " pod="kube-system/cilium-wpbg2"
Jul 10 23:35:38.347069 kubelet[3273]: I0710 23:35:38.346828 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/849dc1f6-0a16-43de-abfe-42d0ab3043fa-cilium-config-path\") pod \"cilium-wpbg2\" (UID: \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\") " pod="kube-system/cilium-wpbg2"
Jul 10 23:35:38.480837 systemd[1]: Created slice kubepods-besteffort-podcf465242_16cd_44ed_bb91_b9a8bd821559.slice - libcontainer container kubepods-besteffort-podcf465242_16cd_44ed_bb91_b9a8bd821559.slice.
Jul 10 23:35:38.548026 kubelet[3273]: I0710 23:35:38.547987 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf465242-16cd-44ed-bb91-b9a8bd821559-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-hxh8q\" (UID: \"cf465242-16cd-44ed-bb91-b9a8bd821559\") " pod="kube-system/cilium-operator-6c4d7847fc-hxh8q"
Jul 10 23:35:38.549022 kubelet[3273]: I0710 23:35:38.548322 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czvc7\" (UniqueName: \"kubernetes.io/projected/cf465242-16cd-44ed-bb91-b9a8bd821559-kube-api-access-czvc7\") pod \"cilium-operator-6c4d7847fc-hxh8q\" (UID: \"cf465242-16cd-44ed-bb91-b9a8bd821559\") " pod="kube-system/cilium-operator-6c4d7847fc-hxh8q"
Jul 10 23:35:38.556164 containerd[1709]: time="2025-07-10T23:35:38.556127091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7pqbx,Uid:06d6e237-aeec-47b8-bbb3-c6aba28c875e,Namespace:kube-system,Attempt:0,}"
Jul 10 23:35:38.574153 containerd[1709]: time="2025-07-10T23:35:38.573943189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wpbg2,Uid:849dc1f6-0a16-43de-abfe-42d0ab3043fa,Namespace:kube-system,Attempt:0,}"
Jul 10 23:35:38.677416 containerd[1709]: time="2025-07-10T23:35:38.676846488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 23:35:38.677416 containerd[1709]: time="2025-07-10T23:35:38.676948328Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 23:35:38.677416 containerd[1709]: time="2025-07-10T23:35:38.676965128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 23:35:38.677416 containerd[1709]: time="2025-07-10T23:35:38.677058529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 23:35:38.684180 containerd[1709]: time="2025-07-10T23:35:38.683744551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 23:35:38.684180 containerd[1709]: time="2025-07-10T23:35:38.683822391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 23:35:38.684180 containerd[1709]: time="2025-07-10T23:35:38.683839711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 23:35:38.684848 containerd[1709]: time="2025-07-10T23:35:38.683966791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 23:35:38.698688 systemd[1]: Started cri-containerd-ef39b54c18fb53d8355dd113b20f8893f6f53d5acd73d2d5d8cd7bb11b10f2d1.scope - libcontainer container ef39b54c18fb53d8355dd113b20f8893f6f53d5acd73d2d5d8cd7bb11b10f2d1.
Jul 10 23:35:38.709616 systemd[1]: Started cri-containerd-b751e63fe3ba557c8780525808ed63d6ab273a96a35a9341f041eeff11b2ea2a.scope - libcontainer container b751e63fe3ba557c8780525808ed63d6ab273a96a35a9341f041eeff11b2ea2a.
Jul 10 23:35:38.742616 containerd[1709]: time="2025-07-10T23:35:38.742002222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wpbg2,Uid:849dc1f6-0a16-43de-abfe-42d0ab3043fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef39b54c18fb53d8355dd113b20f8893f6f53d5acd73d2d5d8cd7bb11b10f2d1\""
Jul 10 23:35:38.749649 containerd[1709]: time="2025-07-10T23:35:38.749610487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7pqbx,Uid:06d6e237-aeec-47b8-bbb3-c6aba28c875e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b751e63fe3ba557c8780525808ed63d6ab273a96a35a9341f041eeff11b2ea2a\""
Jul 10 23:35:38.755906 containerd[1709]: time="2025-07-10T23:35:38.755862588Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 10 23:35:38.756547 containerd[1709]: time="2025-07-10T23:35:38.756496390Z" level=info msg="CreateContainer within sandbox \"b751e63fe3ba557c8780525808ed63d6ab273a96a35a9341f041eeff11b2ea2a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 10 23:35:38.788552 containerd[1709]: time="2025-07-10T23:35:38.788479175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-hxh8q,Uid:cf465242-16cd-44ed-bb91-b9a8bd821559,Namespace:kube-system,Attempt:0,}"
Jul 10 23:35:38.845786 containerd[1709]: time="2025-07-10T23:35:38.845446442Z" level=info msg="CreateContainer within sandbox \"b751e63fe3ba557c8780525808ed63d6ab273a96a35a9341f041eeff11b2ea2a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"24537ff482d436c8af7fa40f965f918712c97ceb52459f7f400a37e9446b384a\""
Jul 10 23:35:38.847236 containerd[1709]: time="2025-07-10T23:35:38.847169968Z" level=info msg="StartContainer for \"24537ff482d436c8af7fa40f965f918712c97ceb52459f7f400a37e9446b384a\""
Jul 10 23:35:38.881605 systemd[1]: Started cri-containerd-24537ff482d436c8af7fa40f965f918712c97ceb52459f7f400a37e9446b384a.scope - libcontainer container 24537ff482d436c8af7fa40f965f918712c97ceb52459f7f400a37e9446b384a.
Jul 10 23:35:38.899794 containerd[1709]: time="2025-07-10T23:35:38.899657981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 23:35:38.899794 containerd[1709]: time="2025-07-10T23:35:38.899798701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 23:35:38.900112 containerd[1709]: time="2025-07-10T23:35:38.899888421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 23:35:38.900112 containerd[1709]: time="2025-07-10T23:35:38.900022542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 23:35:38.922627 systemd[1]: Started cri-containerd-11ef54599ab744822f3122af76e46b1a18bb7896b2032050ac23802d8f332c19.scope - libcontainer container 11ef54599ab744822f3122af76e46b1a18bb7896b2032050ac23802d8f332c19.
Jul 10 23:35:38.933203 containerd[1709]: time="2025-07-10T23:35:38.933105291Z" level=info msg="StartContainer for \"24537ff482d436c8af7fa40f965f918712c97ceb52459f7f400a37e9446b384a\" returns successfully"
Jul 10 23:35:38.974963 containerd[1709]: time="2025-07-10T23:35:38.974834028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-hxh8q,Uid:cf465242-16cd-44ed-bb91-b9a8bd821559,Namespace:kube-system,Attempt:0,} returns sandbox id \"11ef54599ab744822f3122af76e46b1a18bb7896b2032050ac23802d8f332c19\""
Jul 10 23:35:42.975264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount819261935.mount: Deactivated successfully.
Jul 10 23:35:45.277088 kubelet[3273]: I0710 23:35:45.276988 3273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7pqbx" podStartSLOduration=7.2769712779999995 podStartE2EDuration="7.276971278s" podCreationTimestamp="2025-07-10 23:35:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:35:39.615190414 +0000 UTC m=+6.203909827" watchObservedRunningTime="2025-07-10 23:35:45.276971278 +0000 UTC m=+11.865690651"
Jul 10 23:35:45.406720 containerd[1709]: time="2025-07-10T23:35:45.406631117Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 23:35:45.410509 containerd[1709]: time="2025-07-10T23:35:45.410228408Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Jul 10 23:35:45.419267 containerd[1709]: time="2025-07-10T23:35:45.419210316Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 23:35:45.421468 containerd[1709]: time="2025-07-10T23:35:45.421174562Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.665015813s"
Jul 10 23:35:45.421468 containerd[1709]: time="2025-07-10T23:35:45.421251322Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jul 10 23:35:45.423339 containerd[1709]: time="2025-07-10T23:35:45.423304089Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 10 23:35:45.425212 containerd[1709]: time="2025-07-10T23:35:45.425163694Z" level=info msg="CreateContainer within sandbox \"ef39b54c18fb53d8355dd113b20f8893f6f53d5acd73d2d5d8cd7bb11b10f2d1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 10 23:35:45.494246 containerd[1709]: time="2025-07-10T23:35:45.494197507Z" level=info msg="CreateContainer within sandbox \"ef39b54c18fb53d8355dd113b20f8893f6f53d5acd73d2d5d8cd7bb11b10f2d1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"aee2fffebaa52686c22520232fac29fad406842ae1de2e4b10ad19b8c4fb6511\""
Jul 10 23:35:45.495225 containerd[1709]: time="2025-07-10T23:35:45.495191110Z" level=info msg="StartContainer for \"aee2fffebaa52686c22520232fac29fad406842ae1de2e4b10ad19b8c4fb6511\""
Jul 10 23:35:45.522575 systemd[1]: Started cri-containerd-aee2fffebaa52686c22520232fac29fad406842ae1de2e4b10ad19b8c4fb6511.scope - libcontainer container aee2fffebaa52686c22520232fac29fad406842ae1de2e4b10ad19b8c4fb6511.
Jul 10 23:35:45.552060 containerd[1709]: time="2025-07-10T23:35:45.551581963Z" level=info msg="StartContainer for \"aee2fffebaa52686c22520232fac29fad406842ae1de2e4b10ad19b8c4fb6511\" returns successfully"
Jul 10 23:35:45.562607 systemd[1]: cri-containerd-aee2fffebaa52686c22520232fac29fad406842ae1de2e4b10ad19b8c4fb6511.scope: Deactivated successfully.
Jul 10 23:35:45.581835 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aee2fffebaa52686c22520232fac29fad406842ae1de2e4b10ad19b8c4fb6511-rootfs.mount: Deactivated successfully.
Jul 10 23:35:46.810442 containerd[1709]: time="2025-07-10T23:35:46.810328356Z" level=info msg="shim disconnected" id=aee2fffebaa52686c22520232fac29fad406842ae1de2e4b10ad19b8c4fb6511 namespace=k8s.io
Jul 10 23:35:46.810442 containerd[1709]: time="2025-07-10T23:35:46.810400756Z" level=warning msg="cleaning up after shim disconnected" id=aee2fffebaa52686c22520232fac29fad406842ae1de2e4b10ad19b8c4fb6511 namespace=k8s.io
Jul 10 23:35:46.810442 containerd[1709]: time="2025-07-10T23:35:46.810409836Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 23:35:47.631480 containerd[1709]: time="2025-07-10T23:35:47.631422762Z" level=info msg="CreateContainer within sandbox \"ef39b54c18fb53d8355dd113b20f8893f6f53d5acd73d2d5d8cd7bb11b10f2d1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 10 23:35:47.683678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4247985679.mount: Deactivated successfully.
Jul 10 23:35:47.776079 containerd[1709]: time="2025-07-10T23:35:47.775949007Z" level=info msg="CreateContainer within sandbox \"ef39b54c18fb53d8355dd113b20f8893f6f53d5acd73d2d5d8cd7bb11b10f2d1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3dc5851a7b50c562c2d967981cb902db81b8be904caefc2a01301237e2862583\""
Jul 10 23:35:47.776785 containerd[1709]: time="2025-07-10T23:35:47.776720489Z" level=info msg="StartContainer for \"3dc5851a7b50c562c2d967981cb902db81b8be904caefc2a01301237e2862583\""
Jul 10 23:35:47.806540 systemd[1]: Started cri-containerd-3dc5851a7b50c562c2d967981cb902db81b8be904caefc2a01301237e2862583.scope - libcontainer container 3dc5851a7b50c562c2d967981cb902db81b8be904caefc2a01301237e2862583.
Jul 10 23:35:47.836728 containerd[1709]: time="2025-07-10T23:35:47.836679314Z" level=info msg="StartContainer for \"3dc5851a7b50c562c2d967981cb902db81b8be904caefc2a01301237e2862583\" returns successfully"
Jul 10 23:35:47.847446 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 10 23:35:47.847662 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 10 23:35:47.848046 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jul 10 23:35:47.854122 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 10 23:35:47.854758 systemd[1]: cri-containerd-3dc5851a7b50c562c2d967981cb902db81b8be904caefc2a01301237e2862583.scope: Deactivated successfully.
Jul 10 23:35:47.889397 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 10 23:35:47.902234 containerd[1709]: time="2025-07-10T23:35:47.902096715Z" level=info msg="shim disconnected" id=3dc5851a7b50c562c2d967981cb902db81b8be904caefc2a01301237e2862583 namespace=k8s.io
Jul 10 23:35:47.902234 containerd[1709]: time="2025-07-10T23:35:47.902204275Z" level=warning msg="cleaning up after shim disconnected" id=3dc5851a7b50c562c2d967981cb902db81b8be904caefc2a01301237e2862583 namespace=k8s.io
Jul 10 23:35:47.902234 containerd[1709]: time="2025-07-10T23:35:47.902212116Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 23:35:48.634182 containerd[1709]: time="2025-07-10T23:35:48.634132407Z" level=info msg="CreateContainer within sandbox \"ef39b54c18fb53d8355dd113b20f8893f6f53d5acd73d2d5d8cd7bb11b10f2d1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 10 23:35:48.681038 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3dc5851a7b50c562c2d967981cb902db81b8be904caefc2a01301237e2862583-rootfs.mount: Deactivated successfully.
Jul 10 23:35:48.723930 containerd[1709]: time="2025-07-10T23:35:48.723813403Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 23:35:48.744934 containerd[1709]: time="2025-07-10T23:35:48.744841148Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Jul 10 23:35:48.769105 containerd[1709]: time="2025-07-10T23:35:48.769035902Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 23:35:48.769665 containerd[1709]: time="2025-07-10T23:35:48.769564344Z" level=info msg="CreateContainer within sandbox \"ef39b54c18fb53d8355dd113b20f8893f6f53d5acd73d2d5d8cd7bb11b10f2d1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4b7a97f06a728cd365c78f49cd15c8f5f1c5e1f1ba2608f00113296ded8f888d\""
Jul 10 23:35:48.772428 containerd[1709]: time="2025-07-10T23:35:48.770347467Z" level=info msg="StartContainer for \"4b7a97f06a728cd365c78f49cd15c8f5f1c5e1f1ba2608f00113296ded8f888d\""
Jul 10 23:35:48.772428 containerd[1709]: time="2025-07-10T23:35:48.770990188Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.347650179s"
Jul 10 23:35:48.772428 containerd[1709]: time="2025-07-10T23:35:48.771019509Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jul 10 23:35:48.774635 containerd[1709]: time="2025-07-10T23:35:48.774567359Z" level=info msg="CreateContainer within sandbox \"11ef54599ab744822f3122af76e46b1a18bb7896b2032050ac23802d8f332c19\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 10 23:35:48.810570 systemd[1]: Started cri-containerd-4b7a97f06a728cd365c78f49cd15c8f5f1c5e1f1ba2608f00113296ded8f888d.scope - libcontainer container 4b7a97f06a728cd365c78f49cd15c8f5f1c5e1f1ba2608f00113296ded8f888d.
Jul 10 23:35:48.836513 containerd[1709]: time="2025-07-10T23:35:48.836435030Z" level=info msg="CreateContainer within sandbox \"11ef54599ab744822f3122af76e46b1a18bb7896b2032050ac23802d8f332c19\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3008d49e27cd03d3943733d019e163fc3510945569c91c8f852002f10ef62e01\""
Jul 10 23:35:48.838772 containerd[1709]: time="2025-07-10T23:35:48.837279392Z" level=info msg="StartContainer for \"3008d49e27cd03d3943733d019e163fc3510945569c91c8f852002f10ef62e01\""
Jul 10 23:35:48.845714 systemd[1]: cri-containerd-4b7a97f06a728cd365c78f49cd15c8f5f1c5e1f1ba2608f00113296ded8f888d.scope: Deactivated successfully.
Jul 10 23:35:48.855417 containerd[1709]: time="2025-07-10T23:35:48.855327208Z" level=info msg="StartContainer for \"4b7a97f06a728cd365c78f49cd15c8f5f1c5e1f1ba2608f00113296ded8f888d\" returns successfully"
Jul 10 23:35:48.873574 systemd[1]: Started cri-containerd-3008d49e27cd03d3943733d019e163fc3510945569c91c8f852002f10ef62e01.scope - libcontainer container 3008d49e27cd03d3943733d019e163fc3510945569c91c8f852002f10ef62e01.
Jul 10 23:35:49.210590 containerd[1709]: time="2025-07-10T23:35:49.210522701Z" level=info msg="StartContainer for \"3008d49e27cd03d3943733d019e163fc3510945569c91c8f852002f10ef62e01\" returns successfully"
Jul 10 23:35:49.216493 containerd[1709]: time="2025-07-10T23:35:49.216440479Z" level=info msg="shim disconnected" id=4b7a97f06a728cd365c78f49cd15c8f5f1c5e1f1ba2608f00113296ded8f888d namespace=k8s.io
Jul 10 23:35:49.216659 containerd[1709]: time="2025-07-10T23:35:49.216643800Z" level=warning msg="cleaning up after shim disconnected" id=4b7a97f06a728cd365c78f49cd15c8f5f1c5e1f1ba2608f00113296ded8f888d namespace=k8s.io
Jul 10 23:35:49.216857 containerd[1709]: time="2025-07-10T23:35:49.216746760Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 23:35:49.643857 containerd[1709]: time="2025-07-10T23:35:49.643775754Z" level=info msg="CreateContainer within sandbox \"ef39b54c18fb53d8355dd113b20f8893f6f53d5acd73d2d5d8cd7bb11b10f2d1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 10 23:35:49.684711 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b7a97f06a728cd365c78f49cd15c8f5f1c5e1f1ba2608f00113296ded8f888d-rootfs.mount: Deactivated successfully.
Jul 10 23:35:49.699314 containerd[1709]: time="2025-07-10T23:35:49.699222764Z" level=info msg="CreateContainer within sandbox \"ef39b54c18fb53d8355dd113b20f8893f6f53d5acd73d2d5d8cd7bb11b10f2d1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fa45e6fd903a19d3f3414e84f8d9b3577785b472a0a6f81cf8b68539fa6328fb\""
Jul 10 23:35:49.700017 containerd[1709]: time="2025-07-10T23:35:49.699957247Z" level=info msg="StartContainer for \"fa45e6fd903a19d3f3414e84f8d9b3577785b472a0a6f81cf8b68539fa6328fb\""
Jul 10 23:35:49.748555 systemd[1]: Started cri-containerd-fa45e6fd903a19d3f3414e84f8d9b3577785b472a0a6f81cf8b68539fa6328fb.scope - libcontainer container fa45e6fd903a19d3f3414e84f8d9b3577785b472a0a6f81cf8b68539fa6328fb.
Jul 10 23:35:49.783823 kubelet[3273]: I0710 23:35:49.783328 3273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-hxh8q" podStartSLOduration=1.990090753 podStartE2EDuration="11.783309063s" podCreationTimestamp="2025-07-10 23:35:38 +0000 UTC" firstStartedPulling="2025-07-10 23:35:38.979033202 +0000 UTC m=+5.567752575" lastFinishedPulling="2025-07-10 23:35:48.772251472 +0000 UTC m=+15.360970885" observedRunningTime="2025-07-10 23:35:49.689816535 +0000 UTC m=+16.278535988" watchObservedRunningTime="2025-07-10 23:35:49.783309063 +0000 UTC m=+16.372028476"
Jul 10 23:35:49.801046 systemd[1]: cri-containerd-fa45e6fd903a19d3f3414e84f8d9b3577785b472a0a6f81cf8b68539fa6328fb.scope: Deactivated successfully.
Jul 10 23:35:49.806540 containerd[1709]: time="2025-07-10T23:35:49.806348934Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod849dc1f6_0a16_43de_abfe_42d0ab3043fa.slice/cri-containerd-fa45e6fd903a19d3f3414e84f8d9b3577785b472a0a6f81cf8b68539fa6328fb.scope/memory.events\": no such file or directory"
Jul 10 23:35:49.811192 containerd[1709]: time="2025-07-10T23:35:49.810889388Z" level=info msg="StartContainer for \"fa45e6fd903a19d3f3414e84f8d9b3577785b472a0a6f81cf8b68539fa6328fb\" returns successfully"
Jul 10 23:35:49.838297 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa45e6fd903a19d3f3414e84f8d9b3577785b472a0a6f81cf8b68539fa6328fb-rootfs.mount: Deactivated successfully.
Jul 10 23:35:49.859066 containerd[1709]: time="2025-07-10T23:35:49.858991696Z" level=info msg="shim disconnected" id=fa45e6fd903a19d3f3414e84f8d9b3577785b472a0a6f81cf8b68539fa6328fb namespace=k8s.io
Jul 10 23:35:49.859696 containerd[1709]: time="2025-07-10T23:35:49.859420097Z" level=warning msg="cleaning up after shim disconnected" id=fa45e6fd903a19d3f3414e84f8d9b3577785b472a0a6f81cf8b68539fa6328fb namespace=k8s.io
Jul 10 23:35:49.859696 containerd[1709]: time="2025-07-10T23:35:49.859438537Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 23:35:50.650473 containerd[1709]: time="2025-07-10T23:35:50.650360531Z" level=info msg="CreateContainer within sandbox \"ef39b54c18fb53d8355dd113b20f8893f6f53d5acd73d2d5d8cd7bb11b10f2d1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 10 23:35:50.688852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2769883426.mount: Deactivated successfully.
Jul 10 23:35:50.712360 containerd[1709]: time="2025-07-10T23:35:50.712222081Z" level=info msg="CreateContainer within sandbox \"ef39b54c18fb53d8355dd113b20f8893f6f53d5acd73d2d5d8cd7bb11b10f2d1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a87d226a87e4a37c51f5eb0561d4ba53d223a94d0886bfc469d458bece9bd12d\""
Jul 10 23:35:50.713824 containerd[1709]: time="2025-07-10T23:35:50.713793806Z" level=info msg="StartContainer for \"a87d226a87e4a37c51f5eb0561d4ba53d223a94d0886bfc469d458bece9bd12d\""
Jul 10 23:35:50.753590 systemd[1]: Started cri-containerd-a87d226a87e4a37c51f5eb0561d4ba53d223a94d0886bfc469d458bece9bd12d.scope - libcontainer container a87d226a87e4a37c51f5eb0561d4ba53d223a94d0886bfc469d458bece9bd12d.
Jul 10 23:35:50.788429 containerd[1709]: time="2025-07-10T23:35:50.788332035Z" level=info msg="StartContainer for \"a87d226a87e4a37c51f5eb0561d4ba53d223a94d0886bfc469d458bece9bd12d\" returns successfully"
Jul 10 23:35:50.881938 kubelet[3273]: I0710 23:35:50.881666 3273 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 10 23:35:50.931043 systemd[1]: Created slice kubepods-burstable-podcaa56ae6_6453_432f_8aa9_7bbad6823afd.slice - libcontainer container kubepods-burstable-podcaa56ae6_6453_432f_8aa9_7bbad6823afd.slice.
Jul 10 23:35:50.936005 kubelet[3273]: I0710 23:35:50.935512 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9qkw\" (UniqueName: \"kubernetes.io/projected/caa56ae6-6453-432f-8aa9-7bbad6823afd-kube-api-access-n9qkw\") pod \"coredns-668d6bf9bc-cgpsz\" (UID: \"caa56ae6-6453-432f-8aa9-7bbad6823afd\") " pod="kube-system/coredns-668d6bf9bc-cgpsz"
Jul 10 23:35:50.936005 kubelet[3273]: I0710 23:35:50.935546 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/caa56ae6-6453-432f-8aa9-7bbad6823afd-config-volume\") pod \"coredns-668d6bf9bc-cgpsz\" (UID: \"caa56ae6-6453-432f-8aa9-7bbad6823afd\") " pod="kube-system/coredns-668d6bf9bc-cgpsz"
Jul 10 23:35:50.951999 systemd[1]: Created slice kubepods-burstable-pod9a0867dc_7fca_4f7c_bfb9_316dc34bd2b7.slice - libcontainer container kubepods-burstable-pod9a0867dc_7fca_4f7c_bfb9_316dc34bd2b7.slice.
Jul 10 23:35:51.036421 kubelet[3273]: I0710 23:35:51.036284 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a0867dc-7fca-4f7c-bfb9-316dc34bd2b7-config-volume\") pod \"coredns-668d6bf9bc-r5wgz\" (UID: \"9a0867dc-7fca-4f7c-bfb9-316dc34bd2b7\") " pod="kube-system/coredns-668d6bf9bc-r5wgz"
Jul 10 23:35:51.037191 kubelet[3273]: I0710 23:35:51.037084 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcj92\" (UniqueName: \"kubernetes.io/projected/9a0867dc-7fca-4f7c-bfb9-316dc34bd2b7-kube-api-access-zcj92\") pod \"coredns-668d6bf9bc-r5wgz\" (UID: \"9a0867dc-7fca-4f7c-bfb9-316dc34bd2b7\") " pod="kube-system/coredns-668d6bf9bc-r5wgz"
Jul 10 23:35:51.247896 containerd[1709]: time="2025-07-10T23:35:51.247742814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cgpsz,Uid:caa56ae6-6453-432f-8aa9-7bbad6823afd,Namespace:kube-system,Attempt:0,}"
Jul 10 23:35:51.271202 containerd[1709]: time="2025-07-10T23:35:51.271163488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r5wgz,Uid:9a0867dc-7fca-4f7c-bfb9-316dc34bd2b7,Namespace:kube-system,Attempt:0,}"
Jul 10 23:35:51.671997 kubelet[3273]: I0710 23:35:51.671639 3273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wpbg2" podStartSLOduration=7.002485157 podStartE2EDuration="13.671619543s" podCreationTimestamp="2025-07-10 23:35:38 +0000 UTC" firstStartedPulling="2025-07-10 23:35:38.7534563 +0000 UTC m=+5.342175713" lastFinishedPulling="2025-07-10 23:35:45.422590726 +0000 UTC m=+12.011310099" observedRunningTime="2025-07-10 23:35:51.670155298 +0000 UTC m=+18.258874711" watchObservedRunningTime="2025-07-10 23:35:51.671619543 +0000 UTC m=+18.260338956"
Jul 10 23:35:51.688530 systemd[1]: run-containerd-runc-k8s.io-a87d226a87e4a37c51f5eb0561d4ba53d223a94d0886bfc469d458bece9bd12d-runc.shMWnW.mount: Deactivated successfully.
Jul 10 23:35:52.979539 systemd-networkd[1448]: cilium_host: Link UP
Jul 10 23:35:52.979660 systemd-networkd[1448]: cilium_net: Link UP
Jul 10 23:35:52.981908 systemd-networkd[1448]: cilium_net: Gained carrier
Jul 10 23:35:52.982093 systemd-networkd[1448]: cilium_host: Gained carrier
Jul 10 23:35:53.102642 systemd-networkd[1448]: cilium_vxlan: Link UP
Jul 10 23:35:53.102650 systemd-networkd[1448]: cilium_vxlan: Gained carrier
Jul 10 23:35:53.339558 systemd-networkd[1448]: cilium_host: Gained IPv6LL
Jul 10 23:35:53.448426 kernel: NET: Registered PF_ALG protocol family
Jul 10 23:35:53.732576 systemd-networkd[1448]: cilium_net: Gained IPv6LL
Jul 10 23:35:54.232938 systemd-networkd[1448]: lxc_health: Link UP
Jul 10 23:35:54.233171 systemd-networkd[1448]: lxc_health: Gained carrier
Jul 10 23:35:54.394597 systemd-networkd[1448]: lxc1094512beddf: Link UP
Jul 10 23:35:54.400405 kernel: eth0: renamed from tmp51662
Jul 10 23:35:54.407280 systemd-networkd[1448]: lxc1094512beddf: Gained carrier
Jul 10 23:35:54.413072 systemd-networkd[1448]: lxcc5a1392a83cd: Link UP
Jul 10 23:35:54.427424 kernel: eth0: renamed from tmp4f0cc
Jul 10 23:35:54.435987 systemd-networkd[1448]: lxcc5a1392a83cd: Gained carrier
Jul 10 23:35:54.947543 systemd-networkd[1448]: cilium_vxlan: Gained IPv6LL
Jul 10 23:35:55.652286 systemd-networkd[1448]: lxc_health: Gained IPv6LL
Jul 10 23:35:55.715529 systemd-networkd[1448]: lxcc5a1392a83cd: Gained IPv6LL
Jul 10 23:35:55.907568 systemd-networkd[1448]: lxc1094512beddf: Gained IPv6LL
Jul 10 23:35:58.240197 containerd[1709]: time="2025-07-10T23:35:58.240097208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 23:35:58.242404 containerd[1709]: time="2025-07-10T23:35:58.241974374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 23:35:58.242404 containerd[1709]: time="2025-07-10T23:35:58.242122374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 23:35:58.243023 containerd[1709]: time="2025-07-10T23:35:58.242582495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 23:35:58.249846 containerd[1709]: time="2025-07-10T23:35:58.249452917Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 23:35:58.249846 containerd[1709]: time="2025-07-10T23:35:58.249513477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 23:35:58.249846 containerd[1709]: time="2025-07-10T23:35:58.249528877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 23:35:58.249846 containerd[1709]: time="2025-07-10T23:35:58.249606237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 23:35:58.295474 systemd[1]: Started cri-containerd-4f0cc11baf6fbf5c76d85e32196e91363700a01c0a31f53b3da75df8be00f717.scope - libcontainer container 4f0cc11baf6fbf5c76d85e32196e91363700a01c0a31f53b3da75df8be00f717.
Jul 10 23:35:58.304862 systemd[1]: Started cri-containerd-51662e203fd5b425ac0173b91cc8ef21f7001cf01745b5d2999027fe145715d6.scope - libcontainer container 51662e203fd5b425ac0173b91cc8ef21f7001cf01745b5d2999027fe145715d6.
Jul 10 23:35:58.366083 containerd[1709]: time="2025-07-10T23:35:58.365328920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r5wgz,Uid:9a0867dc-7fca-4f7c-bfb9-316dc34bd2b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f0cc11baf6fbf5c76d85e32196e91363700a01c0a31f53b3da75df8be00f717\"" Jul 10 23:35:58.375597 containerd[1709]: time="2025-07-10T23:35:58.375422272Z" level=info msg="CreateContainer within sandbox \"4f0cc11baf6fbf5c76d85e32196e91363700a01c0a31f53b3da75df8be00f717\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 23:35:58.390338 containerd[1709]: time="2025-07-10T23:35:58.390238518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cgpsz,Uid:caa56ae6-6453-432f-8aa9-7bbad6823afd,Namespace:kube-system,Attempt:0,} returns sandbox id \"51662e203fd5b425ac0173b91cc8ef21f7001cf01745b5d2999027fe145715d6\"" Jul 10 23:35:58.395258 containerd[1709]: time="2025-07-10T23:35:58.395185974Z" level=info msg="CreateContainer within sandbox \"51662e203fd5b425ac0173b91cc8ef21f7001cf01745b5d2999027fe145715d6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 23:35:58.495232 containerd[1709]: time="2025-07-10T23:35:58.495062007Z" level=info msg="CreateContainer within sandbox \"51662e203fd5b425ac0173b91cc8ef21f7001cf01745b5d2999027fe145715d6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1e105865e9389b4a5a114fbeb0d19aea8f60ef67de5774ce3987775962849235\"" Jul 10 23:35:58.497788 containerd[1709]: time="2025-07-10T23:35:58.496745772Z" level=info msg="StartContainer for \"1e105865e9389b4a5a114fbeb0d19aea8f60ef67de5774ce3987775962849235\"" Jul 10 23:35:58.510757 containerd[1709]: time="2025-07-10T23:35:58.510709576Z" level=info msg="CreateContainer within sandbox \"4f0cc11baf6fbf5c76d85e32196e91363700a01c0a31f53b3da75df8be00f717\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"c890fc77815b464783611c83271334d5d11e91b07c916040930e38f368c63d05\"" Jul 10 23:35:58.512261 containerd[1709]: time="2025-07-10T23:35:58.512224100Z" level=info msg="StartContainer for \"c890fc77815b464783611c83271334d5d11e91b07c916040930e38f368c63d05\"" Jul 10 23:35:58.532593 systemd[1]: Started cri-containerd-1e105865e9389b4a5a114fbeb0d19aea8f60ef67de5774ce3987775962849235.scope - libcontainer container 1e105865e9389b4a5a114fbeb0d19aea8f60ef67de5774ce3987775962849235. Jul 10 23:35:58.557755 systemd[1]: Started cri-containerd-c890fc77815b464783611c83271334d5d11e91b07c916040930e38f368c63d05.scope - libcontainer container c890fc77815b464783611c83271334d5d11e91b07c916040930e38f368c63d05. Jul 10 23:35:58.589837 containerd[1709]: time="2025-07-10T23:35:58.589752583Z" level=info msg="StartContainer for \"1e105865e9389b4a5a114fbeb0d19aea8f60ef67de5774ce3987775962849235\" returns successfully" Jul 10 23:35:58.600966 containerd[1709]: time="2025-07-10T23:35:58.600744818Z" level=info msg="StartContainer for \"c890fc77815b464783611c83271334d5d11e91b07c916040930e38f368c63d05\" returns successfully" Jul 10 23:35:58.715984 kubelet[3273]: I0710 23:35:58.715922 3273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-r5wgz" podStartSLOduration=20.715900379 podStartE2EDuration="20.715900379s" podCreationTimestamp="2025-07-10 23:35:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:35:58.699195806 +0000 UTC m=+25.287915219" watchObservedRunningTime="2025-07-10 23:35:58.715900379 +0000 UTC m=+25.304619792" Jul 10 23:35:58.744584 kubelet[3273]: I0710 23:35:58.744300 3273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cgpsz" podStartSLOduration=20.744280748 podStartE2EDuration="20.744280748s" podCreationTimestamp="2025-07-10 23:35:38 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:35:58.716543181 +0000 UTC m=+25.305262554" watchObservedRunningTime="2025-07-10 23:35:58.744280748 +0000 UTC m=+25.333000161" Jul 10 23:35:59.248709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount553897289.mount: Deactivated successfully. Jul 10 23:36:01.370517 kubelet[3273]: I0710 23:36:01.370058 3273 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 23:37:37.910722 systemd[1]: Started sshd@7-10.200.20.37:22-10.200.16.10:40552.service - OpenSSH per-connection server daemon (10.200.16.10:40552). Jul 10 23:37:38.371277 sshd[4662]: Accepted publickey for core from 10.200.16.10 port 40552 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo Jul 10 23:37:38.373004 sshd-session[4662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:37:38.378590 systemd-logind[1690]: New session 10 of user core. Jul 10 23:37:38.385616 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 10 23:37:38.789145 sshd[4664]: Connection closed by 10.200.16.10 port 40552 Jul 10 23:37:38.789977 sshd-session[4662]: pam_unix(sshd:session): session closed for user core Jul 10 23:37:38.794060 systemd[1]: sshd@7-10.200.20.37:22-10.200.16.10:40552.service: Deactivated successfully. Jul 10 23:37:38.796884 systemd[1]: session-10.scope: Deactivated successfully. Jul 10 23:37:38.797839 systemd-logind[1690]: Session 10 logged out. Waiting for processes to exit. Jul 10 23:37:38.798927 systemd-logind[1690]: Removed session 10. Jul 10 23:37:43.884435 systemd[1]: Started sshd@8-10.200.20.37:22-10.200.16.10:51540.service - OpenSSH per-connection server daemon (10.200.16.10:51540). 
Jul 10 23:37:44.382999 sshd[4679]: Accepted publickey for core from 10.200.16.10 port 51540 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo Jul 10 23:37:44.384884 sshd-session[4679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:37:44.389845 systemd-logind[1690]: New session 11 of user core. Jul 10 23:37:44.393563 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 10 23:37:44.816496 sshd[4681]: Connection closed by 10.200.16.10 port 51540 Jul 10 23:37:44.817082 sshd-session[4679]: pam_unix(sshd:session): session closed for user core Jul 10 23:37:44.820828 systemd[1]: sshd@8-10.200.20.37:22-10.200.16.10:51540.service: Deactivated successfully. Jul 10 23:37:44.823358 systemd[1]: session-11.scope: Deactivated successfully. Jul 10 23:37:44.824340 systemd-logind[1690]: Session 11 logged out. Waiting for processes to exit. Jul 10 23:37:44.825652 systemd-logind[1690]: Removed session 11. Jul 10 23:37:49.911202 systemd[1]: Started sshd@9-10.200.20.37:22-10.200.16.10:51736.service - OpenSSH per-connection server daemon (10.200.16.10:51736). Jul 10 23:37:50.413612 sshd[4693]: Accepted publickey for core from 10.200.16.10 port 51736 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo Jul 10 23:37:50.415119 sshd-session[4693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:37:50.421554 systemd-logind[1690]: New session 12 of user core. Jul 10 23:37:50.424576 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 10 23:37:50.845878 sshd[4695]: Connection closed by 10.200.16.10 port 51736 Jul 10 23:37:50.846587 sshd-session[4693]: pam_unix(sshd:session): session closed for user core Jul 10 23:37:50.849807 systemd-logind[1690]: Session 12 logged out. Waiting for processes to exit. Jul 10 23:37:50.850628 systemd[1]: sshd@9-10.200.20.37:22-10.200.16.10:51736.service: Deactivated successfully. 
Jul 10 23:37:50.852253 systemd[1]: session-12.scope: Deactivated successfully. Jul 10 23:37:50.854962 systemd-logind[1690]: Removed session 12. Jul 10 23:37:55.944039 systemd[1]: Started sshd@10-10.200.20.37:22-10.200.16.10:51744.service - OpenSSH per-connection server daemon (10.200.16.10:51744). Jul 10 23:37:56.441315 sshd[4708]: Accepted publickey for core from 10.200.16.10 port 51744 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo Jul 10 23:37:56.442792 sshd-session[4708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:37:56.447583 systemd-logind[1690]: New session 13 of user core. Jul 10 23:37:56.455570 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 10 23:37:56.884078 sshd[4710]: Connection closed by 10.200.16.10 port 51744 Jul 10 23:37:56.884711 sshd-session[4708]: pam_unix(sshd:session): session closed for user core Jul 10 23:37:56.888845 systemd[1]: sshd@10-10.200.20.37:22-10.200.16.10:51744.service: Deactivated successfully. Jul 10 23:37:56.891585 systemd[1]: session-13.scope: Deactivated successfully. Jul 10 23:37:56.892899 systemd-logind[1690]: Session 13 logged out. Waiting for processes to exit. Jul 10 23:37:56.894024 systemd-logind[1690]: Removed session 13. Jul 10 23:37:56.982659 systemd[1]: Started sshd@11-10.200.20.37:22-10.200.16.10:51748.service - OpenSSH per-connection server daemon (10.200.16.10:51748). Jul 10 23:37:57.440201 sshd[4723]: Accepted publickey for core from 10.200.16.10 port 51748 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo Jul 10 23:37:57.442806 sshd-session[4723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:37:57.447997 systemd-logind[1690]: New session 14 of user core. Jul 10 23:37:57.456573 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jul 10 23:37:57.876522 sshd[4726]: Connection closed by 10.200.16.10 port 51748 Jul 10 23:37:57.878104 sshd-session[4723]: pam_unix(sshd:session): session closed for user core Jul 10 23:37:57.884927 systemd[1]: sshd@11-10.200.20.37:22-10.200.16.10:51748.service: Deactivated successfully. Jul 10 23:37:57.884963 systemd-logind[1690]: Session 14 logged out. Waiting for processes to exit. Jul 10 23:37:57.887765 systemd[1]: session-14.scope: Deactivated successfully. Jul 10 23:37:57.890142 systemd-logind[1690]: Removed session 14. Jul 10 23:37:57.976686 systemd[1]: Started sshd@12-10.200.20.37:22-10.200.16.10:51754.service - OpenSSH per-connection server daemon (10.200.16.10:51754). Jul 10 23:37:58.473490 sshd[4736]: Accepted publickey for core from 10.200.16.10 port 51754 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo Jul 10 23:37:58.475111 sshd-session[4736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:37:58.479520 systemd-logind[1690]: New session 15 of user core. Jul 10 23:37:58.494731 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 10 23:37:58.904438 sshd[4738]: Connection closed by 10.200.16.10 port 51754 Jul 10 23:37:58.904970 sshd-session[4736]: pam_unix(sshd:session): session closed for user core Jul 10 23:37:58.909022 systemd[1]: sshd@12-10.200.20.37:22-10.200.16.10:51754.service: Deactivated successfully. Jul 10 23:37:58.911070 systemd[1]: session-15.scope: Deactivated successfully. Jul 10 23:37:58.912734 systemd-logind[1690]: Session 15 logged out. Waiting for processes to exit. Jul 10 23:37:58.914057 systemd-logind[1690]: Removed session 15. Jul 10 23:38:03.996778 systemd[1]: Started sshd@13-10.200.20.37:22-10.200.16.10:56238.service - OpenSSH per-connection server daemon (10.200.16.10:56238). 
Jul 10 23:38:04.454208 sshd[4749]: Accepted publickey for core from 10.200.16.10 port 56238 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo Jul 10 23:38:04.455855 sshd-session[4749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:38:04.460599 systemd-logind[1690]: New session 16 of user core. Jul 10 23:38:04.474588 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 10 23:38:04.850189 sshd[4751]: Connection closed by 10.200.16.10 port 56238 Jul 10 23:38:04.850825 sshd-session[4749]: pam_unix(sshd:session): session closed for user core Jul 10 23:38:04.854903 systemd-logind[1690]: Session 16 logged out. Waiting for processes to exit. Jul 10 23:38:04.855177 systemd[1]: sshd@13-10.200.20.37:22-10.200.16.10:56238.service: Deactivated successfully. Jul 10 23:38:04.857275 systemd[1]: session-16.scope: Deactivated successfully. Jul 10 23:38:04.858859 systemd-logind[1690]: Removed session 16. Jul 10 23:38:09.939658 systemd[1]: Started sshd@14-10.200.20.37:22-10.200.16.10:57866.service - OpenSSH per-connection server daemon (10.200.16.10:57866). Jul 10 23:38:10.421040 sshd[4765]: Accepted publickey for core from 10.200.16.10 port 57866 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo Jul 10 23:38:10.422564 sshd-session[4765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:38:10.429296 systemd-logind[1690]: New session 17 of user core. Jul 10 23:38:10.432567 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 10 23:38:10.825193 sshd[4767]: Connection closed by 10.200.16.10 port 57866 Jul 10 23:38:10.825089 sshd-session[4765]: pam_unix(sshd:session): session closed for user core Jul 10 23:38:10.828462 systemd-logind[1690]: Session 17 logged out. Waiting for processes to exit. Jul 10 23:38:10.828604 systemd[1]: sshd@14-10.200.20.37:22-10.200.16.10:57866.service: Deactivated successfully. 
Jul 10 23:38:10.831207 systemd[1]: session-17.scope: Deactivated successfully. Jul 10 23:38:10.834254 systemd-logind[1690]: Removed session 17. Jul 10 23:38:10.916705 systemd[1]: Started sshd@15-10.200.20.37:22-10.200.16.10:57876.service - OpenSSH per-connection server daemon (10.200.16.10:57876). Jul 10 23:38:11.396575 sshd[4779]: Accepted publickey for core from 10.200.16.10 port 57876 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo Jul 10 23:38:11.398130 sshd-session[4779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:38:11.403020 systemd-logind[1690]: New session 18 of user core. Jul 10 23:38:11.407558 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 10 23:38:11.832405 sshd[4781]: Connection closed by 10.200.16.10 port 57876 Jul 10 23:38:11.832976 sshd-session[4779]: pam_unix(sshd:session): session closed for user core Jul 10 23:38:11.837462 systemd[1]: sshd@15-10.200.20.37:22-10.200.16.10:57876.service: Deactivated successfully. Jul 10 23:38:11.839858 systemd[1]: session-18.scope: Deactivated successfully. Jul 10 23:38:11.842176 systemd-logind[1690]: Session 18 logged out. Waiting for processes to exit. Jul 10 23:38:11.843677 systemd-logind[1690]: Removed session 18. Jul 10 23:38:11.925646 systemd[1]: Started sshd@16-10.200.20.37:22-10.200.16.10:57892.service - OpenSSH per-connection server daemon (10.200.16.10:57892). Jul 10 23:38:12.405894 sshd[4790]: Accepted publickey for core from 10.200.16.10 port 57892 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo Jul 10 23:38:12.407352 sshd-session[4790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:38:12.412490 systemd-logind[1690]: New session 19 of user core. Jul 10 23:38:12.415545 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jul 10 23:38:13.396103 sshd[4792]: Connection closed by 10.200.16.10 port 57892 Jul 10 23:38:13.397161 sshd-session[4790]: pam_unix(sshd:session): session closed for user core Jul 10 23:38:13.402181 systemd-logind[1690]: Session 19 logged out. Waiting for processes to exit. Jul 10 23:38:13.402963 systemd[1]: sshd@16-10.200.20.37:22-10.200.16.10:57892.service: Deactivated successfully. Jul 10 23:38:13.405647 systemd[1]: session-19.scope: Deactivated successfully. Jul 10 23:38:13.407121 systemd-logind[1690]: Removed session 19. Jul 10 23:38:13.490476 systemd[1]: Started sshd@17-10.200.20.37:22-10.200.16.10:57906.service - OpenSSH per-connection server daemon (10.200.16.10:57906). Jul 10 23:38:13.978165 sshd[4809]: Accepted publickey for core from 10.200.16.10 port 57906 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo Jul 10 23:38:13.979766 sshd-session[4809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:38:13.985396 systemd-logind[1690]: New session 20 of user core. Jul 10 23:38:13.990581 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 10 23:38:14.501925 sshd[4811]: Connection closed by 10.200.16.10 port 57906 Jul 10 23:38:14.502642 sshd-session[4809]: pam_unix(sshd:session): session closed for user core Jul 10 23:38:14.506596 systemd[1]: sshd@17-10.200.20.37:22-10.200.16.10:57906.service: Deactivated successfully. Jul 10 23:38:14.509324 systemd[1]: session-20.scope: Deactivated successfully. Jul 10 23:38:14.511123 systemd-logind[1690]: Session 20 logged out. Waiting for processes to exit. Jul 10 23:38:14.512566 systemd-logind[1690]: Removed session 20. Jul 10 23:38:14.593399 systemd[1]: Started sshd@18-10.200.20.37:22-10.200.16.10:57914.service - OpenSSH per-connection server daemon (10.200.16.10:57914). 
Jul 10 23:38:15.093307 sshd[4821]: Accepted publickey for core from 10.200.16.10 port 57914 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo Jul 10 23:38:15.094931 sshd-session[4821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:38:15.100496 systemd-logind[1690]: New session 21 of user core. Jul 10 23:38:15.107565 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 10 23:38:15.525658 sshd[4823]: Connection closed by 10.200.16.10 port 57914 Jul 10 23:38:15.526226 sshd-session[4821]: pam_unix(sshd:session): session closed for user core Jul 10 23:38:15.530466 systemd-logind[1690]: Session 21 logged out. Waiting for processes to exit. Jul 10 23:38:15.530710 systemd[1]: sshd@18-10.200.20.37:22-10.200.16.10:57914.service: Deactivated successfully. Jul 10 23:38:15.532328 systemd[1]: session-21.scope: Deactivated successfully. Jul 10 23:38:15.535583 systemd-logind[1690]: Removed session 21. Jul 10 23:38:20.613332 systemd[1]: Started sshd@19-10.200.20.37:22-10.200.16.10:55218.service - OpenSSH per-connection server daemon (10.200.16.10:55218). Jul 10 23:38:21.097296 sshd[4837]: Accepted publickey for core from 10.200.16.10 port 55218 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo Jul 10 23:38:21.098850 sshd-session[4837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:38:21.103460 systemd-logind[1690]: New session 22 of user core. Jul 10 23:38:21.108534 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 10 23:38:21.499774 sshd[4839]: Connection closed by 10.200.16.10 port 55218 Jul 10 23:38:21.500612 sshd-session[4837]: pam_unix(sshd:session): session closed for user core Jul 10 23:38:21.505066 systemd[1]: sshd@19-10.200.20.37:22-10.200.16.10:55218.service: Deactivated successfully. Jul 10 23:38:21.507699 systemd[1]: session-22.scope: Deactivated successfully. Jul 10 23:38:21.509218 systemd-logind[1690]: Session 22 logged out. 
Waiting for processes to exit. Jul 10 23:38:21.511347 systemd-logind[1690]: Removed session 22. Jul 10 23:38:26.597665 systemd[1]: Started sshd@20-10.200.20.37:22-10.200.16.10:55226.service - OpenSSH per-connection server daemon (10.200.16.10:55226). Jul 10 23:38:27.096009 sshd[4851]: Accepted publickey for core from 10.200.16.10 port 55226 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo Jul 10 23:38:27.097524 sshd-session[4851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:38:27.101899 systemd-logind[1690]: New session 23 of user core. Jul 10 23:38:27.108579 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 10 23:38:27.522687 sshd[4853]: Connection closed by 10.200.16.10 port 55226 Jul 10 23:38:27.523634 sshd-session[4851]: pam_unix(sshd:session): session closed for user core Jul 10 23:38:27.527438 systemd[1]: sshd@20-10.200.20.37:22-10.200.16.10:55226.service: Deactivated successfully. Jul 10 23:38:27.529255 systemd[1]: session-23.scope: Deactivated successfully. Jul 10 23:38:27.531186 systemd-logind[1690]: Session 23 logged out. Waiting for processes to exit. Jul 10 23:38:27.532302 systemd-logind[1690]: Removed session 23. Jul 10 23:38:32.618055 systemd[1]: Started sshd@21-10.200.20.37:22-10.200.16.10:49466.service - OpenSSH per-connection server daemon (10.200.16.10:49466). Jul 10 23:38:33.112734 sshd[4866]: Accepted publickey for core from 10.200.16.10 port 49466 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo Jul 10 23:38:33.114293 sshd-session[4866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:38:33.118981 systemd-logind[1690]: New session 24 of user core. Jul 10 23:38:33.126577 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jul 10 23:38:33.546126 sshd[4868]: Connection closed by 10.200.16.10 port 49466 Jul 10 23:38:33.546000 sshd-session[4866]: pam_unix(sshd:session): session closed for user core Jul 10 23:38:33.550069 systemd[1]: sshd@21-10.200.20.37:22-10.200.16.10:49466.service: Deactivated successfully. Jul 10 23:38:33.553041 systemd[1]: session-24.scope: Deactivated successfully. Jul 10 23:38:33.554005 systemd-logind[1690]: Session 24 logged out. Waiting for processes to exit. Jul 10 23:38:33.555107 systemd-logind[1690]: Removed session 24. Jul 10 23:38:33.647206 systemd[1]: Started sshd@22-10.200.20.37:22-10.200.16.10:49472.service - OpenSSH per-connection server daemon (10.200.16.10:49472). Jul 10 23:38:34.146943 sshd[4882]: Accepted publickey for core from 10.200.16.10 port 49472 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo Jul 10 23:38:34.148553 sshd-session[4882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:38:34.154268 systemd-logind[1690]: New session 25 of user core. Jul 10 23:38:34.165594 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jul 10 23:38:34.919981 kernel: irq 12: nobody cared (try booting with the "irqpoll" option) Jul 10 23:38:34.920105 kernel: CPU: 0 PID: 0 Comm: swapper/0 Not tainted 6.6.96-flatcar #1 Jul 10 23:38:34.920127 kernel: Hardware name: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jul 10 23:38:34.920143 kernel: Call trace: Jul 10 23:38:34.920158 kernel: dump_backtrace+0x98/0x118 Jul 10 23:38:34.920185 kernel: show_stack+0x18/0x24 Jul 10 23:38:34.920200 kernel: dump_stack_lvl+0x48/0x60 Jul 10 23:38:34.920214 kernel: dump_stack+0x18/0x24 Jul 10 23:38:34.920229 kernel: __report_bad_irq+0x38/0xe4 Jul 10 23:38:34.920246 kernel: note_interrupt+0x33c/0x3a8 Jul 10 23:38:34.920261 kernel: handle_irq_event+0x9c/0xbc Jul 10 23:38:34.920275 kernel: handle_fasteoi_irq+0xa0/0x22c Jul 10 23:38:34.920288 kernel: handle_irq_desc+0x34/0x58 Jul 10 23:38:34.920307 kernel: generic_handle_domain_irq+0x1c/0x28 Jul 10 23:38:34.920324 kernel: gic_handle_irq+0x50/0x12c Jul 10 23:38:34.920337 kernel: call_on_irq_stack+0x24/0x4c Jul 10 23:38:34.920349 kernel: do_interrupt_handler+0x80/0x84 Jul 10 23:38:34.920364 kernel: el1_interrupt+0x34/0x68 Jul 10 23:38:34.920415 kernel: el1h_64_irq_handler+0x18/0x24 Jul 10 23:38:34.920430 kernel: el1h_64_irq+0x64/0x68 Jul 10 23:38:34.920442 kernel: finish_task_switch.isra.0+0x74/0x24c Jul 10 23:38:34.920455 kernel: __schedule+0x3ac/0xdec Jul 10 23:38:34.920469 kernel: schedule_idle+0x28/0x48 Jul 10 23:38:34.920486 kernel: do_idle+0x16c/0x264 Jul 10 23:38:34.920501 kernel: cpu_startup_entry+0x38/0x3c Jul 10 23:38:34.920515 kernel: kernel_init+0x0/0x1e0 Jul 10 23:38:34.920529 kernel: arch_post_acpi_subsys_init+0x0/0x8 Jul 10 23:38:34.920544 kernel: start_kernel+0x438/0x6e0 Jul 10 23:38:34.920558 kernel: __primary_switched+0xbc/0xc4 Jul 10 23:38:34.920571 kernel: handlers: Jul 10 23:38:34.920585 kernel: [<00000000f365877f>] pl011_int Jul 10 23:38:34.920599 kernel: Disabling IRQ #12 Jul 10 23:38:36.279224 
containerd[1709]: time="2025-07-10T23:38:36.279147079Z" level=info msg="StopContainer for \"3008d49e27cd03d3943733d019e163fc3510945569c91c8f852002f10ef62e01\" with timeout 30 (s)" Jul 10 23:38:36.287699 containerd[1709]: time="2025-07-10T23:38:36.280734804Z" level=info msg="Stop container \"3008d49e27cd03d3943733d019e163fc3510945569c91c8f852002f10ef62e01\" with signal terminated" Jul 10 23:38:36.293284 containerd[1709]: time="2025-07-10T23:38:36.293164521Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 23:38:36.302847 containerd[1709]: time="2025-07-10T23:38:36.302686949Z" level=info msg="StopContainer for \"a87d226a87e4a37c51f5eb0561d4ba53d223a94d0886bfc469d458bece9bd12d\" with timeout 2 (s)" Jul 10 23:38:36.303549 containerd[1709]: time="2025-07-10T23:38:36.303514832Z" level=info msg="Stop container \"a87d226a87e4a37c51f5eb0561d4ba53d223a94d0886bfc469d458bece9bd12d\" with signal terminated" Jul 10 23:38:36.304551 systemd[1]: cri-containerd-3008d49e27cd03d3943733d019e163fc3510945569c91c8f852002f10ef62e01.scope: Deactivated successfully. Jul 10 23:38:36.321363 systemd-networkd[1448]: lxc_health: Link DOWN Jul 10 23:38:36.322731 systemd-networkd[1448]: lxc_health: Lost carrier Jul 10 23:38:36.335143 systemd[1]: cri-containerd-a87d226a87e4a37c51f5eb0561d4ba53d223a94d0886bfc469d458bece9bd12d.scope: Deactivated successfully. Jul 10 23:38:36.335484 systemd[1]: cri-containerd-a87d226a87e4a37c51f5eb0561d4ba53d223a94d0886bfc469d458bece9bd12d.scope: Consumed 6.560s CPU time, 128.3M memory peak, 136K read from disk, 12.9M written to disk. Jul 10 23:38:36.345030 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3008d49e27cd03d3943733d019e163fc3510945569c91c8f852002f10ef62e01-rootfs.mount: Deactivated successfully. 
Jul 10 23:38:36.356812 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a87d226a87e4a37c51f5eb0561d4ba53d223a94d0886bfc469d458bece9bd12d-rootfs.mount: Deactivated successfully. Jul 10 23:38:36.374464 containerd[1709]: time="2025-07-10T23:38:36.374313162Z" level=info msg="shim disconnected" id=a87d226a87e4a37c51f5eb0561d4ba53d223a94d0886bfc469d458bece9bd12d namespace=k8s.io Jul 10 23:38:36.374464 containerd[1709]: time="2025-07-10T23:38:36.374405482Z" level=warning msg="cleaning up after shim disconnected" id=a87d226a87e4a37c51f5eb0561d4ba53d223a94d0886bfc469d458bece9bd12d namespace=k8s.io Jul 10 23:38:36.374464 containerd[1709]: time="2025-07-10T23:38:36.374415802Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:38:36.374708 containerd[1709]: time="2025-07-10T23:38:36.374627563Z" level=info msg="shim disconnected" id=3008d49e27cd03d3943733d019e163fc3510945569c91c8f852002f10ef62e01 namespace=k8s.io Jul 10 23:38:36.374708 containerd[1709]: time="2025-07-10T23:38:36.374676683Z" level=warning msg="cleaning up after shim disconnected" id=3008d49e27cd03d3943733d019e163fc3510945569c91c8f852002f10ef62e01 namespace=k8s.io Jul 10 23:38:36.374708 containerd[1709]: time="2025-07-10T23:38:36.374688603Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:38:36.397857 containerd[1709]: time="2025-07-10T23:38:36.397806432Z" level=info msg="StopContainer for \"a87d226a87e4a37c51f5eb0561d4ba53d223a94d0886bfc469d458bece9bd12d\" returns successfully" Jul 10 23:38:36.398567 containerd[1709]: time="2025-07-10T23:38:36.398536754Z" level=info msg="StopPodSandbox for \"ef39b54c18fb53d8355dd113b20f8893f6f53d5acd73d2d5d8cd7bb11b10f2d1\"" Jul 10 23:38:36.398639 containerd[1709]: time="2025-07-10T23:38:36.398575914Z" level=info msg="Container to stop \"3dc5851a7b50c562c2d967981cb902db81b8be904caefc2a01301237e2862583\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 23:38:36.398639 containerd[1709]: 
time="2025-07-10T23:38:36.398588194Z" level=info msg="Container to stop \"4b7a97f06a728cd365c78f49cd15c8f5f1c5e1f1ba2608f00113296ded8f888d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 23:38:36.398639 containerd[1709]: time="2025-07-10T23:38:36.398600594Z" level=info msg="Container to stop \"fa45e6fd903a19d3f3414e84f8d9b3577785b472a0a6f81cf8b68539fa6328fb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 23:38:36.398639 containerd[1709]: time="2025-07-10T23:38:36.398610194Z" level=info msg="Container to stop \"aee2fffebaa52686c22520232fac29fad406842ae1de2e4b10ad19b8c4fb6511\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 23:38:36.398639 containerd[1709]: time="2025-07-10T23:38:36.398618554Z" level=info msg="Container to stop \"a87d226a87e4a37c51f5eb0561d4ba53d223a94d0886bfc469d458bece9bd12d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 23:38:36.400997 containerd[1709]: time="2025-07-10T23:38:36.400954921Z" level=info msg="StopContainer for \"3008d49e27cd03d3943733d019e163fc3510945569c91c8f852002f10ef62e01\" returns successfully" Jul 10 23:38:36.401380 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ef39b54c18fb53d8355dd113b20f8893f6f53d5acd73d2d5d8cd7bb11b10f2d1-shm.mount: Deactivated successfully. Jul 10 23:38:36.402825 containerd[1709]: time="2025-07-10T23:38:36.402475006Z" level=info msg="StopPodSandbox for \"11ef54599ab744822f3122af76e46b1a18bb7896b2032050ac23802d8f332c19\"" Jul 10 23:38:36.402825 containerd[1709]: time="2025-07-10T23:38:36.402510406Z" level=info msg="Container to stop \"3008d49e27cd03d3943733d019e163fc3510945569c91c8f852002f10ef62e01\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 23:38:36.404311 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-11ef54599ab744822f3122af76e46b1a18bb7896b2032050ac23802d8f332c19-shm.mount: Deactivated successfully. 
Jul 10 23:38:36.410704 systemd[1]: cri-containerd-ef39b54c18fb53d8355dd113b20f8893f6f53d5acd73d2d5d8cd7bb11b10f2d1.scope: Deactivated successfully. Jul 10 23:38:36.413427 systemd[1]: cri-containerd-11ef54599ab744822f3122af76e46b1a18bb7896b2032050ac23802d8f332c19.scope: Deactivated successfully. Jul 10 23:38:36.455452 containerd[1709]: time="2025-07-10T23:38:36.455235923Z" level=info msg="shim disconnected" id=ef39b54c18fb53d8355dd113b20f8893f6f53d5acd73d2d5d8cd7bb11b10f2d1 namespace=k8s.io Jul 10 23:38:36.455452 containerd[1709]: time="2025-07-10T23:38:36.455293243Z" level=warning msg="cleaning up after shim disconnected" id=ef39b54c18fb53d8355dd113b20f8893f6f53d5acd73d2d5d8cd7bb11b10f2d1 namespace=k8s.io Jul 10 23:38:36.455452 containerd[1709]: time="2025-07-10T23:38:36.455300923Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:38:36.455452 containerd[1709]: time="2025-07-10T23:38:36.455344403Z" level=info msg="shim disconnected" id=11ef54599ab744822f3122af76e46b1a18bb7896b2032050ac23802d8f332c19 namespace=k8s.io Jul 10 23:38:36.455452 containerd[1709]: time="2025-07-10T23:38:36.455387203Z" level=warning msg="cleaning up after shim disconnected" id=11ef54599ab744822f3122af76e46b1a18bb7896b2032050ac23802d8f332c19 namespace=k8s.io Jul 10 23:38:36.455452 containerd[1709]: time="2025-07-10T23:38:36.455395483Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:38:36.470987 containerd[1709]: time="2025-07-10T23:38:36.470812569Z" level=info msg="TearDown network for sandbox \"ef39b54c18fb53d8355dd113b20f8893f6f53d5acd73d2d5d8cd7bb11b10f2d1\" successfully" Jul 10 23:38:36.470987 containerd[1709]: time="2025-07-10T23:38:36.470847809Z" level=info msg="StopPodSandbox for \"ef39b54c18fb53d8355dd113b20f8893f6f53d5acd73d2d5d8cd7bb11b10f2d1\" returns successfully" Jul 10 23:38:36.471151 containerd[1709]: time="2025-07-10T23:38:36.471117490Z" level=info msg="TearDown network for sandbox 
\"11ef54599ab744822f3122af76e46b1a18bb7896b2032050ac23802d8f332c19\" successfully" Jul 10 23:38:36.471151 containerd[1709]: time="2025-07-10T23:38:36.471134810Z" level=info msg="StopPodSandbox for \"11ef54599ab744822f3122af76e46b1a18bb7896b2032050ac23802d8f332c19\" returns successfully" Jul 10 23:38:36.529307 kubelet[3273]: I0710 23:38:36.529014 3273 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-cilium-cgroup\") pod \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\" (UID: \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\") " Jul 10 23:38:36.529307 kubelet[3273]: I0710 23:38:36.529071 3273 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-etc-cni-netd\") pod \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\" (UID: \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\") " Jul 10 23:38:36.529307 kubelet[3273]: I0710 23:38:36.529086 3273 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-xtables-lock\") pod \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\" (UID: \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\") " Jul 10 23:38:36.529307 kubelet[3273]: I0710 23:38:36.529108 3273 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf465242-16cd-44ed-bb91-b9a8bd821559-cilium-config-path\") pod \"cf465242-16cd-44ed-bb91-b9a8bd821559\" (UID: \"cf465242-16cd-44ed-bb91-b9a8bd821559\") " Jul 10 23:38:36.529307 kubelet[3273]: I0710 23:38:36.529129 3273 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-lib-modules\") pod \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\" (UID: 
\"849dc1f6-0a16-43de-abfe-42d0ab3043fa\") " Jul 10 23:38:36.529307 kubelet[3273]: I0710 23:38:36.529139 3273 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "849dc1f6-0a16-43de-abfe-42d0ab3043fa" (UID: "849dc1f6-0a16-43de-abfe-42d0ab3043fa"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:38:36.529808 kubelet[3273]: I0710 23:38:36.529171 3273 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "849dc1f6-0a16-43de-abfe-42d0ab3043fa" (UID: "849dc1f6-0a16-43de-abfe-42d0ab3043fa"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:38:36.529808 kubelet[3273]: I0710 23:38:36.529185 3273 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "849dc1f6-0a16-43de-abfe-42d0ab3043fa" (UID: "849dc1f6-0a16-43de-abfe-42d0ab3043fa"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:38:36.529808 kubelet[3273]: I0710 23:38:36.529200 3273 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "849dc1f6-0a16-43de-abfe-42d0ab3043fa" (UID: "849dc1f6-0a16-43de-abfe-42d0ab3043fa"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:38:36.531004 kubelet[3273]: I0710 23:38:36.530940 3273 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf465242-16cd-44ed-bb91-b9a8bd821559-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cf465242-16cd-44ed-bb91-b9a8bd821559" (UID: "cf465242-16cd-44ed-bb91-b9a8bd821559"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 23:38:36.629962 kubelet[3273]: I0710 23:38:36.629806 3273 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-bpf-maps\") pod \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\" (UID: \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\") " Jul 10 23:38:36.630118 kubelet[3273]: I0710 23:38:36.630009 3273 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "849dc1f6-0a16-43de-abfe-42d0ab3043fa" (UID: "849dc1f6-0a16-43de-abfe-42d0ab3043fa"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:38:36.630118 kubelet[3273]: I0710 23:38:36.630037 3273 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/849dc1f6-0a16-43de-abfe-42d0ab3043fa-clustermesh-secrets\") pod \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\" (UID: \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\") " Jul 10 23:38:36.630118 kubelet[3273]: I0710 23:38:36.630056 3273 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/849dc1f6-0a16-43de-abfe-42d0ab3043fa-hubble-tls\") pod \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\" (UID: \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\") " Jul 10 23:38:36.630118 kubelet[3273]: I0710 23:38:36.630073 3273 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-hostproc\") pod \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\" (UID: \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\") " Jul 10 23:38:36.630118 kubelet[3273]: I0710 23:38:36.630095 3273 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-host-proc-sys-kernel\") pod \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\" (UID: \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\") " Jul 10 23:38:36.630118 kubelet[3273]: I0710 23:38:36.630113 3273 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st5qq\" (UniqueName: \"kubernetes.io/projected/849dc1f6-0a16-43de-abfe-42d0ab3043fa-kube-api-access-st5qq\") pod \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\" (UID: \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\") " Jul 10 23:38:36.630252 kubelet[3273]: I0710 23:38:36.630128 3273 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-cilium-run\") pod \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\" (UID: \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\") " Jul 10 23:38:36.630252 kubelet[3273]: I0710 23:38:36.630145 3273 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-cni-path\") pod \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\" (UID: \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\") " Jul 10 23:38:36.630252 kubelet[3273]: I0710 23:38:36.630162 3273 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czvc7\" (UniqueName: \"kubernetes.io/projected/cf465242-16cd-44ed-bb91-b9a8bd821559-kube-api-access-czvc7\") pod \"cf465242-16cd-44ed-bb91-b9a8bd821559\" (UID: \"cf465242-16cd-44ed-bb91-b9a8bd821559\") " Jul 10 23:38:36.630252 kubelet[3273]: I0710 23:38:36.630180 3273 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/849dc1f6-0a16-43de-abfe-42d0ab3043fa-cilium-config-path\") pod \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\" (UID: \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\") " Jul 10 23:38:36.630252 kubelet[3273]: I0710 23:38:36.630196 3273 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-host-proc-sys-net\") pod \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\" (UID: \"849dc1f6-0a16-43de-abfe-42d0ab3043fa\") " Jul 10 23:38:36.630252 kubelet[3273]: I0710 23:38:36.630240 3273 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-cilium-cgroup\") on node \"ci-4230.2.1-n-d24186f489\" DevicePath \"\"" Jul 10 23:38:36.630445 kubelet[3273]: I0710 23:38:36.630251 3273 reconciler_common.go:299] "Volume detached for volume 
\"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-etc-cni-netd\") on node \"ci-4230.2.1-n-d24186f489\" DevicePath \"\"" Jul 10 23:38:36.630445 kubelet[3273]: I0710 23:38:36.630259 3273 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-xtables-lock\") on node \"ci-4230.2.1-n-d24186f489\" DevicePath \"\"" Jul 10 23:38:36.630445 kubelet[3273]: I0710 23:38:36.630268 3273 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf465242-16cd-44ed-bb91-b9a8bd821559-cilium-config-path\") on node \"ci-4230.2.1-n-d24186f489\" DevicePath \"\"" Jul 10 23:38:36.630445 kubelet[3273]: I0710 23:38:36.630279 3273 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-lib-modules\") on node \"ci-4230.2.1-n-d24186f489\" DevicePath \"\"" Jul 10 23:38:36.630445 kubelet[3273]: I0710 23:38:36.630288 3273 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-bpf-maps\") on node \"ci-4230.2.1-n-d24186f489\" DevicePath \"\"" Jul 10 23:38:36.630445 kubelet[3273]: I0710 23:38:36.630305 3273 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "849dc1f6-0a16-43de-abfe-42d0ab3043fa" (UID: "849dc1f6-0a16-43de-abfe-42d0ab3043fa"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:38:36.631932 kubelet[3273]: I0710 23:38:36.631482 3273 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-hostproc" (OuterVolumeSpecName: "hostproc") pod "849dc1f6-0a16-43de-abfe-42d0ab3043fa" (UID: "849dc1f6-0a16-43de-abfe-42d0ab3043fa"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:38:36.631932 kubelet[3273]: I0710 23:38:36.631525 3273 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "849dc1f6-0a16-43de-abfe-42d0ab3043fa" (UID: "849dc1f6-0a16-43de-abfe-42d0ab3043fa"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:38:36.631932 kubelet[3273]: I0710 23:38:36.631713 3273 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "849dc1f6-0a16-43de-abfe-42d0ab3043fa" (UID: "849dc1f6-0a16-43de-abfe-42d0ab3043fa"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:38:36.631932 kubelet[3273]: I0710 23:38:36.631735 3273 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-cni-path" (OuterVolumeSpecName: "cni-path") pod "849dc1f6-0a16-43de-abfe-42d0ab3043fa" (UID: "849dc1f6-0a16-43de-abfe-42d0ab3043fa"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:38:36.635049 kubelet[3273]: I0710 23:38:36.634914 3273 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/849dc1f6-0a16-43de-abfe-42d0ab3043fa-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "849dc1f6-0a16-43de-abfe-42d0ab3043fa" (UID: "849dc1f6-0a16-43de-abfe-42d0ab3043fa"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 23:38:36.635182 kubelet[3273]: I0710 23:38:36.635147 3273 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/849dc1f6-0a16-43de-abfe-42d0ab3043fa-kube-api-access-st5qq" (OuterVolumeSpecName: "kube-api-access-st5qq") pod "849dc1f6-0a16-43de-abfe-42d0ab3043fa" (UID: "849dc1f6-0a16-43de-abfe-42d0ab3043fa"). InnerVolumeSpecName "kube-api-access-st5qq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 23:38:36.635956 kubelet[3273]: I0710 23:38:36.635933 3273 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/849dc1f6-0a16-43de-abfe-42d0ab3043fa-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "849dc1f6-0a16-43de-abfe-42d0ab3043fa" (UID: "849dc1f6-0a16-43de-abfe-42d0ab3043fa"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 23:38:36.636948 kubelet[3273]: I0710 23:38:36.636914 3273 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/849dc1f6-0a16-43de-abfe-42d0ab3043fa-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "849dc1f6-0a16-43de-abfe-42d0ab3043fa" (UID: "849dc1f6-0a16-43de-abfe-42d0ab3043fa"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 23:38:36.637088 kubelet[3273]: I0710 23:38:36.637061 3273 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf465242-16cd-44ed-bb91-b9a8bd821559-kube-api-access-czvc7" (OuterVolumeSpecName: "kube-api-access-czvc7") pod "cf465242-16cd-44ed-bb91-b9a8bd821559" (UID: "cf465242-16cd-44ed-bb91-b9a8bd821559"). InnerVolumeSpecName "kube-api-access-czvc7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 23:38:36.731168 kubelet[3273]: I0710 23:38:36.731087 3273 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/849dc1f6-0a16-43de-abfe-42d0ab3043fa-hubble-tls\") on node \"ci-4230.2.1-n-d24186f489\" DevicePath \"\"" Jul 10 23:38:36.731168 kubelet[3273]: I0710 23:38:36.731119 3273 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-hostproc\") on node \"ci-4230.2.1-n-d24186f489\" DevicePath \"\"" Jul 10 23:38:36.731168 kubelet[3273]: I0710 23:38:36.731128 3273 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-host-proc-sys-kernel\") on node \"ci-4230.2.1-n-d24186f489\" DevicePath \"\"" Jul 10 23:38:36.731168 kubelet[3273]: I0710 23:38:36.731137 3273 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-st5qq\" (UniqueName: \"kubernetes.io/projected/849dc1f6-0a16-43de-abfe-42d0ab3043fa-kube-api-access-st5qq\") on node \"ci-4230.2.1-n-d24186f489\" DevicePath \"\"" Jul 10 23:38:36.731168 kubelet[3273]: I0710 23:38:36.731146 3273 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-cilium-run\") on node \"ci-4230.2.1-n-d24186f489\" DevicePath \"\"" Jul 10 23:38:36.731168 kubelet[3273]: I0710 
23:38:36.731154 3273 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-cni-path\") on node \"ci-4230.2.1-n-d24186f489\" DevicePath \"\"" Jul 10 23:38:36.731168 kubelet[3273]: I0710 23:38:36.731163 3273 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-czvc7\" (UniqueName: \"kubernetes.io/projected/cf465242-16cd-44ed-bb91-b9a8bd821559-kube-api-access-czvc7\") on node \"ci-4230.2.1-n-d24186f489\" DevicePath \"\"" Jul 10 23:38:36.731168 kubelet[3273]: I0710 23:38:36.731171 3273 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/849dc1f6-0a16-43de-abfe-42d0ab3043fa-cilium-config-path\") on node \"ci-4230.2.1-n-d24186f489\" DevicePath \"\"" Jul 10 23:38:36.731495 kubelet[3273]: I0710 23:38:36.731179 3273 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/849dc1f6-0a16-43de-abfe-42d0ab3043fa-host-proc-sys-net\") on node \"ci-4230.2.1-n-d24186f489\" DevicePath \"\"" Jul 10 23:38:36.731495 kubelet[3273]: I0710 23:38:36.731191 3273 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/849dc1f6-0a16-43de-abfe-42d0ab3043fa-clustermesh-secrets\") on node \"ci-4230.2.1-n-d24186f489\" DevicePath \"\"" Jul 10 23:38:36.997416 kubelet[3273]: I0710 23:38:36.996590 3273 scope.go:117] "RemoveContainer" containerID="3008d49e27cd03d3943733d019e163fc3510945569c91c8f852002f10ef62e01" Jul 10 23:38:36.999915 containerd[1709]: time="2025-07-10T23:38:36.999560701Z" level=info msg="RemoveContainer for \"3008d49e27cd03d3943733d019e163fc3510945569c91c8f852002f10ef62e01\"" Jul 10 23:38:37.005060 systemd[1]: Removed slice kubepods-besteffort-podcf465242_16cd_44ed_bb91_b9a8bd821559.slice - libcontainer container kubepods-besteffort-podcf465242_16cd_44ed_bb91_b9a8bd821559.slice. 
Jul 10 23:38:37.010782 systemd[1]: Removed slice kubepods-burstable-pod849dc1f6_0a16_43de_abfe_42d0ab3043fa.slice - libcontainer container kubepods-burstable-pod849dc1f6_0a16_43de_abfe_42d0ab3043fa.slice. Jul 10 23:38:37.010886 systemd[1]: kubepods-burstable-pod849dc1f6_0a16_43de_abfe_42d0ab3043fa.slice: Consumed 6.634s CPU time, 128.8M memory peak, 136K read from disk, 12.9M written to disk. Jul 10 23:38:37.018550 containerd[1709]: time="2025-07-10T23:38:37.018505078Z" level=info msg="RemoveContainer for \"3008d49e27cd03d3943733d019e163fc3510945569c91c8f852002f10ef62e01\" returns successfully" Jul 10 23:38:37.018840 kubelet[3273]: I0710 23:38:37.018804 3273 scope.go:117] "RemoveContainer" containerID="3008d49e27cd03d3943733d019e163fc3510945569c91c8f852002f10ef62e01" Jul 10 23:38:37.019158 containerd[1709]: time="2025-07-10T23:38:37.019062759Z" level=error msg="ContainerStatus for \"3008d49e27cd03d3943733d019e163fc3510945569c91c8f852002f10ef62e01\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3008d49e27cd03d3943733d019e163fc3510945569c91c8f852002f10ef62e01\": not found" Jul 10 23:38:37.019403 kubelet[3273]: E0710 23:38:37.019302 3273 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3008d49e27cd03d3943733d019e163fc3510945569c91c8f852002f10ef62e01\": not found" containerID="3008d49e27cd03d3943733d019e163fc3510945569c91c8f852002f10ef62e01" Jul 10 23:38:37.019652 kubelet[3273]: I0710 23:38:37.019340 3273 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3008d49e27cd03d3943733d019e163fc3510945569c91c8f852002f10ef62e01"} err="failed to get container status \"3008d49e27cd03d3943733d019e163fc3510945569c91c8f852002f10ef62e01\": rpc error: code = NotFound desc = an error occurred when try to find container \"3008d49e27cd03d3943733d019e163fc3510945569c91c8f852002f10ef62e01\": not 
found" Jul 10 23:38:37.019652 kubelet[3273]: I0710 23:38:37.019609 3273 scope.go:117] "RemoveContainer" containerID="a87d226a87e4a37c51f5eb0561d4ba53d223a94d0886bfc469d458bece9bd12d" Jul 10 23:38:37.022577 containerd[1709]: time="2025-07-10T23:38:37.022535050Z" level=info msg="RemoveContainer for \"a87d226a87e4a37c51f5eb0561d4ba53d223a94d0886bfc469d458bece9bd12d\"" Jul 10 23:38:37.034490 containerd[1709]: time="2025-07-10T23:38:37.034423445Z" level=info msg="RemoveContainer for \"a87d226a87e4a37c51f5eb0561d4ba53d223a94d0886bfc469d458bece9bd12d\" returns successfully" Jul 10 23:38:37.034800 kubelet[3273]: I0710 23:38:37.034747 3273 scope.go:117] "RemoveContainer" containerID="fa45e6fd903a19d3f3414e84f8d9b3577785b472a0a6f81cf8b68539fa6328fb" Jul 10 23:38:37.036038 containerd[1709]: time="2025-07-10T23:38:37.036003730Z" level=info msg="RemoveContainer for \"fa45e6fd903a19d3f3414e84f8d9b3577785b472a0a6f81cf8b68539fa6328fb\"" Jul 10 23:38:37.048977 containerd[1709]: time="2025-07-10T23:38:37.048930168Z" level=info msg="RemoveContainer for \"fa45e6fd903a19d3f3414e84f8d9b3577785b472a0a6f81cf8b68539fa6328fb\" returns successfully" Jul 10 23:38:37.049239 kubelet[3273]: I0710 23:38:37.049189 3273 scope.go:117] "RemoveContainer" containerID="4b7a97f06a728cd365c78f49cd15c8f5f1c5e1f1ba2608f00113296ded8f888d" Jul 10 23:38:37.052496 containerd[1709]: time="2025-07-10T23:38:37.052357258Z" level=info msg="RemoveContainer for \"4b7a97f06a728cd365c78f49cd15c8f5f1c5e1f1ba2608f00113296ded8f888d\"" Jul 10 23:38:37.065715 containerd[1709]: time="2025-07-10T23:38:37.065628138Z" level=info msg="RemoveContainer for \"4b7a97f06a728cd365c78f49cd15c8f5f1c5e1f1ba2608f00113296ded8f888d\" returns successfully" Jul 10 23:38:37.066092 kubelet[3273]: I0710 23:38:37.065888 3273 scope.go:117] "RemoveContainer" containerID="3dc5851a7b50c562c2d967981cb902db81b8be904caefc2a01301237e2862583" Jul 10 23:38:37.067407 containerd[1709]: time="2025-07-10T23:38:37.067262583Z" level=info msg="RemoveContainer for 
\"3dc5851a7b50c562c2d967981cb902db81b8be904caefc2a01301237e2862583\"" Jul 10 23:38:37.080539 containerd[1709]: time="2025-07-10T23:38:37.080498502Z" level=info msg="RemoveContainer for \"3dc5851a7b50c562c2d967981cb902db81b8be904caefc2a01301237e2862583\" returns successfully" Jul 10 23:38:37.080832 kubelet[3273]: I0710 23:38:37.080801 3273 scope.go:117] "RemoveContainer" containerID="aee2fffebaa52686c22520232fac29fad406842ae1de2e4b10ad19b8c4fb6511" Jul 10 23:38:37.081919 containerd[1709]: time="2025-07-10T23:38:37.081894746Z" level=info msg="RemoveContainer for \"aee2fffebaa52686c22520232fac29fad406842ae1de2e4b10ad19b8c4fb6511\"" Jul 10 23:38:37.095153 containerd[1709]: time="2025-07-10T23:38:37.095104625Z" level=info msg="RemoveContainer for \"aee2fffebaa52686c22520232fac29fad406842ae1de2e4b10ad19b8c4fb6511\" returns successfully" Jul 10 23:38:37.095845 kubelet[3273]: I0710 23:38:37.095536 3273 scope.go:117] "RemoveContainer" containerID="a87d226a87e4a37c51f5eb0561d4ba53d223a94d0886bfc469d458bece9bd12d" Jul 10 23:38:37.095948 containerd[1709]: time="2025-07-10T23:38:37.095795507Z" level=error msg="ContainerStatus for \"a87d226a87e4a37c51f5eb0561d4ba53d223a94d0886bfc469d458bece9bd12d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a87d226a87e4a37c51f5eb0561d4ba53d223a94d0886bfc469d458bece9bd12d\": not found" Jul 10 23:38:37.096212 kubelet[3273]: E0710 23:38:37.096069 3273 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a87d226a87e4a37c51f5eb0561d4ba53d223a94d0886bfc469d458bece9bd12d\": not found" containerID="a87d226a87e4a37c51f5eb0561d4ba53d223a94d0886bfc469d458bece9bd12d" Jul 10 23:38:37.096212 kubelet[3273]: I0710 23:38:37.096099 3273 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a87d226a87e4a37c51f5eb0561d4ba53d223a94d0886bfc469d458bece9bd12d"} err="failed to get 
container status \"a87d226a87e4a37c51f5eb0561d4ba53d223a94d0886bfc469d458bece9bd12d\": rpc error: code = NotFound desc = an error occurred when try to find container \"a87d226a87e4a37c51f5eb0561d4ba53d223a94d0886bfc469d458bece9bd12d\": not found" Jul 10 23:38:37.096212 kubelet[3273]: I0710 23:38:37.096120 3273 scope.go:117] "RemoveContainer" containerID="fa45e6fd903a19d3f3414e84f8d9b3577785b472a0a6f81cf8b68539fa6328fb" Jul 10 23:38:37.096569 containerd[1709]: time="2025-07-10T23:38:37.096309469Z" level=error msg="ContainerStatus for \"fa45e6fd903a19d3f3414e84f8d9b3577785b472a0a6f81cf8b68539fa6328fb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fa45e6fd903a19d3f3414e84f8d9b3577785b472a0a6f81cf8b68539fa6328fb\": not found" Jul 10 23:38:37.096744 kubelet[3273]: E0710 23:38:37.096515 3273 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fa45e6fd903a19d3f3414e84f8d9b3577785b472a0a6f81cf8b68539fa6328fb\": not found" containerID="fa45e6fd903a19d3f3414e84f8d9b3577785b472a0a6f81cf8b68539fa6328fb" Jul 10 23:38:37.096744 kubelet[3273]: I0710 23:38:37.096680 3273 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fa45e6fd903a19d3f3414e84f8d9b3577785b472a0a6f81cf8b68539fa6328fb"} err="failed to get container status \"fa45e6fd903a19d3f3414e84f8d9b3577785b472a0a6f81cf8b68539fa6328fb\": rpc error: code = NotFound desc = an error occurred when try to find container \"fa45e6fd903a19d3f3414e84f8d9b3577785b472a0a6f81cf8b68539fa6328fb\": not found" Jul 10 23:38:37.096744 kubelet[3273]: I0710 23:38:37.096697 3273 scope.go:117] "RemoveContainer" containerID="4b7a97f06a728cd365c78f49cd15c8f5f1c5e1f1ba2608f00113296ded8f888d" Jul 10 23:38:37.097090 containerd[1709]: time="2025-07-10T23:38:37.097016551Z" level=error msg="ContainerStatus for 
\"4b7a97f06a728cd365c78f49cd15c8f5f1c5e1f1ba2608f00113296ded8f888d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4b7a97f06a728cd365c78f49cd15c8f5f1c5e1f1ba2608f00113296ded8f888d\": not found" Jul 10 23:38:37.097223 kubelet[3273]: E0710 23:38:37.097196 3273 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4b7a97f06a728cd365c78f49cd15c8f5f1c5e1f1ba2608f00113296ded8f888d\": not found" containerID="4b7a97f06a728cd365c78f49cd15c8f5f1c5e1f1ba2608f00113296ded8f888d" Jul 10 23:38:37.097272 kubelet[3273]: I0710 23:38:37.097226 3273 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4b7a97f06a728cd365c78f49cd15c8f5f1c5e1f1ba2608f00113296ded8f888d"} err="failed to get container status \"4b7a97f06a728cd365c78f49cd15c8f5f1c5e1f1ba2608f00113296ded8f888d\": rpc error: code = NotFound desc = an error occurred when try to find container \"4b7a97f06a728cd365c78f49cd15c8f5f1c5e1f1ba2608f00113296ded8f888d\": not found" Jul 10 23:38:37.097272 kubelet[3273]: I0710 23:38:37.097243 3273 scope.go:117] "RemoveContainer" containerID="3dc5851a7b50c562c2d967981cb902db81b8be904caefc2a01301237e2862583" Jul 10 23:38:37.097511 containerd[1709]: time="2025-07-10T23:38:37.097469952Z" level=error msg="ContainerStatus for \"3dc5851a7b50c562c2d967981cb902db81b8be904caefc2a01301237e2862583\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3dc5851a7b50c562c2d967981cb902db81b8be904caefc2a01301237e2862583\": not found" Jul 10 23:38:37.097775 kubelet[3273]: E0710 23:38:37.097615 3273 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3dc5851a7b50c562c2d967981cb902db81b8be904caefc2a01301237e2862583\": not found" 
containerID="3dc5851a7b50c562c2d967981cb902db81b8be904caefc2a01301237e2862583" Jul 10 23:38:37.097775 kubelet[3273]: I0710 23:38:37.097639 3273 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3dc5851a7b50c562c2d967981cb902db81b8be904caefc2a01301237e2862583"} err="failed to get container status \"3dc5851a7b50c562c2d967981cb902db81b8be904caefc2a01301237e2862583\": rpc error: code = NotFound desc = an error occurred when try to find container \"3dc5851a7b50c562c2d967981cb902db81b8be904caefc2a01301237e2862583\": not found" Jul 10 23:38:37.097775 kubelet[3273]: I0710 23:38:37.097655 3273 scope.go:117] "RemoveContainer" containerID="aee2fffebaa52686c22520232fac29fad406842ae1de2e4b10ad19b8c4fb6511" Jul 10 23:38:37.097862 containerd[1709]: time="2025-07-10T23:38:37.097798273Z" level=error msg="ContainerStatus for \"aee2fffebaa52686c22520232fac29fad406842ae1de2e4b10ad19b8c4fb6511\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aee2fffebaa52686c22520232fac29fad406842ae1de2e4b10ad19b8c4fb6511\": not found" Jul 10 23:38:37.098049 kubelet[3273]: E0710 23:38:37.097963 3273 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aee2fffebaa52686c22520232fac29fad406842ae1de2e4b10ad19b8c4fb6511\": not found" containerID="aee2fffebaa52686c22520232fac29fad406842ae1de2e4b10ad19b8c4fb6511" Jul 10 23:38:37.098049 kubelet[3273]: I0710 23:38:37.098010 3273 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aee2fffebaa52686c22520232fac29fad406842ae1de2e4b10ad19b8c4fb6511"} err="failed to get container status \"aee2fffebaa52686c22520232fac29fad406842ae1de2e4b10ad19b8c4fb6511\": rpc error: code = NotFound desc = an error occurred when try to find container \"aee2fffebaa52686c22520232fac29fad406842ae1de2e4b10ad19b8c4fb6511\": not found" Jul 10 
23:38:37.271460 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11ef54599ab744822f3122af76e46b1a18bb7896b2032050ac23802d8f332c19-rootfs.mount: Deactivated successfully. Jul 10 23:38:37.271805 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef39b54c18fb53d8355dd113b20f8893f6f53d5acd73d2d5d8cd7bb11b10f2d1-rootfs.mount: Deactivated successfully. Jul 10 23:38:37.271977 systemd[1]: var-lib-kubelet-pods-cf465242\x2d16cd\x2d44ed\x2dbb91\x2db9a8bd821559-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dczvc7.mount: Deactivated successfully. Jul 10 23:38:37.272163 systemd[1]: var-lib-kubelet-pods-849dc1f6\x2d0a16\x2d43de\x2dabfe\x2d42d0ab3043fa-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dst5qq.mount: Deactivated successfully. Jul 10 23:38:37.272330 systemd[1]: var-lib-kubelet-pods-849dc1f6\x2d0a16\x2d43de\x2dabfe\x2d42d0ab3043fa-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 10 23:38:37.272519 systemd[1]: var-lib-kubelet-pods-849dc1f6\x2d0a16\x2d43de\x2dabfe\x2d42d0ab3043fa-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 10 23:38:37.535029 kubelet[3273]: I0710 23:38:37.534941 3273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="849dc1f6-0a16-43de-abfe-42d0ab3043fa" path="/var/lib/kubelet/pods/849dc1f6-0a16-43de-abfe-42d0ab3043fa/volumes" Jul 10 23:38:37.535958 kubelet[3273]: I0710 23:38:37.535614 3273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf465242-16cd-44ed-bb91-b9a8bd821559" path="/var/lib/kubelet/pods/cf465242-16cd-44ed-bb91-b9a8bd821559/volumes" Jul 10 23:38:38.280766 sshd[4884]: Connection closed by 10.200.16.10 port 49472 Jul 10 23:38:38.281499 sshd-session[4882]: pam_unix(sshd:session): session closed for user core Jul 10 23:38:38.285207 systemd-logind[1690]: Session 25 logged out. Waiting for processes to exit. 
Jul 10 23:38:38.285804 systemd[1]: sshd@22-10.200.20.37:22-10.200.16.10:49472.service: Deactivated successfully. Jul 10 23:38:38.287953 systemd[1]: session-25.scope: Deactivated successfully. Jul 10 23:38:38.288217 systemd[1]: session-25.scope: Consumed 1.188s CPU time, 23.7M memory peak. Jul 10 23:38:38.289095 systemd-logind[1690]: Removed session 25. Jul 10 23:38:38.368634 systemd[1]: Started sshd@23-10.200.20.37:22-10.200.16.10:49474.service - OpenSSH per-connection server daemon (10.200.16.10:49474). Jul 10 23:38:38.662000 kubelet[3273]: E0710 23:38:38.661888 3273 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 10 23:38:38.825387 sshd[5044]: Accepted publickey for core from 10.200.16.10 port 49474 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo Jul 10 23:38:38.826753 sshd-session[5044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:38:38.832434 systemd-logind[1690]: New session 26 of user core. Jul 10 23:38:38.838544 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 10 23:38:39.711250 kubelet[3273]: I0710 23:38:39.711136 3273 memory_manager.go:355] "RemoveStaleState removing state" podUID="cf465242-16cd-44ed-bb91-b9a8bd821559" containerName="cilium-operator" Jul 10 23:38:39.711250 kubelet[3273]: I0710 23:38:39.711179 3273 memory_manager.go:355] "RemoveStaleState removing state" podUID="849dc1f6-0a16-43de-abfe-42d0ab3043fa" containerName="cilium-agent" Jul 10 23:38:39.722032 systemd[1]: Created slice kubepods-burstable-pod8d23a4eb_4fa1_44d4_be9f_81363c40b946.slice - libcontainer container kubepods-burstable-pod8d23a4eb_4fa1_44d4_be9f_81363c40b946.slice. 
Jul 10 23:38:39.726959 kubelet[3273]: I0710 23:38:39.726267 3273 status_manager.go:890] "Failed to get status for pod" podUID="8d23a4eb-4fa1-44d4-be9f-81363c40b946" pod="kube-system/cilium-7hrgh" err="pods \"cilium-7hrgh\" is forbidden: User \"system:node:ci-4230.2.1-n-d24186f489\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.1-n-d24186f489' and this object" Jul 10 23:38:39.726959 kubelet[3273]: W0710 23:38:39.726360 3273 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4230.2.1-n-d24186f489" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.2.1-n-d24186f489' and this object Jul 10 23:38:39.726959 kubelet[3273]: E0710 23:38:39.726408 3273 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4230.2.1-n-d24186f489\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.1-n-d24186f489' and this object" logger="UnhandledError" Jul 10 23:38:39.726959 kubelet[3273]: W0710 23:38:39.726456 3273 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4230.2.1-n-d24186f489" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.2.1-n-d24186f489' and this object Jul 10 23:38:39.726959 kubelet[3273]: E0710 23:38:39.726468 3273 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User 
\"system:node:ci-4230.2.1-n-d24186f489\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.1-n-d24186f489' and this object" logger="UnhandledError" Jul 10 23:38:39.727190 kubelet[3273]: W0710 23:38:39.726504 3273 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4230.2.1-n-d24186f489" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.2.1-n-d24186f489' and this object Jul 10 23:38:39.727190 kubelet[3273]: E0710 23:38:39.726515 3273 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4230.2.1-n-d24186f489\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.1-n-d24186f489' and this object" logger="UnhandledError" Jul 10 23:38:39.727190 kubelet[3273]: W0710 23:38:39.726588 3273 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4230.2.1-n-d24186f489" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.2.1-n-d24186f489' and this object Jul 10 23:38:39.727190 kubelet[3273]: E0710 23:38:39.726601 3273 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4230.2.1-n-d24186f489\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.1-n-d24186f489' and this object" 
logger="UnhandledError" Jul 10 23:38:39.741911 sshd[5046]: Connection closed by 10.200.16.10 port 49474 Jul 10 23:38:39.742545 sshd-session[5044]: pam_unix(sshd:session): session closed for user core Jul 10 23:38:39.747412 systemd[1]: sshd@23-10.200.20.37:22-10.200.16.10:49474.service: Deactivated successfully. Jul 10 23:38:39.749446 kubelet[3273]: I0710 23:38:39.748172 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8d23a4eb-4fa1-44d4-be9f-81363c40b946-cilium-cgroup\") pod \"cilium-7hrgh\" (UID: \"8d23a4eb-4fa1-44d4-be9f-81363c40b946\") " pod="kube-system/cilium-7hrgh" Jul 10 23:38:39.749446 kubelet[3273]: I0710 23:38:39.748211 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d23a4eb-4fa1-44d4-be9f-81363c40b946-lib-modules\") pod \"cilium-7hrgh\" (UID: \"8d23a4eb-4fa1-44d4-be9f-81363c40b946\") " pod="kube-system/cilium-7hrgh" Jul 10 23:38:39.749446 kubelet[3273]: I0710 23:38:39.748226 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d23a4eb-4fa1-44d4-be9f-81363c40b946-xtables-lock\") pod \"cilium-7hrgh\" (UID: \"8d23a4eb-4fa1-44d4-be9f-81363c40b946\") " pod="kube-system/cilium-7hrgh" Jul 10 23:38:39.749446 kubelet[3273]: I0710 23:38:39.748245 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8d23a4eb-4fa1-44d4-be9f-81363c40b946-hostproc\") pod \"cilium-7hrgh\" (UID: \"8d23a4eb-4fa1-44d4-be9f-81363c40b946\") " pod="kube-system/cilium-7hrgh" Jul 10 23:38:39.749446 kubelet[3273]: I0710 23:38:39.748260 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/8d23a4eb-4fa1-44d4-be9f-81363c40b946-cni-path\") pod \"cilium-7hrgh\" (UID: \"8d23a4eb-4fa1-44d4-be9f-81363c40b946\") " pod="kube-system/cilium-7hrgh" Jul 10 23:38:39.749446 kubelet[3273]: I0710 23:38:39.748275 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d23a4eb-4fa1-44d4-be9f-81363c40b946-etc-cni-netd\") pod \"cilium-7hrgh\" (UID: \"8d23a4eb-4fa1-44d4-be9f-81363c40b946\") " pod="kube-system/cilium-7hrgh" Jul 10 23:38:39.749238 systemd[1]: session-26.scope: Deactivated successfully. Jul 10 23:38:39.749701 kubelet[3273]: I0710 23:38:39.748292 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8d23a4eb-4fa1-44d4-be9f-81363c40b946-cilium-run\") pod \"cilium-7hrgh\" (UID: \"8d23a4eb-4fa1-44d4-be9f-81363c40b946\") " pod="kube-system/cilium-7hrgh" Jul 10 23:38:39.749701 kubelet[3273]: I0710 23:38:39.748307 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8d23a4eb-4fa1-44d4-be9f-81363c40b946-bpf-maps\") pod \"cilium-7hrgh\" (UID: \"8d23a4eb-4fa1-44d4-be9f-81363c40b946\") " pod="kube-system/cilium-7hrgh" Jul 10 23:38:39.749701 kubelet[3273]: I0710 23:38:39.748325 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8d23a4eb-4fa1-44d4-be9f-81363c40b946-cilium-config-path\") pod \"cilium-7hrgh\" (UID: \"8d23a4eb-4fa1-44d4-be9f-81363c40b946\") " pod="kube-system/cilium-7hrgh" Jul 10 23:38:39.749701 kubelet[3273]: I0710 23:38:39.748340 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/8d23a4eb-4fa1-44d4-be9f-81363c40b946-host-proc-sys-net\") pod \"cilium-7hrgh\" (UID: \"8d23a4eb-4fa1-44d4-be9f-81363c40b946\") " pod="kube-system/cilium-7hrgh" Jul 10 23:38:39.749701 kubelet[3273]: I0710 23:38:39.748355 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8d23a4eb-4fa1-44d4-be9f-81363c40b946-hubble-tls\") pod \"cilium-7hrgh\" (UID: \"8d23a4eb-4fa1-44d4-be9f-81363c40b946\") " pod="kube-system/cilium-7hrgh" Jul 10 23:38:39.750456 kubelet[3273]: I0710 23:38:39.750185 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8d23a4eb-4fa1-44d4-be9f-81363c40b946-cilium-ipsec-secrets\") pod \"cilium-7hrgh\" (UID: \"8d23a4eb-4fa1-44d4-be9f-81363c40b946\") " pod="kube-system/cilium-7hrgh" Jul 10 23:38:39.750456 kubelet[3273]: I0710 23:38:39.750234 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8d23a4eb-4fa1-44d4-be9f-81363c40b946-host-proc-sys-kernel\") pod \"cilium-7hrgh\" (UID: \"8d23a4eb-4fa1-44d4-be9f-81363c40b946\") " pod="kube-system/cilium-7hrgh" Jul 10 23:38:39.750456 kubelet[3273]: I0710 23:38:39.750251 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb9zx\" (UniqueName: \"kubernetes.io/projected/8d23a4eb-4fa1-44d4-be9f-81363c40b946-kube-api-access-rb9zx\") pod \"cilium-7hrgh\" (UID: \"8d23a4eb-4fa1-44d4-be9f-81363c40b946\") " pod="kube-system/cilium-7hrgh" Jul 10 23:38:39.750456 kubelet[3273]: I0710 23:38:39.750270 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8d23a4eb-4fa1-44d4-be9f-81363c40b946-clustermesh-secrets\") pod 
\"cilium-7hrgh\" (UID: \"8d23a4eb-4fa1-44d4-be9f-81363c40b946\") " pod="kube-system/cilium-7hrgh" Jul 10 23:38:39.751609 systemd-logind[1690]: Session 26 logged out. Waiting for processes to exit. Jul 10 23:38:39.753191 systemd-logind[1690]: Removed session 26. Jul 10 23:38:39.835675 systemd[1]: Started sshd@24-10.200.20.37:22-10.200.16.10:43694.service - OpenSSH per-connection server daemon (10.200.16.10:43694). Jul 10 23:38:40.292829 sshd[5058]: Accepted publickey for core from 10.200.16.10 port 43694 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo Jul 10 23:38:40.293924 sshd-session[5058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:38:40.300662 systemd-logind[1690]: New session 27 of user core. Jul 10 23:38:40.308565 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 10 23:38:40.616454 sshd[5061]: Connection closed by 10.200.16.10 port 43694 Jul 10 23:38:40.616979 sshd-session[5058]: pam_unix(sshd:session): session closed for user core Jul 10 23:38:40.621909 systemd-logind[1690]: Session 27 logged out. Waiting for processes to exit. Jul 10 23:38:40.621972 systemd[1]: sshd@24-10.200.20.37:22-10.200.16.10:43694.service: Deactivated successfully. Jul 10 23:38:40.623661 systemd[1]: session-27.scope: Deactivated successfully. Jul 10 23:38:40.624627 systemd-logind[1690]: Removed session 27. Jul 10 23:38:40.701651 systemd[1]: Started sshd@25-10.200.20.37:22-10.200.16.10:43698.service - OpenSSH per-connection server daemon (10.200.16.10:43698). 
Jul 10 23:38:40.851883 kubelet[3273]: E0710 23:38:40.851842 3273 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jul 10 23:38:40.851883 kubelet[3273]: E0710 23:38:40.851876 3273 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-7hrgh: failed to sync secret cache: timed out waiting for the condition Jul 10 23:38:40.852290 kubelet[3273]: E0710 23:38:40.851958 3273 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8d23a4eb-4fa1-44d4-be9f-81363c40b946-hubble-tls podName:8d23a4eb-4fa1-44d4-be9f-81363c40b946 nodeName:}" failed. No retries permitted until 2025-07-10 23:38:41.351936575 +0000 UTC m=+187.940655948 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/8d23a4eb-4fa1-44d4-be9f-81363c40b946-hubble-tls") pod "cilium-7hrgh" (UID: "8d23a4eb-4fa1-44d4-be9f-81363c40b946") : failed to sync secret cache: timed out waiting for the condition Jul 10 23:38:40.852987 kubelet[3273]: E0710 23:38:40.852898 3273 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Jul 10 23:38:40.852987 kubelet[3273]: E0710 23:38:40.852961 3273 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d23a4eb-4fa1-44d4-be9f-81363c40b946-cilium-ipsec-secrets podName:8d23a4eb-4fa1-44d4-be9f-81363c40b946 nodeName:}" failed. No retries permitted until 2025-07-10 23:38:41.352947978 +0000 UTC m=+187.941667391 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/8d23a4eb-4fa1-44d4-be9f-81363c40b946-cilium-ipsec-secrets") pod "cilium-7hrgh" (UID: "8d23a4eb-4fa1-44d4-be9f-81363c40b946") : failed to sync secret cache: timed out waiting for the condition Jul 10 23:38:40.853214 kubelet[3273]: E0710 23:38:40.852996 3273 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jul 10 23:38:40.853214 kubelet[3273]: E0710 23:38:40.853048 3273 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8d23a4eb-4fa1-44d4-be9f-81363c40b946-cilium-config-path podName:8d23a4eb-4fa1-44d4-be9f-81363c40b946 nodeName:}" failed. No retries permitted until 2025-07-10 23:38:41.353035538 +0000 UTC m=+187.941754951 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/8d23a4eb-4fa1-44d4-be9f-81363c40b946-cilium-config-path") pod "cilium-7hrgh" (UID: "8d23a4eb-4fa1-44d4-be9f-81363c40b946") : failed to sync configmap cache: timed out waiting for the condition Jul 10 23:38:41.157867 sshd[5069]: Accepted publickey for core from 10.200.16.10 port 43698 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo Jul 10 23:38:41.159228 sshd-session[5069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:38:41.164483 systemd-logind[1690]: New session 28 of user core. Jul 10 23:38:41.173544 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 10 23:38:41.533755 containerd[1709]: time="2025-07-10T23:38:41.533420610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7hrgh,Uid:8d23a4eb-4fa1-44d4-be9f-81363c40b946,Namespace:kube-system,Attempt:0,}" Jul 10 23:38:41.584735 containerd[1709]: time="2025-07-10T23:38:41.584634643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 23:38:41.585167 containerd[1709]: time="2025-07-10T23:38:41.585095485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 23:38:41.585229 containerd[1709]: time="2025-07-10T23:38:41.585179645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:38:41.585437 containerd[1709]: time="2025-07-10T23:38:41.585340005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:38:41.608575 systemd[1]: Started cri-containerd-1960f9af49fa2ff8051ab89eb1752825480204a8ea3f6698143131969928362f.scope - libcontainer container 1960f9af49fa2ff8051ab89eb1752825480204a8ea3f6698143131969928362f. Jul 10 23:38:41.632213 containerd[1709]: time="2025-07-10T23:38:41.632076345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7hrgh,Uid:8d23a4eb-4fa1-44d4-be9f-81363c40b946,Namespace:kube-system,Attempt:0,} returns sandbox id \"1960f9af49fa2ff8051ab89eb1752825480204a8ea3f6698143131969928362f\"" Jul 10 23:38:41.637052 containerd[1709]: time="2025-07-10T23:38:41.636722319Z" level=info msg="CreateContainer within sandbox \"1960f9af49fa2ff8051ab89eb1752825480204a8ea3f6698143131969928362f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 23:38:41.687617 containerd[1709]: time="2025-07-10T23:38:41.687563031Z" level=info msg="CreateContainer within sandbox \"1960f9af49fa2ff8051ab89eb1752825480204a8ea3f6698143131969928362f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"def605c6f901a7af380ef1e69ec7bc1c5e8e3ab9a06a76019f50e6086a7e4c7a\"" Jul 10 23:38:41.688929 containerd[1709]: time="2025-07-10T23:38:41.688877595Z" level=info msg="StartContainer for \"def605c6f901a7af380ef1e69ec7bc1c5e8e3ab9a06a76019f50e6086a7e4c7a\"" 
Jul 10 23:38:41.712555 systemd[1]: Started cri-containerd-def605c6f901a7af380ef1e69ec7bc1c5e8e3ab9a06a76019f50e6086a7e4c7a.scope - libcontainer container def605c6f901a7af380ef1e69ec7bc1c5e8e3ab9a06a76019f50e6086a7e4c7a. Jul 10 23:38:41.741966 containerd[1709]: time="2025-07-10T23:38:41.741889193Z" level=info msg="StartContainer for \"def605c6f901a7af380ef1e69ec7bc1c5e8e3ab9a06a76019f50e6086a7e4c7a\" returns successfully" Jul 10 23:38:41.742879 systemd[1]: cri-containerd-def605c6f901a7af380ef1e69ec7bc1c5e8e3ab9a06a76019f50e6086a7e4c7a.scope: Deactivated successfully. Jul 10 23:38:41.855754 containerd[1709]: time="2025-07-10T23:38:41.855519132Z" level=info msg="shim disconnected" id=def605c6f901a7af380ef1e69ec7bc1c5e8e3ab9a06a76019f50e6086a7e4c7a namespace=k8s.io Jul 10 23:38:41.855754 containerd[1709]: time="2025-07-10T23:38:41.855596853Z" level=warning msg="cleaning up after shim disconnected" id=def605c6f901a7af380ef1e69ec7bc1c5e8e3ab9a06a76019f50e6086a7e4c7a namespace=k8s.io Jul 10 23:38:41.855754 containerd[1709]: time="2025-07-10T23:38:41.855606733Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:38:42.020727 containerd[1709]: time="2025-07-10T23:38:42.020219624Z" level=info msg="CreateContainer within sandbox \"1960f9af49fa2ff8051ab89eb1752825480204a8ea3f6698143131969928362f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 23:38:42.075915 containerd[1709]: time="2025-07-10T23:38:42.075863631Z" level=info msg="CreateContainer within sandbox \"1960f9af49fa2ff8051ab89eb1752825480204a8ea3f6698143131969928362f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"009d9023810c3623aea79aab93f8f3069cd497ad35bc4c0a0e0a38504785cf65\"" Jul 10 23:38:42.077601 containerd[1709]: time="2025-07-10T23:38:42.077532716Z" level=info msg="StartContainer for \"009d9023810c3623aea79aab93f8f3069cd497ad35bc4c0a0e0a38504785cf65\"" Jul 10 23:38:42.108565 systemd[1]: Started 
cri-containerd-009d9023810c3623aea79aab93f8f3069cd497ad35bc4c0a0e0a38504785cf65.scope - libcontainer container 009d9023810c3623aea79aab93f8f3069cd497ad35bc4c0a0e0a38504785cf65. Jul 10 23:38:42.134116 containerd[1709]: time="2025-07-10T23:38:42.134042244Z" level=info msg="StartContainer for \"009d9023810c3623aea79aab93f8f3069cd497ad35bc4c0a0e0a38504785cf65\" returns successfully" Jul 10 23:38:42.137046 systemd[1]: cri-containerd-009d9023810c3623aea79aab93f8f3069cd497ad35bc4c0a0e0a38504785cf65.scope: Deactivated successfully. Jul 10 23:38:42.172115 containerd[1709]: time="2025-07-10T23:38:42.171904957Z" level=info msg="shim disconnected" id=009d9023810c3623aea79aab93f8f3069cd497ad35bc4c0a0e0a38504785cf65 namespace=k8s.io Jul 10 23:38:42.172115 containerd[1709]: time="2025-07-10T23:38:42.171957998Z" level=warning msg="cleaning up after shim disconnected" id=009d9023810c3623aea79aab93f8f3069cd497ad35bc4c0a0e0a38504785cf65 namespace=k8s.io Jul 10 23:38:42.172115 containerd[1709]: time="2025-07-10T23:38:42.171966598Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:38:42.182322 containerd[1709]: time="2025-07-10T23:38:42.182193748Z" level=warning msg="cleanup warnings time=\"2025-07-10T23:38:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 10 23:38:42.364701 systemd[1]: run-containerd-runc-k8s.io-1960f9af49fa2ff8051ab89eb1752825480204a8ea3f6698143131969928362f-runc.oWKemZ.mount: Deactivated successfully. 
Jul 10 23:38:43.025776 containerd[1709]: time="2025-07-10T23:38:43.025722068Z" level=info msg="CreateContainer within sandbox \"1960f9af49fa2ff8051ab89eb1752825480204a8ea3f6698143131969928362f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 23:38:43.077816 containerd[1709]: time="2025-07-10T23:38:43.077706743Z" level=info msg="CreateContainer within sandbox \"1960f9af49fa2ff8051ab89eb1752825480204a8ea3f6698143131969928362f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"efce74b757290be26b83c76c9f592e6930c045fb7d68a8344714f32e852ce5c7\"" Jul 10 23:38:43.079260 containerd[1709]: time="2025-07-10T23:38:43.079074987Z" level=info msg="StartContainer for \"efce74b757290be26b83c76c9f592e6930c045fb7d68a8344714f32e852ce5c7\"" Jul 10 23:38:43.109558 systemd[1]: Started cri-containerd-efce74b757290be26b83c76c9f592e6930c045fb7d68a8344714f32e852ce5c7.scope - libcontainer container efce74b757290be26b83c76c9f592e6930c045fb7d68a8344714f32e852ce5c7. Jul 10 23:38:43.139097 systemd[1]: cri-containerd-efce74b757290be26b83c76c9f592e6930c045fb7d68a8344714f32e852ce5c7.scope: Deactivated successfully. 
Jul 10 23:38:43.143047 containerd[1709]: time="2025-07-10T23:38:43.142879498Z" level=info msg="StartContainer for \"efce74b757290be26b83c76c9f592e6930c045fb7d68a8344714f32e852ce5c7\" returns successfully" Jul 10 23:38:43.174806 containerd[1709]: time="2025-07-10T23:38:43.174667273Z" level=info msg="shim disconnected" id=efce74b757290be26b83c76c9f592e6930c045fb7d68a8344714f32e852ce5c7 namespace=k8s.io Jul 10 23:38:43.174806 containerd[1709]: time="2025-07-10T23:38:43.174727153Z" level=warning msg="cleaning up after shim disconnected" id=efce74b757290be26b83c76c9f592e6930c045fb7d68a8344714f32e852ce5c7 namespace=k8s.io Jul 10 23:38:43.174806 containerd[1709]: time="2025-07-10T23:38:43.174742353Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:38:43.363505 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-efce74b757290be26b83c76c9f592e6930c045fb7d68a8344714f32e852ce5c7-rootfs.mount: Deactivated successfully. Jul 10 23:38:43.663750 kubelet[3273]: E0710 23:38:43.663524 3273 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 10 23:38:44.029101 containerd[1709]: time="2025-07-10T23:38:44.028854504Z" level=info msg="CreateContainer within sandbox \"1960f9af49fa2ff8051ab89eb1752825480204a8ea3f6698143131969928362f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 23:38:44.068832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2724431028.mount: Deactivated successfully. 
Jul 10 23:38:44.081795 containerd[1709]: time="2025-07-10T23:38:44.081726142Z" level=info msg="CreateContainer within sandbox \"1960f9af49fa2ff8051ab89eb1752825480204a8ea3f6698143131969928362f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1b08ac63adea722ae96685f86ab18b72f09b1481d3e174cef6fd75f581c642ce\"" Jul 10 23:38:44.083521 containerd[1709]: time="2025-07-10T23:38:44.082519064Z" level=info msg="StartContainer for \"1b08ac63adea722ae96685f86ab18b72f09b1481d3e174cef6fd75f581c642ce\"" Jul 10 23:38:44.110552 systemd[1]: Started cri-containerd-1b08ac63adea722ae96685f86ab18b72f09b1481d3e174cef6fd75f581c642ce.scope - libcontainer container 1b08ac63adea722ae96685f86ab18b72f09b1481d3e174cef6fd75f581c642ce. Jul 10 23:38:44.131684 systemd[1]: cri-containerd-1b08ac63adea722ae96685f86ab18b72f09b1481d3e174cef6fd75f581c642ce.scope: Deactivated successfully. Jul 10 23:38:44.136862 containerd[1709]: time="2025-07-10T23:38:44.136783507Z" level=info msg="StartContainer for \"1b08ac63adea722ae96685f86ab18b72f09b1481d3e174cef6fd75f581c642ce\" returns successfully" Jul 10 23:38:44.171707 containerd[1709]: time="2025-07-10T23:38:44.171651771Z" level=info msg="shim disconnected" id=1b08ac63adea722ae96685f86ab18b72f09b1481d3e174cef6fd75f581c642ce namespace=k8s.io Jul 10 23:38:44.172176 containerd[1709]: time="2025-07-10T23:38:44.171975132Z" level=warning msg="cleaning up after shim disconnected" id=1b08ac63adea722ae96685f86ab18b72f09b1481d3e174cef6fd75f581c642ce namespace=k8s.io Jul 10 23:38:44.172176 containerd[1709]: time="2025-07-10T23:38:44.171993172Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:38:44.363346 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b08ac63adea722ae96685f86ab18b72f09b1481d3e174cef6fd75f581c642ce-rootfs.mount: Deactivated successfully. 
Jul 10 23:38:45.033276 containerd[1709]: time="2025-07-10T23:38:45.032503542Z" level=info msg="CreateContainer within sandbox \"1960f9af49fa2ff8051ab89eb1752825480204a8ea3f6698143131969928362f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 23:38:45.086622 containerd[1709]: time="2025-07-10T23:38:45.086538943Z" level=info msg="CreateContainer within sandbox \"1960f9af49fa2ff8051ab89eb1752825480204a8ea3f6698143131969928362f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f867a148d8651ae21409f0c2dede25a4680d0bbcfbdd9da0ccc351b49b5662af\"" Jul 10 23:38:45.088111 containerd[1709]: time="2025-07-10T23:38:45.087208705Z" level=info msg="StartContainer for \"f867a148d8651ae21409f0c2dede25a4680d0bbcfbdd9da0ccc351b49b5662af\"" Jul 10 23:38:45.113561 systemd[1]: Started cri-containerd-f867a148d8651ae21409f0c2dede25a4680d0bbcfbdd9da0ccc351b49b5662af.scope - libcontainer container f867a148d8651ae21409f0c2dede25a4680d0bbcfbdd9da0ccc351b49b5662af. 
Jul 10 23:38:45.144058 containerd[1709]: time="2025-07-10T23:38:45.143972995Z" level=info msg="StartContainer for \"f867a148d8651ae21409f0c2dede25a4680d0bbcfbdd9da0ccc351b49b5662af\" returns successfully" Jul 10 23:38:45.655400 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jul 10 23:38:47.483538 kubelet[3273]: I0710 23:38:47.483485 3273 setters.go:602] "Node became not ready" node="ci-4230.2.1-n-d24186f489" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-10T23:38:47Z","lastTransitionTime":"2025-07-10T23:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 10 23:38:47.535022 kubelet[3273]: E0710 23:38:47.534968 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-r5wgz" podUID="9a0867dc-7fca-4f7c-bfb9-316dc34bd2b7" Jul 10 23:38:48.379793 systemd-networkd[1448]: lxc_health: Link UP Jul 10 23:38:48.381633 systemd-networkd[1448]: lxc_health: Gained carrier Jul 10 23:38:49.539577 systemd-networkd[1448]: lxc_health: Gained IPv6LL Jul 10 23:38:49.568424 kubelet[3273]: I0710 23:38:49.568348 3273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7hrgh" podStartSLOduration=10.568328331 podStartE2EDuration="10.568328331s" podCreationTimestamp="2025-07-10 23:38:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:38:46.066856512 +0000 UTC m=+192.655575925" watchObservedRunningTime="2025-07-10 23:38:49.568328331 +0000 UTC m=+196.157047744" Jul 10 23:38:54.146835 systemd[1]: 
run-containerd-runc-k8s.io-f867a148d8651ae21409f0c2dede25a4680d0bbcfbdd9da0ccc351b49b5662af-runc.8UFPjk.mount: Deactivated successfully. Jul 10 23:38:54.266585 sshd[5071]: Connection closed by 10.200.16.10 port 43698 Jul 10 23:38:54.267253 sshd-session[5069]: pam_unix(sshd:session): session closed for user core Jul 10 23:38:54.271192 systemd[1]: sshd@25-10.200.20.37:22-10.200.16.10:43698.service: Deactivated successfully. Jul 10 23:38:54.276020 systemd[1]: session-28.scope: Deactivated successfully. Jul 10 23:38:54.278453 systemd-logind[1690]: Session 28 logged out. Waiting for processes to exit. Jul 10 23:38:54.279935 systemd-logind[1690]: Removed session 28. Jul 10 23:38:58.215207 kubelet[3273]: E0710 23:38:58.214595 3273 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.37:45340->10.200.20.22:2379: read: connection reset by peer"