Jul 6 23:06:51.348088 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 6 23:06:51.348110 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Sun Jul 6 21:51:54 -00 2025
Jul 6 23:06:51.348118 kernel: KASLR enabled
Jul 6 23:06:51.348124 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jul 6 23:06:51.348131 kernel: printk: bootconsole [pl11] enabled
Jul 6 23:06:51.348136 kernel: efi: EFI v2.7 by EDK II
Jul 6 23:06:51.348143 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20f698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Jul 6 23:06:51.348149 kernel: random: crng init done
Jul 6 23:06:51.348155 kernel: secureboot: Secure boot disabled
Jul 6 23:06:51.348161 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:06:51.348166 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jul 6 23:06:51.348172 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:06:51.348178 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:06:51.348185 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jul 6 23:06:51.348193 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:06:51.348199 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:06:51.348205 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:06:51.348212 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:06:51.348218 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:06:51.348224 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:06:51.348231 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jul 6 23:06:51.348237 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:06:51.348243 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jul 6 23:06:51.348249 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jul 6 23:06:51.348255 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jul 6 23:06:51.348261 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jul 6 23:06:51.348267 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jul 6 23:06:51.348273 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jul 6 23:06:51.348280 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jul 6 23:06:51.348287 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jul 6 23:06:51.348293 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jul 6 23:06:51.348299 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jul 6 23:06:51.348305 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jul 6 23:06:51.348311 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jul 6 23:06:51.348317 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jul 6 23:06:51.348323 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Jul 6 23:06:51.348329 kernel: Zone ranges:
Jul 6 23:06:51.348335 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jul 6 23:06:51.348341 kernel: DMA32 empty
Jul 6 23:06:51.348347 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jul 6 23:06:51.348357 kernel: Movable zone start for each node
Jul 6 23:06:51.348363 kernel: Early memory node ranges
Jul 6 23:06:51.348370 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jul 6 23:06:51.348376 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Jul 6 23:06:51.348383 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Jul 6 23:06:51.348391 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Jul 6 23:06:51.348397 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jul 6 23:06:51.348404 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jul 6 23:06:51.348410 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jul 6 23:06:51.348417 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jul 6 23:06:51.348423 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jul 6 23:06:51.348430 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jul 6 23:06:51.348436 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jul 6 23:06:51.348443 kernel: psci: probing for conduit method from ACPI.
Jul 6 23:06:51.348449 kernel: psci: PSCIv1.1 detected in firmware.
Jul 6 23:06:51.348456 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 6 23:06:51.348462 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jul 6 23:06:51.348470 kernel: psci: SMC Calling Convention v1.4
Jul 6 23:06:51.348476 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jul 6 23:06:51.348483 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jul 6 23:06:51.348489 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 6 23:06:51.348496 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 6 23:06:51.348502 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 6 23:06:51.348509 kernel: Detected PIPT I-cache on CPU0
Jul 6 23:06:51.348515 kernel: CPU features: detected: GIC system register CPU interface
Jul 6 23:06:51.348522 kernel: CPU features: detected: Hardware dirty bit management
Jul 6 23:06:51.348528 kernel: CPU features: detected: Spectre-BHB
Jul 6 23:06:51.348535 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 6 23:06:51.348543 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 6 23:06:51.348549 kernel: CPU features: detected: ARM erratum 1418040
Jul 6 23:06:51.348556 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jul 6 23:06:51.348562 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 6 23:06:51.348569 kernel: alternatives: applying boot alternatives
Jul 6 23:06:51.348576 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=ca8feb1f79a67c117068f051b5f829d3e40170c022cd5834bd6789cba9641479
Jul 6 23:06:51.348583 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:06:51.348590 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 6 23:06:51.348597 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 6 23:06:51.348603 kernel: Fallback order for Node 0: 0
Jul 6 23:06:51.348610 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jul 6 23:06:51.348618 kernel: Policy zone: Normal
Jul 6 23:06:51.348624 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:06:51.348631 kernel: software IO TLB: area num 2.
Jul 6 23:06:51.348637 kernel: software IO TLB: mapped [mem 0x0000000036540000-0x000000003a540000] (64MB)
Jul 6 23:06:51.348644 kernel: Memory: 3983592K/4194160K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38336K init, 897K bss, 210568K reserved, 0K cma-reserved)
Jul 6 23:06:51.348651 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 6 23:06:51.348657 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:06:51.348664 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:06:51.348671 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 6 23:06:51.348678 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:06:51.348684 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:06:51.348692 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:06:51.348699 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 6 23:06:51.348705 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 6 23:06:51.348712 kernel: GICv3: 960 SPIs implemented
Jul 6 23:06:51.348718 kernel: GICv3: 0 Extended SPIs implemented
Jul 6 23:06:51.348725 kernel: Root IRQ handler: gic_handle_irq
Jul 6 23:06:51.348731 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 6 23:06:51.348738 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jul 6 23:06:51.348744 kernel: ITS: No ITS available, not enabling LPIs
Jul 6 23:06:51.348751 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:06:51.348758 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:06:51.348764 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 6 23:06:51.348773 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 6 23:06:51.348780 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 6 23:06:51.348786 kernel: Console: colour dummy device 80x25
Jul 6 23:06:51.348793 kernel: printk: console [tty1] enabled
Jul 6 23:06:51.348800 kernel: ACPI: Core revision 20230628
Jul 6 23:06:51.348807 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 6 23:06:51.348814 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:06:51.348821 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 6 23:06:51.348827 kernel: landlock: Up and running.
Jul 6 23:06:51.348835 kernel: SELinux: Initializing.
Jul 6 23:06:51.348842 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:06:51.348848 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:06:51.348855 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:06:51.348862 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:06:51.348869 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Jul 6 23:06:51.348876 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Jul 6 23:06:51.348889 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 6 23:06:51.348896 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:06:51.348918 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:06:51.348925 kernel: Remapping and enabling EFI services.
Jul 6 23:06:51.348932 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:06:51.348941 kernel: Detected PIPT I-cache on CPU1
Jul 6 23:06:51.348948 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jul 6 23:06:51.348955 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:06:51.348963 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 6 23:06:51.348970 kernel: smp: Brought up 1 node, 2 CPUs
Jul 6 23:06:51.348978 kernel: SMP: Total of 2 processors activated.
Jul 6 23:06:51.348985 kernel: CPU features: detected: 32-bit EL0 Support
Jul 6 23:06:51.348992 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jul 6 23:06:51.348999 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 6 23:06:51.349007 kernel: CPU features: detected: CRC32 instructions
Jul 6 23:06:51.349014 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 6 23:06:51.349021 kernel: CPU features: detected: LSE atomic instructions
Jul 6 23:06:51.349028 kernel: CPU features: detected: Privileged Access Never
Jul 6 23:06:51.349035 kernel: CPU: All CPU(s) started at EL1
Jul 6 23:06:51.349043 kernel: alternatives: applying system-wide alternatives
Jul 6 23:06:51.349050 kernel: devtmpfs: initialized
Jul 6 23:06:51.349058 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:06:51.349065 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 6 23:06:51.349072 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:06:51.349079 kernel: SMBIOS 3.1.0 present.
Jul 6 23:06:51.349086 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jul 6 23:06:51.349093 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:06:51.349100 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 6 23:06:51.349109 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 6 23:06:51.349116 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 6 23:06:51.349123 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:06:51.349130 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jul 6 23:06:51.349137 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:06:51.349144 kernel: cpuidle: using governor menu
Jul 6 23:06:51.349151 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 6 23:06:51.349158 kernel: ASID allocator initialised with 32768 entries
Jul 6 23:06:51.349165 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:06:51.349174 kernel: Serial: AMBA PL011 UART driver
Jul 6 23:06:51.349181 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 6 23:06:51.349188 kernel: Modules: 0 pages in range for non-PLT usage
Jul 6 23:06:51.349195 kernel: Modules: 509264 pages in range for PLT usage
Jul 6 23:06:51.349202 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 6 23:06:51.349209 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 6 23:06:51.349216 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 6 23:06:51.349223 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 6 23:06:51.349230 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:06:51.349239 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:06:51.349246 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 6 23:06:51.349253 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 6 23:06:51.349260 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:06:51.349267 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:06:51.349274 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:06:51.349282 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 6 23:06:51.349289 kernel: ACPI: Interpreter enabled
Jul 6 23:06:51.349295 kernel: ACPI: Using GIC for interrupt routing
Jul 6 23:06:51.349304 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jul 6 23:06:51.349311 kernel: printk: console [ttyAMA0] enabled
Jul 6 23:06:51.349318 kernel: printk: bootconsole [pl11] disabled
Jul 6 23:06:51.349325 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jul 6 23:06:51.349333 kernel: iommu: Default domain type: Translated
Jul 6 23:06:51.349340 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 6 23:06:51.349347 kernel: efivars: Registered efivars operations
Jul 6 23:06:51.349354 kernel: vgaarb: loaded
Jul 6 23:06:51.349361 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 6 23:06:51.349369 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:06:51.349377 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:06:51.349384 kernel: pnp: PnP ACPI init
Jul 6 23:06:51.349391 kernel: pnp: PnP ACPI: found 0 devices
Jul 6 23:06:51.349398 kernel: NET: Registered PF_INET protocol family
Jul 6 23:06:51.349405 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 6 23:06:51.349412 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 6 23:06:51.349420 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:06:51.349427 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 6 23:06:51.349435 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 6 23:06:51.349443 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 6 23:06:51.349450 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:06:51.349457 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:06:51.349464 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:06:51.349472 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:06:51.349479 kernel: kvm [1]: HYP mode not available
Jul 6 23:06:51.349486 kernel: Initialise system trusted keyrings
Jul 6 23:06:51.349493 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 6 23:06:51.349501 kernel: Key type asymmetric registered
Jul 6 23:06:51.349508 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:06:51.349515 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 6 23:06:51.349522 kernel: io scheduler mq-deadline registered
Jul 6 23:06:51.349529 kernel: io scheduler kyber registered
Jul 6 23:06:51.349536 kernel: io scheduler bfq registered
Jul 6 23:06:51.349544 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:06:51.349551 kernel: thunder_xcv, ver 1.0
Jul 6 23:06:51.349558 kernel: thunder_bgx, ver 1.0
Jul 6 23:06:51.349566 kernel: nicpf, ver 1.0
Jul 6 23:06:51.349573 kernel: nicvf, ver 1.0
Jul 6 23:06:51.349710 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 6 23:06:51.349783 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-06T23:06:50 UTC (1751843210)
Jul 6 23:06:51.349793 kernel: efifb: probing for efifb
Jul 6 23:06:51.349801 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 6 23:06:51.349808 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 6 23:06:51.349815 kernel: efifb: scrolling: redraw
Jul 6 23:06:51.349825 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 6 23:06:51.349832 kernel: Console: switching to colour frame buffer device 128x48
Jul 6 23:06:51.349839 kernel: fb0: EFI VGA frame buffer device
Jul 6 23:06:51.349846 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jul 6 23:06:51.349853 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 6 23:06:51.349861 kernel: No ACPI PMU IRQ for CPU0
Jul 6 23:06:51.349868 kernel: No ACPI PMU IRQ for CPU1
Jul 6 23:06:51.349875 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Jul 6 23:06:51.349882 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 6 23:06:51.349890 kernel: watchdog: Hard watchdog permanently disabled
Jul 6 23:06:51.349897 kernel: NET: Registered PF_INET6 protocol family
Jul 6 23:06:51.349918 kernel: Segment Routing with IPv6
Jul 6 23:06:51.349925 kernel: In-situ OAM (IOAM) with IPv6
Jul 6 23:06:51.349932 kernel: NET: Registered PF_PACKET protocol family
Jul 6 23:06:51.349939 kernel: Key type dns_resolver registered
Jul 6 23:06:51.349946 kernel: registered taskstats version 1
Jul 6 23:06:51.349954 kernel: Loading compiled-in X.509 certificates
Jul 6 23:06:51.349961 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: b86e6d3bec2e587f2e5c37def91c4582416a83e3'
Jul 6 23:06:51.349970 kernel: Key type .fscrypt registered
Jul 6 23:06:51.349977 kernel: Key type fscrypt-provisioning registered
Jul 6 23:06:51.349984 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 6 23:06:51.349992 kernel: ima: Allocated hash algorithm: sha1
Jul 6 23:06:51.349999 kernel: ima: No architecture policies found
Jul 6 23:06:51.350006 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 6 23:06:51.350013 kernel: clk: Disabling unused clocks
Jul 6 23:06:51.350020 kernel: Freeing unused kernel memory: 38336K
Jul 6 23:06:51.350027 kernel: Run /init as init process
Jul 6 23:06:51.350035 kernel: with arguments:
Jul 6 23:06:51.350042 kernel: /init
Jul 6 23:06:51.350049 kernel: with environment:
Jul 6 23:06:51.350056 kernel: HOME=/
Jul 6 23:06:51.350063 kernel: TERM=linux
Jul 6 23:06:51.350070 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 6 23:06:51.350078 systemd[1]: Successfully made /usr/ read-only.
Jul 6 23:06:51.350088 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 6 23:06:51.350098 systemd[1]: Detected virtualization microsoft.
Jul 6 23:06:51.350105 systemd[1]: Detected architecture arm64.
Jul 6 23:06:51.350112 systemd[1]: Running in initrd.
Jul 6 23:06:51.350120 systemd[1]: No hostname configured, using default hostname.
Jul 6 23:06:51.350127 systemd[1]: Hostname set to .
Jul 6 23:06:51.350135 systemd[1]: Initializing machine ID from random generator.
Jul 6 23:06:51.350142 systemd[1]: Queued start job for default target initrd.target.
Jul 6 23:06:51.350150 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:06:51.350159 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:06:51.350167 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 6 23:06:51.350175 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:06:51.350183 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 6 23:06:51.350192 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 6 23:06:51.350200 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 6 23:06:51.350210 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 6 23:06:51.350218 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:06:51.350225 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:06:51.350233 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:06:51.350240 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:06:51.350248 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:06:51.350256 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:06:51.350263 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:06:51.350271 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:06:51.350280 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 6 23:06:51.350288 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 6 23:06:51.350295 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:06:51.350303 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:06:51.350311 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:06:51.350318 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:06:51.350326 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 6 23:06:51.350333 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:06:51.350342 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 6 23:06:51.350350 systemd[1]: Starting systemd-fsck-usr.service...
Jul 6 23:06:51.350357 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:06:51.350382 systemd-journald[218]: Collecting audit messages is disabled.
Jul 6 23:06:51.350401 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:06:51.350410 systemd-journald[218]: Journal started
Jul 6 23:06:51.350428 systemd-journald[218]: Runtime Journal (/run/log/journal/a5e02b05fc2b4343bcdcf338a051fd83) is 8M, max 78.5M, 70.5M free.
Jul 6 23:06:51.367366 systemd-modules-load[220]: Inserted module 'overlay'
Jul 6 23:06:51.373749 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:06:51.393888 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:06:51.394543 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 6 23:06:51.424058 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 6 23:06:51.424097 kernel: Bridge firewalling registered
Jul 6 23:06:51.419100 systemd-modules-load[220]: Inserted module 'br_netfilter'
Jul 6 23:06:51.424973 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:06:51.436010 systemd[1]: Finished systemd-fsck-usr.service.
Jul 6 23:06:51.447121 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:06:51.457138 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:06:51.480086 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:06:51.489108 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:06:51.506993 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:06:51.527397 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:06:51.543797 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:06:51.557436 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:06:51.567234 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:06:51.590748 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:06:51.612141 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 6 23:06:51.622120 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:06:51.638090 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:06:51.662223 dracut-cmdline[253]: dracut-dracut-053
Jul 6 23:06:51.670021 dracut-cmdline[253]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=ca8feb1f79a67c117068f051b5f829d3e40170c022cd5834bd6789cba9641479
Jul 6 23:06:51.702619 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:06:51.714539 systemd-resolved[254]: Positive Trust Anchors:
Jul 6 23:06:51.714549 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:06:51.714579 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:06:51.716692 systemd-resolved[254]: Defaulting to hostname 'linux'.
Jul 6 23:06:51.717830 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:06:51.725313 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:06:51.836938 kernel: SCSI subsystem initialized
Jul 6 23:06:51.844930 kernel: Loading iSCSI transport class v2.0-870.
Jul 6 23:06:51.855942 kernel: iscsi: registered transport (tcp)
Jul 6 23:06:51.873678 kernel: iscsi: registered transport (qla4xxx)
Jul 6 23:06:51.873706 kernel: QLogic iSCSI HBA Driver
Jul 6 23:06:51.911536 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:06:51.927187 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 6 23:06:51.963115 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 6 23:06:51.963163 kernel: device-mapper: uevent: version 1.0.3
Jul 6 23:06:51.970024 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 6 23:06:52.019928 kernel: raid6: neonx8 gen() 15747 MB/s
Jul 6 23:06:52.040913 kernel: raid6: neonx4 gen() 15817 MB/s
Jul 6 23:06:52.060909 kernel: raid6: neonx2 gen() 13343 MB/s
Jul 6 23:06:52.081910 kernel: raid6: neonx1 gen() 10517 MB/s
Jul 6 23:06:52.101908 kernel: raid6: int64x8 gen() 6792 MB/s
Jul 6 23:06:52.121908 kernel: raid6: int64x4 gen() 7349 MB/s
Jul 6 23:06:52.142910 kernel: raid6: int64x2 gen() 6114 MB/s
Jul 6 23:06:52.167683 kernel: raid6: int64x1 gen() 5059 MB/s
Jul 6 23:06:52.167700 kernel: raid6: using algorithm neonx4 gen() 15817 MB/s
Jul 6 23:06:52.190915 kernel: raid6: .... xor() 12358 MB/s, rmw enabled
Jul 6 23:06:52.190939 kernel: raid6: using neon recovery algorithm
Jul 6 23:06:52.203252 kernel: xor: measuring software checksum speed
Jul 6 23:06:52.203267 kernel: 8regs : 21624 MB/sec
Jul 6 23:06:52.206808 kernel: 32regs : 21699 MB/sec
Jul 6 23:06:52.210449 kernel: arm64_neon : 27984 MB/sec
Jul 6 23:06:52.214508 kernel: xor: using function: arm64_neon (27984 MB/sec)
Jul 6 23:06:52.264925 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 6 23:06:52.273850 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:06:52.290058 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:06:52.323176 systemd-udevd[439]: Using default interface naming scheme 'v255'.
Jul 6 23:06:52.328171 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:06:52.346029 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 6 23:06:52.374383 dracut-pre-trigger[450]: rd.md=0: removing MD RAID activation
Jul 6 23:06:52.403695 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:06:52.418145 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:06:52.459783 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:06:52.486177 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 6 23:06:52.503095 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:06:52.521877 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:06:52.532246 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:06:52.549398 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:06:52.584274 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 6 23:06:52.599377 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:06:52.619815 kernel: hv_vmbus: Vmbus version:5.3
Jul 6 23:06:52.599505 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:06:52.628749 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:06:52.684980 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 6 23:06:52.685006 kernel: hv_vmbus: registering driver hyperv_keyboard
Jul 6 23:06:52.685016 kernel: hv_vmbus: registering driver hid_hyperv
Jul 6 23:06:52.685025 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 6 23:06:52.685034 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jul 6 23:06:52.647385 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:06:52.712738 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jul 6 23:06:52.712766 kernel: hv_vmbus: registering driver hv_netvsc
Jul 6 23:06:52.712776 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jul 6 23:06:52.712941 kernel: hv_vmbus: registering driver hv_storvsc
Jul 6 23:06:52.647567 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:06:52.733437 kernel: PTP clock support registered
Jul 6 23:06:52.696991 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:06:52.762194 kernel: hv_utils: Registering HyperV Utility Driver
Jul 6 23:06:52.762217 kernel: hv_vmbus: registering driver hv_utils
Jul 6 23:06:52.762227 kernel: hv_utils: Shutdown IC version 3.2
Jul 6 23:06:52.762237 kernel: hv_utils: Heartbeat IC version 3.0
Jul 6 23:06:52.762247 kernel: hv_utils: TimeSync IC version 4.0
Jul 6 23:06:52.733226 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:06:52.750283 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:06:52.835733 kernel: scsi host0: storvsc_host_t
Jul 6 23:06:52.835978 kernel: scsi host1: storvsc_host_t
Jul 6 23:06:52.836105 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jul 6 23:06:52.793505 systemd-resolved[254]: Clock change detected. Flushing caches.
Jul 6 23:06:52.819221 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:06:52.864006 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jul 6 23:06:52.841682 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:06:52.841874 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:06:52.893797 kernel: hv_netvsc 000d3ac5-8205-000d-3ac5-8205000d3ac5 eth0: VF slot 1 added
Jul 6 23:06:52.863835 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:06:52.901425 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:06:52.927955 kernel: hv_vmbus: registering driver hv_pci
Jul 6 23:06:52.928296 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:06:52.974785 kernel: hv_pci b69b3bb3-6d13-476e-8a4b-f2ae707d2634: PCI VMBus probing: Using version 0x10004
Jul 6 23:06:52.974985 kernel: hv_pci b69b3bb3-6d13-476e-8a4b-f2ae707d2634: PCI host bridge to bus 6d13:00
Jul 6 23:06:52.975067 kernel: pci_bus 6d13:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jul 6 23:06:52.975174 kernel: pci_bus 6d13:00: No busn resource found for root bus, will use [bus 00-ff]
Jul 6 23:06:52.974606 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:06:53.006028 kernel: pci 6d13:00:02.0: [15b3:1018] type 00 class 0x020000
Jul 6 23:06:53.006076 kernel: sr 1:0:0:2: [sr0] scsi-1 drive
Jul 6 23:06:53.017612 kernel: pci 6d13:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 6 23:06:53.017707 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 6 23:06:53.020957 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jul 6 23:06:53.027193 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0
Jul 6 23:06:53.027373 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks
Jul 6 23:06:53.044098 kernel: pci 6d13:00:02.0: enabling Extended Tags
Jul 6 23:06:53.044184 kernel: sd 1:0:0:0: [sda] Write Protect is off
Jul 6 23:06:53.064394 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jul 6 23:06:53.064705 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jul 6 23:06:53.064793 kernel: pci 6d13:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 6d13:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jul 6 23:06:53.065888 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:06:53.104312 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:06:53.104334 kernel: pci_bus 6d13:00: busn_res: [bus 00-ff] end is updated to 00
Jul 6 23:06:53.104470 kernel: sd 1:0:0:0: [sda] Attached SCSI disk
Jul 6 23:06:53.104567 kernel: pci 6d13:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 6 23:06:53.151476 kernel: mlx5_core 6d13:00:02.0: enabling device (0000 -> 0002)
Jul 6 23:06:53.158946 kernel: mlx5_core 6d13:00:02.0: firmware version: 16.30.1284
Jul 6 23:06:53.369605 kernel: hv_netvsc 000d3ac5-8205-000d-3ac5-8205000d3ac5 eth0: VF registering: eth1
Jul 6 23:06:53.369820 kernel: mlx5_core 6d13:00:02.0 eth1: joined to eth0
Jul 6 23:06:53.378040 kernel: mlx5_core 6d13:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jul 6 23:06:53.388946 kernel: mlx5_core 6d13:00:02.0 enP27923s1: renamed from eth1
Jul 6 23:06:53.509509 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jul 6 23:06:53.617949 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (487)
Jul 6 23:06:53.623488 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jul 6 23:06:53.650197 kernel: BTRFS: device fsid 990dd864-0c88-4d4d-9797-49057844458a devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (499)
Jul 6 23:06:53.651883 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 6 23:06:53.680156 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jul 6 23:06:53.686776 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jul 6 23:06:53.721161 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 6 23:06:53.749139 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:06:53.756949 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:06:54.765653 disk-uuid[607]: The operation has completed successfully.
Jul 6 23:06:54.770913 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:06:54.834592 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 6 23:06:54.834716 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 6 23:06:54.877129 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 6 23:06:54.891627 sh[693]: Success
Jul 6 23:06:54.920158 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 6 23:06:55.123446 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 6 23:06:55.142109 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 6 23:06:55.151812 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 6 23:06:55.190256 kernel: BTRFS info (device dm-0): first mount of filesystem 990dd864-0c88-4d4d-9797-49057844458a
Jul 6 23:06:55.190314 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:06:55.197840 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 6 23:06:55.204367 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 6 23:06:55.208956 kernel: BTRFS info (device dm-0): using free space tree
Jul 6 23:06:55.810619 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 6 23:06:55.816399 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 6 23:06:55.842223 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 6 23:06:55.850126 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 6 23:06:55.898764 kernel: BTRFS info (device sda6): first mount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b
Jul 6 23:06:55.898828 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:06:55.898839 kernel: BTRFS info (device sda6): using free space tree
Jul 6 23:06:55.918975 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 6 23:06:55.928991 kernel: BTRFS info (device sda6): last unmount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b
Jul 6 23:06:55.933364 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 6 23:06:55.948227 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 6 23:06:55.976495 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:06:55.999119 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:06:56.029330 systemd-networkd[874]: lo: Link UP
Jul 6 23:06:56.029977 systemd-networkd[874]: lo: Gained carrier
Jul 6 23:06:56.031940 systemd-networkd[874]: Enumeration completed
Jul 6 23:06:56.032128 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:06:56.040194 systemd[1]: Reached target network.target - Network.
Jul 6 23:06:56.052486 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:06:56.052490 systemd-networkd[874]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:06:56.133949 kernel: mlx5_core 6d13:00:02.0 enP27923s1: Link up
Jul 6 23:06:56.176216 kernel: hv_netvsc 000d3ac5-8205-000d-3ac5-8205000d3ac5 eth0: Data path switched to VF: enP27923s1
Jul 6 23:06:56.175906 systemd-networkd[874]: enP27923s1: Link UP
Jul 6 23:06:56.176010 systemd-networkd[874]: eth0: Link UP
Jul 6 23:06:56.176105 systemd-networkd[874]: eth0: Gained carrier
Jul 6 23:06:56.176114 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:06:56.203226 systemd-networkd[874]: enP27923s1: Gained carrier
Jul 6 23:06:56.227973 systemd-networkd[874]: eth0: DHCPv4 address 10.200.20.36/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 6 23:06:56.649095 ignition[851]: Ignition 2.20.0
Jul 6 23:06:56.649106 ignition[851]: Stage: fetch-offline
Jul 6 23:06:56.653798 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:06:56.649141 ignition[851]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:06:56.649148 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:06:56.649236 ignition[851]: parsed url from cmdline: ""
Jul 6 23:06:56.649239 ignition[851]: no config URL provided
Jul 6 23:06:56.649243 ignition[851]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:06:56.682181 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 6 23:06:56.649249 ignition[851]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:06:56.649254 ignition[851]: failed to fetch config: resource requires networking
Jul 6 23:06:56.649766 ignition[851]: Ignition finished successfully
Jul 6 23:06:56.702779 ignition[885]: Ignition 2.20.0
Jul 6 23:06:56.702785 ignition[885]: Stage: fetch
Jul 6 23:06:56.703039 ignition[885]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:06:56.703048 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:06:56.703146 ignition[885]: parsed url from cmdline: ""
Jul 6 23:06:56.703150 ignition[885]: no config URL provided
Jul 6 23:06:56.703154 ignition[885]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:06:56.703161 ignition[885]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:06:56.703189 ignition[885]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jul 6 23:06:56.828878 ignition[885]: GET result: OK
Jul 6 23:06:56.828982 ignition[885]: config has been read from IMDS userdata
Jul 6 23:06:56.833347 unknown[885]: fetched base config from "system"
Jul 6 23:06:56.829022 ignition[885]: parsing config with SHA512: 05e13215498c2060afbc5223d5a322c02e39ac617bc359b6058a9bf49b741921ef516f1530c14e0618d328c516b8acf9655b30eba5a74ae955c994d8243285fe
Jul 6 23:06:56.833354 unknown[885]: fetched base config from "system"
Jul 6 23:06:56.833709 ignition[885]: fetch: fetch complete
Jul 6 23:06:56.833359 unknown[885]: fetched user config from "azure"
Jul 6 23:06:56.833713 ignition[885]: fetch: fetch passed
Jul 6 23:06:56.839358 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 6 23:06:56.833759 ignition[885]: Ignition finished successfully
Jul 6 23:06:56.859185 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 6 23:06:56.892270 ignition[892]: Ignition 2.20.0
Jul 6 23:06:56.892281 ignition[892]: Stage: kargs
Jul 6 23:06:56.897045 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 6 23:06:56.892463 ignition[892]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:06:56.892473 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:06:56.893520 ignition[892]: kargs: kargs passed
Jul 6 23:06:56.893570 ignition[892]: Ignition finished successfully
Jul 6 23:06:56.924167 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 6 23:06:56.949045 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 6 23:06:56.943635 ignition[898]: Ignition 2.20.0
Jul 6 23:06:56.955049 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 6 23:06:56.943641 ignition[898]: Stage: disks
Jul 6 23:06:56.965620 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 6 23:06:56.943826 ignition[898]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:06:56.977486 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:06:56.943835 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:06:56.990374 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:06:56.944878 ignition[898]: disks: disks passed
Jul 6 23:06:56.999039 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:06:56.944934 ignition[898]: Ignition finished successfully
Jul 6 23:06:57.029192 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 6 23:06:57.115607 systemd-fsck[906]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jul 6 23:06:57.124751 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 6 23:06:57.144144 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 6 23:06:57.205985 kernel: EXT4-fs (sda9): mounted filesystem efd38a90-a3d5-48a9-85e4-1ea6162daba0 r/w with ordered data mode. Quota mode: none.
Jul 6 23:06:57.206484 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 6 23:06:57.215454 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:06:57.262045 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:06:57.272760 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 6 23:06:57.284582 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 6 23:06:57.298852 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 6 23:06:57.298887 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:06:57.307747 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 6 23:06:57.343945 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (917)
Jul 6 23:06:57.351420 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 6 23:06:57.372863 kernel: BTRFS info (device sda6): first mount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b
Jul 6 23:06:57.372893 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:06:57.372904 kernel: BTRFS info (device sda6): using free space tree
Jul 6 23:06:57.378993 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 6 23:06:57.380288 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:06:57.666080 systemd-networkd[874]: eth0: Gained IPv6LL
Jul 6 23:06:57.775684 coreos-metadata[919]: Jul 06 23:06:57.775 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 6 23:06:57.785794 coreos-metadata[919]: Jul 06 23:06:57.785 INFO Fetch successful
Jul 6 23:06:57.790982 coreos-metadata[919]: Jul 06 23:06:57.790 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jul 6 23:06:57.812787 coreos-metadata[919]: Jul 06 23:06:57.812 INFO Fetch successful
Jul 6 23:06:57.826281 coreos-metadata[919]: Jul 06 23:06:57.826 INFO wrote hostname ci-4230.2.1-a-dc1fa1989d to /sysroot/etc/hostname
Jul 6 23:06:57.836843 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 6 23:06:58.114061 systemd-networkd[874]: enP27923s1: Gained IPv6LL
Jul 6 23:06:58.398452 initrd-setup-root[947]: cut: /sysroot/etc/passwd: No such file or directory
Jul 6 23:06:58.464498 initrd-setup-root[954]: cut: /sysroot/etc/group: No such file or directory
Jul 6 23:06:58.472074 initrd-setup-root[961]: cut: /sysroot/etc/shadow: No such file or directory
Jul 6 23:06:58.478876 initrd-setup-root[968]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 6 23:06:59.383222 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 6 23:06:59.400164 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 6 23:06:59.414128 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 6 23:06:59.431169 kernel: BTRFS info (device sda6): last unmount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b
Jul 6 23:06:59.426989 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 6 23:06:59.457702 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 6 23:06:59.462774 ignition[1036]: INFO : Ignition 2.20.0
Jul 6 23:06:59.462774 ignition[1036]: INFO : Stage: mount
Jul 6 23:06:59.462774 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:06:59.462774 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:06:59.462774 ignition[1036]: INFO : mount: mount passed
Jul 6 23:06:59.462774 ignition[1036]: INFO : Ignition finished successfully
Jul 6 23:06:59.472485 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 6 23:06:59.506051 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 6 23:06:59.520246 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:06:59.552598 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (1048)
Jul 6 23:06:59.552667 kernel: BTRFS info (device sda6): first mount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b
Jul 6 23:06:59.563808 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:06:59.563857 kernel: BTRFS info (device sda6): using free space tree
Jul 6 23:06:59.569948 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 6 23:06:59.571971 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:06:59.599231 ignition[1066]: INFO : Ignition 2.20.0
Jul 6 23:06:59.603463 ignition[1066]: INFO : Stage: files
Jul 6 23:06:59.603463 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:06:59.603463 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:06:59.603463 ignition[1066]: DEBUG : files: compiled without relabeling support, skipping
Jul 6 23:06:59.624843 ignition[1066]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 6 23:06:59.624843 ignition[1066]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 6 23:06:59.691993 ignition[1066]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 6 23:06:59.699173 ignition[1066]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 6 23:06:59.699173 ignition[1066]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 6 23:06:59.692479 unknown[1066]: wrote ssh authorized keys file for user: core
Jul 6 23:06:59.719519 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 6 23:06:59.719519 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jul 6 23:06:59.782365 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 6 23:06:59.903743 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 6 23:06:59.914638 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 6 23:06:59.914638 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 6 23:07:00.433906 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 6 23:07:00.782885 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 6 23:07:00.782885 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 6 23:07:00.803556 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 6 23:07:00.803556 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:07:00.803556 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:07:00.803556 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:07:00.803556 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:07:00.803556 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:07:00.803556 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:07:00.803556 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:07:00.803556 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:07:00.803556 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 6 23:07:00.803556 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 6 23:07:00.803556 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 6 23:07:00.803556 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jul 6 23:07:01.584665 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 6 23:07:03.142347 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 6 23:07:03.142347 ignition[1066]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 6 23:07:03.165001 ignition[1066]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:07:03.165001 ignition[1066]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:07:03.165001 ignition[1066]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 6 23:07:03.165001 ignition[1066]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jul 6 23:07:03.165001 ignition[1066]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jul 6 23:07:03.165001 ignition[1066]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:07:03.165001 ignition[1066]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:07:03.165001 ignition[1066]: INFO : files: files passed
Jul 6 23:07:03.165001 ignition[1066]: INFO : Ignition finished successfully
Jul 6 23:07:03.167188 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 6 23:07:03.211208 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 6 23:07:03.226093 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 6 23:07:03.310670 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:07:03.310670 initrd-setup-root-after-ignition[1092]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:07:03.255003 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 6 23:07:03.344244 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:07:03.255095 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 6 23:07:03.271564 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:07:03.283989 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 6 23:07:03.320171 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 6 23:07:03.370598 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 6 23:07:03.370739 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 6 23:07:03.383320 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 6 23:07:03.396651 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 6 23:07:03.409160 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 6 23:07:03.429297 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 6 23:07:03.461991 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:07:03.484160 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 6 23:07:03.505606 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 6 23:07:03.505734 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 6 23:07:03.520620 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:07:03.534845 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:07:03.549816 systemd[1]: Stopped target timers.target - Timer Units.
Jul 6 23:07:03.559118 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 6 23:07:03.559199 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:07:03.579774 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 6 23:07:03.591694 systemd[1]: Stopped target basic.target - Basic System.
Jul 6 23:07:03.603857 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 6 23:07:03.616299 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:07:03.627915 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 6 23:07:03.641001 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 6 23:07:03.654631 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:07:03.669135 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 6 23:07:03.680998 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 6 23:07:03.694185 systemd[1]: Stopped target swap.target - Swaps.
Jul 6 23:07:03.705588 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 6 23:07:03.705678 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:07:03.723649 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:07:03.737007 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:07:03.753109 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 6 23:07:03.759787 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:07:03.768638 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 6 23:07:03.768720 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:07:03.789480 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 6 23:07:03.789571 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:07:03.804682 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 6 23:07:03.804739 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 6 23:07:03.816283 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 6 23:07:03.816339 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 6 23:07:03.850115 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 6 23:07:03.882314 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 6 23:07:03.914406 ignition[1118]: INFO : Ignition 2.20.0
Jul 6 23:07:03.914406 ignition[1118]: INFO : Stage: umount
Jul 6 23:07:03.914406 ignition[1118]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:07:03.914406 ignition[1118]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:07:03.914406 ignition[1118]: INFO : umount: umount passed
Jul 6 23:07:03.914406 ignition[1118]: INFO : Ignition finished successfully
Jul 6 23:07:03.894005 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 6 23:07:03.894119 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:07:03.907509 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 6 23:07:03.907584 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:07:03.921137 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 6 23:07:03.921234 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 6 23:07:03.931900 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 6 23:07:03.932018 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 6 23:07:03.951193 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 6 23:07:03.951269 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 6 23:07:03.957882 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 6 23:07:03.957985 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 6 23:07:03.969460 systemd[1]: Stopped target network.target - Network.
Jul 6 23:07:03.980791 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 6 23:07:03.980893 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:07:03.995735 systemd[1]: Stopped target paths.target - Path Units.
Jul 6 23:07:04.011419 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 6 23:07:04.018894 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:07:04.027709 systemd[1]: Stopped target slices.target - Slice Units.
Jul 6 23:07:04.044136 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 6 23:07:04.058116 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 6 23:07:04.058182 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:07:04.070807 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 6 23:07:04.070864 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:07:04.084036 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 6 23:07:04.084102 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 6 23:07:04.095618 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 6 23:07:04.095672 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 6 23:07:04.107304 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 6 23:07:04.119341 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 6 23:07:04.147514 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 6 23:07:04.147648 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 6 23:07:04.168820 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 6 23:07:04.367041 kernel: hv_netvsc 000d3ac5-8205-000d-3ac5-8205000d3ac5 eth0: Data path switched from VF: enP27923s1 Jul 6 23:07:04.169266 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 6 23:07:04.169405 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:07:04.186860 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 6 23:07:04.188261 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:07:04.188329 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:07:04.211144 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 6 23:07:04.221384 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 6 23:07:04.221468 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:07:04.232759 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:07:04.232819 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jul 6 23:07:04.248173 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:07:04.248241 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 6 23:07:04.256181 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 6 23:07:04.256254 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:07:04.275511 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:07:04.282845 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 6 23:07:04.282917 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:07:04.291414 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 6 23:07:04.291535 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:07:04.299614 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 6 23:07:04.299698 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 6 23:07:04.311574 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 6 23:07:04.311617 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:07:04.323396 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 6 23:07:04.323475 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:07:04.340544 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 6 23:07:04.340610 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 6 23:07:04.366799 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:07:04.366907 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:07:04.392259 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Jul 6 23:07:04.409481 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 6 23:07:04.409560 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:07:04.428240 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 6 23:07:04.428319 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:07:04.667380 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). Jul 6 23:07:04.436545 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 6 23:07:04.436603 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:07:04.446731 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:07:04.446794 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:07:04.465757 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 6 23:07:04.465849 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 6 23:07:04.465888 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:07:04.466529 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 6 23:07:04.466643 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 6 23:07:04.476617 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 6 23:07:04.476729 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 6 23:07:04.487288 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 6 23:07:04.487378 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 6 23:07:04.500623 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 6 23:07:04.513842 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Jul 6 23:07:04.514047 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 6 23:07:04.546226 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 6 23:07:04.567193 systemd[1]: Switching root. Jul 6 23:07:04.770527 systemd-journald[218]: Journal stopped Jul 6 23:07:10.144063 kernel: SELinux: policy capability network_peer_controls=1 Jul 6 23:07:10.144090 kernel: SELinux: policy capability open_perms=1 Jul 6 23:07:10.144101 kernel: SELinux: policy capability extended_socket_class=1 Jul 6 23:07:10.144109 kernel: SELinux: policy capability always_check_network=0 Jul 6 23:07:10.144119 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 6 23:07:10.144126 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 6 23:07:10.144135 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 6 23:07:10.144142 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 6 23:07:10.144150 kernel: audit: type=1403 audit(1751843225.559:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 6 23:07:10.144160 systemd[1]: Successfully loaded SELinux policy in 188.880ms. Jul 6 23:07:10.144172 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.451ms. Jul 6 23:07:10.144182 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 6 23:07:10.144190 systemd[1]: Detected virtualization microsoft. Jul 6 23:07:10.144199 systemd[1]: Detected architecture arm64. Jul 6 23:07:10.144208 systemd[1]: Detected first boot. Jul 6 23:07:10.144219 systemd[1]: Hostname set to . Jul 6 23:07:10.144228 systemd[1]: Initializing machine ID from random generator. Jul 6 23:07:10.144236 zram_generator::config[1161]: No configuration found. 
Jul 6 23:07:10.144248 kernel: NET: Registered PF_VSOCK protocol family Jul 6 23:07:10.144256 systemd[1]: Populated /etc with preset unit settings. Jul 6 23:07:10.144265 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 6 23:07:10.144274 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 6 23:07:10.144284 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 6 23:07:10.144292 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 6 23:07:10.144301 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 6 23:07:10.144310 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 6 23:07:10.144319 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 6 23:07:10.144328 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 6 23:07:10.144336 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 6 23:07:10.144347 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 6 23:07:10.144356 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 6 23:07:10.144364 systemd[1]: Created slice user.slice - User and Session Slice. Jul 6 23:07:10.144373 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:07:10.144382 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:07:10.144391 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 6 23:07:10.144400 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 6 23:07:10.144409 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jul 6 23:07:10.144419 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:07:10.144433 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 6 23:07:10.144442 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:07:10.144454 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 6 23:07:10.144463 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 6 23:07:10.144472 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 6 23:07:10.144481 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 6 23:07:10.144490 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:07:10.144500 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:07:10.144509 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:07:10.144518 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:07:10.144527 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 6 23:07:10.144536 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 6 23:07:10.144545 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 6 23:07:10.144556 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:07:10.144566 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:07:10.144575 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:07:10.144584 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 6 23:07:10.144594 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 6 23:07:10.144603 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Jul 6 23:07:10.144612 systemd[1]: Mounting media.mount - External Media Directory... Jul 6 23:07:10.144623 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 6 23:07:10.144632 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 6 23:07:10.144642 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 6 23:07:10.144652 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 6 23:07:10.144661 systemd[1]: Reached target machines.target - Containers. Jul 6 23:07:10.144670 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 6 23:07:10.144679 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:07:10.144689 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:07:10.144699 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 6 23:07:10.144708 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:07:10.144717 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:07:10.144726 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:07:10.144735 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 6 23:07:10.144745 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:07:10.144754 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 6 23:07:10.144764 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 6 23:07:10.144774 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. 
Jul 6 23:07:10.144784 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 6 23:07:10.144793 systemd[1]: Stopped systemd-fsck-usr.service. Jul 6 23:07:10.144801 kernel: fuse: init (API version 7.39) Jul 6 23:07:10.144810 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:07:10.144819 kernel: loop: module loaded Jul 6 23:07:10.144828 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:07:10.144838 kernel: ACPI: bus type drm_connector registered Jul 6 23:07:10.144846 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:07:10.144857 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 6 23:07:10.144889 systemd-journald[1265]: Collecting audit messages is disabled. Jul 6 23:07:10.144911 systemd-journald[1265]: Journal started Jul 6 23:07:10.144989 systemd-journald[1265]: Runtime Journal (/run/log/journal/25ed80c3efc643868fb920aa27cba501) is 8M, max 78.5M, 70.5M free. Jul 6 23:07:09.168379 systemd[1]: Queued start job for default target multi-user.target. Jul 6 23:07:09.180780 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 6 23:07:09.181196 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 6 23:07:09.181551 systemd[1]: systemd-journald.service: Consumed 3.458s CPU time. Jul 6 23:07:10.172694 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 6 23:07:10.196725 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 6 23:07:10.210353 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:07:10.219193 systemd[1]: verity-setup.service: Deactivated successfully. 
Jul 6 23:07:10.219251 systemd[1]: Stopped verity-setup.service. Jul 6 23:07:10.237935 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:07:10.238766 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 6 23:07:10.244700 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 6 23:07:10.250913 systemd[1]: Mounted media.mount - External Media Directory. Jul 6 23:07:10.257263 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 6 23:07:10.263863 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 6 23:07:10.270738 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 6 23:07:10.276351 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 6 23:07:10.282986 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:07:10.290148 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 6 23:07:10.290317 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 6 23:07:10.297123 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:07:10.297278 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:07:10.305577 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:07:10.305748 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:07:10.312089 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:07:10.312237 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:07:10.319639 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 6 23:07:10.319799 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 6 23:07:10.326465 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:07:10.326612 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jul 6 23:07:10.333020 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:07:10.339690 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:07:10.346951 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 6 23:07:10.358790 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 6 23:07:10.366642 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:07:10.385580 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 6 23:07:10.404053 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 6 23:07:10.412786 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 6 23:07:10.424992 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 6 23:07:10.425043 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:07:10.432771 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 6 23:07:10.441404 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 6 23:07:10.450643 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 6 23:07:10.456831 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:07:10.459202 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 6 23:07:10.468486 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 6 23:07:10.477364 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 6 23:07:10.478548 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 6 23:07:10.485263 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:07:10.488176 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:07:10.499626 systemd-journald[1265]: Time spent on flushing to /var/log/journal/25ed80c3efc643868fb920aa27cba501 is 13.480ms for 914 entries. Jul 6 23:07:10.499626 systemd-journald[1265]: System Journal (/var/log/journal/25ed80c3efc643868fb920aa27cba501) is 8M, max 2.6G, 2.6G free. Jul 6 23:07:10.588199 systemd-journald[1265]: Received client request to flush runtime journal. Jul 6 23:07:10.516736 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 6 23:07:10.529137 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:07:10.545244 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 6 23:07:10.560866 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 6 23:07:10.571193 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 6 23:07:10.578977 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 6 23:07:10.588080 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 6 23:07:10.604572 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 6 23:07:10.607955 kernel: loop0: detected capacity change from 0 to 211168 Jul 6 23:07:10.618540 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 6 23:07:10.636276 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 6 23:07:10.644304 udevadm[1304]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 6 23:07:10.672966 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 6 23:07:10.673945 systemd-tmpfiles[1303]: ACLs are not supported, ignoring. Jul 6 23:07:10.673959 systemd-tmpfiles[1303]: ACLs are not supported, ignoring. Jul 6 23:07:10.679726 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:07:10.690123 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:07:10.704179 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 6 23:07:10.732960 kernel: loop1: detected capacity change from 0 to 123192 Jul 6 23:07:10.741985 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 6 23:07:10.743607 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 6 23:07:10.788985 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 6 23:07:10.804112 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:07:10.820388 systemd-tmpfiles[1322]: ACLs are not supported, ignoring. Jul 6 23:07:10.820732 systemd-tmpfiles[1322]: ACLs are not supported, ignoring. Jul 6 23:07:10.826608 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 6 23:07:11.106963 kernel: loop2: detected capacity change from 0 to 28720 Jul 6 23:07:11.407977 kernel: loop3: detected capacity change from 0 to 113512 Jul 6 23:07:11.698248 kernel: loop4: detected capacity change from 0 to 211168 Jul 6 23:07:11.710988 kernel: loop5: detected capacity change from 0 to 123192 Jul 6 23:07:11.722949 kernel: loop6: detected capacity change from 0 to 28720 Jul 6 23:07:11.738952 kernel: loop7: detected capacity change from 0 to 113512 Jul 6 23:07:11.743671 (sd-merge)[1328]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jul 6 23:07:11.744509 (sd-merge)[1328]: Merged extensions into '/usr'. Jul 6 23:07:11.748728 systemd[1]: Reload requested from client PID 1301 ('systemd-sysext') (unit systemd-sysext.service)... Jul 6 23:07:11.749014 systemd[1]: Reloading... Jul 6 23:07:11.829954 zram_generator::config[1359]: No configuration found. Jul 6 23:07:11.961167 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:07:12.038254 systemd[1]: Reloading finished in 288 ms. Jul 6 23:07:12.062999 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 6 23:07:12.072023 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 6 23:07:12.093182 systemd[1]: Starting ensure-sysext.service... Jul 6 23:07:12.099473 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:07:12.109285 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:07:12.148304 systemd-udevd[1414]: Using default interface naming scheme 'v255'. Jul 6 23:07:12.186017 systemd[1]: Reload requested from client PID 1412 ('systemctl') (unit ensure-sysext.service)... Jul 6 23:07:12.186034 systemd[1]: Reloading... 
Jul 6 23:07:12.188167 systemd-tmpfiles[1413]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 6 23:07:12.189761 systemd-tmpfiles[1413]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 6 23:07:12.190500 systemd-tmpfiles[1413]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 6 23:07:12.190705 systemd-tmpfiles[1413]: ACLs are not supported, ignoring. Jul 6 23:07:12.190749 systemd-tmpfiles[1413]: ACLs are not supported, ignoring. Jul 6 23:07:12.224564 systemd-tmpfiles[1413]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:07:12.224754 systemd-tmpfiles[1413]: Skipping /boot Jul 6 23:07:12.241869 systemd-tmpfiles[1413]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:07:12.242232 systemd-tmpfiles[1413]: Skipping /boot Jul 6 23:07:12.278109 zram_generator::config[1447]: No configuration found. Jul 6 23:07:12.447594 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:07:12.499969 kernel: mousedev: PS/2 mouse device common for all mice Jul 6 23:07:12.555975 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 6 23:07:12.556136 systemd[1]: Reloading finished in 369 ms. 
Jul 6 23:07:12.562012 kernel: hv_vmbus: registering driver hv_balloon Jul 6 23:07:12.562083 kernel: hv_vmbus: registering driver hyperv_fb Jul 6 23:07:12.570826 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jul 6 23:07:12.577328 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jul 6 23:07:12.577422 kernel: hv_balloon: Memory hot add disabled on ARM64 Jul 6 23:07:12.588054 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jul 6 23:07:12.596691 kernel: Console: switching to colour dummy device 80x25 Jul 6 23:07:12.597139 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:07:12.608777 kernel: Console: switching to colour frame buffer device 128x48 Jul 6 23:07:12.626965 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:07:12.682983 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1473) Jul 6 23:07:12.710250 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 6 23:07:12.727567 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 6 23:07:12.743134 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:07:12.748274 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:07:12.760117 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:07:12.770252 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:07:12.779375 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:07:12.785832 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 6 23:07:12.786200 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:07:12.788132 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 6 23:07:12.798355 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:07:12.813216 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:07:12.818643 systemd[1]: Reached target time-set.target - System Time Set. Jul 6 23:07:12.825546 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 6 23:07:12.833878 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:07:12.842674 systemd[1]: Finished ensure-sysext.service. Jul 6 23:07:12.853096 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:07:12.853316 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:07:12.860430 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:07:12.860773 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:07:12.868531 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:07:12.869907 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:07:12.877782 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:07:12.877973 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:07:12.904258 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 6 23:07:12.919520 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 6 23:07:12.931514 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
Jul 6 23:07:12.952967 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 6 23:07:12.961184 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 6 23:07:12.968831 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:07:12.968914 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:07:12.970820 augenrules[1640]: No rules Jul 6 23:07:12.971187 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 6 23:07:12.981851 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:07:12.982162 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:07:12.991605 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:07:12.992080 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:07:13.003618 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:07:13.011200 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:07:13.023333 lvm[1637]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:07:13.036328 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 6 23:07:13.052046 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 6 23:07:13.071003 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 6 23:07:13.084739 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:07:13.099212 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Jul 6 23:07:13.111884 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 6 23:07:13.121331 lvm[1661]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:07:13.153978 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 6 23:07:13.169064 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 6 23:07:13.177983 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:07:13.180008 systemd-resolved[1611]: Positive Trust Anchors: Jul 6 23:07:13.180024 systemd-resolved[1611]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:07:13.180055 systemd-resolved[1611]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:07:13.197812 systemd-resolved[1611]: Using system hostname 'ci-4230.2.1-a-dc1fa1989d'. Jul 6 23:07:13.199295 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:07:13.206362 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jul 6 23:07:13.287727 systemd-networkd[1607]: lo: Link UP Jul 6 23:07:13.288064 systemd-networkd[1607]: lo: Gained carrier Jul 6 23:07:13.290121 systemd-networkd[1607]: Enumeration completed Jul 6 23:07:13.290596 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:07:13.290964 systemd-networkd[1607]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:07:13.291047 systemd-networkd[1607]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:07:13.297243 systemd[1]: Reached target network.target - Network. Jul 6 23:07:13.308080 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 6 23:07:13.315421 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 6 23:07:13.337951 kernel: mlx5_core 6d13:00:02.0 enP27923s1: Link up Jul 6 23:07:13.364272 kernel: hv_netvsc 000d3ac5-8205-000d-3ac5-8205000d3ac5 eth0: Data path switched to VF: enP27923s1 Jul 6 23:07:13.366353 systemd-networkd[1607]: enP27923s1: Link UP Jul 6 23:07:13.366467 systemd-networkd[1607]: eth0: Link UP Jul 6 23:07:13.366470 systemd-networkd[1607]: eth0: Gained carrier Jul 6 23:07:13.366486 systemd-networkd[1607]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:07:13.368663 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 6 23:07:13.378368 systemd-networkd[1607]: enP27923s1: Gained carrier Jul 6 23:07:13.389019 systemd-networkd[1607]: eth0: DHCPv4 address 10.200.20.36/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 6 23:07:13.391352 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 6 23:07:14.754057 systemd-networkd[1607]: enP27923s1: Gained IPv6LL Jul 6 23:07:14.882039 systemd-networkd[1607]: eth0: Gained IPv6LL Jul 6 23:07:14.884244 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 6 23:07:14.891733 systemd[1]: Reached target network-online.target - Network is Online. Jul 6 23:07:15.472962 ldconfig[1296]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 6 23:07:15.485652 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 6 23:07:15.497267 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 6 23:07:15.512282 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 6 23:07:15.519690 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:07:15.526419 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 6 23:07:15.534472 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 6 23:07:15.542184 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 6 23:07:15.548632 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 6 23:07:15.556389 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 6 23:07:15.563886 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 6 23:07:15.563940 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:07:15.572014 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:07:15.578830 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 6 23:07:15.587427 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Jul 6 23:07:15.595607 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 6 23:07:15.604526 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 6 23:07:15.612188 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 6 23:07:15.629913 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 6 23:07:15.636457 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 6 23:07:15.644965 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 6 23:07:15.651431 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:07:15.657064 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:07:15.662762 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:07:15.662802 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:07:15.670064 systemd[1]: Starting chronyd.service - NTP client/server... Jul 6 23:07:15.679109 systemd[1]: Starting containerd.service - containerd container runtime... Jul 6 23:07:15.691169 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 6 23:07:15.705172 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 6 23:07:15.712356 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 6 23:07:15.718430 (chronyd)[1678]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jul 6 23:07:15.721127 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 6 23:07:15.731054 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Jul 6 23:07:15.731099 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jul 6 23:07:15.738251 jq[1685]: false Jul 6 23:07:15.744255 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jul 6 23:07:15.750810 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jul 6 23:07:15.754828 KVP[1687]: KVP starting; pid is:1687 Jul 6 23:07:15.759425 chronyd[1690]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jul 6 23:07:15.759812 KVP[1687]: KVP LIC Version: 3.1 Jul 6 23:07:15.760989 kernel: hv_utils: KVP IC version 4.0 Jul 6 23:07:15.765633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:07:15.775585 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 6 23:07:15.785945 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 6 23:07:15.792011 chronyd[1690]: Timezone right/UTC failed leap second check, ignoring Jul 6 23:07:15.792266 chronyd[1690]: Loaded seccomp filter (level 2) Jul 6 23:07:15.794423 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 6 23:07:15.806024 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jul 6 23:07:15.822411 extend-filesystems[1686]: Found loop4 Jul 6 23:07:15.828161 extend-filesystems[1686]: Found loop5 Jul 6 23:07:15.828161 extend-filesystems[1686]: Found loop6 Jul 6 23:07:15.828161 extend-filesystems[1686]: Found loop7 Jul 6 23:07:15.828161 extend-filesystems[1686]: Found sda Jul 6 23:07:15.828161 extend-filesystems[1686]: Found sda1 Jul 6 23:07:15.828161 extend-filesystems[1686]: Found sda2 Jul 6 23:07:15.828161 extend-filesystems[1686]: Found sda3 Jul 6 23:07:15.828161 extend-filesystems[1686]: Found usr Jul 6 23:07:15.828161 extend-filesystems[1686]: Found sda4 Jul 6 23:07:15.828161 extend-filesystems[1686]: Found sda6 Jul 6 23:07:15.828161 extend-filesystems[1686]: Found sda7 Jul 6 23:07:15.828161 extend-filesystems[1686]: Found sda9 Jul 6 23:07:15.828161 extend-filesystems[1686]: Checking size of /dev/sda9 Jul 6 23:07:15.982973 coreos-metadata[1680]: Jul 06 23:07:15.926 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 6 23:07:15.982973 coreos-metadata[1680]: Jul 06 23:07:15.951 INFO Fetch successful Jul 6 23:07:15.982973 coreos-metadata[1680]: Jul 06 23:07:15.951 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jul 6 23:07:15.982973 coreos-metadata[1680]: Jul 06 23:07:15.952 INFO Fetch successful Jul 6 23:07:15.982973 coreos-metadata[1680]: Jul 06 23:07:15.952 INFO Fetching http://168.63.129.16/machine/5cb0f14b-6999-4ee8-94f3-e4cf5a8dfba5/990dd569%2De52f%2D406c%2D9091%2Db6998aa2d981.%5Fci%2D4230.2.1%2Da%2Ddc1fa1989d?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jul 6 23:07:15.982973 coreos-metadata[1680]: Jul 06 23:07:15.956 INFO Fetch successful Jul 6 23:07:15.982973 coreos-metadata[1680]: Jul 06 23:07:15.962 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jul 6 23:07:15.982973 coreos-metadata[1680]: Jul 06 23:07:15.979 INFO Fetch successful Jul 6 23:07:15.855258 dbus-daemon[1681]: [system] SELinux support is enabled Jul 6 
23:07:15.832743 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 6 23:07:15.983761 extend-filesystems[1686]: Old size kept for /dev/sda9 Jul 6 23:07:15.983761 extend-filesystems[1686]: Found sr0 Jul 6 23:07:16.028548 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1489) Jul 6 23:07:15.852164 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 6 23:07:15.873741 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 6 23:07:16.028905 update_engine[1714]: I20250706 23:07:15.943902 1714 main.cc:92] Flatcar Update Engine starting Jul 6 23:07:16.028905 update_engine[1714]: I20250706 23:07:15.948768 1714 update_check_scheduler.cc:74] Next update check in 6m12s Jul 6 23:07:15.874396 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 6 23:07:16.051480 jq[1721]: true Jul 6 23:07:15.886991 systemd[1]: Starting update-engine.service - Update Engine... Jul 6 23:07:15.908056 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 6 23:07:15.920971 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 6 23:07:15.929724 systemd[1]: Started chronyd.service - NTP client/server. Jul 6 23:07:15.945450 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 6 23:07:15.946969 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 6 23:07:15.947260 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 6 23:07:15.947420 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 6 23:07:15.978650 systemd-logind[1708]: New seat seat0. Jul 6 23:07:15.981457 systemd[1]: motdgen.service: Deactivated successfully. 
Jul 6 23:07:15.981669 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 6 23:07:15.987895 systemd-logind[1708]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jul 6 23:07:15.989574 systemd[1]: Started systemd-logind.service - User Login Management. Jul 6 23:07:16.037576 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 6 23:07:16.052464 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 6 23:07:16.052675 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 6 23:07:16.079440 (ntainerd)[1752]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 6 23:07:16.089883 jq[1751]: true Jul 6 23:07:16.099503 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 6 23:07:16.149763 dbus-daemon[1681]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 6 23:07:16.157211 tar[1749]: linux-arm64/LICENSE Jul 6 23:07:16.157501 tar[1749]: linux-arm64/helm Jul 6 23:07:16.170591 systemd[1]: Started update-engine.service - Update Engine. Jul 6 23:07:16.182404 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 6 23:07:16.182643 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 6 23:07:16.182765 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 6 23:07:16.194789 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Jul 6 23:07:16.194910 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 6 23:07:16.215233 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 6 23:07:16.295377 bash[1810]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:07:16.304838 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 6 23:07:16.323588 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 6 23:07:16.495194 locksmithd[1806]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 6 23:07:16.656630 containerd[1752]: time="2025-07-06T23:07:16.656283540Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jul 6 23:07:16.737951 containerd[1752]: time="2025-07-06T23:07:16.734326540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:07:16.741015 containerd[1752]: time="2025-07-06T23:07:16.740961740Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:07:16.741015 containerd[1752]: time="2025-07-06T23:07:16.741009820Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 6 23:07:16.741123 containerd[1752]: time="2025-07-06T23:07:16.741029180Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 6 23:07:16.741213 containerd[1752]: time="2025-07-06T23:07:16.741188620Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jul 6 23:07:16.741239 containerd[1752]: time="2025-07-06T23:07:16.741212940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 6 23:07:16.741308 containerd[1752]: time="2025-07-06T23:07:16.741286820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:07:16.741308 containerd[1752]: time="2025-07-06T23:07:16.741306620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:07:16.741537 containerd[1752]: time="2025-07-06T23:07:16.741513260Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:07:16.741537 containerd[1752]: time="2025-07-06T23:07:16.741534260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 6 23:07:16.741579 containerd[1752]: time="2025-07-06T23:07:16.741549060Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:07:16.741579 containerd[1752]: time="2025-07-06T23:07:16.741558460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 6 23:07:16.741653 containerd[1752]: time="2025-07-06T23:07:16.741632860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:07:16.741849 containerd[1752]: time="2025-07-06T23:07:16.741825540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jul 6 23:07:16.742001 containerd[1752]: time="2025-07-06T23:07:16.741979420Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:07:16.742035 containerd[1752]: time="2025-07-06T23:07:16.742001060Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 6 23:07:16.742097 containerd[1752]: time="2025-07-06T23:07:16.742077700Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 6 23:07:16.742144 containerd[1752]: time="2025-07-06T23:07:16.742126820Z" level=info msg="metadata content store policy set" policy=shared Jul 6 23:07:16.761205 containerd[1752]: time="2025-07-06T23:07:16.760914100Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 6 23:07:16.761205 containerd[1752]: time="2025-07-06T23:07:16.761029460Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 6 23:07:16.761205 containerd[1752]: time="2025-07-06T23:07:16.761047620Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 6 23:07:16.761205 containerd[1752]: time="2025-07-06T23:07:16.761065700Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 6 23:07:16.761205 containerd[1752]: time="2025-07-06T23:07:16.761080180Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 6 23:07:16.761447 containerd[1752]: time="2025-07-06T23:07:16.761274020Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jul 6 23:07:16.761552 containerd[1752]: time="2025-07-06T23:07:16.761516140Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 6 23:07:16.761644 containerd[1752]: time="2025-07-06T23:07:16.761623220Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 6 23:07:16.761673 containerd[1752]: time="2025-07-06T23:07:16.761645300Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 6 23:07:16.761673 containerd[1752]: time="2025-07-06T23:07:16.761660740Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 6 23:07:16.761709 containerd[1752]: time="2025-07-06T23:07:16.761674220Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 6 23:07:16.761709 containerd[1752]: time="2025-07-06T23:07:16.761687660Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 6 23:07:16.761709 containerd[1752]: time="2025-07-06T23:07:16.761700740Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 6 23:07:16.761761 containerd[1752]: time="2025-07-06T23:07:16.761715060Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 6 23:07:16.761761 containerd[1752]: time="2025-07-06T23:07:16.761730100Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 6 23:07:16.761761 containerd[1752]: time="2025-07-06T23:07:16.761742980Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Jul 6 23:07:16.761761 containerd[1752]: time="2025-07-06T23:07:16.761755820Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 6 23:07:16.761824 containerd[1752]: time="2025-07-06T23:07:16.761768900Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 6 23:07:16.761824 containerd[1752]: time="2025-07-06T23:07:16.761790540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 6 23:07:16.761824 containerd[1752]: time="2025-07-06T23:07:16.761804140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 6 23:07:16.761824 containerd[1752]: time="2025-07-06T23:07:16.761816260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 6 23:07:16.761895 containerd[1752]: time="2025-07-06T23:07:16.761839060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 6 23:07:16.761895 containerd[1752]: time="2025-07-06T23:07:16.761852020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 6 23:07:16.761895 containerd[1752]: time="2025-07-06T23:07:16.761865020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 6 23:07:16.761895 containerd[1752]: time="2025-07-06T23:07:16.761877380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 6 23:07:16.761895 containerd[1752]: time="2025-07-06T23:07:16.761889980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 6 23:07:16.762016 containerd[1752]: time="2025-07-06T23:07:16.761902980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jul 6 23:07:16.762016 containerd[1752]: time="2025-07-06T23:07:16.761945740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 6 23:07:16.762016 containerd[1752]: time="2025-07-06T23:07:16.761961540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 6 23:07:16.762016 containerd[1752]: time="2025-07-06T23:07:16.761973940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 6 23:07:16.762016 containerd[1752]: time="2025-07-06T23:07:16.761986980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 6 23:07:16.762016 containerd[1752]: time="2025-07-06T23:07:16.762000860Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 6 23:07:16.762197 containerd[1752]: time="2025-07-06T23:07:16.762026620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 6 23:07:16.762197 containerd[1752]: time="2025-07-06T23:07:16.762040700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 6 23:07:16.762197 containerd[1752]: time="2025-07-06T23:07:16.762059380Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 6 23:07:16.762197 containerd[1752]: time="2025-07-06T23:07:16.762116860Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 6 23:07:16.762197 containerd[1752]: time="2025-07-06T23:07:16.762136980Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 6 23:07:16.762197 containerd[1752]: time="2025-07-06T23:07:16.762147740Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 6 23:07:16.762197 containerd[1752]: time="2025-07-06T23:07:16.762159380Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 6 23:07:16.762197 containerd[1752]: time="2025-07-06T23:07:16.762170340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 6 23:07:16.762197 containerd[1752]: time="2025-07-06T23:07:16.762182860Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 6 23:07:16.762197 containerd[1752]: time="2025-07-06T23:07:16.762192500Z" level=info msg="NRI interface is disabled by configuration." Jul 6 23:07:16.762197 containerd[1752]: time="2025-07-06T23:07:16.762202340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 6 23:07:16.762568 containerd[1752]: time="2025-07-06T23:07:16.762513900Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 6 23:07:16.762568 containerd[1752]: time="2025-07-06T23:07:16.762567060Z" level=info msg="Connect containerd service" Jul 6 23:07:16.762700 containerd[1752]: time="2025-07-06T23:07:16.762611260Z" level=info msg="using legacy CRI server" Jul 6 23:07:16.762700 containerd[1752]: time="2025-07-06T23:07:16.762618380Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 6 23:07:16.763947 containerd[1752]: time="2025-07-06T23:07:16.762736460Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 6 23:07:16.767493 containerd[1752]: time="2025-07-06T23:07:16.767122900Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:07:16.769494 containerd[1752]: time="2025-07-06T23:07:16.767693820Z" level=info msg="Start subscribing containerd event" Jul 6 23:07:16.769494 containerd[1752]: time="2025-07-06T23:07:16.767758940Z" level=info msg="Start recovering state" Jul 6 23:07:16.769494 containerd[1752]: time="2025-07-06T23:07:16.767857140Z" level=info msg="Start event monitor" Jul 6 23:07:16.769494 containerd[1752]: time="2025-07-06T23:07:16.767871540Z" level=info msg="Start snapshots 
syncer" Jul 6 23:07:16.769494 containerd[1752]: time="2025-07-06T23:07:16.767881660Z" level=info msg="Start cni network conf syncer for default" Jul 6 23:07:16.769494 containerd[1752]: time="2025-07-06T23:07:16.767889260Z" level=info msg="Start streaming server" Jul 6 23:07:16.769494 containerd[1752]: time="2025-07-06T23:07:16.768682820Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 6 23:07:16.769494 containerd[1752]: time="2025-07-06T23:07:16.768762780Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 6 23:07:16.768974 systemd[1]: Started containerd.service - containerd container runtime. Jul 6 23:07:16.776880 containerd[1752]: time="2025-07-06T23:07:16.776842380Z" level=info msg="containerd successfully booted in 0.126728s" Jul 6 23:07:16.908135 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:07:16.916778 (kubelet)[1830]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:07:17.015184 tar[1749]: linux-arm64/README.md Jul 6 23:07:17.033444 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 6 23:07:17.274908 kubelet[1830]: E0706 23:07:17.274805 1830 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:07:17.277206 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:07:17.277524 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:07:17.277962 systemd[1]: kubelet.service: Consumed 726ms CPU time, 259.8M memory peak. 
Jul 6 23:07:17.319305 sshd_keygen[1713]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 6 23:07:17.339016 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 6 23:07:17.352229 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 6 23:07:17.359377 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jul 6 23:07:17.365558 systemd[1]: issuegen.service: Deactivated successfully. Jul 6 23:07:17.366114 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 6 23:07:17.384986 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 6 23:07:17.394468 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jul 6 23:07:17.403541 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 6 23:07:17.413236 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 6 23:07:17.419750 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 6 23:07:17.427518 systemd[1]: Reached target getty.target - Login Prompts. Jul 6 23:07:17.436727 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 6 23:07:17.444035 systemd[1]: Startup finished in 698ms (kernel) + 14.590s (initrd) + 12.072s (userspace) = 27.361s. Jul 6 23:07:17.624543 login[1863]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jul 6 23:07:17.626043 login[1864]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:07:17.635419 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 6 23:07:17.640212 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 6 23:07:17.643765 systemd-logind[1708]: New session 1 of user core. Jul 6 23:07:17.651790 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 6 23:07:17.663343 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jul 6 23:07:17.667140 (systemd)[1871]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 6 23:07:17.669487 systemd-logind[1708]: New session c1 of user core. Jul 6 23:07:17.917577 systemd[1871]: Queued start job for default target default.target. Jul 6 23:07:17.923869 systemd[1871]: Created slice app.slice - User Application Slice. Jul 6 23:07:17.923900 systemd[1871]: Reached target paths.target - Paths. Jul 6 23:07:17.923963 systemd[1871]: Reached target timers.target - Timers. Jul 6 23:07:17.925774 systemd[1871]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:07:17.935550 systemd[1871]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 6 23:07:17.935616 systemd[1871]: Reached target sockets.target - Sockets. Jul 6 23:07:17.935659 systemd[1871]: Reached target basic.target - Basic System. Jul 6 23:07:17.935686 systemd[1871]: Reached target default.target - Main User Target. Jul 6 23:07:17.935711 systemd[1871]: Startup finished in 259ms. Jul 6 23:07:17.936056 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:07:17.940502 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 6 23:07:18.625424 login[1863]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:07:18.630526 systemd-logind[1708]: New session 2 of user core. Jul 6 23:07:18.634131 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 6 23:07:18.925259 waagent[1860]: 2025-07-06T23:07:18.925095Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jul 6 23:07:18.931649 waagent[1860]: 2025-07-06T23:07:18.931569Z INFO Daemon Daemon OS: flatcar 4230.2.1 Jul 6 23:07:18.936575 waagent[1860]: 2025-07-06T23:07:18.936508Z INFO Daemon Daemon Python: 3.11.11 Jul 6 23:07:18.941751 waagent[1860]: 2025-07-06T23:07:18.941535Z INFO Daemon Daemon Run daemon Jul 6 23:07:18.947226 waagent[1860]: 2025-07-06T23:07:18.947164Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.2.1' Jul 6 23:07:18.956872 waagent[1860]: 2025-07-06T23:07:18.956800Z INFO Daemon Daemon Using waagent for provisioning Jul 6 23:07:18.962568 waagent[1860]: 2025-07-06T23:07:18.962511Z INFO Daemon Daemon Activate resource disk Jul 6 23:07:18.967692 waagent[1860]: 2025-07-06T23:07:18.967633Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 6 23:07:18.980427 waagent[1860]: 2025-07-06T23:07:18.980350Z INFO Daemon Daemon Found device: None Jul 6 23:07:18.985207 waagent[1860]: 2025-07-06T23:07:18.985145Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 6 23:07:18.994117 waagent[1860]: 2025-07-06T23:07:18.994054Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 6 23:07:19.006456 waagent[1860]: 2025-07-06T23:07:19.006401Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 6 23:07:19.012368 waagent[1860]: 2025-07-06T23:07:19.012307Z INFO Daemon Daemon Running default provisioning handler Jul 6 23:07:19.024522 waagent[1860]: 2025-07-06T23:07:19.024435Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Jul 6 23:07:19.038860 waagent[1860]: 2025-07-06T23:07:19.038789Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 6 23:07:19.048202 waagent[1860]: 2025-07-06T23:07:19.048133Z INFO Daemon Daemon cloud-init is enabled: False Jul 6 23:07:19.054555 waagent[1860]: 2025-07-06T23:07:19.054492Z INFO Daemon Daemon Copying ovf-env.xml Jul 6 23:07:19.148267 waagent[1860]: 2025-07-06T23:07:19.146463Z INFO Daemon Daemon Successfully mounted dvd Jul 6 23:07:19.175281 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 6 23:07:19.176917 waagent[1860]: 2025-07-06T23:07:19.176828Z INFO Daemon Daemon Detect protocol endpoint Jul 6 23:07:19.181864 waagent[1860]: 2025-07-06T23:07:19.181787Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 6 23:07:19.187302 waagent[1860]: 2025-07-06T23:07:19.187236Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jul 6 23:07:19.193700 waagent[1860]: 2025-07-06T23:07:19.193636Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 6 23:07:19.199194 waagent[1860]: 2025-07-06T23:07:19.199133Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 6 23:07:19.204198 waagent[1860]: 2025-07-06T23:07:19.204136Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 6 23:07:19.239948 waagent[1860]: 2025-07-06T23:07:19.239886Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 6 23:07:19.246569 waagent[1860]: 2025-07-06T23:07:19.246533Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 6 23:07:19.251591 waagent[1860]: 2025-07-06T23:07:19.251538Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 6 23:07:19.349820 waagent[1860]: 2025-07-06T23:07:19.349698Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 6 23:07:19.356912 waagent[1860]: 2025-07-06T23:07:19.356840Z INFO Daemon Daemon Forcing an update of the goal state. 
Jul 6 23:07:19.366482 waagent[1860]: 2025-07-06T23:07:19.366427Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 6 23:07:19.394752 waagent[1860]: 2025-07-06T23:07:19.394701Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jul 6 23:07:19.401833 waagent[1860]: 2025-07-06T23:07:19.401781Z INFO Daemon Jul 6 23:07:19.404714 waagent[1860]: 2025-07-06T23:07:19.404664Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: b5af4966-7783-4d2d-b0b3-9a35e13a38fe eTag: 996793488958117069 source: Fabric] Jul 6 23:07:19.416165 waagent[1860]: 2025-07-06T23:07:19.416112Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jul 6 23:07:19.422790 waagent[1860]: 2025-07-06T23:07:19.422739Z INFO Daemon Jul 6 23:07:19.425570 waagent[1860]: 2025-07-06T23:07:19.425519Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 6 23:07:19.437492 waagent[1860]: 2025-07-06T23:07:19.437407Z INFO Daemon Daemon Downloading artifacts profile blob Jul 6 23:07:19.528703 waagent[1860]: 2025-07-06T23:07:19.528601Z INFO Daemon Downloaded certificate {'thumbprint': '615E525C69EF64CF0BDFA340B206AB6FF3707A79', 'hasPrivateKey': True} Jul 6 23:07:19.539850 waagent[1860]: 2025-07-06T23:07:19.539792Z INFO Daemon Downloaded certificate {'thumbprint': '02D31405E1046CC37E480A7E15AD2B98BDEA9F6F', 'hasPrivateKey': False} Jul 6 23:07:19.551746 waagent[1860]: 2025-07-06T23:07:19.551684Z INFO Daemon Fetch goal state completed Jul 6 23:07:19.564087 waagent[1860]: 2025-07-06T23:07:19.564039Z INFO Daemon Daemon Starting provisioning Jul 6 23:07:19.569562 waagent[1860]: 2025-07-06T23:07:19.569492Z INFO Daemon Daemon Handle ovf-env.xml. 
Jul 6 23:07:19.576136 waagent[1860]: 2025-07-06T23:07:19.576074Z INFO Daemon Daemon Set hostname [ci-4230.2.1-a-dc1fa1989d] Jul 6 23:07:19.599945 waagent[1860]: 2025-07-06T23:07:19.598845Z INFO Daemon Daemon Publish hostname [ci-4230.2.1-a-dc1fa1989d] Jul 6 23:07:19.606521 waagent[1860]: 2025-07-06T23:07:19.606436Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 6 23:07:19.613643 waagent[1860]: 2025-07-06T23:07:19.613559Z INFO Daemon Daemon Primary interface is [eth0] Jul 6 23:07:19.627157 systemd-networkd[1607]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:07:19.627165 systemd-networkd[1607]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:07:19.627194 systemd-networkd[1607]: eth0: DHCP lease lost Jul 6 23:07:19.628215 waagent[1860]: 2025-07-06T23:07:19.628134Z INFO Daemon Daemon Create user account if not exists Jul 6 23:07:19.634324 waagent[1860]: 2025-07-06T23:07:19.634249Z INFO Daemon Daemon User core already exists, skip useradd Jul 6 23:07:19.641569 waagent[1860]: 2025-07-06T23:07:19.641503Z INFO Daemon Daemon Configure sudoer Jul 6 23:07:19.647268 waagent[1860]: 2025-07-06T23:07:19.647189Z INFO Daemon Daemon Configure sshd Jul 6 23:07:19.652509 waagent[1860]: 2025-07-06T23:07:19.652438Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 6 23:07:19.665866 waagent[1860]: 2025-07-06T23:07:19.665795Z INFO Daemon Daemon Deploy ssh public key. 
Jul 6 23:07:19.695009 systemd-networkd[1607]: eth0: DHCPv4 address 10.200.20.36/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 6 23:07:20.784274 waagent[1860]: 2025-07-06T23:07:20.784212Z INFO Daemon Daemon Provisioning complete Jul 6 23:07:20.804359 waagent[1860]: 2025-07-06T23:07:20.804300Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 6 23:07:20.810667 waagent[1860]: 2025-07-06T23:07:20.810597Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jul 6 23:07:20.820745 waagent[1860]: 2025-07-06T23:07:20.820673Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jul 6 23:07:20.967389 waagent[1925]: 2025-07-06T23:07:20.966763Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jul 6 23:07:20.967389 waagent[1925]: 2025-07-06T23:07:20.966970Z INFO ExtHandler ExtHandler OS: flatcar 4230.2.1 Jul 6 23:07:20.967389 waagent[1925]: 2025-07-06T23:07:20.967035Z INFO ExtHandler ExtHandler Python: 3.11.11 Jul 6 23:07:21.324981 waagent[1925]: 2025-07-06T23:07:21.324453Z INFO ExtHandler ExtHandler Distro: flatcar-4230.2.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jul 6 23:07:21.324981 waagent[1925]: 2025-07-06T23:07:21.324725Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 6 23:07:21.324981 waagent[1925]: 2025-07-06T23:07:21.324789Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 6 23:07:21.335044 waagent[1925]: 2025-07-06T23:07:21.334943Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 6 23:07:21.342487 waagent[1925]: 2025-07-06T23:07:21.342428Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 6 23:07:21.343140 waagent[1925]: 2025-07-06T23:07:21.343087Z INFO ExtHandler Jul 6 23:07:21.343220 waagent[1925]: 2025-07-06T23:07:21.343187Z 
INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: d39eb684-abc1-4c42-afb0-feb409ff91a7 eTag: 996793488958117069 source: Fabric] Jul 6 23:07:21.343535 waagent[1925]: 2025-07-06T23:07:21.343492Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jul 6 23:07:21.352991 waagent[1925]: 2025-07-06T23:07:21.352880Z INFO ExtHandler Jul 6 23:07:21.353096 waagent[1925]: 2025-07-06T23:07:21.353061Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 6 23:07:21.360978 waagent[1925]: 2025-07-06T23:07:21.360911Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 6 23:07:21.579009 waagent[1925]: 2025-07-06T23:07:21.578795Z INFO ExtHandler Downloaded certificate {'thumbprint': '615E525C69EF64CF0BDFA340B206AB6FF3707A79', 'hasPrivateKey': True} Jul 6 23:07:21.579401 waagent[1925]: 2025-07-06T23:07:21.579350Z INFO ExtHandler Downloaded certificate {'thumbprint': '02D31405E1046CC37E480A7E15AD2B98BDEA9F6F', 'hasPrivateKey': False} Jul 6 23:07:21.579883 waagent[1925]: 2025-07-06T23:07:21.579816Z INFO ExtHandler Fetch goal state completed Jul 6 23:07:21.598846 waagent[1925]: 2025-07-06T23:07:21.598776Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1925 Jul 6 23:07:21.599033 waagent[1925]: 2025-07-06T23:07:21.598987Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 6 23:07:21.600901 waagent[1925]: 2025-07-06T23:07:21.600810Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.2.1', '', 'Flatcar Container Linux by Kinvolk'] Jul 6 23:07:21.601596 waagent[1925]: 2025-07-06T23:07:21.601244Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 6 23:07:21.657854 waagent[1925]: 2025-07-06T23:07:21.657807Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 6 23:07:21.658087 waagent[1925]: 
2025-07-06T23:07:21.658043Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 6 23:07:21.663957 waagent[1925]: 2025-07-06T23:07:21.663875Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 6 23:07:21.670704 systemd[1]: Reload requested from client PID 1940 ('systemctl') (unit waagent.service)... Jul 6 23:07:21.670718 systemd[1]: Reloading... Jul 6 23:07:21.769018 zram_generator::config[1985]: No configuration found. Jul 6 23:07:21.865219 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:07:21.963997 systemd[1]: Reloading finished in 292 ms. Jul 6 23:07:21.976596 waagent[1925]: 2025-07-06T23:07:21.976220Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jul 6 23:07:21.982454 systemd[1]: Reload requested from client PID 2033 ('systemctl') (unit waagent.service)... Jul 6 23:07:21.982470 systemd[1]: Reloading... Jul 6 23:07:22.075009 zram_generator::config[2072]: No configuration found. Jul 6 23:07:22.181113 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:07:22.280638 systemd[1]: Reloading finished in 297 ms. 
Jul 6 23:07:22.296122 waagent[1925]: 2025-07-06T23:07:22.295293Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 6 23:07:22.296122 waagent[1925]: 2025-07-06T23:07:22.295471Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 6 23:07:22.659569 waagent[1925]: 2025-07-06T23:07:22.659432Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 6 23:07:22.660361 waagent[1925]: 2025-07-06T23:07:22.660278Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jul 6 23:07:22.661383 waagent[1925]: 2025-07-06T23:07:22.661317Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 6 23:07:22.661512 waagent[1925]: 2025-07-06T23:07:22.661460Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 6 23:07:22.661649 waagent[1925]: 2025-07-06T23:07:22.661609Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 6 23:07:22.661890 waagent[1925]: 2025-07-06T23:07:22.661844Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jul 6 23:07:22.662491 waagent[1925]: 2025-07-06T23:07:22.662435Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Jul 6 23:07:22.662648 waagent[1925]: 2025-07-06T23:07:22.662563Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 6 23:07:22.662648 waagent[1925]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 6 23:07:22.662648 waagent[1925]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jul 6 23:07:22.662648 waagent[1925]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 6 23:07:22.662648 waagent[1925]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 6 23:07:22.662648 waagent[1925]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 6 23:07:22.662648 waagent[1925]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 6 23:07:22.663260 waagent[1925]: 2025-07-06T23:07:22.663207Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 6 23:07:22.663919 waagent[1925]: 2025-07-06T23:07:22.663838Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 6 23:07:22.664489 waagent[1925]: 2025-07-06T23:07:22.664371Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 6 23:07:22.664489 waagent[1925]: 2025-07-06T23:07:22.664448Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 6 23:07:22.664765 waagent[1925]: 2025-07-06T23:07:22.664502Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jul 6 23:07:22.664765 waagent[1925]: 2025-07-06T23:07:22.664213Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 6 23:07:22.664765 waagent[1925]: 2025-07-06T23:07:22.664666Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 6 23:07:22.665464 waagent[1925]: 2025-07-06T23:07:22.665423Z INFO EnvHandler ExtHandler Configure routes Jul 6 23:07:22.668423 waagent[1925]: 2025-07-06T23:07:22.668349Z INFO EnvHandler ExtHandler Gateway:None Jul 6 23:07:22.669063 waagent[1925]: 2025-07-06T23:07:22.668700Z INFO EnvHandler ExtHandler Routes:None Jul 6 23:07:22.673687 waagent[1925]: 2025-07-06T23:07:22.673628Z INFO ExtHandler ExtHandler Jul 6 23:07:22.673787 waagent[1925]: 2025-07-06T23:07:22.673751Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 7840c787-040c-4d4c-93f5-e2f92f953cd8 correlation 709ba4e4-44a0-4d77-b058-becd3bbfea9f created: 2025-07-06T23:06:06.462967Z] Jul 6 23:07:22.674234 waagent[1925]: 2025-07-06T23:07:22.674179Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jul 6 23:07:22.674964 waagent[1925]: 2025-07-06T23:07:22.674784Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jul 6 23:07:22.705148 waagent[1925]: 2025-07-06T23:07:22.705062Z INFO MonitorHandler ExtHandler Network interfaces: Jul 6 23:07:22.705148 waagent[1925]: Executing ['ip', '-a', '-o', 'link']: Jul 6 23:07:22.705148 waagent[1925]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 6 23:07:22.705148 waagent[1925]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c5:82:05 brd ff:ff:ff:ff:ff:ff Jul 6 23:07:22.705148 waagent[1925]: 3: enP27923s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c5:82:05 brd ff:ff:ff:ff:ff:ff\ altname enP27923p0s2 Jul 6 23:07:22.705148 waagent[1925]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 6 23:07:22.705148 waagent[1925]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 6 23:07:22.705148 waagent[1925]: 2: eth0 inet 10.200.20.36/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 6 23:07:22.705148 waagent[1925]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 6 23:07:22.705148 waagent[1925]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jul 6 23:07:22.705148 waagent[1925]: 2: eth0 inet6 fe80::20d:3aff:fec5:8205/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 6 23:07:22.705148 waagent[1925]: 3: enP27923s1 inet6 fe80::20d:3aff:fec5:8205/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 6 23:07:22.721649 waagent[1925]: 2025-07-06T23:07:22.721587Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 
501EBC02-6F09-4D1E-AA61-10DDF29BE69E;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jul 6 23:07:22.733456 waagent[1925]: 2025-07-06T23:07:22.733381Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Jul 6 23:07:22.733456 waagent[1925]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:07:22.733456 waagent[1925]: pkts bytes target prot opt in out source destination Jul 6 23:07:22.733456 waagent[1925]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:07:22.733456 waagent[1925]: pkts bytes target prot opt in out source destination Jul 6 23:07:22.733456 waagent[1925]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:07:22.733456 waagent[1925]: pkts bytes target prot opt in out source destination Jul 6 23:07:22.733456 waagent[1925]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 6 23:07:22.733456 waagent[1925]: 5 457 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 6 23:07:22.733456 waagent[1925]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 6 23:07:22.737101 waagent[1925]: 2025-07-06T23:07:22.737002Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 6 23:07:22.737101 waagent[1925]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:07:22.737101 waagent[1925]: pkts bytes target prot opt in out source destination Jul 6 23:07:22.737101 waagent[1925]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:07:22.737101 waagent[1925]: pkts bytes target prot opt in out source destination Jul 6 23:07:22.737101 waagent[1925]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:07:22.737101 waagent[1925]: pkts bytes target prot opt in out source destination Jul 6 23:07:22.737101 waagent[1925]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 6 23:07:22.737101 waagent[1925]: 10 1102 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 6 23:07:22.737101 waagent[1925]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 
ctstate INVALID,NEW Jul 6 23:07:22.737387 waagent[1925]: 2025-07-06T23:07:22.737346Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 6 23:07:23.236814 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 6 23:07:23.243234 systemd[1]: Started sshd@0-10.200.20.36:22-10.200.16.10:39790.service - OpenSSH per-connection server daemon (10.200.16.10:39790). Jul 6 23:07:23.857174 sshd[2158]: Accepted publickey for core from 10.200.16.10 port 39790 ssh2: RSA SHA256:BYvOZLTfueOxq93dYKJbaYxARQqOBHJqeUMgtMpy+gQ Jul 6 23:07:23.858588 sshd-session[2158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:07:23.862894 systemd-logind[1708]: New session 3 of user core. Jul 6 23:07:23.868111 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 6 23:07:24.305210 systemd[1]: Started sshd@1-10.200.20.36:22-10.200.16.10:39804.service - OpenSSH per-connection server daemon (10.200.16.10:39804). Jul 6 23:07:24.780576 sshd[2163]: Accepted publickey for core from 10.200.16.10 port 39804 ssh2: RSA SHA256:BYvOZLTfueOxq93dYKJbaYxARQqOBHJqeUMgtMpy+gQ Jul 6 23:07:24.782081 sshd-session[2163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:07:24.786208 systemd-logind[1708]: New session 4 of user core. Jul 6 23:07:24.796101 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 6 23:07:25.133718 sshd[2165]: Connection closed by 10.200.16.10 port 39804 Jul 6 23:07:25.134563 sshd-session[2163]: pam_unix(sshd:session): session closed for user core Jul 6 23:07:25.137365 systemd-logind[1708]: Session 4 logged out. Waiting for processes to exit. Jul 6 23:07:25.137598 systemd[1]: sshd@1-10.200.20.36:22-10.200.16.10:39804.service: Deactivated successfully. Jul 6 23:07:25.139775 systemd[1]: session-4.scope: Deactivated successfully. Jul 6 23:07:25.141221 systemd-logind[1708]: Removed session 4. 
Jul 6 23:07:25.221369 systemd[1]: Started sshd@2-10.200.20.36:22-10.200.16.10:39816.service - OpenSSH per-connection server daemon (10.200.16.10:39816). Jul 6 23:07:25.705991 sshd[2171]: Accepted publickey for core from 10.200.16.10 port 39816 ssh2: RSA SHA256:BYvOZLTfueOxq93dYKJbaYxARQqOBHJqeUMgtMpy+gQ Jul 6 23:07:25.707290 sshd-session[2171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:07:25.711682 systemd-logind[1708]: New session 5 of user core. Jul 6 23:07:25.719105 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 6 23:07:26.052820 sshd[2173]: Connection closed by 10.200.16.10 port 39816 Jul 6 23:07:26.053598 sshd-session[2171]: pam_unix(sshd:session): session closed for user core Jul 6 23:07:26.056639 systemd-logind[1708]: Session 5 logged out. Waiting for processes to exit. Jul 6 23:07:26.056871 systemd[1]: sshd@2-10.200.20.36:22-10.200.16.10:39816.service: Deactivated successfully. Jul 6 23:07:26.058493 systemd[1]: session-5.scope: Deactivated successfully. Jul 6 23:07:26.061256 systemd-logind[1708]: Removed session 5. Jul 6 23:07:26.140419 systemd[1]: Started sshd@3-10.200.20.36:22-10.200.16.10:39828.service - OpenSSH per-connection server daemon (10.200.16.10:39828). Jul 6 23:07:26.620235 sshd[2179]: Accepted publickey for core from 10.200.16.10 port 39828 ssh2: RSA SHA256:BYvOZLTfueOxq93dYKJbaYxARQqOBHJqeUMgtMpy+gQ Jul 6 23:07:26.621522 sshd-session[2179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:07:26.625722 systemd-logind[1708]: New session 6 of user core. Jul 6 23:07:26.636094 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 6 23:07:26.973030 sshd[2181]: Connection closed by 10.200.16.10 port 39828 Jul 6 23:07:26.973709 sshd-session[2179]: pam_unix(sshd:session): session closed for user core Jul 6 23:07:26.976744 systemd[1]: sshd@3-10.200.20.36:22-10.200.16.10:39828.service: Deactivated successfully. 
Jul 6 23:07:26.978605 systemd[1]: session-6.scope: Deactivated successfully. Jul 6 23:07:26.980499 systemd-logind[1708]: Session 6 logged out. Waiting for processes to exit. Jul 6 23:07:26.981700 systemd-logind[1708]: Removed session 6. Jul 6 23:07:27.072239 systemd[1]: Started sshd@4-10.200.20.36:22-10.200.16.10:39844.service - OpenSSH per-connection server daemon (10.200.16.10:39844). Jul 6 23:07:27.466018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 6 23:07:27.475180 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:07:27.550117 sshd[2187]: Accepted publickey for core from 10.200.16.10 port 39844 ssh2: RSA SHA256:BYvOZLTfueOxq93dYKJbaYxARQqOBHJqeUMgtMpy+gQ Jul 6 23:07:27.551141 sshd-session[2187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:07:27.556775 systemd-logind[1708]: New session 7 of user core. Jul 6 23:07:27.568130 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 6 23:07:27.594872 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:07:27.605261 (kubelet)[2198]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:07:27.720217 kubelet[2198]: E0706 23:07:27.718978 2198 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:07:27.722172 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:07:27.722321 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:07:27.722786 systemd[1]: kubelet.service: Consumed 146ms CPU time, 105.6M memory peak. 
Jul 6 23:07:28.127659 sudo[2205]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 6 23:07:28.128407 sudo[2205]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:07:28.156079 sudo[2205]: pam_unix(sudo:session): session closed for user root Jul 6 23:07:28.240776 sshd[2192]: Connection closed by 10.200.16.10 port 39844 Jul 6 23:07:28.241533 sshd-session[2187]: pam_unix(sshd:session): session closed for user core Jul 6 23:07:28.245455 systemd[1]: sshd@4-10.200.20.36:22-10.200.16.10:39844.service: Deactivated successfully. Jul 6 23:07:28.247163 systemd[1]: session-7.scope: Deactivated successfully. Jul 6 23:07:28.248271 systemd-logind[1708]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:07:28.249325 systemd-logind[1708]: Removed session 7. Jul 6 23:07:28.337257 systemd[1]: Started sshd@5-10.200.20.36:22-10.200.16.10:39850.service - OpenSSH per-connection server daemon (10.200.16.10:39850). Jul 6 23:07:28.827780 sshd[2211]: Accepted publickey for core from 10.200.16.10 port 39850 ssh2: RSA SHA256:BYvOZLTfueOxq93dYKJbaYxARQqOBHJqeUMgtMpy+gQ Jul 6 23:07:28.829195 sshd-session[2211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:07:28.833638 systemd-logind[1708]: New session 8 of user core. Jul 6 23:07:28.837084 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 6 23:07:29.104813 sudo[2215]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 6 23:07:29.105233 sudo[2215]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:07:29.109022 sudo[2215]: pam_unix(sudo:session): session closed for user root Jul 6 23:07:29.114031 sudo[2214]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 6 23:07:29.114315 sudo[2214]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:07:29.135361 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 6 23:07:29.161623 augenrules[2237]: No rules Jul 6 23:07:29.163274 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:07:29.163632 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:07:29.166170 sudo[2214]: pam_unix(sudo:session): session closed for user root Jul 6 23:07:29.244622 sshd[2213]: Connection closed by 10.200.16.10 port 39850 Jul 6 23:07:29.245413 sshd-session[2211]: pam_unix(sshd:session): session closed for user core Jul 6 23:07:29.249445 systemd[1]: sshd@5-10.200.20.36:22-10.200.16.10:39850.service: Deactivated successfully. Jul 6 23:07:29.252525 systemd[1]: session-8.scope: Deactivated successfully. Jul 6 23:07:29.253223 systemd-logind[1708]: Session 8 logged out. Waiting for processes to exit. Jul 6 23:07:29.254493 systemd-logind[1708]: Removed session 8. Jul 6 23:07:29.335158 systemd[1]: Started sshd@6-10.200.20.36:22-10.200.16.10:39862.service - OpenSSH per-connection server daemon (10.200.16.10:39862). Jul 6 23:07:29.829235 sshd[2246]: Accepted publickey for core from 10.200.16.10 port 39862 ssh2: RSA SHA256:BYvOZLTfueOxq93dYKJbaYxARQqOBHJqeUMgtMpy+gQ Jul 6 23:07:29.830563 sshd-session[2246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:07:29.836245 systemd-logind[1708]: New session 9 of user core. 
Jul 6 23:07:29.843201 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 6 23:07:30.105009 sudo[2249]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 6 23:07:30.105267 sudo[2249]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:07:31.255217 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 6 23:07:31.255290 (dockerd)[2266]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 6 23:07:31.954184 dockerd[2266]: time="2025-07-06T23:07:31.953728100Z" level=info msg="Starting up" Jul 6 23:07:32.274804 dockerd[2266]: time="2025-07-06T23:07:32.274687580Z" level=info msg="Loading containers: start." Jul 6 23:07:32.464993 kernel: Initializing XFRM netlink socket Jul 6 23:07:32.602138 systemd-networkd[1607]: docker0: Link UP Jul 6 23:07:32.643368 dockerd[2266]: time="2025-07-06T23:07:32.643307740Z" level=info msg="Loading containers: done." Jul 6 23:07:32.654260 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3554067628-merged.mount: Deactivated successfully. 
Jul 6 23:07:32.663972 dockerd[2266]: time="2025-07-06T23:07:32.663566780Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 6 23:07:32.663972 dockerd[2266]: time="2025-07-06T23:07:32.663681020Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jul 6 23:07:32.663972 dockerd[2266]: time="2025-07-06T23:07:32.663811900Z" level=info msg="Daemon has completed initialization" Jul 6 23:07:32.727806 dockerd[2266]: time="2025-07-06T23:07:32.727737300Z" level=info msg="API listen on /run/docker.sock" Jul 6 23:07:32.728445 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 6 23:07:33.537898 containerd[1752]: time="2025-07-06T23:07:33.537845140Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 6 23:07:34.606768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount304426216.mount: Deactivated successfully. Jul 6 23:07:37.972843 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 6 23:07:37.983337 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:07:38.129262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 6 23:07:38.143254 (kubelet)[2472]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:07:38.257947 kubelet[2472]: E0706 23:07:38.257777 2472 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:07:38.260352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:07:38.260506 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:07:38.261033 systemd[1]: kubelet.service: Consumed 132ms CPU time, 104.9M memory peak. Jul 6 23:07:39.580418 chronyd[1690]: Selected source PHC0 Jul 6 23:07:43.881822 containerd[1752]: time="2025-07-06T23:07:43.881753011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:07:43.975014 containerd[1752]: time="2025-07-06T23:07:43.974909496Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351716" Jul 6 23:07:44.038370 containerd[1752]: time="2025-07-06T23:07:44.038242047Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:07:44.087007 containerd[1752]: time="2025-07-06T23:07:44.086853510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:07:44.088722 containerd[1752]: time="2025-07-06T23:07:44.088181631Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with 
image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 10.550291851s" Jul 6 23:07:44.088722 containerd[1752]: time="2025-07-06T23:07:44.088229071Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\"" Jul 6 23:07:44.089723 containerd[1752]: time="2025-07-06T23:07:44.089687992Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 6 23:07:47.498687 containerd[1752]: time="2025-07-06T23:07:47.498627365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:07:47.504190 containerd[1752]: time="2025-07-06T23:07:47.504103888Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537623" Jul 6 23:07:47.513177 containerd[1752]: time="2025-07-06T23:07:47.513110452Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:07:47.522785 containerd[1752]: time="2025-07-06T23:07:47.522701417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:07:47.524250 containerd[1752]: time="2025-07-06T23:07:47.523860497Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo 
digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 3.434130705s" Jul 6 23:07:47.524250 containerd[1752]: time="2025-07-06T23:07:47.523900577Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\"" Jul 6 23:07:47.524674 containerd[1752]: time="2025-07-06T23:07:47.524575018Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 6 23:07:48.451049 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 6 23:07:48.460246 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:07:48.566682 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:07:48.579317 (kubelet)[2531]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:07:48.686576 kubelet[2531]: E0706 23:07:48.686500 2531 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:07:48.689363 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:07:48.689655 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:07:48.690191 systemd[1]: kubelet.service: Consumed 131ms CPU time, 105M memory peak. 
Jul 6 23:07:53.032025 containerd[1752]: time="2025-07-06T23:07:53.031430216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:07:53.035808 containerd[1752]: time="2025-07-06T23:07:53.035548818Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293515" Jul 6 23:07:53.041308 containerd[1752]: time="2025-07-06T23:07:53.041208581Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:07:53.050560 containerd[1752]: time="2025-07-06T23:07:53.050497065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:07:53.051708 containerd[1752]: time="2025-07-06T23:07:53.051675265Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 5.527069447s" Jul 6 23:07:53.051794 containerd[1752]: time="2025-07-06T23:07:53.051713425Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\"" Jul 6 23:07:53.052433 containerd[1752]: time="2025-07-06T23:07:53.052281865Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 6 23:07:54.179014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3373023785.mount: Deactivated successfully. 
Jul 6 23:07:54.527408 containerd[1752]: time="2025-07-06T23:07:54.527344863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:07:54.533263 containerd[1752]: time="2025-07-06T23:07:54.533062265Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199472" Jul 6 23:07:54.539290 containerd[1752]: time="2025-07-06T23:07:54.539250948Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:07:54.545454 containerd[1752]: time="2025-07-06T23:07:54.545374590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:07:54.546458 containerd[1752]: time="2025-07-06T23:07:54.545983911Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 1.493668526s" Jul 6 23:07:54.546458 containerd[1752]: time="2025-07-06T23:07:54.546021191Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\"" Jul 6 23:07:54.546646 containerd[1752]: time="2025-07-06T23:07:54.546613071Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 6 23:07:55.240039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount237387885.mount: Deactivated successfully. 
Jul 6 23:07:56.391128 containerd[1752]: time="2025-07-06T23:07:56.391068843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:07:56.394026 containerd[1752]: time="2025-07-06T23:07:56.393954364Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Jul 6 23:07:56.398328 containerd[1752]: time="2025-07-06T23:07:56.398257406Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:07:56.403481 containerd[1752]: time="2025-07-06T23:07:56.403403289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:07:56.404792 containerd[1752]: time="2025-07-06T23:07:56.404638249Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.857987338s" Jul 6 23:07:56.404792 containerd[1752]: time="2025-07-06T23:07:56.404680769Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jul 6 23:07:56.405573 containerd[1752]: time="2025-07-06T23:07:56.405371530Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 6 23:07:57.217575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3992415490.mount: Deactivated successfully. 
Jul 6 23:07:57.250135 containerd[1752]: time="2025-07-06T23:07:57.250087551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:07:57.253128 containerd[1752]: time="2025-07-06T23:07:57.253079192Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jul 6 23:07:57.260734 containerd[1752]: time="2025-07-06T23:07:57.260697035Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:07:57.266909 containerd[1752]: time="2025-07-06T23:07:57.266830358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:07:57.267940 containerd[1752]: time="2025-07-06T23:07:57.267460999Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 862.056349ms" Jul 6 23:07:57.267940 containerd[1752]: time="2025-07-06T23:07:57.267494479Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 6 23:07:57.268192 containerd[1752]: time="2025-07-06T23:07:57.268164959Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 6 23:07:57.866414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2758444483.mount: Deactivated successfully. Jul 6 23:07:58.700900 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Jul 6 23:07:58.708166 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:07:58.995033 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:07:59.005468 (kubelet)[2663]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:07:59.086570 kubelet[2663]: E0706 23:07:59.086511 2663 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:07:59.089346 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:07:59.089525 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:07:59.089823 systemd[1]: kubelet.service: Consumed 133ms CPU time, 104.3M memory peak. 
Jul 6 23:08:00.508237 containerd[1752]: time="2025-07-06T23:08:00.508183581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:08:00.513101 containerd[1752]: time="2025-07-06T23:08:00.513038463Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334599" Jul 6 23:08:00.519072 containerd[1752]: time="2025-07-06T23:08:00.518971946Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:08:00.577078 containerd[1752]: time="2025-07-06T23:08:00.576992252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:08:00.578667 containerd[1752]: time="2025-07-06T23:08:00.578384973Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.310183934s" Jul 6 23:08:00.578667 containerd[1752]: time="2025-07-06T23:08:00.578425693Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jul 6 23:08:00.689105 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jul 6 23:08:01.109918 update_engine[1714]: I20250706 23:08:01.109830 1714 update_attempter.cc:509] Updating boot flags... 
Jul 6 23:08:01.170301 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2710) Jul 6 23:08:01.343988 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2710) Jul 6 23:08:04.172536 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:08:04.172684 systemd[1]: kubelet.service: Consumed 133ms CPU time, 104.3M memory peak. Jul 6 23:08:04.189504 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:08:04.222340 systemd[1]: Reload requested from client PID 2819 ('systemctl') (unit session-9.scope)... Jul 6 23:08:04.222359 systemd[1]: Reloading... Jul 6 23:08:04.337954 zram_generator::config[2872]: No configuration found. Jul 6 23:08:04.441382 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:08:04.545128 systemd[1]: Reloading finished in 322 ms. Jul 6 23:08:04.594479 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:08:04.600447 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:08:04.609709 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:08:04.610032 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:08:04.610088 systemd[1]: kubelet.service: Consumed 89ms CPU time, 94.9M memory peak. Jul 6 23:08:04.611978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:08:04.744234 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 6 23:08:04.749384 (kubelet)[2935]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:08:04.905626 kubelet[2935]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:08:04.906021 kubelet[2935]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:08:04.906071 kubelet[2935]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:08:04.906218 kubelet[2935]: I0706 23:08:04.906186 2935 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:08:05.478239 kubelet[2935]: I0706 23:08:05.478200 2935 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 6 23:08:05.478428 kubelet[2935]: I0706 23:08:05.478417 2935 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:08:05.478745 kubelet[2935]: I0706 23:08:05.478729 2935 server.go:956] "Client rotation is on, will bootstrap in background" Jul 6 23:08:05.493315 kubelet[2935]: E0706 23:08:05.493273 2935 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 6 23:08:05.494551 kubelet[2935]: I0706 23:08:05.494527 2935 
dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:08:05.505608 kubelet[2935]: E0706 23:08:05.505555 2935 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:08:05.505608 kubelet[2935]: I0706 23:08:05.505608 2935 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:08:05.508755 kubelet[2935]: I0706 23:08:05.508729 2935 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 6 23:08:05.509014 kubelet[2935]: I0706 23:08:05.508983 2935 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:08:05.509176 kubelet[2935]: I0706 23:08:05.509012 2935 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4230.2.1-a-dc1fa1989d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:08:05.509310 kubelet[2935]: I0706 23:08:05.509184 2935 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:08:05.509310 kubelet[2935]: I0706 23:08:05.509193 2935 container_manager_linux.go:303] "Creating device plugin manager" Jul 6 23:08:05.509359 kubelet[2935]: I0706 23:08:05.509331 2935 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:08:05.512332 kubelet[2935]: I0706 23:08:05.512305 2935 kubelet.go:480] "Attempting to sync node 
with API server" Jul 6 23:08:05.512388 kubelet[2935]: I0706 23:08:05.512335 2935 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:08:05.512388 kubelet[2935]: I0706 23:08:05.512365 2935 kubelet.go:386] "Adding apiserver pod source" Jul 6 23:08:05.512388 kubelet[2935]: I0706 23:08:05.512384 2935 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:08:05.517145 kubelet[2935]: E0706 23:08:05.517108 2935 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 6 23:08:05.517593 kubelet[2935]: E0706 23:08:05.517551 2935 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.1-a-dc1fa1989d&limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 6 23:08:05.517677 kubelet[2935]: I0706 23:08:05.517658 2935 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 6 23:08:05.518277 kubelet[2935]: I0706 23:08:05.518250 2935 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 6 23:08:05.518338 kubelet[2935]: W0706 23:08:05.518317 2935 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 6 23:08:05.522399 kubelet[2935]: I0706 23:08:05.522230 2935 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:08:05.522399 kubelet[2935]: I0706 23:08:05.522275 2935 server.go:1289] "Started kubelet" Jul 6 23:08:05.523811 kubelet[2935]: I0706 23:08:05.523782 2935 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:08:05.526148 kubelet[2935]: E0706 23:08:05.525206 2935 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.36:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.36:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.1-a-dc1fa1989d.184fcc2fbfb4f91d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.1-a-dc1fa1989d,UID:ci-4230.2.1-a-dc1fa1989d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.1-a-dc1fa1989d,},FirstTimestamp:2025-07-06 23:08:05.522250013 +0000 UTC m=+0.768644179,LastTimestamp:2025-07-06 23:08:05.522250013 +0000 UTC m=+0.768644179,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.1-a-dc1fa1989d,}" Jul 6 23:08:05.527953 kubelet[2935]: I0706 23:08:05.527456 2935 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:08:05.528386 kubelet[2935]: I0706 23:08:05.528369 2935 server.go:317] "Adding debug handlers to kubelet server" Jul 6 23:08:05.530768 kubelet[2935]: I0706 23:08:05.530260 2935 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jul 6 23:08:05.532023 kubelet[2935]: I0706 23:08:05.531279 2935 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:08:05.532023 kubelet[2935]: I0706 23:08:05.531511 2935 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:08:05.532023 kubelet[2935]: I0706 23:08:05.531704 2935 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:08:05.534037 kubelet[2935]: E0706 23:08:05.533384 2935 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-dc1fa1989d\" not found" Jul 6 23:08:05.534037 kubelet[2935]: I0706 23:08:05.533434 2935 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:08:05.534037 kubelet[2935]: I0706 23:08:05.533625 2935 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:08:05.534037 kubelet[2935]: I0706 23:08:05.533682 2935 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:08:05.535107 kubelet[2935]: E0706 23:08:05.534176 2935 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 6 23:08:05.535107 kubelet[2935]: E0706 23:08:05.534424 2935 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.1-a-dc1fa1989d?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="200ms" Jul 6 23:08:05.537905 kubelet[2935]: I0706 23:08:05.537873 2935 factory.go:223] Registration of the systemd container 
factory successfully Jul 6 23:08:05.538038 kubelet[2935]: I0706 23:08:05.537988 2935 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:08:05.538244 kubelet[2935]: E0706 23:08:05.538175 2935 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:08:05.540180 kubelet[2935]: I0706 23:08:05.540125 2935 factory.go:223] Registration of the containerd container factory successfully Jul 6 23:08:05.566755 kubelet[2935]: I0706 23:08:05.566721 2935 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 6 23:08:05.567101 kubelet[2935]: I0706 23:08:05.566912 2935 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 6 23:08:05.567398 kubelet[2935]: I0706 23:08:05.567198 2935 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 6 23:08:05.567398 kubelet[2935]: I0706 23:08:05.567216 2935 kubelet.go:2436] "Starting kubelet main sync loop" Jul 6 23:08:05.567398 kubelet[2935]: E0706 23:08:05.567262 2935 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:08:05.569055 kubelet[2935]: E0706 23:08:05.568944 2935 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 6 23:08:05.571919 kubelet[2935]: I0706 23:08:05.571886 2935 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:08:05.571919 kubelet[2935]: I0706 23:08:05.571906 2935 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:08:05.572089 kubelet[2935]: I0706 23:08:05.572043 2935 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:08:05.577979 kubelet[2935]: I0706 23:08:05.577937 2935 policy_none.go:49] "None policy: Start" Jul 6 23:08:05.577979 kubelet[2935]: I0706 23:08:05.577972 2935 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:08:05.577979 kubelet[2935]: I0706 23:08:05.577987 2935 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:08:05.591351 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 6 23:08:05.602635 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 6 23:08:05.606295 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 6 23:08:05.616935 kubelet[2935]: E0706 23:08:05.616881 2935 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 6 23:08:05.617142 kubelet[2935]: I0706 23:08:05.617119 2935 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:08:05.617142 kubelet[2935]: I0706 23:08:05.617137 2935 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:08:05.617512 kubelet[2935]: I0706 23:08:05.617432 2935 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:08:05.619991 kubelet[2935]: E0706 23:08:05.619953 2935 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 6 23:08:05.620256 kubelet[2935]: E0706 23:08:05.620232 2935 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.2.1-a-dc1fa1989d\" not found" Jul 6 23:08:05.682808 systemd[1]: Created slice kubepods-burstable-poda3bfefb81fe8939114888355504ce16c.slice - libcontainer container kubepods-burstable-poda3bfefb81fe8939114888355504ce16c.slice. Jul 6 23:08:05.693737 kubelet[2935]: E0706 23:08:05.693677 2935 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-dc1fa1989d\" not found" node="ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:05.698970 systemd[1]: Created slice kubepods-burstable-pod0e1d8f61390728e1ed6e2f2335f63caf.slice - libcontainer container kubepods-burstable-pod0e1d8f61390728e1ed6e2f2335f63caf.slice. 
Jul 6 23:08:05.700890 kubelet[2935]: E0706 23:08:05.700866 2935 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-dc1fa1989d\" not found" node="ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:05.703824 systemd[1]: Created slice kubepods-burstable-pod339a0bc9e6b7629f540e95bfb5cab70d.slice - libcontainer container kubepods-burstable-pod339a0bc9e6b7629f540e95bfb5cab70d.slice. Jul 6 23:08:05.705330 kubelet[2935]: E0706 23:08:05.705307 2935 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-dc1fa1989d\" not found" node="ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:05.719537 kubelet[2935]: I0706 23:08:05.719395 2935 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:05.719971 kubelet[2935]: E0706 23:08:05.719942 2935 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:05.735527 kubelet[2935]: I0706 23:08:05.735272 2935 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0e1d8f61390728e1ed6e2f2335f63caf-kubeconfig\") pod \"kube-scheduler-ci-4230.2.1-a-dc1fa1989d\" (UID: \"0e1d8f61390728e1ed6e2f2335f63caf\") " pod="kube-system/kube-scheduler-ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:05.735527 kubelet[2935]: I0706 23:08:05.735402 2935 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/339a0bc9e6b7629f540e95bfb5cab70d-k8s-certs\") pod \"kube-apiserver-ci-4230.2.1-a-dc1fa1989d\" (UID: \"339a0bc9e6b7629f540e95bfb5cab70d\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:05.735527 kubelet[2935]: I0706 23:08:05.735420 2935 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/339a0bc9e6b7629f540e95bfb5cab70d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.1-a-dc1fa1989d\" (UID: \"339a0bc9e6b7629f540e95bfb5cab70d\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:05.735993 kubelet[2935]: I0706 23:08:05.735758 2935 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3bfefb81fe8939114888355504ce16c-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.1-a-dc1fa1989d\" (UID: \"a3bfefb81fe8939114888355504ce16c\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:05.735993 kubelet[2935]: I0706 23:08:05.735794 2935 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3bfefb81fe8939114888355504ce16c-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.1-a-dc1fa1989d\" (UID: \"a3bfefb81fe8939114888355504ce16c\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:05.735993 kubelet[2935]: I0706 23:08:05.735812 2935 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3bfefb81fe8939114888355504ce16c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.1-a-dc1fa1989d\" (UID: \"a3bfefb81fe8939114888355504ce16c\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:05.736228 kubelet[2935]: I0706 23:08:05.736150 2935 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/339a0bc9e6b7629f540e95bfb5cab70d-ca-certs\") pod 
\"kube-apiserver-ci-4230.2.1-a-dc1fa1989d\" (UID: \"339a0bc9e6b7629f540e95bfb5cab70d\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:05.736228 kubelet[2935]: I0706 23:08:05.736173 2935 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3bfefb81fe8939114888355504ce16c-ca-certs\") pod \"kube-controller-manager-ci-4230.2.1-a-dc1fa1989d\" (UID: \"a3bfefb81fe8939114888355504ce16c\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:05.736504 kubelet[2935]: E0706 23:08:05.736454 2935 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.1-a-dc1fa1989d?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="400ms" Jul 6 23:08:05.736575 kubelet[2935]: I0706 23:08:05.736490 2935 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3bfefb81fe8939114888355504ce16c-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.1-a-dc1fa1989d\" (UID: \"a3bfefb81fe8939114888355504ce16c\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:05.921906 kubelet[2935]: I0706 23:08:05.921861 2935 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:05.922442 kubelet[2935]: E0706 23:08:05.922403 2935 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:05.995725 containerd[1752]: time="2025-07-06T23:08:05.995603507Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.1-a-dc1fa1989d,Uid:a3bfefb81fe8939114888355504ce16c,Namespace:kube-system,Attempt:0,}" Jul 6 23:08:06.001946 containerd[1752]: time="2025-07-06T23:08:06.001878833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.1-a-dc1fa1989d,Uid:0e1d8f61390728e1ed6e2f2335f63caf,Namespace:kube-system,Attempt:0,}" Jul 6 23:08:06.006800 containerd[1752]: time="2025-07-06T23:08:06.006759838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.1-a-dc1fa1989d,Uid:339a0bc9e6b7629f540e95bfb5cab70d,Namespace:kube-system,Attempt:0,}" Jul 6 23:08:06.123392 kubelet[2935]: E0706 23:08:06.123284 2935 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.36:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.36:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.1-a-dc1fa1989d.184fcc2fbfb4f91d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.1-a-dc1fa1989d,UID:ci-4230.2.1-a-dc1fa1989d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.1-a-dc1fa1989d,},FirstTimestamp:2025-07-06 23:08:05.522250013 +0000 UTC m=+0.768644179,LastTimestamp:2025-07-06 23:08:05.522250013 +0000 UTC m=+0.768644179,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.1-a-dc1fa1989d,}" Jul 6 23:08:06.137875 kubelet[2935]: E0706 23:08:06.137840 2935 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.1-a-dc1fa1989d?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="800ms" Jul 6 23:08:06.325330 kubelet[2935]: I0706 
23:08:06.324947 2935 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:06.325614 kubelet[2935]: E0706 23:08:06.325547 2935 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:06.562378 kubelet[2935]: E0706 23:08:06.562331 2935 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 6 23:08:06.585262 kubelet[2935]: E0706 23:08:06.585140 2935 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 6 23:08:06.671360 kubelet[2935]: E0706 23:08:06.671316 2935 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 6 23:08:06.938798 kubelet[2935]: E0706 23:08:06.938692 2935 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.1-a-dc1fa1989d?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="1.6s" Jul 6 23:08:07.030767 kubelet[2935]: E0706 23:08:07.030709 2935 reflector.go:200] 
"Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.1-a-dc1fa1989d&limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 6 23:08:07.128109 kubelet[2935]: I0706 23:08:07.128079 2935 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:07.128535 kubelet[2935]: E0706 23:08:07.128476 2935 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:07.579871 kubelet[2935]: E0706 23:08:07.579832 2935 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 6 23:08:07.773309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3739461113.mount: Deactivated successfully. 
Jul 6 23:08:07.805969 containerd[1752]: time="2025-07-06T23:08:07.805791644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:08:07.823996 containerd[1752]: time="2025-07-06T23:08:07.823909301Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jul 6 23:08:07.829334 containerd[1752]: time="2025-07-06T23:08:07.829298626Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:08:07.837753 containerd[1752]: time="2025-07-06T23:08:07.836671873Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:08:07.846304 containerd[1752]: time="2025-07-06T23:08:07.846127243Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:08:07.851470 containerd[1752]: time="2025-07-06T23:08:07.851431208Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:08:07.853627 containerd[1752]: time="2025-07-06T23:08:07.853586890Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:08:07.859136 containerd[1752]: time="2025-07-06T23:08:07.859073575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:08:07.860255 
containerd[1752]: time="2025-07-06T23:08:07.860015536Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.853194298s" Jul 6 23:08:07.866121 containerd[1752]: time="2025-07-06T23:08:07.866084022Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.864101589s" Jul 6 23:08:07.873103 containerd[1752]: time="2025-07-06T23:08:07.873077468Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.877392281s" Jul 6 23:08:08.471706 containerd[1752]: time="2025-07-06T23:08:08.471614822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:08:08.472064 containerd[1752]: time="2025-07-06T23:08:08.471688062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:08:08.472369 containerd[1752]: time="2025-07-06T23:08:08.472055022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:08:08.472369 containerd[1752]: time="2025-07-06T23:08:08.472325103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:08:08.475517 containerd[1752]: time="2025-07-06T23:08:08.475444264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:08:08.475517 containerd[1752]: time="2025-07-06T23:08:08.475491744Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:08:08.475681 containerd[1752]: time="2025-07-06T23:08:08.475635624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:08:08.475897 containerd[1752]: time="2025-07-06T23:08:08.475802944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:08:08.480558 containerd[1752]: time="2025-07-06T23:08:08.480293746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:08:08.480558 containerd[1752]: time="2025-07-06T23:08:08.480348026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:08:08.480558 containerd[1752]: time="2025-07-06T23:08:08.480359186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:08:08.480558 containerd[1752]: time="2025-07-06T23:08:08.480429426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:08:08.510111 systemd[1]: Started cri-containerd-b593de8e30b4324db49ecd5cf6b957779df3d54aa754b74b8fe98b24cfcda53b.scope - libcontainer container b593de8e30b4324db49ecd5cf6b957779df3d54aa754b74b8fe98b24cfcda53b. 
Jul 6 23:08:08.511244 systemd[1]: Started cri-containerd-c2a37e30c7507fa1524f1bcaae0d266d8ca9cffbd131be40d0d754b384089ee1.scope - libcontainer container c2a37e30c7507fa1524f1bcaae0d266d8ca9cffbd131be40d0d754b384089ee1. Jul 6 23:08:08.516812 systemd[1]: Started cri-containerd-20bbd25b72d680d93f43f6fbb99f592d2e9529af16a9a0d3cbddf1e63456d3fd.scope - libcontainer container 20bbd25b72d680d93f43f6fbb99f592d2e9529af16a9a0d3cbddf1e63456d3fd. Jul 6 23:08:08.539915 kubelet[2935]: E0706 23:08:08.539864 2935 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.1-a-dc1fa1989d?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="3.2s" Jul 6 23:08:08.560350 containerd[1752]: time="2025-07-06T23:08:08.560096544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.1-a-dc1fa1989d,Uid:0e1d8f61390728e1ed6e2f2335f63caf,Namespace:kube-system,Attempt:0,} returns sandbox id \"20bbd25b72d680d93f43f6fbb99f592d2e9529af16a9a0d3cbddf1e63456d3fd\"" Jul 6 23:08:08.564876 containerd[1752]: time="2025-07-06T23:08:08.564775146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.1-a-dc1fa1989d,Uid:339a0bc9e6b7629f540e95bfb5cab70d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b593de8e30b4324db49ecd5cf6b957779df3d54aa754b74b8fe98b24cfcda53b\"" Jul 6 23:08:08.570690 containerd[1752]: time="2025-07-06T23:08:08.570612869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.1-a-dc1fa1989d,Uid:a3bfefb81fe8939114888355504ce16c,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2a37e30c7507fa1524f1bcaae0d266d8ca9cffbd131be40d0d754b384089ee1\"" Jul 6 23:08:08.572265 containerd[1752]: time="2025-07-06T23:08:08.572193830Z" level=info msg="CreateContainer within sandbox \"20bbd25b72d680d93f43f6fbb99f592d2e9529af16a9a0d3cbddf1e63456d3fd\" for 
container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:08:08.577809 containerd[1752]: time="2025-07-06T23:08:08.577773312Z" level=info msg="CreateContainer within sandbox \"b593de8e30b4324db49ecd5cf6b957779df3d54aa754b74b8fe98b24cfcda53b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:08:08.582682 containerd[1752]: time="2025-07-06T23:08:08.582652395Z" level=info msg="CreateContainer within sandbox \"c2a37e30c7507fa1524f1bcaae0d266d8ca9cffbd131be40d0d754b384089ee1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:08:08.635346 kubelet[2935]: E0706 23:08:08.635312 2935 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 6 23:08:08.651382 containerd[1752]: time="2025-07-06T23:08:08.651336107Z" level=info msg="CreateContainer within sandbox \"20bbd25b72d680d93f43f6fbb99f592d2e9529af16a9a0d3cbddf1e63456d3fd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"399529057d8e7087b80747598a636a3b1fb3cfcebe2e47c187004f53a0f75280\"" Jul 6 23:08:08.655454 containerd[1752]: time="2025-07-06T23:08:08.655108869Z" level=info msg="CreateContainer within sandbox \"b593de8e30b4324db49ecd5cf6b957779df3d54aa754b74b8fe98b24cfcda53b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6b5a84117e090c2dc1d1601a5f7bd71ec5b61959d20f0fdddfe912e32cef6bc9\"" Jul 6 23:08:08.655454 containerd[1752]: time="2025-07-06T23:08:08.655331829Z" level=info msg="StartContainer for \"399529057d8e7087b80747598a636a3b1fb3cfcebe2e47c187004f53a0f75280\"" Jul 6 23:08:08.656443 containerd[1752]: time="2025-07-06T23:08:08.656391549Z" level=info msg="StartContainer for 
\"6b5a84117e090c2dc1d1601a5f7bd71ec5b61959d20f0fdddfe912e32cef6bc9\"" Jul 6 23:08:08.666116 containerd[1752]: time="2025-07-06T23:08:08.666084314Z" level=info msg="CreateContainer within sandbox \"c2a37e30c7507fa1524f1bcaae0d266d8ca9cffbd131be40d0d754b384089ee1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7c15f9e2e14b2837319d255aa7a5da46c0b2c8dbb814c01705355f45b01bb931\"" Jul 6 23:08:08.669661 containerd[1752]: time="2025-07-06T23:08:08.669638076Z" level=info msg="StartContainer for \"7c15f9e2e14b2837319d255aa7a5da46c0b2c8dbb814c01705355f45b01bb931\"" Jul 6 23:08:08.689232 systemd[1]: Started cri-containerd-6b5a84117e090c2dc1d1601a5f7bd71ec5b61959d20f0fdddfe912e32cef6bc9.scope - libcontainer container 6b5a84117e090c2dc1d1601a5f7bd71ec5b61959d20f0fdddfe912e32cef6bc9. Jul 6 23:08:08.713139 systemd[1]: Started cri-containerd-399529057d8e7087b80747598a636a3b1fb3cfcebe2e47c187004f53a0f75280.scope - libcontainer container 399529057d8e7087b80747598a636a3b1fb3cfcebe2e47c187004f53a0f75280. Jul 6 23:08:08.714045 systemd[1]: Started cri-containerd-7c15f9e2e14b2837319d255aa7a5da46c0b2c8dbb814c01705355f45b01bb931.scope - libcontainer container 7c15f9e2e14b2837319d255aa7a5da46c0b2c8dbb814c01705355f45b01bb931. 
Jul 6 23:08:08.732166 kubelet[2935]: I0706 23:08:08.731863 2935 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:08.732781 kubelet[2935]: E0706 23:08:08.732195 2935 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:08.741659 containerd[1752]: time="2025-07-06T23:08:08.741352589Z" level=info msg="StartContainer for \"6b5a84117e090c2dc1d1601a5f7bd71ec5b61959d20f0fdddfe912e32cef6bc9\" returns successfully" Jul 6 23:08:08.782944 containerd[1752]: time="2025-07-06T23:08:08.778000407Z" level=info msg="StartContainer for \"7c15f9e2e14b2837319d255aa7a5da46c0b2c8dbb814c01705355f45b01bb931\" returns successfully" Jul 6 23:08:08.791810 containerd[1752]: time="2025-07-06T23:08:08.791705173Z" level=info msg="StartContainer for \"399529057d8e7087b80747598a636a3b1fb3cfcebe2e47c187004f53a0f75280\" returns successfully" Jul 6 23:08:09.585869 kubelet[2935]: E0706 23:08:09.585513 2935 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-dc1fa1989d\" not found" node="ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:09.587524 kubelet[2935]: E0706 23:08:09.586885 2935 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-dc1fa1989d\" not found" node="ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:09.589317 kubelet[2935]: E0706 23:08:09.589180 2935 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-dc1fa1989d\" not found" node="ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:10.592619 kubelet[2935]: E0706 23:08:10.592331 2935 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-dc1fa1989d\" not found" 
node="ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:10.594696 kubelet[2935]: E0706 23:08:10.594676 2935 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-dc1fa1989d\" not found" node="ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:10.595271 kubelet[2935]: E0706 23:08:10.594970 2935 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-dc1fa1989d\" not found" node="ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:11.595384 kubelet[2935]: E0706 23:08:11.595232 2935 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-dc1fa1989d\" not found" node="ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:11.596447 kubelet[2935]: E0706 23:08:11.596421 2935 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-dc1fa1989d\" not found" node="ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:11.736194 kubelet[2935]: E0706 23:08:11.735979 2935 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-dc1fa1989d\" not found" node="ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:11.940021 kubelet[2935]: I0706 23:08:11.939313 2935 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:12.086849 kubelet[2935]: I0706 23:08:12.086724 2935 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:12.087241 kubelet[2935]: E0706 23:08:12.087014 2935 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4230.2.1-a-dc1fa1989d\": node \"ci-4230.2.1-a-dc1fa1989d\" not found" Jul 6 23:08:12.134623 kubelet[2935]: I0706 23:08:12.134592 2935 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:12.162706 
kubelet[2935]: E0706 23:08:12.162449 2935 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.1-a-dc1fa1989d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:12.162706 kubelet[2935]: I0706 23:08:12.162482 2935 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:12.163936 kubelet[2935]: E0706 23:08:12.163879 2935 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="6.4s" Jul 6 23:08:12.175250 kubelet[2935]: E0706 23:08:12.175045 2935 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.2.1-a-dc1fa1989d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:12.175250 kubelet[2935]: I0706 23:08:12.175075 2935 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:12.182615 kubelet[2935]: E0706 23:08:12.182579 2935 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.1-a-dc1fa1989d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:12.517112 kubelet[2935]: I0706 23:08:12.516842 2935 apiserver.go:52] "Watching apiserver" Jul 6 23:08:12.533857 kubelet[2935]: I0706 23:08:12.533825 2935 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:08:14.334451 systemd[1]: Reload requested from client PID 3214 ('systemctl') (unit session-9.scope)... Jul 6 23:08:14.334475 systemd[1]: Reloading... Jul 6 23:08:14.442954 zram_generator::config[3268]: No configuration found. 
Jul 6 23:08:14.543514 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:08:14.659411 systemd[1]: Reloading finished in 324 ms. Jul 6 23:08:14.685042 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:08:14.702326 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:08:14.702609 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:08:14.702679 systemd[1]: kubelet.service: Consumed 1.009s CPU time, 129.1M memory peak. Jul 6 23:08:14.707190 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:08:14.821148 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:08:14.830362 (kubelet)[3325]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:08:14.875044 kubelet[3325]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:08:14.876970 kubelet[3325]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:08:14.876970 kubelet[3325]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 6 23:08:14.876970 kubelet[3325]: I0706 23:08:14.875642 3325 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:08:14.884162 kubelet[3325]: I0706 23:08:14.884109 3325 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 6 23:08:14.884162 kubelet[3325]: I0706 23:08:14.884153 3325 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:08:14.884521 kubelet[3325]: I0706 23:08:14.884502 3325 server.go:956] "Client rotation is on, will bootstrap in background" Jul 6 23:08:14.886218 kubelet[3325]: I0706 23:08:14.886188 3325 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 6 23:08:14.889859 kubelet[3325]: I0706 23:08:14.889823 3325 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:08:14.895371 kubelet[3325]: E0706 23:08:14.895325 3325 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:08:14.895371 kubelet[3325]: I0706 23:08:14.895367 3325 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:08:14.902419 kubelet[3325]: I0706 23:08:14.902106 3325 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:08:14.902419 kubelet[3325]: I0706 23:08:14.902377 3325 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:08:14.902648 kubelet[3325]: I0706 23:08:14.902407 3325 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.1-a-dc1fa1989d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:08:14.902764 kubelet[3325]: I0706 23:08:14.902651 3325 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 
23:08:14.902764 kubelet[3325]: I0706 23:08:14.902661 3325 container_manager_linux.go:303] "Creating device plugin manager" Jul 6 23:08:14.902764 kubelet[3325]: I0706 23:08:14.902726 3325 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:08:14.905178 kubelet[3325]: I0706 23:08:14.902920 3325 kubelet.go:480] "Attempting to sync node with API server" Jul 6 23:08:14.905178 kubelet[3325]: I0706 23:08:14.903494 3325 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:08:14.905178 kubelet[3325]: I0706 23:08:14.903542 3325 kubelet.go:386] "Adding apiserver pod source" Jul 6 23:08:14.905178 kubelet[3325]: I0706 23:08:14.903558 3325 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:08:14.907029 kubelet[3325]: I0706 23:08:14.905785 3325 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 6 23:08:14.907029 kubelet[3325]: I0706 23:08:14.906442 3325 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 6 23:08:14.912010 kubelet[3325]: I0706 23:08:14.911280 3325 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:08:14.912516 kubelet[3325]: I0706 23:08:14.912356 3325 server.go:1289] "Started kubelet" Jul 6 23:08:14.918819 kubelet[3325]: I0706 23:08:14.918792 3325 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:08:14.925292 kubelet[3325]: I0706 23:08:14.925248 3325 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:08:14.940963 kubelet[3325]: I0706 23:08:14.940337 3325 server.go:317] "Adding debug handlers to kubelet server" Jul 6 23:08:14.943003 kubelet[3325]: I0706 23:08:14.929268 3325 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:08:14.945803 
kubelet[3325]: I0706 23:08:14.926001 3325 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:08:14.946472 kubelet[3325]: I0706 23:08:14.946217 3325 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:08:14.951236 kubelet[3325]: E0706 23:08:14.931439 3325 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-dc1fa1989d\" not found" Jul 6 23:08:14.951236 kubelet[3325]: I0706 23:08:14.947627 3325 factory.go:223] Registration of the systemd container factory successfully Jul 6 23:08:14.951236 kubelet[3325]: I0706 23:08:14.949105 3325 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:08:14.951236 kubelet[3325]: I0706 23:08:14.931308 3325 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:08:14.951236 kubelet[3325]: I0706 23:08:14.931287 3325 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:08:14.951236 kubelet[3325]: I0706 23:08:14.949590 3325 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:08:14.952172 kubelet[3325]: I0706 23:08:14.949916 3325 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 6 23:08:14.953635 kubelet[3325]: I0706 23:08:14.953201 3325 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 6 23:08:14.953635 kubelet[3325]: I0706 23:08:14.953228 3325 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 6 23:08:14.953635 kubelet[3325]: I0706 23:08:14.953246 3325 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 6 23:08:14.953635 kubelet[3325]: I0706 23:08:14.953252 3325 kubelet.go:2436] "Starting kubelet main sync loop" Jul 6 23:08:14.953635 kubelet[3325]: E0706 23:08:14.953294 3325 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:08:14.958905 kubelet[3325]: E0706 23:08:14.958748 3325 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:08:14.961660 kubelet[3325]: I0706 23:08:14.961637 3325 factory.go:223] Registration of the containerd container factory successfully Jul 6 23:08:15.018289 kubelet[3325]: I0706 23:08:15.018263 3325 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:08:15.018514 kubelet[3325]: I0706 23:08:15.018497 3325 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:08:15.018606 kubelet[3325]: I0706 23:08:15.018596 3325 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:08:15.018793 kubelet[3325]: I0706 23:08:15.018778 3325 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:08:15.018870 kubelet[3325]: I0706 23:08:15.018846 3325 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:08:15.018945 kubelet[3325]: I0706 23:08:15.018913 3325 policy_none.go:49] "None policy: Start" Jul 6 23:08:15.019022 kubelet[3325]: I0706 23:08:15.019010 3325 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:08:15.019079 kubelet[3325]: I0706 23:08:15.019071 3325 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:08:15.019289 kubelet[3325]: I0706 23:08:15.019274 3325 state_mem.go:75] "Updated machine memory state" Jul 6 23:08:15.023274 kubelet[3325]: E0706 23:08:15.023252 3325 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 6 23:08:15.023542 kubelet[3325]: I0706 
23:08:15.023527 3325 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:08:15.023640 kubelet[3325]: I0706 23:08:15.023608 3325 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:08:15.023881 kubelet[3325]: I0706 23:08:15.023865 3325 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:08:15.026149 kubelet[3325]: E0706 23:08:15.026113 3325 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 6 23:08:15.054450 kubelet[3325]: I0706 23:08:15.054409 3325 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:15.055600 kubelet[3325]: I0706 23:08:15.054712 3325 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:15.055600 kubelet[3325]: I0706 23:08:15.054790 3325 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:15.063603 kubelet[3325]: I0706 23:08:15.063564 3325 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 6 23:08:15.069224 kubelet[3325]: I0706 23:08:15.069193 3325 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 6 23:08:15.069357 kubelet[3325]: I0706 23:08:15.069343 3325 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 6 23:08:15.126844 kubelet[3325]: I0706 23:08:15.126445 3325 kubelet_node_status.go:75] "Attempting to register node" 
node="ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:15.147426 kubelet[3325]: I0706 23:08:15.147388 3325 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:15.147842 kubelet[3325]: I0706 23:08:15.147770 3325 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:15.149902 kubelet[3325]: I0706 23:08:15.149831 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3bfefb81fe8939114888355504ce16c-ca-certs\") pod \"kube-controller-manager-ci-4230.2.1-a-dc1fa1989d\" (UID: \"a3bfefb81fe8939114888355504ce16c\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:15.149902 kubelet[3325]: I0706 23:08:15.149864 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3bfefb81fe8939114888355504ce16c-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.1-a-dc1fa1989d\" (UID: \"a3bfefb81fe8939114888355504ce16c\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:15.149902 kubelet[3325]: I0706 23:08:15.149880 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0e1d8f61390728e1ed6e2f2335f63caf-kubeconfig\") pod \"kube-scheduler-ci-4230.2.1-a-dc1fa1989d\" (UID: \"0e1d8f61390728e1ed6e2f2335f63caf\") " pod="kube-system/kube-scheduler-ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:15.150143 kubelet[3325]: I0706 23:08:15.149973 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/339a0bc9e6b7629f540e95bfb5cab70d-ca-certs\") pod \"kube-apiserver-ci-4230.2.1-a-dc1fa1989d\" (UID: \"339a0bc9e6b7629f540e95bfb5cab70d\") " 
pod="kube-system/kube-apiserver-ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:15.150143 kubelet[3325]: I0706 23:08:15.149994 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/339a0bc9e6b7629f540e95bfb5cab70d-k8s-certs\") pod \"kube-apiserver-ci-4230.2.1-a-dc1fa1989d\" (UID: \"339a0bc9e6b7629f540e95bfb5cab70d\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:15.150398 kubelet[3325]: I0706 23:08:15.150203 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3bfefb81fe8939114888355504ce16c-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.1-a-dc1fa1989d\" (UID: \"a3bfefb81fe8939114888355504ce16c\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:15.150398 kubelet[3325]: I0706 23:08:15.150238 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3bfefb81fe8939114888355504ce16c-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.1-a-dc1fa1989d\" (UID: \"a3bfefb81fe8939114888355504ce16c\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:15.150398 kubelet[3325]: I0706 23:08:15.150254 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3bfefb81fe8939114888355504ce16c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.1-a-dc1fa1989d\" (UID: \"a3bfefb81fe8939114888355504ce16c\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:15.150398 kubelet[3325]: I0706 23:08:15.150294 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/339a0bc9e6b7629f540e95bfb5cab70d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.1-a-dc1fa1989d\" (UID: \"339a0bc9e6b7629f540e95bfb5cab70d\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:15.355983 sudo[3361]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 6 23:08:15.356266 sudo[3361]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 6 23:08:15.831784 sudo[3361]: pam_unix(sudo:session): session closed for user root Jul 6 23:08:15.904541 kubelet[3325]: I0706 23:08:15.904315 3325 apiserver.go:52] "Watching apiserver" Jul 6 23:08:15.949426 kubelet[3325]: I0706 23:08:15.949386 3325 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:08:15.996814 kubelet[3325]: I0706 23:08:15.996779 3325 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:15.998135 kubelet[3325]: I0706 23:08:15.998108 3325 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:16.012097 kubelet[3325]: I0706 23:08:16.012058 3325 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 6 23:08:16.012277 kubelet[3325]: E0706 23:08:16.012129 3325 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.1-a-dc1fa1989d\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:16.013382 kubelet[3325]: I0706 23:08:16.013361 3325 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 6 23:08:16.013430 kubelet[3325]: E0706 23:08:16.013403 3325 kubelet.go:3311] "Failed creating a 
mirror pod" err="pods \"kube-scheduler-ci-4230.2.1-a-dc1fa1989d\" already exists" pod="kube-system/kube-scheduler-ci-4230.2.1-a-dc1fa1989d" Jul 6 23:08:16.021759 kubelet[3325]: I0706 23:08:16.021685 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.2.1-a-dc1fa1989d" podStartSLOduration=1.02167112 podStartE2EDuration="1.02167112s" podCreationTimestamp="2025-07-06 23:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:08:16.020875959 +0000 UTC m=+1.184440464" watchObservedRunningTime="2025-07-06 23:08:16.02167112 +0000 UTC m=+1.185235585" Jul 6 23:08:16.046357 kubelet[3325]: I0706 23:08:16.046291 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.2.1-a-dc1fa1989d" podStartSLOduration=1.046272949 podStartE2EDuration="1.046272949s" podCreationTimestamp="2025-07-06 23:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:08:16.035004375 +0000 UTC m=+1.198568880" watchObservedRunningTime="2025-07-06 23:08:16.046272949 +0000 UTC m=+1.209837454" Jul 6 23:08:17.675778 sudo[2249]: pam_unix(sudo:session): session closed for user root Jul 6 23:08:17.754603 sshd[2248]: Connection closed by 10.200.16.10 port 39862 Jul 6 23:08:17.755173 sshd-session[2246]: pam_unix(sshd:session): session closed for user core Jul 6 23:08:17.758804 systemd-logind[1708]: Session 9 logged out. Waiting for processes to exit. Jul 6 23:08:17.759674 systemd[1]: sshd@6-10.200.20.36:22-10.200.16.10:39862.service: Deactivated successfully. Jul 6 23:08:17.762143 systemd[1]: session-9.scope: Deactivated successfully. Jul 6 23:08:17.763030 systemd[1]: session-9.scope: Consumed 5.391s CPU time, 263.7M memory peak. Jul 6 23:08:17.764582 systemd-logind[1708]: Removed session 9. 
Jul 6 23:08:20.106158 kubelet[3325]: I0706 23:08:20.106107 3325 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:08:20.106497 containerd[1752]: time="2025-07-06T23:08:20.106397780Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:08:20.106683 kubelet[3325]: I0706 23:08:20.106567 3325 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:08:20.713554 kubelet[3325]: I0706 23:08:20.712641 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.2.1-a-dc1fa1989d" podStartSLOduration=5.712621253 podStartE2EDuration="5.712621253s" podCreationTimestamp="2025-07-06 23:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:08:16.04753043 +0000 UTC m=+1.211094975" watchObservedRunningTime="2025-07-06 23:08:20.712621253 +0000 UTC m=+5.876185758" Jul 6 23:08:20.734539 systemd[1]: Created slice kubepods-besteffort-pod561297d4_a807_45a0_8d6e_8bce380cba0f.slice - libcontainer container kubepods-besteffort-pod561297d4_a807_45a0_8d6e_8bce380cba0f.slice. Jul 6 23:08:20.750535 systemd[1]: Created slice kubepods-burstable-pod19092466_f644_4374_99fc_4fc1a66f975a.slice - libcontainer container kubepods-burstable-pod19092466_f644_4374_99fc_4fc1a66f975a.slice. 
Jul 6 23:08:20.782486 kubelet[3325]: I0706 23:08:20.782000 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-bpf-maps\") pod \"cilium-4gq2x\" (UID: \"19092466-f644-4374-99fc-4fc1a66f975a\") " pod="kube-system/cilium-4gq2x" Jul 6 23:08:20.782486 kubelet[3325]: I0706 23:08:20.782453 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-lib-modules\") pod \"cilium-4gq2x\" (UID: \"19092466-f644-4374-99fc-4fc1a66f975a\") " pod="kube-system/cilium-4gq2x" Jul 6 23:08:20.783036 kubelet[3325]: I0706 23:08:20.782679 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/19092466-f644-4374-99fc-4fc1a66f975a-clustermesh-secrets\") pod \"cilium-4gq2x\" (UID: \"19092466-f644-4374-99fc-4fc1a66f975a\") " pod="kube-system/cilium-4gq2x" Jul 6 23:08:20.783036 kubelet[3325]: I0706 23:08:20.782706 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-host-proc-sys-kernel\") pod \"cilium-4gq2x\" (UID: \"19092466-f644-4374-99fc-4fc1a66f975a\") " pod="kube-system/cilium-4gq2x" Jul 6 23:08:20.783036 kubelet[3325]: I0706 23:08:20.782844 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-cni-path\") pod \"cilium-4gq2x\" (UID: \"19092466-f644-4374-99fc-4fc1a66f975a\") " pod="kube-system/cilium-4gq2x" Jul 6 23:08:20.783036 kubelet[3325]: I0706 23:08:20.782869 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-etc-cni-netd\") pod \"cilium-4gq2x\" (UID: \"19092466-f644-4374-99fc-4fc1a66f975a\") " pod="kube-system/cilium-4gq2x" Jul 6 23:08:20.783036 kubelet[3325]: I0706 23:08:20.782886 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-host-proc-sys-net\") pod \"cilium-4gq2x\" (UID: \"19092466-f644-4374-99fc-4fc1a66f975a\") " pod="kube-system/cilium-4gq2x" Jul 6 23:08:20.783977 kubelet[3325]: I0706 23:08:20.782901 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/19092466-f644-4374-99fc-4fc1a66f975a-hubble-tls\") pod \"cilium-4gq2x\" (UID: \"19092466-f644-4374-99fc-4fc1a66f975a\") " pod="kube-system/cilium-4gq2x" Jul 6 23:08:20.783977 kubelet[3325]: I0706 23:08:20.783295 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/561297d4-a807-45a0-8d6e-8bce380cba0f-kube-proxy\") pod \"kube-proxy-4n4rj\" (UID: \"561297d4-a807-45a0-8d6e-8bce380cba0f\") " pod="kube-system/kube-proxy-4n4rj" Jul 6 23:08:20.783977 kubelet[3325]: I0706 23:08:20.783404 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/561297d4-a807-45a0-8d6e-8bce380cba0f-xtables-lock\") pod \"kube-proxy-4n4rj\" (UID: \"561297d4-a807-45a0-8d6e-8bce380cba0f\") " pod="kube-system/kube-proxy-4n4rj" Jul 6 23:08:20.783977 kubelet[3325]: I0706 23:08:20.783425 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-hostproc\") pod 
\"cilium-4gq2x\" (UID: \"19092466-f644-4374-99fc-4fc1a66f975a\") " pod="kube-system/cilium-4gq2x" Jul 6 23:08:20.784288 kubelet[3325]: I0706 23:08:20.784140 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-xtables-lock\") pod \"cilium-4gq2x\" (UID: \"19092466-f644-4374-99fc-4fc1a66f975a\") " pod="kube-system/cilium-4gq2x" Jul 6 23:08:20.784288 kubelet[3325]: I0706 23:08:20.784198 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19092466-f644-4374-99fc-4fc1a66f975a-cilium-config-path\") pod \"cilium-4gq2x\" (UID: \"19092466-f644-4374-99fc-4fc1a66f975a\") " pod="kube-system/cilium-4gq2x" Jul 6 23:08:20.784288 kubelet[3325]: I0706 23:08:20.784228 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dls29\" (UniqueName: \"kubernetes.io/projected/19092466-f644-4374-99fc-4fc1a66f975a-kube-api-access-dls29\") pod \"cilium-4gq2x\" (UID: \"19092466-f644-4374-99fc-4fc1a66f975a\") " pod="kube-system/cilium-4gq2x" Jul 6 23:08:20.784288 kubelet[3325]: I0706 23:08:20.784250 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/561297d4-a807-45a0-8d6e-8bce380cba0f-lib-modules\") pod \"kube-proxy-4n4rj\" (UID: \"561297d4-a807-45a0-8d6e-8bce380cba0f\") " pod="kube-system/kube-proxy-4n4rj" Jul 6 23:08:20.784288 kubelet[3325]: I0706 23:08:20.784264 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-cilium-run\") pod \"cilium-4gq2x\" (UID: \"19092466-f644-4374-99fc-4fc1a66f975a\") " pod="kube-system/cilium-4gq2x" Jul 6 
23:08:20.784493 kubelet[3325]: I0706 23:08:20.784425 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-cilium-cgroup\") pod \"cilium-4gq2x\" (UID: \"19092466-f644-4374-99fc-4fc1a66f975a\") " pod="kube-system/cilium-4gq2x" Jul 6 23:08:20.784493 kubelet[3325]: I0706 23:08:20.784462 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w54tm\" (UniqueName: \"kubernetes.io/projected/561297d4-a807-45a0-8d6e-8bce380cba0f-kube-api-access-w54tm\") pod \"kube-proxy-4n4rj\" (UID: \"561297d4-a807-45a0-8d6e-8bce380cba0f\") " pod="kube-system/kube-proxy-4n4rj" Jul 6 23:08:20.905215 kubelet[3325]: E0706 23:08:20.905116 3325 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 6 23:08:20.905215 kubelet[3325]: E0706 23:08:20.905149 3325 projected.go:194] Error preparing data for projected volume kube-api-access-dls29 for pod kube-system/cilium-4gq2x: configmap "kube-root-ca.crt" not found Jul 6 23:08:20.905215 kubelet[3325]: E0706 23:08:20.905169 3325 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 6 23:08:20.905215 kubelet[3325]: E0706 23:08:20.905184 3325 projected.go:194] Error preparing data for projected volume kube-api-access-w54tm for pod kube-system/kube-proxy-4n4rj: configmap "kube-root-ca.crt" not found Jul 6 23:08:20.905215 kubelet[3325]: E0706 23:08:20.905213 3325 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19092466-f644-4374-99fc-4fc1a66f975a-kube-api-access-dls29 podName:19092466-f644-4374-99fc-4fc1a66f975a nodeName:}" failed. No retries permitted until 2025-07-06 23:08:21.405191606 +0000 UTC m=+6.568756111 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dls29" (UniqueName: "kubernetes.io/projected/19092466-f644-4374-99fc-4fc1a66f975a-kube-api-access-dls29") pod "cilium-4gq2x" (UID: "19092466-f644-4374-99fc-4fc1a66f975a") : configmap "kube-root-ca.crt" not found Jul 6 23:08:20.905215 kubelet[3325]: E0706 23:08:20.905226 3325 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/561297d4-a807-45a0-8d6e-8bce380cba0f-kube-api-access-w54tm podName:561297d4-a807-45a0-8d6e-8bce380cba0f nodeName:}" failed. No retries permitted until 2025-07-06 23:08:21.405220246 +0000 UTC m=+6.568784711 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-w54tm" (UniqueName: "kubernetes.io/projected/561297d4-a807-45a0-8d6e-8bce380cba0f-kube-api-access-w54tm") pod "kube-proxy-4n4rj" (UID: "561297d4-a807-45a0-8d6e-8bce380cba0f") : configmap "kube-root-ca.crt" not found Jul 6 23:08:21.324130 systemd[1]: Created slice kubepods-besteffort-pod8115a4b2_1b6c_4dad_906e_41ac10c2857d.slice - libcontainer container kubepods-besteffort-pod8115a4b2_1b6c_4dad_906e_41ac10c2857d.slice. 
Jul 6 23:08:21.390071 kubelet[3325]: I0706 23:08:21.389960 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tpz9\" (UniqueName: \"kubernetes.io/projected/8115a4b2-1b6c-4dad-906e-41ac10c2857d-kube-api-access-5tpz9\") pod \"cilium-operator-6c4d7847fc-8xv62\" (UID: \"8115a4b2-1b6c-4dad-906e-41ac10c2857d\") " pod="kube-system/cilium-operator-6c4d7847fc-8xv62"
Jul 6 23:08:21.390071 kubelet[3325]: I0706 23:08:21.390013 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8115a4b2-1b6c-4dad-906e-41ac10c2857d-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-8xv62\" (UID: \"8115a4b2-1b6c-4dad-906e-41ac10c2857d\") " pod="kube-system/cilium-operator-6c4d7847fc-8xv62"
Jul 6 23:08:21.627702 containerd[1752]: time="2025-07-06T23:08:21.627415706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8xv62,Uid:8115a4b2-1b6c-4dad-906e-41ac10c2857d,Namespace:kube-system,Attempt:0,}"
Jul 6 23:08:21.648870 containerd[1752]: time="2025-07-06T23:08:21.648573638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4n4rj,Uid:561297d4-a807-45a0-8d6e-8bce380cba0f,Namespace:kube-system,Attempt:0,}"
Jul 6 23:08:21.657575 containerd[1752]: time="2025-07-06T23:08:21.657536804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4gq2x,Uid:19092466-f644-4374-99fc-4fc1a66f975a,Namespace:kube-system,Attempt:0,}"
Jul 6 23:08:21.698858 containerd[1752]: time="2025-07-06T23:08:21.698749188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:08:21.698858 containerd[1752]: time="2025-07-06T23:08:21.698827348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:08:21.699134 containerd[1752]: time="2025-07-06T23:08:21.698905588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:08:21.700141 containerd[1752]: time="2025-07-06T23:08:21.699326828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:08:21.719005 containerd[1752]: time="2025-07-06T23:08:21.718127999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:08:21.719005 containerd[1752]: time="2025-07-06T23:08:21.718176079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:08:21.719005 containerd[1752]: time="2025-07-06T23:08:21.718186439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:08:21.719005 containerd[1752]: time="2025-07-06T23:08:21.718251279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:08:21.720128 systemd[1]: Started cri-containerd-c1161092f42f9af3e53d22cfd700290d5605941ff415521c8f0897f1f92e13ea.scope - libcontainer container c1161092f42f9af3e53d22cfd700290d5605941ff415521c8f0897f1f92e13ea.
Jul 6 23:08:21.737112 systemd[1]: Started cri-containerd-9cb133278961ef4e7d11dba2f79e33d00f52fc4d9d123ef498221c0f8579d0d0.scope - libcontainer container 9cb133278961ef4e7d11dba2f79e33d00f52fc4d9d123ef498221c0f8579d0d0.
Jul 6 23:08:21.752105 containerd[1752]: time="2025-07-06T23:08:21.750112337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:08:21.752105 containerd[1752]: time="2025-07-06T23:08:21.750248658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:08:21.752105 containerd[1752]: time="2025-07-06T23:08:21.750286498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:08:21.753143 containerd[1752]: time="2025-07-06T23:08:21.752005819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:08:21.780582 containerd[1752]: time="2025-07-06T23:08:21.780213035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8xv62,Uid:8115a4b2-1b6c-4dad-906e-41ac10c2857d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1161092f42f9af3e53d22cfd700290d5605941ff415521c8f0897f1f92e13ea\""
Jul 6 23:08:21.780368 systemd[1]: Started cri-containerd-2169c37e7adefb2cd57c6340629b8975e9e7a7027cdbde6cc1436de1a66db03d.scope - libcontainer container 2169c37e7adefb2cd57c6340629b8975e9e7a7027cdbde6cc1436de1a66db03d.
Jul 6 23:08:21.783302 containerd[1752]: time="2025-07-06T23:08:21.782817717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4n4rj,Uid:561297d4-a807-45a0-8d6e-8bce380cba0f,Namespace:kube-system,Attempt:0,} returns sandbox id \"9cb133278961ef4e7d11dba2f79e33d00f52fc4d9d123ef498221c0f8579d0d0\""
Jul 6 23:08:21.786872 containerd[1752]: time="2025-07-06T23:08:21.785374438Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 6 23:08:21.794837 containerd[1752]: time="2025-07-06T23:08:21.794761563Z" level=info msg="CreateContainer within sandbox \"9cb133278961ef4e7d11dba2f79e33d00f52fc4d9d123ef498221c0f8579d0d0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 6 23:08:21.810722 containerd[1752]: time="2025-07-06T23:08:21.810665893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4gq2x,Uid:19092466-f644-4374-99fc-4fc1a66f975a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2169c37e7adefb2cd57c6340629b8975e9e7a7027cdbde6cc1436de1a66db03d\""
Jul 6 23:08:21.842088 containerd[1752]: time="2025-07-06T23:08:21.842045551Z" level=info msg="CreateContainer within sandbox \"9cb133278961ef4e7d11dba2f79e33d00f52fc4d9d123ef498221c0f8579d0d0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d0a7e5b4fabe60f96d33912408ea58ac9e8e1bff95e19ee90b774411d22ef9d2\""
Jul 6 23:08:21.843434 containerd[1752]: time="2025-07-06T23:08:21.842853352Z" level=info msg="StartContainer for \"d0a7e5b4fabe60f96d33912408ea58ac9e8e1bff95e19ee90b774411d22ef9d2\""
Jul 6 23:08:21.865087 systemd[1]: Started cri-containerd-d0a7e5b4fabe60f96d33912408ea58ac9e8e1bff95e19ee90b774411d22ef9d2.scope - libcontainer container d0a7e5b4fabe60f96d33912408ea58ac9e8e1bff95e19ee90b774411d22ef9d2.
Jul 6 23:08:21.897248 containerd[1752]: time="2025-07-06T23:08:21.896700103Z" level=info msg="StartContainer for \"d0a7e5b4fabe60f96d33912408ea58ac9e8e1bff95e19ee90b774411d22ef9d2\" returns successfully"
Jul 6 23:08:22.034784 kubelet[3325]: I0706 23:08:22.034350 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4n4rj" podStartSLOduration=2.034332023 podStartE2EDuration="2.034332023s" podCreationTimestamp="2025-07-06 23:08:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:08:22.031643061 +0000 UTC m=+7.195207566" watchObservedRunningTime="2025-07-06 23:08:22.034332023 +0000 UTC m=+7.197896528"
Jul 6 23:08:23.344288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1754833976.mount: Deactivated successfully.
Jul 6 23:08:23.928950 containerd[1752]: time="2025-07-06T23:08:23.928281206Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:08:23.932515 containerd[1752]: time="2025-07-06T23:08:23.932464208Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Jul 6 23:08:23.937148 containerd[1752]: time="2025-07-06T23:08:23.937095171Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:08:23.938544 containerd[1752]: time="2025-07-06T23:08:23.938406612Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.152974814s"
Jul 6 23:08:23.938544 containerd[1752]: time="2025-07-06T23:08:23.938446932Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jul 6 23:08:23.940334 containerd[1752]: time="2025-07-06T23:08:23.940158773Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 6 23:08:23.946723 containerd[1752]: time="2025-07-06T23:08:23.946568337Z" level=info msg="CreateContainer within sandbox \"c1161092f42f9af3e53d22cfd700290d5605941ff415521c8f0897f1f92e13ea\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 6 23:08:23.988618 containerd[1752]: time="2025-07-06T23:08:23.988559201Z" level=info msg="CreateContainer within sandbox \"c1161092f42f9af3e53d22cfd700290d5605941ff415521c8f0897f1f92e13ea\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"313ec08a20dc415fb018d325b03e57894001bb3eaed53b7c793c370a4877ea1f\""
Jul 6 23:08:23.989707 containerd[1752]: time="2025-07-06T23:08:23.989600002Z" level=info msg="StartContainer for \"313ec08a20dc415fb018d325b03e57894001bb3eaed53b7c793c370a4877ea1f\""
Jul 6 23:08:24.019118 systemd[1]: Started cri-containerd-313ec08a20dc415fb018d325b03e57894001bb3eaed53b7c793c370a4877ea1f.scope - libcontainer container 313ec08a20dc415fb018d325b03e57894001bb3eaed53b7c793c370a4877ea1f.
Jul 6 23:08:24.049461 containerd[1752]: time="2025-07-06T23:08:24.049379956Z" level=info msg="StartContainer for \"313ec08a20dc415fb018d325b03e57894001bb3eaed53b7c793c370a4877ea1f\" returns successfully"
Jul 6 23:08:27.888358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3262046465.mount: Deactivated successfully.
Jul 6 23:08:30.156118 containerd[1752]: time="2025-07-06T23:08:30.156062204Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:08:30.159957 containerd[1752]: time="2025-07-06T23:08:30.159880006Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Jul 6 23:08:30.166723 containerd[1752]: time="2025-07-06T23:08:30.166664210Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:08:30.168357 containerd[1752]: time="2025-07-06T23:08:30.168212851Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.228020718s"
Jul 6 23:08:30.168357 containerd[1752]: time="2025-07-06T23:08:30.168252811Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jul 6 23:08:30.175688 containerd[1752]: time="2025-07-06T23:08:30.175637655Z" level=info msg="CreateContainer within sandbox \"2169c37e7adefb2cd57c6340629b8975e9e7a7027cdbde6cc1436de1a66db03d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 6 23:08:30.212498 containerd[1752]: time="2025-07-06T23:08:30.212409398Z" level=info msg="CreateContainer within sandbox \"2169c37e7adefb2cd57c6340629b8975e9e7a7027cdbde6cc1436de1a66db03d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"13e52a18c6c199508851cb53e29ae9c6101a554c20338381d62a72e9f534f65a\""
Jul 6 23:08:30.213433 containerd[1752]: time="2025-07-06T23:08:30.213191438Z" level=info msg="StartContainer for \"13e52a18c6c199508851cb53e29ae9c6101a554c20338381d62a72e9f534f65a\""
Jul 6 23:08:30.246114 systemd[1]: Started cri-containerd-13e52a18c6c199508851cb53e29ae9c6101a554c20338381d62a72e9f534f65a.scope - libcontainer container 13e52a18c6c199508851cb53e29ae9c6101a554c20338381d62a72e9f534f65a.
Jul 6 23:08:30.277635 containerd[1752]: time="2025-07-06T23:08:30.277277797Z" level=info msg="StartContainer for \"13e52a18c6c199508851cb53e29ae9c6101a554c20338381d62a72e9f534f65a\" returns successfully"
Jul 6 23:08:30.287575 systemd[1]: cri-containerd-13e52a18c6c199508851cb53e29ae9c6101a554c20338381d62a72e9f534f65a.scope: Deactivated successfully.
Jul 6 23:08:31.063664 kubelet[3325]: I0706 23:08:31.063584 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-8xv62" podStartSLOduration=7.908415176 podStartE2EDuration="10.063549591s" podCreationTimestamp="2025-07-06 23:08:21 +0000 UTC" firstStartedPulling="2025-07-06 23:08:21.784291397 +0000 UTC m=+6.947855902" lastFinishedPulling="2025-07-06 23:08:23.939425852 +0000 UTC m=+9.102990317" observedRunningTime="2025-07-06 23:08:25.060141145 +0000 UTC m=+10.223705650" watchObservedRunningTime="2025-07-06 23:08:31.063549591 +0000 UTC m=+16.227114136"
Jul 6 23:08:31.197595 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13e52a18c6c199508851cb53e29ae9c6101a554c20338381d62a72e9f534f65a-rootfs.mount: Deactivated successfully.
Jul 6 23:08:31.700157 containerd[1752]: time="2025-07-06T23:08:31.700070494Z" level=info msg="shim disconnected" id=13e52a18c6c199508851cb53e29ae9c6101a554c20338381d62a72e9f534f65a namespace=k8s.io
Jul 6 23:08:31.700157 containerd[1752]: time="2025-07-06T23:08:31.700124134Z" level=warning msg="cleaning up after shim disconnected" id=13e52a18c6c199508851cb53e29ae9c6101a554c20338381d62a72e9f534f65a namespace=k8s.io
Jul 6 23:08:31.700157 containerd[1752]: time="2025-07-06T23:08:31.700132774Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:08:32.058457 containerd[1752]: time="2025-07-06T23:08:32.058408470Z" level=info msg="CreateContainer within sandbox \"2169c37e7adefb2cd57c6340629b8975e9e7a7027cdbde6cc1436de1a66db03d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 6 23:08:32.086282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2169781345.mount: Deactivated successfully.
Jul 6 23:08:32.098469 containerd[1752]: time="2025-07-06T23:08:32.098416095Z" level=info msg="CreateContainer within sandbox \"2169c37e7adefb2cd57c6340629b8975e9e7a7027cdbde6cc1436de1a66db03d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2f185aa95074c4bbfeef038afc414ecdb31c61fb2d9e537e47d4c94d4ff8a007\""
Jul 6 23:08:32.100145 containerd[1752]: time="2025-07-06T23:08:32.099255375Z" level=info msg="StartContainer for \"2f185aa95074c4bbfeef038afc414ecdb31c61fb2d9e537e47d4c94d4ff8a007\""
Jul 6 23:08:32.128094 systemd[1]: Started cri-containerd-2f185aa95074c4bbfeef038afc414ecdb31c61fb2d9e537e47d4c94d4ff8a007.scope - libcontainer container 2f185aa95074c4bbfeef038afc414ecdb31c61fb2d9e537e47d4c94d4ff8a007.
Jul 6 23:08:32.157651 containerd[1752]: time="2025-07-06T23:08:32.157606490Z" level=info msg="StartContainer for \"2f185aa95074c4bbfeef038afc414ecdb31c61fb2d9e537e47d4c94d4ff8a007\" returns successfully"
Jul 6 23:08:32.168390 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 6 23:08:32.169099 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:08:32.169612 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:08:32.176719 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:08:32.176952 systemd[1]: cri-containerd-2f185aa95074c4bbfeef038afc414ecdb31c61fb2d9e537e47d4c94d4ff8a007.scope: Deactivated successfully.
Jul 6 23:08:32.194212 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:08:32.199841 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f185aa95074c4bbfeef038afc414ecdb31c61fb2d9e537e47d4c94d4ff8a007-rootfs.mount: Deactivated successfully.
Jul 6 23:08:32.211420 containerd[1752]: time="2025-07-06T23:08:32.211356843Z" level=info msg="shim disconnected" id=2f185aa95074c4bbfeef038afc414ecdb31c61fb2d9e537e47d4c94d4ff8a007 namespace=k8s.io
Jul 6 23:08:32.211420 containerd[1752]: time="2025-07-06T23:08:32.211412843Z" level=warning msg="cleaning up after shim disconnected" id=2f185aa95074c4bbfeef038afc414ecdb31c61fb2d9e537e47d4c94d4ff8a007 namespace=k8s.io
Jul 6 23:08:32.211420 containerd[1752]: time="2025-07-06T23:08:32.211421163Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:08:33.059505 containerd[1752]: time="2025-07-06T23:08:33.059452834Z" level=info msg="CreateContainer within sandbox \"2169c37e7adefb2cd57c6340629b8975e9e7a7027cdbde6cc1436de1a66db03d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 6 23:08:33.115435 containerd[1752]: time="2025-07-06T23:08:33.115378948Z" level=info msg="CreateContainer within sandbox \"2169c37e7adefb2cd57c6340629b8975e9e7a7027cdbde6cc1436de1a66db03d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"47bb187476874e7ea0c29c4283b39eac1141dac710af00e15f7cea6ca948cb8f\""
Jul 6 23:08:33.116282 containerd[1752]: time="2025-07-06T23:08:33.116039028Z" level=info msg="StartContainer for \"47bb187476874e7ea0c29c4283b39eac1141dac710af00e15f7cea6ca948cb8f\""
Jul 6 23:08:33.146166 systemd[1]: Started cri-containerd-47bb187476874e7ea0c29c4283b39eac1141dac710af00e15f7cea6ca948cb8f.scope - libcontainer container 47bb187476874e7ea0c29c4283b39eac1141dac710af00e15f7cea6ca948cb8f.
Jul 6 23:08:33.176216 systemd[1]: cri-containerd-47bb187476874e7ea0c29c4283b39eac1141dac710af00e15f7cea6ca948cb8f.scope: Deactivated successfully.
Jul 6 23:08:33.179852 containerd[1752]: time="2025-07-06T23:08:33.179749666Z" level=info msg="StartContainer for \"47bb187476874e7ea0c29c4283b39eac1141dac710af00e15f7cea6ca948cb8f\" returns successfully"
Jul 6 23:08:33.200064 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47bb187476874e7ea0c29c4283b39eac1141dac710af00e15f7cea6ca948cb8f-rootfs.mount: Deactivated successfully.
Jul 6 23:08:33.213394 containerd[1752]: time="2025-07-06T23:08:33.213331287Z" level=info msg="shim disconnected" id=47bb187476874e7ea0c29c4283b39eac1141dac710af00e15f7cea6ca948cb8f namespace=k8s.io
Jul 6 23:08:33.213727 containerd[1752]: time="2025-07-06T23:08:33.213552327Z" level=warning msg="cleaning up after shim disconnected" id=47bb187476874e7ea0c29c4283b39eac1141dac710af00e15f7cea6ca948cb8f namespace=k8s.io
Jul 6 23:08:33.213727 containerd[1752]: time="2025-07-06T23:08:33.213568927Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:08:34.063999 containerd[1752]: time="2025-07-06T23:08:34.063952719Z" level=info msg="CreateContainer within sandbox \"2169c37e7adefb2cd57c6340629b8975e9e7a7027cdbde6cc1436de1a66db03d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 6 23:08:34.117220 containerd[1752]: time="2025-07-06T23:08:34.117134751Z" level=info msg="CreateContainer within sandbox \"2169c37e7adefb2cd57c6340629b8975e9e7a7027cdbde6cc1436de1a66db03d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5a0dbb38f34cb8edf36a186d45b2f33d70598004fbdf38d4d32a57da8187ad2e\""
Jul 6 23:08:34.118380 containerd[1752]: time="2025-07-06T23:08:34.118324032Z" level=info msg="StartContainer for \"5a0dbb38f34cb8edf36a186d45b2f33d70598004fbdf38d4d32a57da8187ad2e\""
Jul 6 23:08:34.146081 systemd[1]: Started cri-containerd-5a0dbb38f34cb8edf36a186d45b2f33d70598004fbdf38d4d32a57da8187ad2e.scope - libcontainer container 5a0dbb38f34cb8edf36a186d45b2f33d70598004fbdf38d4d32a57da8187ad2e.
Jul 6 23:08:34.169014 systemd[1]: cri-containerd-5a0dbb38f34cb8edf36a186d45b2f33d70598004fbdf38d4d32a57da8187ad2e.scope: Deactivated successfully.
Jul 6 23:08:34.174910 containerd[1752]: time="2025-07-06T23:08:34.174865786Z" level=info msg="StartContainer for \"5a0dbb38f34cb8edf36a186d45b2f33d70598004fbdf38d4d32a57da8187ad2e\" returns successfully"
Jul 6 23:08:34.199080 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a0dbb38f34cb8edf36a186d45b2f33d70598004fbdf38d4d32a57da8187ad2e-rootfs.mount: Deactivated successfully.
Jul 6 23:08:34.206493 containerd[1752]: time="2025-07-06T23:08:34.206427325Z" level=info msg="shim disconnected" id=5a0dbb38f34cb8edf36a186d45b2f33d70598004fbdf38d4d32a57da8187ad2e namespace=k8s.io
Jul 6 23:08:34.206493 containerd[1752]: time="2025-07-06T23:08:34.206489565Z" level=warning msg="cleaning up after shim disconnected" id=5a0dbb38f34cb8edf36a186d45b2f33d70598004fbdf38d4d32a57da8187ad2e namespace=k8s.io
Jul 6 23:08:34.206659 containerd[1752]: time="2025-07-06T23:08:34.206508645Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:08:35.068314 containerd[1752]: time="2025-07-06T23:08:35.068259565Z" level=info msg="CreateContainer within sandbox \"2169c37e7adefb2cd57c6340629b8975e9e7a7027cdbde6cc1436de1a66db03d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 6 23:08:35.116969 containerd[1752]: time="2025-07-06T23:08:35.116824914Z" level=info msg="CreateContainer within sandbox \"2169c37e7adefb2cd57c6340629b8975e9e7a7027cdbde6cc1436de1a66db03d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bdc4dd5c6583eb088ba1c4916119fe142366221d6dd9a88e436df1936deea4da\""
Jul 6 23:08:35.117516 containerd[1752]: time="2025-07-06T23:08:35.117361914Z" level=info msg="StartContainer for \"bdc4dd5c6583eb088ba1c4916119fe142366221d6dd9a88e436df1936deea4da\""
Jul 6 23:08:35.145107 systemd[1]: Started cri-containerd-bdc4dd5c6583eb088ba1c4916119fe142366221d6dd9a88e436df1936deea4da.scope - libcontainer container bdc4dd5c6583eb088ba1c4916119fe142366221d6dd9a88e436df1936deea4da.
Jul 6 23:08:35.175471 containerd[1752]: time="2025-07-06T23:08:35.175414829Z" level=info msg="StartContainer for \"bdc4dd5c6583eb088ba1c4916119fe142366221d6dd9a88e436df1936deea4da\" returns successfully"
Jul 6 23:08:35.306059 kubelet[3325]: I0706 23:08:35.305540 3325 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 6 23:08:35.354505 systemd[1]: Created slice kubepods-burstable-pod19fa9461_a554_425b_ae3d_950e699d6edd.slice - libcontainer container kubepods-burstable-pod19fa9461_a554_425b_ae3d_950e699d6edd.slice.
Jul 6 23:08:35.364506 systemd[1]: Created slice kubepods-burstable-pod13a793ed_0264_4308_9465_5138130ea326.slice - libcontainer container kubepods-burstable-pod13a793ed_0264_4308_9465_5138130ea326.slice.
Jul 6 23:08:35.388553 kubelet[3325]: I0706 23:08:35.388017 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19fa9461-a554-425b-ae3d-950e699d6edd-config-volume\") pod \"coredns-674b8bbfcf-5l2qp\" (UID: \"19fa9461-a554-425b-ae3d-950e699d6edd\") " pod="kube-system/coredns-674b8bbfcf-5l2qp"
Jul 6 23:08:35.388553 kubelet[3325]: I0706 23:08:35.388070 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99j9p\" (UniqueName: \"kubernetes.io/projected/19fa9461-a554-425b-ae3d-950e699d6edd-kube-api-access-99j9p\") pod \"coredns-674b8bbfcf-5l2qp\" (UID: \"19fa9461-a554-425b-ae3d-950e699d6edd\") " pod="kube-system/coredns-674b8bbfcf-5l2qp"
Jul 6 23:08:35.388553 kubelet[3325]: I0706 23:08:35.388091 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx2pv\" (UniqueName: \"kubernetes.io/projected/13a793ed-0264-4308-9465-5138130ea326-kube-api-access-zx2pv\") pod \"coredns-674b8bbfcf-qhxt2\" (UID: \"13a793ed-0264-4308-9465-5138130ea326\") " pod="kube-system/coredns-674b8bbfcf-qhxt2"
Jul 6 23:08:35.388553 kubelet[3325]: I0706 23:08:35.388110 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13a793ed-0264-4308-9465-5138130ea326-config-volume\") pod \"coredns-674b8bbfcf-qhxt2\" (UID: \"13a793ed-0264-4308-9465-5138130ea326\") " pod="kube-system/coredns-674b8bbfcf-qhxt2"
Jul 6 23:08:35.663430 containerd[1752]: time="2025-07-06T23:08:35.663104563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5l2qp,Uid:19fa9461-a554-425b-ae3d-950e699d6edd,Namespace:kube-system,Attempt:0,}"
Jul 6 23:08:35.669997 containerd[1752]: time="2025-07-06T23:08:35.669676408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qhxt2,Uid:13a793ed-0264-4308-9465-5138130ea326,Namespace:kube-system,Attempt:0,}"
Jul 6 23:08:36.081825 kubelet[3325]: I0706 23:08:36.081544 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4gq2x" podStartSLOduration=7.724923669 podStartE2EDuration="16.081526747s" podCreationTimestamp="2025-07-06 23:08:20 +0000 UTC" firstStartedPulling="2025-07-06 23:08:21.812679054 +0000 UTC m=+6.976243559" lastFinishedPulling="2025-07-06 23:08:30.169282132 +0000 UTC m=+15.332846637" observedRunningTime="2025-07-06 23:08:36.081482867 +0000 UTC m=+21.245047372" watchObservedRunningTime="2025-07-06 23:08:36.081526747 +0000 UTC m=+21.245091252"
Jul 6 23:08:37.365743 systemd-networkd[1607]: cilium_host: Link UP
Jul 6 23:08:37.365890 systemd-networkd[1607]: cilium_net: Link UP
Jul 6 23:08:37.367633 systemd-networkd[1607]: cilium_net: Gained carrier
Jul 6 23:08:37.367832 systemd-networkd[1607]: cilium_host: Gained carrier
Jul 6 23:08:37.522138 systemd-networkd[1607]: cilium_vxlan: Link UP
Jul 6 23:08:37.522296 systemd-networkd[1607]: cilium_vxlan: Gained carrier
Jul 6 23:08:37.816017 kernel: NET: Registered PF_ALG protocol family
Jul 6 23:08:38.082661 systemd-networkd[1607]: cilium_host: Gained IPv6LL
Jul 6 23:08:38.274061 systemd-networkd[1607]: cilium_net: Gained IPv6LL
Jul 6 23:08:38.519858 systemd-networkd[1607]: lxc_health: Link UP
Jul 6 23:08:38.520141 systemd-networkd[1607]: lxc_health: Gained carrier
Jul 6 23:08:38.773960 kernel: eth0: renamed from tmpb8fa5
Jul 6 23:08:38.778671 systemd-networkd[1607]: lxc7efeea66875e: Link UP
Jul 6 23:08:38.799976 kernel: eth0: renamed from tmp586e8
Jul 6 23:08:38.803580 systemd-networkd[1607]: lxc2ee39e8df39b: Link UP
Jul 6 23:08:38.803824 systemd-networkd[1607]: lxc7efeea66875e: Gained carrier
Jul 6 23:08:38.803955 systemd-networkd[1607]: cilium_vxlan: Gained IPv6LL
Jul 6 23:08:38.807811 systemd-networkd[1607]: lxc2ee39e8df39b: Gained carrier
Jul 6 23:08:40.450113 systemd-networkd[1607]: lxc_health: Gained IPv6LL
Jul 6 23:08:40.580029 systemd-networkd[1607]: lxc7efeea66875e: Gained IPv6LL
Jul 6 23:08:40.770181 systemd-networkd[1607]: lxc2ee39e8df39b: Gained IPv6LL
Jul 6 23:08:42.689212 containerd[1752]: time="2025-07-06T23:08:42.688962061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:08:42.689212 containerd[1752]: time="2025-07-06T23:08:42.689030421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:08:42.689212 containerd[1752]: time="2025-07-06T23:08:42.689044461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:08:42.689212 containerd[1752]: time="2025-07-06T23:08:42.689152862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:08:42.730625 containerd[1752]: time="2025-07-06T23:08:42.730296927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:08:42.730625 containerd[1752]: time="2025-07-06T23:08:42.730450408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:08:42.730625 containerd[1752]: time="2025-07-06T23:08:42.730480008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:08:42.731095 containerd[1752]: time="2025-07-06T23:08:42.730854048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:08:42.732167 systemd[1]: Started cri-containerd-586e8410c7e5459ed88e3ac7a13f33a722ebd94d328eed495f28e448fc350b3a.scope - libcontainer container 586e8410c7e5459ed88e3ac7a13f33a722ebd94d328eed495f28e448fc350b3a.
Jul 6 23:08:42.763132 systemd[1]: Started cri-containerd-b8fa5d8df0d3b19fad1b01aaee357fba25d9b427ef59d2a0103b8dab27b17204.scope - libcontainer container b8fa5d8df0d3b19fad1b01aaee357fba25d9b427ef59d2a0103b8dab27b17204.
Jul 6 23:08:42.799345 containerd[1752]: time="2025-07-06T23:08:42.799202491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qhxt2,Uid:13a793ed-0264-4308-9465-5138130ea326,Namespace:kube-system,Attempt:0,} returns sandbox id \"586e8410c7e5459ed88e3ac7a13f33a722ebd94d328eed495f28e448fc350b3a\""
Jul 6 23:08:42.812227 containerd[1752]: time="2025-07-06T23:08:42.812182299Z" level=info msg="CreateContainer within sandbox \"586e8410c7e5459ed88e3ac7a13f33a722ebd94d328eed495f28e448fc350b3a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 6 23:08:42.827564 containerd[1752]: time="2025-07-06T23:08:42.827497669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5l2qp,Uid:19fa9461-a554-425b-ae3d-950e699d6edd,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8fa5d8df0d3b19fad1b01aaee357fba25d9b427ef59d2a0103b8dab27b17204\""
Jul 6 23:08:42.837188 containerd[1752]: time="2025-07-06T23:08:42.837013075Z" level=info msg="CreateContainer within sandbox \"b8fa5d8df0d3b19fad1b01aaee357fba25d9b427ef59d2a0103b8dab27b17204\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 6 23:08:42.864540 containerd[1752]: time="2025-07-06T23:08:42.864396492Z" level=info msg="CreateContainer within sandbox \"586e8410c7e5459ed88e3ac7a13f33a722ebd94d328eed495f28e448fc350b3a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bf971476eb953df4604ccc23392438c5bfb8e1a01de7561aa7217b7c7268fa36\""
Jul 6 23:08:42.866164 containerd[1752]: time="2025-07-06T23:08:42.864967172Z" level=info msg="StartContainer for \"bf971476eb953df4604ccc23392438c5bfb8e1a01de7561aa7217b7c7268fa36\""
Jul 6 23:08:42.893216 systemd[1]: Started cri-containerd-bf971476eb953df4604ccc23392438c5bfb8e1a01de7561aa7217b7c7268fa36.scope - libcontainer container bf971476eb953df4604ccc23392438c5bfb8e1a01de7561aa7217b7c7268fa36.
Jul 6 23:08:42.898640 containerd[1752]: time="2025-07-06T23:08:42.898594193Z" level=info msg="CreateContainer within sandbox \"b8fa5d8df0d3b19fad1b01aaee357fba25d9b427ef59d2a0103b8dab27b17204\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4acf0214c8a3826e52b2a01d0f905bc6dc2ce24f35d50b09c0760409218d1517\""
Jul 6 23:08:42.900118 containerd[1752]: time="2025-07-06T23:08:42.900082234Z" level=info msg="StartContainer for \"4acf0214c8a3826e52b2a01d0f905bc6dc2ce24f35d50b09c0760409218d1517\""
Jul 6 23:08:42.930165 systemd[1]: Started cri-containerd-4acf0214c8a3826e52b2a01d0f905bc6dc2ce24f35d50b09c0760409218d1517.scope - libcontainer container 4acf0214c8a3826e52b2a01d0f905bc6dc2ce24f35d50b09c0760409218d1517.
Jul 6 23:08:42.942259 containerd[1752]: time="2025-07-06T23:08:42.941955141Z" level=info msg="StartContainer for \"bf971476eb953df4604ccc23392438c5bfb8e1a01de7561aa7217b7c7268fa36\" returns successfully"
Jul 6 23:08:42.969064 containerd[1752]: time="2025-07-06T23:08:42.969011678Z" level=info msg="StartContainer for \"4acf0214c8a3826e52b2a01d0f905bc6dc2ce24f35d50b09c0760409218d1517\" returns successfully"
Jul 6 23:08:43.122001 kubelet[3325]: I0706 23:08:43.121882 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5l2qp" podStartSLOduration=22.121864214 podStartE2EDuration="22.121864214s" podCreationTimestamp="2025-07-06 23:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:08:43.103533602 +0000 UTC m=+28.267098107" watchObservedRunningTime="2025-07-06 23:08:43.121864214 +0000 UTC m=+28.285428719"
Jul 6 23:08:43.141455 kubelet[3325]: I0706 23:08:43.140929 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-qhxt2" podStartSLOduration=22.140896626 podStartE2EDuration="22.140896626s" podCreationTimestamp="2025-07-06 23:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:08:43.140809626 +0000 UTC m=+28.304374131" watchObservedRunningTime="2025-07-06 23:08:43.140896626 +0000 UTC m=+28.304461131"
Jul 6 23:09:50.838197 systemd[1]: Started sshd@7-10.200.20.36:22-10.200.16.10:40062.service - OpenSSH per-connection server daemon (10.200.16.10:40062).
Jul 6 23:09:51.315363 sshd[4712]: Accepted publickey for core from 10.200.16.10 port 40062 ssh2: RSA SHA256:BYvOZLTfueOxq93dYKJbaYxARQqOBHJqeUMgtMpy+gQ
Jul 6 23:09:51.316718 sshd-session[4712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:09:51.320797 systemd-logind[1708]: New session 10 of user core.
Jul 6 23:09:51.329152 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 6 23:09:51.756751 sshd[4714]: Connection closed by 10.200.16.10 port 40062
Jul 6 23:09:51.755988 sshd-session[4712]: pam_unix(sshd:session): session closed for user core
Jul 6 23:09:51.758802 systemd[1]: sshd@7-10.200.20.36:22-10.200.16.10:40062.service: Deactivated successfully.
Jul 6 23:09:51.761560 systemd[1]: session-10.scope: Deactivated successfully.
Jul 6 23:09:51.763606 systemd-logind[1708]: Session 10 logged out. Waiting for processes to exit.
Jul 6 23:09:51.764688 systemd-logind[1708]: Removed session 10.
Jul 6 23:09:56.856230 systemd[1]: Started sshd@8-10.200.20.36:22-10.200.16.10:40064.service - OpenSSH per-connection server daemon (10.200.16.10:40064).
Jul 6 23:09:57.346597 sshd[4729]: Accepted publickey for core from 10.200.16.10 port 40064 ssh2: RSA SHA256:BYvOZLTfueOxq93dYKJbaYxARQqOBHJqeUMgtMpy+gQ
Jul 6 23:09:57.347962 sshd-session[4729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:09:57.352248 systemd-logind[1708]: New session 11 of user core.
Jul 6 23:09:57.358078 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 6 23:09:57.757968 sshd[4731]: Connection closed by 10.200.16.10 port 40064
Jul 6 23:09:57.757857 sshd-session[4729]: pam_unix(sshd:session): session closed for user core
Jul 6 23:09:57.760546 systemd[1]: sshd@8-10.200.20.36:22-10.200.16.10:40064.service: Deactivated successfully.
Jul 6 23:09:57.762590 systemd[1]: session-11.scope: Deactivated successfully.
Jul 6 23:09:57.764803 systemd-logind[1708]: Session 11 logged out. Waiting for processes to exit.
Jul 6 23:09:57.765780 systemd-logind[1708]: Removed session 11.
Jul 6 23:10:02.856367 systemd[1]: Started sshd@9-10.200.20.36:22-10.200.16.10:53386.service - OpenSSH per-connection server daemon (10.200.16.10:53386).
Jul 6 23:10:03.347385 sshd[4744]: Accepted publickey for core from 10.200.16.10 port 53386 ssh2: RSA SHA256:BYvOZLTfueOxq93dYKJbaYxARQqOBHJqeUMgtMpy+gQ
Jul 6 23:10:03.348707 sshd-session[4744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:10:03.353469 systemd-logind[1708]: New session 12 of user core.
Jul 6 23:10:03.359813 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 6 23:10:03.757509 sshd[4746]: Connection closed by 10.200.16.10 port 53386
Jul 6 23:10:03.758155 sshd-session[4744]: pam_unix(sshd:session): session closed for user core
Jul 6 23:10:03.761852 systemd[1]: sshd@9-10.200.20.36:22-10.200.16.10:53386.service: Deactivated successfully.
Jul 6 23:10:03.763873 systemd[1]: session-12.scope: Deactivated successfully.
Jul 6 23:10:03.766140 systemd-logind[1708]: Session 12 logged out. Waiting for processes to exit.
Jul 6 23:10:03.767222 systemd-logind[1708]: Removed session 12.
Jul 6 23:10:08.848174 systemd[1]: Started sshd@10-10.200.20.36:22-10.200.16.10:53398.service - OpenSSH per-connection server daemon (10.200.16.10:53398).
Jul 6 23:10:09.329123 sshd[4760]: Accepted publickey for core from 10.200.16.10 port 53398 ssh2: RSA SHA256:BYvOZLTfueOxq93dYKJbaYxARQqOBHJqeUMgtMpy+gQ
Jul 6 23:10:09.330462 sshd-session[4760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:10:09.336135 systemd-logind[1708]: New session 13 of user core.
Jul 6 23:10:09.342161 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 6 23:10:09.743038 sshd[4762]: Connection closed by 10.200.16.10 port 53398
Jul 6 23:10:09.743793 sshd-session[4760]: pam_unix(sshd:session): session closed for user core
Jul 6 23:10:09.748973 systemd[1]: sshd@10-10.200.20.36:22-10.200.16.10:53398.service: Deactivated successfully.
Jul 6 23:10:09.751992 systemd[1]: session-13.scope: Deactivated successfully.
Jul 6 23:10:09.753454 systemd-logind[1708]: Session 13 logged out. Waiting for processes to exit.
Jul 6 23:10:09.755082 systemd-logind[1708]: Removed session 13.
Jul 6 23:10:14.841780 systemd[1]: Started sshd@11-10.200.20.36:22-10.200.16.10:59352.service - OpenSSH per-connection server daemon (10.200.16.10:59352).
Jul 6 23:10:15.319909 sshd[4776]: Accepted publickey for core from 10.200.16.10 port 59352 ssh2: RSA SHA256:BYvOZLTfueOxq93dYKJbaYxARQqOBHJqeUMgtMpy+gQ
Jul 6 23:10:15.321366 sshd-session[4776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:10:15.326168 systemd-logind[1708]: New session 14 of user core.
Jul 6 23:10:15.333373 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 6 23:10:15.735203 sshd[4780]: Connection closed by 10.200.16.10 port 59352
Jul 6 23:10:15.736052 sshd-session[4776]: pam_unix(sshd:session): session closed for user core
Jul 6 23:10:15.740548 systemd[1]: sshd@11-10.200.20.36:22-10.200.16.10:59352.service: Deactivated successfully.
Jul 6 23:10:15.743658 systemd[1]: session-14.scope: Deactivated successfully.
Jul 6 23:10:15.744716 systemd-logind[1708]: Session 14 logged out.
Waiting for processes to exit. Jul 6 23:10:15.745769 systemd-logind[1708]: Removed session 14. Jul 6 23:10:15.827211 systemd[1]: Started sshd@12-10.200.20.36:22-10.200.16.10:59364.service - OpenSSH per-connection server daemon (10.200.16.10:59364). Jul 6 23:10:16.304629 sshd[4792]: Accepted publickey for core from 10.200.16.10 port 59364 ssh2: RSA SHA256:BYvOZLTfueOxq93dYKJbaYxARQqOBHJqeUMgtMpy+gQ Jul 6 23:10:16.306050 sshd-session[4792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:10:16.311356 systemd-logind[1708]: New session 15 of user core. Jul 6 23:10:16.318134 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 6 23:10:16.771508 sshd[4795]: Connection closed by 10.200.16.10 port 59364 Jul 6 23:10:16.772008 sshd-session[4792]: pam_unix(sshd:session): session closed for user core Jul 6 23:10:16.777429 systemd[1]: sshd@12-10.200.20.36:22-10.200.16.10:59364.service: Deactivated successfully. Jul 6 23:10:16.780802 systemd[1]: session-15.scope: Deactivated successfully. Jul 6 23:10:16.782033 systemd-logind[1708]: Session 15 logged out. Waiting for processes to exit. Jul 6 23:10:16.783141 systemd-logind[1708]: Removed session 15. Jul 6 23:10:16.871258 systemd[1]: Started sshd@13-10.200.20.36:22-10.200.16.10:59372.service - OpenSSH per-connection server daemon (10.200.16.10:59372). Jul 6 23:10:17.350044 sshd[4805]: Accepted publickey for core from 10.200.16.10 port 59372 ssh2: RSA SHA256:BYvOZLTfueOxq93dYKJbaYxARQqOBHJqeUMgtMpy+gQ Jul 6 23:10:17.351493 sshd-session[4805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:10:17.356913 systemd-logind[1708]: New session 16 of user core. Jul 6 23:10:17.363130 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jul 6 23:10:17.765743 sshd[4807]: Connection closed by 10.200.16.10 port 59372 Jul 6 23:10:17.766361 sshd-session[4805]: pam_unix(sshd:session): session closed for user core Jul 6 23:10:17.770127 systemd[1]: sshd@13-10.200.20.36:22-10.200.16.10:59372.service: Deactivated successfully. Jul 6 23:10:17.772373 systemd[1]: session-16.scope: Deactivated successfully. Jul 6 23:10:17.773364 systemd-logind[1708]: Session 16 logged out. Waiting for processes to exit. Jul 6 23:10:17.774799 systemd-logind[1708]: Removed session 16. Jul 6 23:10:22.861351 systemd[1]: Started sshd@14-10.200.20.36:22-10.200.16.10:33108.service - OpenSSH per-connection server daemon (10.200.16.10:33108). Jul 6 23:10:23.337753 sshd[4822]: Accepted publickey for core from 10.200.16.10 port 33108 ssh2: RSA SHA256:BYvOZLTfueOxq93dYKJbaYxARQqOBHJqeUMgtMpy+gQ Jul 6 23:10:23.339181 sshd-session[4822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:10:23.344054 systemd-logind[1708]: New session 17 of user core. Jul 6 23:10:23.350245 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 6 23:10:23.755584 sshd[4824]: Connection closed by 10.200.16.10 port 33108 Jul 6 23:10:23.756257 sshd-session[4822]: pam_unix(sshd:session): session closed for user core Jul 6 23:10:23.760092 systemd[1]: sshd@14-10.200.20.36:22-10.200.16.10:33108.service: Deactivated successfully. Jul 6 23:10:23.761884 systemd[1]: session-17.scope: Deactivated successfully. Jul 6 23:10:23.762659 systemd-logind[1708]: Session 17 logged out. Waiting for processes to exit. Jul 6 23:10:23.763731 systemd-logind[1708]: Removed session 17. Jul 6 23:10:28.850487 systemd[1]: Started sshd@15-10.200.20.36:22-10.200.16.10:33116.service - OpenSSH per-connection server daemon (10.200.16.10:33116). 
Jul 6 23:10:29.329504 sshd[4835]: Accepted publickey for core from 10.200.16.10 port 33116 ssh2: RSA SHA256:BYvOZLTfueOxq93dYKJbaYxARQqOBHJqeUMgtMpy+gQ Jul 6 23:10:29.331247 sshd-session[4835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:10:29.335859 systemd-logind[1708]: New session 18 of user core. Jul 6 23:10:29.342347 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 6 23:10:29.747444 sshd[4837]: Connection closed by 10.200.16.10 port 33116 Jul 6 23:10:29.747808 sshd-session[4835]: pam_unix(sshd:session): session closed for user core Jul 6 23:10:29.752493 systemd-logind[1708]: Session 18 logged out. Waiting for processes to exit. Jul 6 23:10:29.753311 systemd[1]: sshd@15-10.200.20.36:22-10.200.16.10:33116.service: Deactivated successfully. Jul 6 23:10:29.756760 systemd[1]: session-18.scope: Deactivated successfully. Jul 6 23:10:29.758016 systemd-logind[1708]: Removed session 18. Jul 6 23:10:29.844607 systemd[1]: Started sshd@16-10.200.20.36:22-10.200.16.10:33324.service - OpenSSH per-connection server daemon (10.200.16.10:33324). Jul 6 23:10:30.336303 sshd[4849]: Accepted publickey for core from 10.200.16.10 port 33324 ssh2: RSA SHA256:BYvOZLTfueOxq93dYKJbaYxARQqOBHJqeUMgtMpy+gQ Jul 6 23:10:30.337814 sshd-session[4849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:10:30.343623 systemd-logind[1708]: New session 19 of user core. Jul 6 23:10:30.348136 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 6 23:10:30.779738 sshd[4851]: Connection closed by 10.200.16.10 port 33324 Jul 6 23:10:30.780455 sshd-session[4849]: pam_unix(sshd:session): session closed for user core Jul 6 23:10:30.783516 systemd-logind[1708]: Session 19 logged out. Waiting for processes to exit. Jul 6 23:10:30.783766 systemd[1]: sshd@16-10.200.20.36:22-10.200.16.10:33324.service: Deactivated successfully. 
Jul 6 23:10:30.785823 systemd[1]: session-19.scope: Deactivated successfully. Jul 6 23:10:30.788376 systemd-logind[1708]: Removed session 19. Jul 6 23:10:30.874269 systemd[1]: Started sshd@17-10.200.20.36:22-10.200.16.10:33336.service - OpenSSH per-connection server daemon (10.200.16.10:33336). Jul 6 23:10:31.360145 sshd[4861]: Accepted publickey for core from 10.200.16.10 port 33336 ssh2: RSA SHA256:BYvOZLTfueOxq93dYKJbaYxARQqOBHJqeUMgtMpy+gQ Jul 6 23:10:31.361098 sshd-session[4861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:10:31.365833 systemd-logind[1708]: New session 20 of user core. Jul 6 23:10:31.377185 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 6 23:10:32.544728 sshd[4863]: Connection closed by 10.200.16.10 port 33336 Jul 6 23:10:32.544131 sshd-session[4861]: pam_unix(sshd:session): session closed for user core Jul 6 23:10:32.548035 systemd[1]: sshd@17-10.200.20.36:22-10.200.16.10:33336.service: Deactivated successfully. Jul 6 23:10:32.549864 systemd[1]: session-20.scope: Deactivated successfully. Jul 6 23:10:32.551059 systemd-logind[1708]: Session 20 logged out. Waiting for processes to exit. Jul 6 23:10:32.553750 systemd-logind[1708]: Removed session 20. Jul 6 23:10:32.640460 systemd[1]: Started sshd@18-10.200.20.36:22-10.200.16.10:33346.service - OpenSSH per-connection server daemon (10.200.16.10:33346). Jul 6 23:10:33.117494 sshd[4880]: Accepted publickey for core from 10.200.16.10 port 33346 ssh2: RSA SHA256:BYvOZLTfueOxq93dYKJbaYxARQqOBHJqeUMgtMpy+gQ Jul 6 23:10:33.118819 sshd-session[4880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:10:33.123526 systemd-logind[1708]: New session 21 of user core. Jul 6 23:10:33.128158 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jul 6 23:10:33.645489 sshd[4882]: Connection closed by 10.200.16.10 port 33346 Jul 6 23:10:33.646195 sshd-session[4880]: pam_unix(sshd:session): session closed for user core Jul 6 23:10:33.649418 systemd[1]: session-21.scope: Deactivated successfully. Jul 6 23:10:33.651532 systemd[1]: sshd@18-10.200.20.36:22-10.200.16.10:33346.service: Deactivated successfully. Jul 6 23:10:33.654944 systemd-logind[1708]: Session 21 logged out. Waiting for processes to exit. Jul 6 23:10:33.655988 systemd-logind[1708]: Removed session 21. Jul 6 23:10:33.750279 systemd[1]: Started sshd@19-10.200.20.36:22-10.200.16.10:33354.service - OpenSSH per-connection server daemon (10.200.16.10:33354). Jul 6 23:10:34.242201 sshd[4892]: Accepted publickey for core from 10.200.16.10 port 33354 ssh2: RSA SHA256:BYvOZLTfueOxq93dYKJbaYxARQqOBHJqeUMgtMpy+gQ Jul 6 23:10:34.243516 sshd-session[4892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:10:34.248165 systemd-logind[1708]: New session 22 of user core. Jul 6 23:10:34.256103 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 6 23:10:34.647273 sshd[4894]: Connection closed by 10.200.16.10 port 33354 Jul 6 23:10:34.647847 sshd-session[4892]: pam_unix(sshd:session): session closed for user core Jul 6 23:10:34.652323 systemd[1]: sshd@19-10.200.20.36:22-10.200.16.10:33354.service: Deactivated successfully. Jul 6 23:10:34.655865 systemd[1]: session-22.scope: Deactivated successfully. Jul 6 23:10:34.657419 systemd-logind[1708]: Session 22 logged out. Waiting for processes to exit. Jul 6 23:10:34.658475 systemd-logind[1708]: Removed session 22. Jul 6 23:10:39.750254 systemd[1]: Started sshd@20-10.200.20.36:22-10.200.16.10:45076.service - OpenSSH per-connection server daemon (10.200.16.10:45076). 
Jul 6 23:10:40.228484 sshd[4907]: Accepted publickey for core from 10.200.16.10 port 45076 ssh2: RSA SHA256:BYvOZLTfueOxq93dYKJbaYxARQqOBHJqeUMgtMpy+gQ Jul 6 23:10:40.229779 sshd-session[4907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:10:40.235607 systemd-logind[1708]: New session 23 of user core. Jul 6 23:10:40.241154 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 6 23:10:40.638221 sshd[4909]: Connection closed by 10.200.16.10 port 45076 Jul 6 23:10:40.638061 sshd-session[4907]: pam_unix(sshd:session): session closed for user core Jul 6 23:10:40.641884 systemd[1]: sshd@20-10.200.20.36:22-10.200.16.10:45076.service: Deactivated successfully. Jul 6 23:10:40.646644 systemd[1]: session-23.scope: Deactivated successfully. Jul 6 23:10:40.648439 systemd-logind[1708]: Session 23 logged out. Waiting for processes to exit. Jul 6 23:10:40.649848 systemd-logind[1708]: Removed session 23. Jul 6 23:10:45.733071 systemd[1]: Started sshd@21-10.200.20.36:22-10.200.16.10:45092.service - OpenSSH per-connection server daemon (10.200.16.10:45092). Jul 6 23:10:46.227973 sshd[4924]: Accepted publickey for core from 10.200.16.10 port 45092 ssh2: RSA SHA256:BYvOZLTfueOxq93dYKJbaYxARQqOBHJqeUMgtMpy+gQ Jul 6 23:10:46.229314 sshd-session[4924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:10:46.234121 systemd-logind[1708]: New session 24 of user core. Jul 6 23:10:46.238105 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 6 23:10:46.636203 sshd[4926]: Connection closed by 10.200.16.10 port 45092 Jul 6 23:10:46.635281 sshd-session[4924]: pam_unix(sshd:session): session closed for user core Jul 6 23:10:46.643062 systemd[1]: sshd@21-10.200.20.36:22-10.200.16.10:45092.service: Deactivated successfully. Jul 6 23:10:46.643693 systemd-logind[1708]: Session 24 logged out. Waiting for processes to exit. 
Jul 6 23:10:46.645487 systemd[1]: session-24.scope: Deactivated successfully. Jul 6 23:10:46.647579 systemd-logind[1708]: Removed session 24. Jul 6 23:10:51.730213 systemd[1]: Started sshd@22-10.200.20.36:22-10.200.16.10:49808.service - OpenSSH per-connection server daemon (10.200.16.10:49808). Jul 6 23:10:52.223590 sshd[4938]: Accepted publickey for core from 10.200.16.10 port 49808 ssh2: RSA SHA256:BYvOZLTfueOxq93dYKJbaYxARQqOBHJqeUMgtMpy+gQ Jul 6 23:10:52.225000 sshd-session[4938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:10:52.229999 systemd-logind[1708]: New session 25 of user core. Jul 6 23:10:52.238122 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 6 23:10:52.642028 sshd[4942]: Connection closed by 10.200.16.10 port 49808 Jul 6 23:10:52.642354 sshd-session[4938]: pam_unix(sshd:session): session closed for user core Jul 6 23:10:52.646343 systemd-logind[1708]: Session 25 logged out. Waiting for processes to exit. Jul 6 23:10:52.646510 systemd[1]: sshd@22-10.200.20.36:22-10.200.16.10:49808.service: Deactivated successfully. Jul 6 23:10:52.648406 systemd[1]: session-25.scope: Deactivated successfully. Jul 6 23:10:52.650766 systemd-logind[1708]: Removed session 25. Jul 6 23:10:57.729437 systemd[1]: Started sshd@23-10.200.20.36:22-10.200.16.10:49816.service - OpenSSH per-connection server daemon (10.200.16.10:49816). Jul 6 23:10:58.212222 sshd[4953]: Accepted publickey for core from 10.200.16.10 port 49816 ssh2: RSA SHA256:BYvOZLTfueOxq93dYKJbaYxARQqOBHJqeUMgtMpy+gQ Jul 6 23:10:58.212840 sshd-session[4953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:10:58.217357 systemd-logind[1708]: New session 26 of user core. Jul 6 23:10:58.220123 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jul 6 23:10:58.630631 sshd[4955]: Connection closed by 10.200.16.10 port 49816 Jul 6 23:10:58.629700 sshd-session[4953]: pam_unix(sshd:session): session closed for user core Jul 6 23:10:58.633674 systemd[1]: sshd@23-10.200.20.36:22-10.200.16.10:49816.service: Deactivated successfully. Jul 6 23:10:58.637048 systemd[1]: session-26.scope: Deactivated successfully. Jul 6 23:10:58.639471 systemd-logind[1708]: Session 26 logged out. Waiting for processes to exit. Jul 6 23:10:58.640606 systemd-logind[1708]: Removed session 26. Jul 6 23:10:58.727279 systemd[1]: Started sshd@24-10.200.20.36:22-10.200.16.10:49820.service - OpenSSH per-connection server daemon (10.200.16.10:49820). Jul 6 23:10:59.220224 sshd[4967]: Accepted publickey for core from 10.200.16.10 port 49820 ssh2: RSA SHA256:BYvOZLTfueOxq93dYKJbaYxARQqOBHJqeUMgtMpy+gQ Jul 6 23:10:59.221506 sshd-session[4967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:10:59.226281 systemd-logind[1708]: New session 27 of user core. Jul 6 23:10:59.229158 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 6 23:11:01.327004 systemd[1]: run-containerd-runc-k8s.io-bdc4dd5c6583eb088ba1c4916119fe142366221d6dd9a88e436df1936deea4da-runc.FeNyFV.mount: Deactivated successfully. 
Jul 6 23:11:01.333049 containerd[1752]: time="2025-07-06T23:11:01.332991628Z" level=info msg="StopContainer for \"313ec08a20dc415fb018d325b03e57894001bb3eaed53b7c793c370a4877ea1f\" with timeout 30 (s)" Jul 6 23:11:01.334184 containerd[1752]: time="2025-07-06T23:11:01.334027868Z" level=info msg="Stop container \"313ec08a20dc415fb018d325b03e57894001bb3eaed53b7c793c370a4877ea1f\" with signal terminated" Jul 6 23:11:01.365318 containerd[1752]: time="2025-07-06T23:11:01.365272407Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:11:01.385430 containerd[1752]: time="2025-07-06T23:11:01.385212779Z" level=info msg="StopContainer for \"bdc4dd5c6583eb088ba1c4916119fe142366221d6dd9a88e436df1936deea4da\" with timeout 2 (s)" Jul 6 23:11:01.385812 containerd[1752]: time="2025-07-06T23:11:01.385688339Z" level=info msg="Stop container \"bdc4dd5c6583eb088ba1c4916119fe142366221d6dd9a88e436df1936deea4da\" with signal terminated" Jul 6 23:11:01.391855 systemd[1]: cri-containerd-313ec08a20dc415fb018d325b03e57894001bb3eaed53b7c793c370a4877ea1f.scope: Deactivated successfully. Jul 6 23:11:01.400433 systemd-networkd[1607]: lxc_health: Link DOWN Jul 6 23:11:01.400442 systemd-networkd[1607]: lxc_health: Lost carrier Jul 6 23:11:01.418827 systemd[1]: cri-containerd-bdc4dd5c6583eb088ba1c4916119fe142366221d6dd9a88e436df1936deea4da.scope: Deactivated successfully. Jul 6 23:11:01.421142 systemd[1]: cri-containerd-bdc4dd5c6583eb088ba1c4916119fe142366221d6dd9a88e436df1936deea4da.scope: Consumed 6.654s CPU time, 124.9M memory peak, 136K read from disk, 12.9M written to disk. Jul 6 23:11:01.439672 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-313ec08a20dc415fb018d325b03e57894001bb3eaed53b7c793c370a4877ea1f-rootfs.mount: Deactivated successfully. 
Jul 6 23:11:01.449153 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bdc4dd5c6583eb088ba1c4916119fe142366221d6dd9a88e436df1936deea4da-rootfs.mount: Deactivated successfully. Jul 6 23:11:01.512586 containerd[1752]: time="2025-07-06T23:11:01.512477655Z" level=info msg="shim disconnected" id=bdc4dd5c6583eb088ba1c4916119fe142366221d6dd9a88e436df1936deea4da namespace=k8s.io Jul 6 23:11:01.512586 containerd[1752]: time="2025-07-06T23:11:01.512551575Z" level=warning msg="cleaning up after shim disconnected" id=bdc4dd5c6583eb088ba1c4916119fe142366221d6dd9a88e436df1936deea4da namespace=k8s.io Jul 6 23:11:01.512586 containerd[1752]: time="2025-07-06T23:11:01.512559655Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:11:01.513447 containerd[1752]: time="2025-07-06T23:11:01.513163895Z" level=info msg="shim disconnected" id=313ec08a20dc415fb018d325b03e57894001bb3eaed53b7c793c370a4877ea1f namespace=k8s.io Jul 6 23:11:01.513447 containerd[1752]: time="2025-07-06T23:11:01.513210855Z" level=warning msg="cleaning up after shim disconnected" id=313ec08a20dc415fb018d325b03e57894001bb3eaed53b7c793c370a4877ea1f namespace=k8s.io Jul 6 23:11:01.513447 containerd[1752]: time="2025-07-06T23:11:01.513218655Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:11:01.533315 containerd[1752]: time="2025-07-06T23:11:01.533260267Z" level=info msg="StopContainer for \"313ec08a20dc415fb018d325b03e57894001bb3eaed53b7c793c370a4877ea1f\" returns successfully" Jul 6 23:11:01.535702 containerd[1752]: time="2025-07-06T23:11:01.534085067Z" level=info msg="StopPodSandbox for \"c1161092f42f9af3e53d22cfd700290d5605941ff415521c8f0897f1f92e13ea\"" Jul 6 23:11:01.535702 containerd[1752]: time="2025-07-06T23:11:01.534130227Z" level=info msg="Container to stop \"313ec08a20dc415fb018d325b03e57894001bb3eaed53b7c793c370a4877ea1f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:11:01.536825 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-c1161092f42f9af3e53d22cfd700290d5605941ff415521c8f0897f1f92e13ea-shm.mount: Deactivated successfully. Jul 6 23:11:01.538608 containerd[1752]: time="2025-07-06T23:11:01.538566030Z" level=info msg="StopContainer for \"bdc4dd5c6583eb088ba1c4916119fe142366221d6dd9a88e436df1936deea4da\" returns successfully" Jul 6 23:11:01.539204 containerd[1752]: time="2025-07-06T23:11:01.539174510Z" level=info msg="StopPodSandbox for \"2169c37e7adefb2cd57c6340629b8975e9e7a7027cdbde6cc1436de1a66db03d\"" Jul 6 23:11:01.539820 containerd[1752]: time="2025-07-06T23:11:01.539791351Z" level=info msg="Container to stop \"13e52a18c6c199508851cb53e29ae9c6101a554c20338381d62a72e9f534f65a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:11:01.539918 containerd[1752]: time="2025-07-06T23:11:01.539902871Z" level=info msg="Container to stop \"5a0dbb38f34cb8edf36a186d45b2f33d70598004fbdf38d4d32a57da8187ad2e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:11:01.540074 containerd[1752]: time="2025-07-06T23:11:01.540054471Z" level=info msg="Container to stop \"bdc4dd5c6583eb088ba1c4916119fe142366221d6dd9a88e436df1936deea4da\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:11:01.540276 containerd[1752]: time="2025-07-06T23:11:01.540126111Z" level=info msg="Container to stop \"2f185aa95074c4bbfeef038afc414ecdb31c61fb2d9e537e47d4c94d4ff8a007\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:11:01.540276 containerd[1752]: time="2025-07-06T23:11:01.540140951Z" level=info msg="Container to stop \"47bb187476874e7ea0c29c4283b39eac1141dac710af00e15f7cea6ca948cb8f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:11:01.544334 systemd[1]: cri-containerd-c1161092f42f9af3e53d22cfd700290d5605941ff415521c8f0897f1f92e13ea.scope: Deactivated successfully. 
Jul 6 23:11:01.553319 systemd[1]: cri-containerd-2169c37e7adefb2cd57c6340629b8975e9e7a7027cdbde6cc1436de1a66db03d.scope: Deactivated successfully. Jul 6 23:11:01.589691 containerd[1752]: time="2025-07-06T23:11:01.589399580Z" level=info msg="shim disconnected" id=2169c37e7adefb2cd57c6340629b8975e9e7a7027cdbde6cc1436de1a66db03d namespace=k8s.io Jul 6 23:11:01.589691 containerd[1752]: time="2025-07-06T23:11:01.589454900Z" level=warning msg="cleaning up after shim disconnected" id=2169c37e7adefb2cd57c6340629b8975e9e7a7027cdbde6cc1436de1a66db03d namespace=k8s.io Jul 6 23:11:01.589691 containerd[1752]: time="2025-07-06T23:11:01.589462580Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:11:01.591983 containerd[1752]: time="2025-07-06T23:11:01.590369901Z" level=info msg="shim disconnected" id=c1161092f42f9af3e53d22cfd700290d5605941ff415521c8f0897f1f92e13ea namespace=k8s.io Jul 6 23:11:01.591983 containerd[1752]: time="2025-07-06T23:11:01.590415301Z" level=warning msg="cleaning up after shim disconnected" id=c1161092f42f9af3e53d22cfd700290d5605941ff415521c8f0897f1f92e13ea namespace=k8s.io Jul 6 23:11:01.591983 containerd[1752]: time="2025-07-06T23:11:01.590432461Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:11:01.605601 containerd[1752]: time="2025-07-06T23:11:01.605545150Z" level=warning msg="cleanup warnings time=\"2025-07-06T23:11:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 6 23:11:01.609435 containerd[1752]: time="2025-07-06T23:11:01.609368912Z" level=info msg="TearDown network for sandbox \"2169c37e7adefb2cd57c6340629b8975e9e7a7027cdbde6cc1436de1a66db03d\" successfully" Jul 6 23:11:01.609435 containerd[1752]: time="2025-07-06T23:11:01.609412272Z" level=info msg="StopPodSandbox for \"2169c37e7adefb2cd57c6340629b8975e9e7a7027cdbde6cc1436de1a66db03d\" returns successfully" Jul 6 23:11:01.613893 
containerd[1752]: time="2025-07-06T23:11:01.613829395Z" level=info msg="TearDown network for sandbox \"c1161092f42f9af3e53d22cfd700290d5605941ff415521c8f0897f1f92e13ea\" successfully" Jul 6 23:11:01.613893 containerd[1752]: time="2025-07-06T23:11:01.613883195Z" level=info msg="StopPodSandbox for \"c1161092f42f9af3e53d22cfd700290d5605941ff415521c8f0897f1f92e13ea\" returns successfully" Jul 6 23:11:01.743755 kubelet[3325]: I0706 23:11:01.743702 3325 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-host-proc-sys-kernel\") pod \"19092466-f644-4374-99fc-4fc1a66f975a\" (UID: \"19092466-f644-4374-99fc-4fc1a66f975a\") " Jul 6 23:11:01.743755 kubelet[3325]: I0706 23:11:01.743766 3325 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8115a4b2-1b6c-4dad-906e-41ac10c2857d-cilium-config-path\") pod \"8115a4b2-1b6c-4dad-906e-41ac10c2857d\" (UID: \"8115a4b2-1b6c-4dad-906e-41ac10c2857d\") " Jul 6 23:11:01.744219 kubelet[3325]: I0706 23:11:01.743789 3325 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-xtables-lock\") pod \"19092466-f644-4374-99fc-4fc1a66f975a\" (UID: \"19092466-f644-4374-99fc-4fc1a66f975a\") " Jul 6 23:11:01.744219 kubelet[3325]: I0706 23:11:01.743809 3325 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dls29\" (UniqueName: \"kubernetes.io/projected/19092466-f644-4374-99fc-4fc1a66f975a-kube-api-access-dls29\") pod \"19092466-f644-4374-99fc-4fc1a66f975a\" (UID: \"19092466-f644-4374-99fc-4fc1a66f975a\") " Jul 6 23:11:01.744219 kubelet[3325]: I0706 23:11:01.743826 3325 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tpz9\" (UniqueName: 
\"kubernetes.io/projected/8115a4b2-1b6c-4dad-906e-41ac10c2857d-kube-api-access-5tpz9\") pod \"8115a4b2-1b6c-4dad-906e-41ac10c2857d\" (UID: \"8115a4b2-1b6c-4dad-906e-41ac10c2857d\") " Jul 6 23:11:01.744219 kubelet[3325]: I0706 23:11:01.743844 3325 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/19092466-f644-4374-99fc-4fc1a66f975a-hubble-tls\") pod \"19092466-f644-4374-99fc-4fc1a66f975a\" (UID: \"19092466-f644-4374-99fc-4fc1a66f975a\") " Jul 6 23:11:01.744219 kubelet[3325]: I0706 23:11:01.743858 3325 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-cilium-cgroup\") pod \"19092466-f644-4374-99fc-4fc1a66f975a\" (UID: \"19092466-f644-4374-99fc-4fc1a66f975a\") " Jul 6 23:11:01.744219 kubelet[3325]: I0706 23:11:01.743876 3325 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-cni-path\") pod \"19092466-f644-4374-99fc-4fc1a66f975a\" (UID: \"19092466-f644-4374-99fc-4fc1a66f975a\") " Jul 6 23:11:01.744359 kubelet[3325]: I0706 23:11:01.743917 3325 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19092466-f644-4374-99fc-4fc1a66f975a-cilium-config-path\") pod \"19092466-f644-4374-99fc-4fc1a66f975a\" (UID: \"19092466-f644-4374-99fc-4fc1a66f975a\") " Jul 6 23:11:01.744359 kubelet[3325]: I0706 23:11:01.743957 3325 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-bpf-maps\") pod \"19092466-f644-4374-99fc-4fc1a66f975a\" (UID: \"19092466-f644-4374-99fc-4fc1a66f975a\") " Jul 6 23:11:01.744359 kubelet[3325]: I0706 23:11:01.743972 3325 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-lib-modules\") pod \"19092466-f644-4374-99fc-4fc1a66f975a\" (UID: \"19092466-f644-4374-99fc-4fc1a66f975a\") " Jul 6 23:11:01.744359 kubelet[3325]: I0706 23:11:01.743992 3325 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/19092466-f644-4374-99fc-4fc1a66f975a-clustermesh-secrets\") pod \"19092466-f644-4374-99fc-4fc1a66f975a\" (UID: \"19092466-f644-4374-99fc-4fc1a66f975a\") " Jul 6 23:11:01.744359 kubelet[3325]: I0706 23:11:01.744005 3325 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-etc-cni-netd\") pod \"19092466-f644-4374-99fc-4fc1a66f975a\" (UID: \"19092466-f644-4374-99fc-4fc1a66f975a\") " Jul 6 23:11:01.744359 kubelet[3325]: I0706 23:11:01.744018 3325 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-host-proc-sys-net\") pod \"19092466-f644-4374-99fc-4fc1a66f975a\" (UID: \"19092466-f644-4374-99fc-4fc1a66f975a\") " Jul 6 23:11:01.744481 kubelet[3325]: I0706 23:11:01.744033 3325 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-hostproc\") pod \"19092466-f644-4374-99fc-4fc1a66f975a\" (UID: \"19092466-f644-4374-99fc-4fc1a66f975a\") " Jul 6 23:11:01.744481 kubelet[3325]: I0706 23:11:01.744047 3325 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-cilium-run\") pod \"19092466-f644-4374-99fc-4fc1a66f975a\" (UID: \"19092466-f644-4374-99fc-4fc1a66f975a\") " Jul 6 
23:11:01.744481 kubelet[3325]: I0706 23:11:01.744142 3325 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "19092466-f644-4374-99fc-4fc1a66f975a" (UID: "19092466-f644-4374-99fc-4fc1a66f975a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:11:01.744481 kubelet[3325]: I0706 23:11:01.744176 3325 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "19092466-f644-4374-99fc-4fc1a66f975a" (UID: "19092466-f644-4374-99fc-4fc1a66f975a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:11:01.747242 kubelet[3325]: I0706 23:11:01.746225 3325 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8115a4b2-1b6c-4dad-906e-41ac10c2857d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8115a4b2-1b6c-4dad-906e-41ac10c2857d" (UID: "8115a4b2-1b6c-4dad-906e-41ac10c2857d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:11:01.747242 kubelet[3325]: I0706 23:11:01.746292 3325 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "19092466-f644-4374-99fc-4fc1a66f975a" (UID: "19092466-f644-4374-99fc-4fc1a66f975a"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:11:01.747242 kubelet[3325]: I0706 23:11:01.746310 3325 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "19092466-f644-4374-99fc-4fc1a66f975a" (UID: "19092466-f644-4374-99fc-4fc1a66f975a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:11:01.747242 kubelet[3325]: I0706 23:11:01.746921 3325 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "19092466-f644-4374-99fc-4fc1a66f975a" (UID: "19092466-f644-4374-99fc-4fc1a66f975a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:11:01.747242 kubelet[3325]: I0706 23:11:01.746976 3325 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "19092466-f644-4374-99fc-4fc1a66f975a" (UID: "19092466-f644-4374-99fc-4fc1a66f975a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:11:01.747441 kubelet[3325]: I0706 23:11:01.746993 3325 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-hostproc" (OuterVolumeSpecName: "hostproc") pod "19092466-f644-4374-99fc-4fc1a66f975a" (UID: "19092466-f644-4374-99fc-4fc1a66f975a"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:11:01.747441 kubelet[3325]: I0706 23:11:01.746990 3325 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19092466-f644-4374-99fc-4fc1a66f975a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "19092466-f644-4374-99fc-4fc1a66f975a" (UID: "19092466-f644-4374-99fc-4fc1a66f975a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:11:01.749291 kubelet[3325]: I0706 23:11:01.749246 3325 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "19092466-f644-4374-99fc-4fc1a66f975a" (UID: "19092466-f644-4374-99fc-4fc1a66f975a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:11:01.749461 kubelet[3325]: I0706 23:11:01.749387 3325 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-cni-path" (OuterVolumeSpecName: "cni-path") pod "19092466-f644-4374-99fc-4fc1a66f975a" (UID: "19092466-f644-4374-99fc-4fc1a66f975a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:11:01.749503 kubelet[3325]: I0706 23:11:01.749476 3325 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "19092466-f644-4374-99fc-4fc1a66f975a" (UID: "19092466-f644-4374-99fc-4fc1a66f975a"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:11:01.752142 kubelet[3325]: I0706 23:11:01.752100 3325 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8115a4b2-1b6c-4dad-906e-41ac10c2857d-kube-api-access-5tpz9" (OuterVolumeSpecName: "kube-api-access-5tpz9") pod "8115a4b2-1b6c-4dad-906e-41ac10c2857d" (UID: "8115a4b2-1b6c-4dad-906e-41ac10c2857d"). InnerVolumeSpecName "kube-api-access-5tpz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:11:01.752250 kubelet[3325]: I0706 23:11:01.752155 3325 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19092466-f644-4374-99fc-4fc1a66f975a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "19092466-f644-4374-99fc-4fc1a66f975a" (UID: "19092466-f644-4374-99fc-4fc1a66f975a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:11:01.752727 kubelet[3325]: I0706 23:11:01.752689 3325 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19092466-f644-4374-99fc-4fc1a66f975a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "19092466-f644-4374-99fc-4fc1a66f975a" (UID: "19092466-f644-4374-99fc-4fc1a66f975a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 6 23:11:01.752833 kubelet[3325]: I0706 23:11:01.752809 3325 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19092466-f644-4374-99fc-4fc1a66f975a-kube-api-access-dls29" (OuterVolumeSpecName: "kube-api-access-dls29") pod "19092466-f644-4374-99fc-4fc1a66f975a" (UID: "19092466-f644-4374-99fc-4fc1a66f975a"). InnerVolumeSpecName "kube-api-access-dls29". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:11:01.845336 kubelet[3325]: I0706 23:11:01.845204 3325 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-cni-path\") on node \"ci-4230.2.1-a-dc1fa1989d\" DevicePath \"\"" Jul 6 23:11:01.845336 kubelet[3325]: I0706 23:11:01.845246 3325 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19092466-f644-4374-99fc-4fc1a66f975a-cilium-config-path\") on node \"ci-4230.2.1-a-dc1fa1989d\" DevicePath \"\"" Jul 6 23:11:01.845336 kubelet[3325]: I0706 23:11:01.845258 3325 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-bpf-maps\") on node \"ci-4230.2.1-a-dc1fa1989d\" DevicePath \"\"" Jul 6 23:11:01.845336 kubelet[3325]: I0706 23:11:01.845267 3325 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-lib-modules\") on node \"ci-4230.2.1-a-dc1fa1989d\" DevicePath \"\"" Jul 6 23:11:01.845336 kubelet[3325]: I0706 23:11:01.845275 3325 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/19092466-f644-4374-99fc-4fc1a66f975a-clustermesh-secrets\") on node \"ci-4230.2.1-a-dc1fa1989d\" DevicePath \"\"" Jul 6 23:11:01.845336 kubelet[3325]: I0706 23:11:01.845283 3325 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-etc-cni-netd\") on node \"ci-4230.2.1-a-dc1fa1989d\" DevicePath \"\"" Jul 6 23:11:01.845336 kubelet[3325]: I0706 23:11:01.845291 3325 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-host-proc-sys-net\") on node 
\"ci-4230.2.1-a-dc1fa1989d\" DevicePath \"\"" Jul 6 23:11:01.845336 kubelet[3325]: I0706 23:11:01.845299 3325 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-hostproc\") on node \"ci-4230.2.1-a-dc1fa1989d\" DevicePath \"\"" Jul 6 23:11:01.845601 kubelet[3325]: I0706 23:11:01.845307 3325 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-cilium-run\") on node \"ci-4230.2.1-a-dc1fa1989d\" DevicePath \"\"" Jul 6 23:11:01.845601 kubelet[3325]: I0706 23:11:01.845315 3325 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-host-proc-sys-kernel\") on node \"ci-4230.2.1-a-dc1fa1989d\" DevicePath \"\"" Jul 6 23:11:01.845601 kubelet[3325]: I0706 23:11:01.845324 3325 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8115a4b2-1b6c-4dad-906e-41ac10c2857d-cilium-config-path\") on node \"ci-4230.2.1-a-dc1fa1989d\" DevicePath \"\"" Jul 6 23:11:01.845601 kubelet[3325]: I0706 23:11:01.845332 3325 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-xtables-lock\") on node \"ci-4230.2.1-a-dc1fa1989d\" DevicePath \"\"" Jul 6 23:11:01.845601 kubelet[3325]: I0706 23:11:01.845340 3325 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dls29\" (UniqueName: \"kubernetes.io/projected/19092466-f644-4374-99fc-4fc1a66f975a-kube-api-access-dls29\") on node \"ci-4230.2.1-a-dc1fa1989d\" DevicePath \"\"" Jul 6 23:11:01.845601 kubelet[3325]: I0706 23:11:01.845348 3325 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5tpz9\" (UniqueName: 
\"kubernetes.io/projected/8115a4b2-1b6c-4dad-906e-41ac10c2857d-kube-api-access-5tpz9\") on node \"ci-4230.2.1-a-dc1fa1989d\" DevicePath \"\"" Jul 6 23:11:01.845601 kubelet[3325]: I0706 23:11:01.845356 3325 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/19092466-f644-4374-99fc-4fc1a66f975a-hubble-tls\") on node \"ci-4230.2.1-a-dc1fa1989d\" DevicePath \"\"" Jul 6 23:11:01.845601 kubelet[3325]: I0706 23:11:01.845366 3325 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/19092466-f644-4374-99fc-4fc1a66f975a-cilium-cgroup\") on node \"ci-4230.2.1-a-dc1fa1989d\" DevicePath \"\"" Jul 6 23:11:02.316264 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2169c37e7adefb2cd57c6340629b8975e9e7a7027cdbde6cc1436de1a66db03d-rootfs.mount: Deactivated successfully. Jul 6 23:11:02.316374 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2169c37e7adefb2cd57c6340629b8975e9e7a7027cdbde6cc1436de1a66db03d-shm.mount: Deactivated successfully. Jul 6 23:11:02.316433 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1161092f42f9af3e53d22cfd700290d5605941ff415521c8f0897f1f92e13ea-rootfs.mount: Deactivated successfully. Jul 6 23:11:02.316483 systemd[1]: var-lib-kubelet-pods-8115a4b2\x2d1b6c\x2d4dad\x2d906e\x2d41ac10c2857d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5tpz9.mount: Deactivated successfully. Jul 6 23:11:02.316533 systemd[1]: var-lib-kubelet-pods-19092466\x2df644\x2d4374\x2d99fc\x2d4fc1a66f975a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddls29.mount: Deactivated successfully. Jul 6 23:11:02.316580 systemd[1]: var-lib-kubelet-pods-19092466\x2df644\x2d4374\x2d99fc\x2d4fc1a66f975a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 6 23:11:02.316629 systemd[1]: var-lib-kubelet-pods-19092466\x2df644\x2d4374\x2d99fc\x2d4fc1a66f975a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 6 23:11:02.357068 kubelet[3325]: I0706 23:11:02.357033 3325 scope.go:117] "RemoveContainer" containerID="313ec08a20dc415fb018d325b03e57894001bb3eaed53b7c793c370a4877ea1f"
Jul 6 23:11:02.362453 containerd[1752]: time="2025-07-06T23:11:02.362125881Z" level=info msg="RemoveContainer for \"313ec08a20dc415fb018d325b03e57894001bb3eaed53b7c793c370a4877ea1f\""
Jul 6 23:11:02.364067 systemd[1]: Removed slice kubepods-besteffort-pod8115a4b2_1b6c_4dad_906e_41ac10c2857d.slice - libcontainer container kubepods-besteffort-pod8115a4b2_1b6c_4dad_906e_41ac10c2857d.slice.
Jul 6 23:11:02.370332 systemd[1]: Removed slice kubepods-burstable-pod19092466_f644_4374_99fc_4fc1a66f975a.slice - libcontainer container kubepods-burstable-pod19092466_f644_4374_99fc_4fc1a66f975a.slice.
Jul 6 23:11:02.370430 systemd[1]: kubepods-burstable-pod19092466_f644_4374_99fc_4fc1a66f975a.slice: Consumed 6.725s CPU time, 125.3M memory peak, 136K read from disk, 12.9M written to disk.
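The mount unit names above use systemd's unit-name escaping: "/" in a path becomes "-", while a literal "-" or "~" is encoded as \x2d or \x7e. A minimal decoder sketch (assuming that standard escaping, not part of the log):

```python
def unescape_unit(name):
    """Decode a systemd-escaped mount unit name back into a filesystem path."""
    name = name.removesuffix(".mount")
    out, i = [], 0
    while i < len(name):
        if name.startswith("\\x", i) and i + 4 <= len(name):
            out.append(chr(int(name[i + 2:i + 4], 16)))  # \xHH encodes one byte
            i += 4
        else:
            out.append("/" if name[i] == "-" else name[i])  # plain "-" separates path parts
            i += 1
    return "/" + "".join(out)
```

This is the inverse of what `systemd-escape --path` produces; the real tool also handles a leading dot and other corner cases.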
Jul 6 23:11:02.377113 containerd[1752]: time="2025-07-06T23:11:02.376872890Z" level=info msg="RemoveContainer for \"313ec08a20dc415fb018d325b03e57894001bb3eaed53b7c793c370a4877ea1f\" returns successfully"
Jul 6 23:11:02.378038 kubelet[3325]: I0706 23:11:02.377444 3325 scope.go:117] "RemoveContainer" containerID="313ec08a20dc415fb018d325b03e57894001bb3eaed53b7c793c370a4877ea1f"
Jul 6 23:11:02.378814 containerd[1752]: time="2025-07-06T23:11:02.378405971Z" level=error msg="ContainerStatus for \"313ec08a20dc415fb018d325b03e57894001bb3eaed53b7c793c370a4877ea1f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"313ec08a20dc415fb018d325b03e57894001bb3eaed53b7c793c370a4877ea1f\": not found"
Jul 6 23:11:02.378985 kubelet[3325]: E0706 23:11:02.378579 3325 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"313ec08a20dc415fb018d325b03e57894001bb3eaed53b7c793c370a4877ea1f\": not found" containerID="313ec08a20dc415fb018d325b03e57894001bb3eaed53b7c793c370a4877ea1f"
Jul 6 23:11:02.378985 kubelet[3325]: I0706 23:11:02.378610 3325 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"313ec08a20dc415fb018d325b03e57894001bb3eaed53b7c793c370a4877ea1f"} err="failed to get container status \"313ec08a20dc415fb018d325b03e57894001bb3eaed53b7c793c370a4877ea1f\": rpc error: code = NotFound desc = an error occurred when try to find container \"313ec08a20dc415fb018d325b03e57894001bb3eaed53b7c793c370a4877ea1f\": not found"
Jul 6 23:11:02.378985 kubelet[3325]: I0706 23:11:02.378646 3325 scope.go:117] "RemoveContainer" containerID="bdc4dd5c6583eb088ba1c4916119fe142366221d6dd9a88e436df1936deea4da"
Jul 6 23:11:02.381564 containerd[1752]: time="2025-07-06T23:11:02.381260173Z" level=info msg="RemoveContainer for \"bdc4dd5c6583eb088ba1c4916119fe142366221d6dd9a88e436df1936deea4da\""
Jul 6 23:11:02.389055 containerd[1752]: time="2025-07-06T23:11:02.389002537Z" level=info msg="RemoveContainer for \"bdc4dd5c6583eb088ba1c4916119fe142366221d6dd9a88e436df1936deea4da\" returns successfully"
Jul 6 23:11:02.389326 kubelet[3325]: I0706 23:11:02.389249 3325 scope.go:117] "RemoveContainer" containerID="5a0dbb38f34cb8edf36a186d45b2f33d70598004fbdf38d4d32a57da8187ad2e"
Jul 6 23:11:02.391527 containerd[1752]: time="2025-07-06T23:11:02.391474619Z" level=info msg="RemoveContainer for \"5a0dbb38f34cb8edf36a186d45b2f33d70598004fbdf38d4d32a57da8187ad2e\""
Jul 6 23:11:02.402175 containerd[1752]: time="2025-07-06T23:11:02.402127265Z" level=info msg="RemoveContainer for \"5a0dbb38f34cb8edf36a186d45b2f33d70598004fbdf38d4d32a57da8187ad2e\" returns successfully"
Jul 6 23:11:02.402475 kubelet[3325]: I0706 23:11:02.402365 3325 scope.go:117] "RemoveContainer" containerID="47bb187476874e7ea0c29c4283b39eac1141dac710af00e15f7cea6ca948cb8f"
Jul 6 23:11:02.403718 containerd[1752]: time="2025-07-06T23:11:02.403653666Z" level=info msg="RemoveContainer for \"47bb187476874e7ea0c29c4283b39eac1141dac710af00e15f7cea6ca948cb8f\""
Jul 6 23:11:02.416551 containerd[1752]: time="2025-07-06T23:11:02.416503434Z" level=info msg="RemoveContainer for \"47bb187476874e7ea0c29c4283b39eac1141dac710af00e15f7cea6ca948cb8f\" returns successfully"
Jul 6 23:11:02.416813 kubelet[3325]: I0706 23:11:02.416784 3325 scope.go:117] "RemoveContainer" containerID="2f185aa95074c4bbfeef038afc414ecdb31c61fb2d9e537e47d4c94d4ff8a007"
Jul 6 23:11:02.418106 containerd[1752]: time="2025-07-06T23:11:02.418001074Z" level=info msg="RemoveContainer for \"2f185aa95074c4bbfeef038afc414ecdb31c61fb2d9e537e47d4c94d4ff8a007\""
Jul 6 23:11:02.430037 containerd[1752]: time="2025-07-06T23:11:02.429814481Z" level=info msg="RemoveContainer for \"2f185aa95074c4bbfeef038afc414ecdb31c61fb2d9e537e47d4c94d4ff8a007\" returns successfully"
Jul 6 23:11:02.430199 kubelet[3325]: I0706 23:11:02.430103 3325 scope.go:117] "RemoveContainer" containerID="13e52a18c6c199508851cb53e29ae9c6101a554c20338381d62a72e9f534f65a"
Jul 6 23:11:02.431515 containerd[1752]: time="2025-07-06T23:11:02.431244362Z" level=info msg="RemoveContainer for \"13e52a18c6c199508851cb53e29ae9c6101a554c20338381d62a72e9f534f65a\""
Jul 6 23:11:02.440234 containerd[1752]: time="2025-07-06T23:11:02.440194728Z" level=info msg="RemoveContainer for \"13e52a18c6c199508851cb53e29ae9c6101a554c20338381d62a72e9f534f65a\" returns successfully"
Jul 6 23:11:02.440688 kubelet[3325]: I0706 23:11:02.440657 3325 scope.go:117] "RemoveContainer" containerID="bdc4dd5c6583eb088ba1c4916119fe142366221d6dd9a88e436df1936deea4da"
Jul 6 23:11:02.441172 containerd[1752]: time="2025-07-06T23:11:02.441128288Z" level=error msg="ContainerStatus for \"bdc4dd5c6583eb088ba1c4916119fe142366221d6dd9a88e436df1936deea4da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bdc4dd5c6583eb088ba1c4916119fe142366221d6dd9a88e436df1936deea4da\": not found"
Jul 6 23:11:02.441319 kubelet[3325]: E0706 23:11:02.441278 3325 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bdc4dd5c6583eb088ba1c4916119fe142366221d6dd9a88e436df1936deea4da\": not found" containerID="bdc4dd5c6583eb088ba1c4916119fe142366221d6dd9a88e436df1936deea4da"
Jul 6 23:11:02.441361 kubelet[3325]: I0706 23:11:02.441311 3325 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bdc4dd5c6583eb088ba1c4916119fe142366221d6dd9a88e436df1936deea4da"} err="failed to get container status \"bdc4dd5c6583eb088ba1c4916119fe142366221d6dd9a88e436df1936deea4da\": rpc error: code = NotFound desc = an error occurred when try to find container \"bdc4dd5c6583eb088ba1c4916119fe142366221d6dd9a88e436df1936deea4da\": not found"
Jul 6 23:11:02.441361 kubelet[3325]: I0706 23:11:02.441332 3325 scope.go:117] "RemoveContainer" containerID="5a0dbb38f34cb8edf36a186d45b2f33d70598004fbdf38d4d32a57da8187ad2e"
Jul 6 23:11:02.441659 kubelet[3325]: E0706 23:11:02.441642 3325 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5a0dbb38f34cb8edf36a186d45b2f33d70598004fbdf38d4d32a57da8187ad2e\": not found" containerID="5a0dbb38f34cb8edf36a186d45b2f33d70598004fbdf38d4d32a57da8187ad2e"
Jul 6 23:11:02.441699 containerd[1752]: time="2025-07-06T23:11:02.441530008Z" level=error msg="ContainerStatus for \"5a0dbb38f34cb8edf36a186d45b2f33d70598004fbdf38d4d32a57da8187ad2e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5a0dbb38f34cb8edf36a186d45b2f33d70598004fbdf38d4d32a57da8187ad2e\": not found"
Jul 6 23:11:02.441730 kubelet[3325]: I0706 23:11:02.441660 3325 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5a0dbb38f34cb8edf36a186d45b2f33d70598004fbdf38d4d32a57da8187ad2e"} err="failed to get container status \"5a0dbb38f34cb8edf36a186d45b2f33d70598004fbdf38d4d32a57da8187ad2e\": rpc error: code = NotFound desc = an error occurred when try to find container \"5a0dbb38f34cb8edf36a186d45b2f33d70598004fbdf38d4d32a57da8187ad2e\": not found"
Jul 6 23:11:02.441730 kubelet[3325]: I0706 23:11:02.441674 3325 scope.go:117] "RemoveContainer" containerID="47bb187476874e7ea0c29c4283b39eac1141dac710af00e15f7cea6ca948cb8f"
Jul 6 23:11:02.442064 containerd[1752]: time="2025-07-06T23:11:02.441980089Z" level=error msg="ContainerStatus for \"47bb187476874e7ea0c29c4283b39eac1141dac710af00e15f7cea6ca948cb8f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"47bb187476874e7ea0c29c4283b39eac1141dac710af00e15f7cea6ca948cb8f\": not found"
Jul 6 23:11:02.442156 kubelet[3325]: E0706 23:11:02.442117 3325 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"47bb187476874e7ea0c29c4283b39eac1141dac710af00e15f7cea6ca948cb8f\": not found" containerID="47bb187476874e7ea0c29c4283b39eac1141dac710af00e15f7cea6ca948cb8f"
Jul 6 23:11:02.442156 kubelet[3325]: I0706 23:11:02.442137 3325 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"47bb187476874e7ea0c29c4283b39eac1141dac710af00e15f7cea6ca948cb8f"} err="failed to get container status \"47bb187476874e7ea0c29c4283b39eac1141dac710af00e15f7cea6ca948cb8f\": rpc error: code = NotFound desc = an error occurred when try to find container \"47bb187476874e7ea0c29c4283b39eac1141dac710af00e15f7cea6ca948cb8f\": not found"
Jul 6 23:11:02.442156 kubelet[3325]: I0706 23:11:02.442151 3325 scope.go:117] "RemoveContainer" containerID="2f185aa95074c4bbfeef038afc414ecdb31c61fb2d9e537e47d4c94d4ff8a007"
Jul 6 23:11:02.442495 containerd[1752]: time="2025-07-06T23:11:02.442408169Z" level=error msg="ContainerStatus for \"2f185aa95074c4bbfeef038afc414ecdb31c61fb2d9e537e47d4c94d4ff8a007\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2f185aa95074c4bbfeef038afc414ecdb31c61fb2d9e537e47d4c94d4ff8a007\": not found"
Jul 6 23:11:02.442548 kubelet[3325]: E0706 23:11:02.442530 3325 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2f185aa95074c4bbfeef038afc414ecdb31c61fb2d9e537e47d4c94d4ff8a007\": not found" containerID="2f185aa95074c4bbfeef038afc414ecdb31c61fb2d9e537e47d4c94d4ff8a007"
Jul 6 23:11:02.442615 kubelet[3325]: I0706 23:11:02.442547 3325 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2f185aa95074c4bbfeef038afc414ecdb31c61fb2d9e537e47d4c94d4ff8a007"} err="failed to get container status \"2f185aa95074c4bbfeef038afc414ecdb31c61fb2d9e537e47d4c94d4ff8a007\": rpc error: code = NotFound desc = an error occurred when try to find container \"2f185aa95074c4bbfeef038afc414ecdb31c61fb2d9e537e47d4c94d4ff8a007\": not found"
Jul 6 23:11:02.442685 kubelet[3325]: I0706 23:11:02.442614 3325 scope.go:117] "RemoveContainer" containerID="13e52a18c6c199508851cb53e29ae9c6101a554c20338381d62a72e9f534f65a"
Jul 6 23:11:02.442835 containerd[1752]: time="2025-07-06T23:11:02.442787169Z" level=error msg="ContainerStatus for \"13e52a18c6c199508851cb53e29ae9c6101a554c20338381d62a72e9f534f65a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"13e52a18c6c199508851cb53e29ae9c6101a554c20338381d62a72e9f534f65a\": not found"
Jul 6 23:11:02.442970 kubelet[3325]: E0706 23:11:02.442916 3325 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"13e52a18c6c199508851cb53e29ae9c6101a554c20338381d62a72e9f534f65a\": not found" containerID="13e52a18c6c199508851cb53e29ae9c6101a554c20338381d62a72e9f534f65a"
Jul 6 23:11:02.443042 kubelet[3325]: I0706 23:11:02.442972 3325 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"13e52a18c6c199508851cb53e29ae9c6101a554c20338381d62a72e9f534f65a"} err="failed to get container status \"13e52a18c6c199508851cb53e29ae9c6101a554c20338381d62a72e9f534f65a\": rpc error: code = NotFound desc = an error occurred when try to find container \"13e52a18c6c199508851cb53e29ae9c6101a554c20338381d62a72e9f534f65a\": not found"
Jul 6 23:11:02.957106 kubelet[3325]: I0706 23:11:02.957002 3325 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19092466-f644-4374-99fc-4fc1a66f975a" path="/var/lib/kubelet/pods/19092466-f644-4374-99fc-4fc1a66f975a/volumes"
Jul 6 23:11:02.958155 kubelet[3325]: I0706 23:11:02.958134 3325 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8115a4b2-1b6c-4dad-906e-41ac10c2857d" path="/var/lib/kubelet/pods/8115a4b2-1b6c-4dad-906e-41ac10c2857d/volumes"
Jul 6 23:11:03.295063 sshd[4969]: Connection closed by 10.200.16.10 port 49820
Jul 6 23:11:03.296103 sshd-session[4967]: pam_unix(sshd:session): session closed for user core
Jul 6 23:11:03.299027 systemd[1]: sshd@24-10.200.20.36:22-10.200.16.10:49820.service: Deactivated successfully.
Jul 6 23:11:03.301262 systemd[1]: session-27.scope: Deactivated successfully.
Jul 6 23:11:03.301632 systemd[1]: session-27.scope: Consumed 1.129s CPU time, 23.6M memory peak.
Jul 6 23:11:03.303098 systemd-logind[1708]: Session 27 logged out. Waiting for processes to exit.
Jul 6 23:11:03.304376 systemd-logind[1708]: Removed session 27.
Jul 6 23:11:03.403289 systemd[1]: Started sshd@25-10.200.20.36:22-10.200.16.10:36912.service - OpenSSH per-connection server daemon (10.200.16.10:36912).
Jul 6 23:11:03.876746 sshd[5136]: Accepted publickey for core from 10.200.16.10 port 36912 ssh2: RSA SHA256:BYvOZLTfueOxq93dYKJbaYxARQqOBHJqeUMgtMpy+gQ
Jul 6 23:11:03.880452 sshd-session[5136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:11:03.885831 systemd-logind[1708]: New session 28 of user core.
Jul 6 23:11:03.888163 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 6 23:11:05.064844 kubelet[3325]: E0706 23:11:05.064801 3325 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 6 23:11:06.680669 systemd[1]: Created slice kubepods-burstable-podc5e5b0ba_9839_41a5_9613_321e8a4717f7.slice - libcontainer container kubepods-burstable-podc5e5b0ba_9839_41a5_9613_321e8a4717f7.slice.
Jul 6 23:11:06.715893 sshd[5138]: Connection closed by 10.200.16.10 port 36912
Jul 6 23:11:06.716288 sshd-session[5136]: pam_unix(sshd:session): session closed for user core
Jul 6 23:11:06.721367 systemd-logind[1708]: Session 28 logged out. Waiting for processes to exit.
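The repeated "ContainerStatus ... NotFound" errors above are a benign pattern: kubelet re-queries status for container IDs whose containers it has already removed, and containerd answers with gRPC code NotFound. A small sketch (not part of the log) that collects the 64-hex container IDs reported this way, e.g. to de-duplicate them when triaging:

```python
import re

# Matches the escaped error text seen above:
#   rpc error: code = NotFound ... find container \"<64-hex id>\": not found
NOT_FOUND_RE = re.compile(
    r'code = NotFound .*?find container \\?"(?P<cid>[0-9a-f]{64})\\?"'
)

def notfound_ids(lines):
    """Set of container IDs reported as NotFound across the given log lines."""
    ids = set()
    for line in lines:
        m = NOT_FOUND_RE.search(line)
        if m:
            ids.add(m.group("cid"))
    return ids
```

If every ID in this set also has an earlier "RemoveContainer ... returns successfully" entry, the errors can safely be ignored.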
Jul 6 23:11:06.723152 systemd[1]: sshd@25-10.200.20.36:22-10.200.16.10:36912.service: Deactivated successfully.
Jul 6 23:11:06.727166 systemd[1]: session-28.scope: Deactivated successfully.
Jul 6 23:11:06.727624 systemd[1]: session-28.scope: Consumed 2.408s CPU time, 27.5M memory peak.
Jul 6 23:11:06.729805 systemd-logind[1708]: Removed session 28.
Jul 6 23:11:06.773414 kubelet[3325]: I0706 23:11:06.773299 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5e5b0ba-9839-41a5-9613-321e8a4717f7-xtables-lock\") pod \"cilium-hzvm4\" (UID: \"c5e5b0ba-9839-41a5-9613-321e8a4717f7\") " pod="kube-system/cilium-hzvm4"
Jul 6 23:11:06.773414 kubelet[3325]: I0706 23:11:06.773340 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c5e5b0ba-9839-41a5-9613-321e8a4717f7-clustermesh-secrets\") pod \"cilium-hzvm4\" (UID: \"c5e5b0ba-9839-41a5-9613-321e8a4717f7\") " pod="kube-system/cilium-hzvm4"
Jul 6 23:11:06.773414 kubelet[3325]: I0706 23:11:06.773360 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c5e5b0ba-9839-41a5-9613-321e8a4717f7-cilium-ipsec-secrets\") pod \"cilium-hzvm4\" (UID: \"c5e5b0ba-9839-41a5-9613-321e8a4717f7\") " pod="kube-system/cilium-hzvm4"
Jul 6 23:11:06.773414 kubelet[3325]: I0706 23:11:06.773375 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c5e5b0ba-9839-41a5-9613-321e8a4717f7-hubble-tls\") pod \"cilium-hzvm4\" (UID: \"c5e5b0ba-9839-41a5-9613-321e8a4717f7\") " pod="kube-system/cilium-hzvm4"
Jul 6 23:11:06.773964 kubelet[3325]: I0706 23:11:06.773899 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c5e5b0ba-9839-41a5-9613-321e8a4717f7-cilium-run\") pod \"cilium-hzvm4\" (UID: \"c5e5b0ba-9839-41a5-9613-321e8a4717f7\") " pod="kube-system/cilium-hzvm4"
Jul 6 23:11:06.774206 kubelet[3325]: I0706 23:11:06.773973 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c5e5b0ba-9839-41a5-9613-321e8a4717f7-bpf-maps\") pod \"cilium-hzvm4\" (UID: \"c5e5b0ba-9839-41a5-9613-321e8a4717f7\") " pod="kube-system/cilium-hzvm4"
Jul 6 23:11:06.774206 kubelet[3325]: I0706 23:11:06.774023 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c5e5b0ba-9839-41a5-9613-321e8a4717f7-cilium-config-path\") pod \"cilium-hzvm4\" (UID: \"c5e5b0ba-9839-41a5-9613-321e8a4717f7\") " pod="kube-system/cilium-hzvm4"
Jul 6 23:11:06.774206 kubelet[3325]: I0706 23:11:06.774071 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5e5b0ba-9839-41a5-9613-321e8a4717f7-lib-modules\") pod \"cilium-hzvm4\" (UID: \"c5e5b0ba-9839-41a5-9613-321e8a4717f7\") " pod="kube-system/cilium-hzvm4"
Jul 6 23:11:06.774206 kubelet[3325]: I0706 23:11:06.774101 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c5e5b0ba-9839-41a5-9613-321e8a4717f7-host-proc-sys-net\") pod \"cilium-hzvm4\" (UID: \"c5e5b0ba-9839-41a5-9613-321e8a4717f7\") " pod="kube-system/cilium-hzvm4"
Jul 6 23:11:06.774206 kubelet[3325]: I0706 23:11:06.774159 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c5e5b0ba-9839-41a5-9613-321e8a4717f7-cilium-cgroup\") pod \"cilium-hzvm4\" (UID: \"c5e5b0ba-9839-41a5-9613-321e8a4717f7\") " pod="kube-system/cilium-hzvm4"
Jul 6 23:11:06.774206 kubelet[3325]: I0706 23:11:06.774204 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c5e5b0ba-9839-41a5-9613-321e8a4717f7-host-proc-sys-kernel\") pod \"cilium-hzvm4\" (UID: \"c5e5b0ba-9839-41a5-9613-321e8a4717f7\") " pod="kube-system/cilium-hzvm4"
Jul 6 23:11:06.774438 kubelet[3325]: I0706 23:11:06.774223 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glpsb\" (UniqueName: \"kubernetes.io/projected/c5e5b0ba-9839-41a5-9613-321e8a4717f7-kube-api-access-glpsb\") pod \"cilium-hzvm4\" (UID: \"c5e5b0ba-9839-41a5-9613-321e8a4717f7\") " pod="kube-system/cilium-hzvm4"
Jul 6 23:11:06.774438 kubelet[3325]: I0706 23:11:06.774262 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c5e5b0ba-9839-41a5-9613-321e8a4717f7-hostproc\") pod \"cilium-hzvm4\" (UID: \"c5e5b0ba-9839-41a5-9613-321e8a4717f7\") " pod="kube-system/cilium-hzvm4"
Jul 6 23:11:06.774438 kubelet[3325]: I0706 23:11:06.774297 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c5e5b0ba-9839-41a5-9613-321e8a4717f7-cni-path\") pod \"cilium-hzvm4\" (UID: \"c5e5b0ba-9839-41a5-9613-321e8a4717f7\") " pod="kube-system/cilium-hzvm4"
Jul 6 23:11:06.774438 kubelet[3325]: I0706 23:11:06.774312 3325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c5e5b0ba-9839-41a5-9613-321e8a4717f7-etc-cni-netd\") pod \"cilium-hzvm4\" (UID: \"c5e5b0ba-9839-41a5-9613-321e8a4717f7\") " pod="kube-system/cilium-hzvm4"
Jul 6 23:11:06.809254 systemd[1]:
Started sshd@26-10.200.20.36:22-10.200.16.10:36918.service - OpenSSH per-connection server daemon (10.200.16.10:36918). Jul 6 23:11:06.986960 containerd[1752]: time="2025-07-06T23:11:06.986598638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hzvm4,Uid:c5e5b0ba-9839-41a5-9613-321e8a4717f7,Namespace:kube-system,Attempt:0,}" Jul 6 23:11:07.026600 containerd[1752]: time="2025-07-06T23:11:07.026210382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:11:07.026600 containerd[1752]: time="2025-07-06T23:11:07.026270422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:11:07.026600 containerd[1752]: time="2025-07-06T23:11:07.026286582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:11:07.026600 containerd[1752]: time="2025-07-06T23:11:07.026360822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:11:07.052101 systemd[1]: Started cri-containerd-9faffcb1899c7422b768a6d8c212f4570c3032d2ffecb1258b553bf37378897f.scope - libcontainer container 9faffcb1899c7422b768a6d8c212f4570c3032d2ffecb1258b553bf37378897f. 
Jul 6 23:11:07.074577 containerd[1752]: time="2025-07-06T23:11:07.074376650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hzvm4,Uid:c5e5b0ba-9839-41a5-9613-321e8a4717f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"9faffcb1899c7422b768a6d8c212f4570c3032d2ffecb1258b553bf37378897f\"" Jul 6 23:11:07.085814 containerd[1752]: time="2025-07-06T23:11:07.085621577Z" level=info msg="CreateContainer within sandbox \"9faffcb1899c7422b768a6d8c212f4570c3032d2ffecb1258b553bf37378897f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 6 23:11:07.126311 containerd[1752]: time="2025-07-06T23:11:07.126264041Z" level=info msg="CreateContainer within sandbox \"9faffcb1899c7422b768a6d8c212f4570c3032d2ffecb1258b553bf37378897f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6a33b3b443886f00068dab5db3a4abd8a45d40c7d3c9411312be22d5a096264f\"" Jul 6 23:11:07.127135 containerd[1752]: time="2025-07-06T23:11:07.127104402Z" level=info msg="StartContainer for \"6a33b3b443886f00068dab5db3a4abd8a45d40c7d3c9411312be22d5a096264f\"" Jul 6 23:11:07.155854 systemd[1]: Started cri-containerd-6a33b3b443886f00068dab5db3a4abd8a45d40c7d3c9411312be22d5a096264f.scope - libcontainer container 6a33b3b443886f00068dab5db3a4abd8a45d40c7d3c9411312be22d5a096264f. Jul 6 23:11:07.191900 containerd[1752]: time="2025-07-06T23:11:07.191230960Z" level=info msg="StartContainer for \"6a33b3b443886f00068dab5db3a4abd8a45d40c7d3c9411312be22d5a096264f\" returns successfully" Jul 6 23:11:07.199395 systemd[1]: cri-containerd-6a33b3b443886f00068dab5db3a4abd8a45d40c7d3c9411312be22d5a096264f.scope: Deactivated successfully. 
Jul 6 23:11:07.286357 containerd[1752]: time="2025-07-06T23:11:07.286054177Z" level=info msg="shim disconnected" id=6a33b3b443886f00068dab5db3a4abd8a45d40c7d3c9411312be22d5a096264f namespace=k8s.io Jul 6 23:11:07.286357 containerd[1752]: time="2025-07-06T23:11:07.286110857Z" level=warning msg="cleaning up after shim disconnected" id=6a33b3b443886f00068dab5db3a4abd8a45d40c7d3c9411312be22d5a096264f namespace=k8s.io Jul 6 23:11:07.286357 containerd[1752]: time="2025-07-06T23:11:07.286120177Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:11:07.302717 sshd[5148]: Accepted publickey for core from 10.200.16.10 port 36918 ssh2: RSA SHA256:BYvOZLTfueOxq93dYKJbaYxARQqOBHJqeUMgtMpy+gQ Jul 6 23:11:07.304570 sshd-session[5148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:11:07.309354 systemd-logind[1708]: New session 29 of user core. Jul 6 23:11:07.316130 systemd[1]: Started session-29.scope - Session 29 of User core. Jul 6 23:11:07.391762 containerd[1752]: time="2025-07-06T23:11:07.391570560Z" level=info msg="CreateContainer within sandbox \"9faffcb1899c7422b768a6d8c212f4570c3032d2ffecb1258b553bf37378897f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 6 23:11:07.427522 containerd[1752]: time="2025-07-06T23:11:07.427346221Z" level=info msg="CreateContainer within sandbox \"9faffcb1899c7422b768a6d8c212f4570c3032d2ffecb1258b553bf37378897f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b4607988051f5250e421d387f07fc1ba9ae7bac2d9cbc2f86fe43164ef192dbc\"" Jul 6 23:11:07.429432 containerd[1752]: time="2025-07-06T23:11:07.429366422Z" level=info msg="StartContainer for \"b4607988051f5250e421d387f07fc1ba9ae7bac2d9cbc2f86fe43164ef192dbc\"" Jul 6 23:11:07.453156 systemd[1]: Started cri-containerd-b4607988051f5250e421d387f07fc1ba9ae7bac2d9cbc2f86fe43164ef192dbc.scope - libcontainer container 
b4607988051f5250e421d387f07fc1ba9ae7bac2d9cbc2f86fe43164ef192dbc. Jul 6 23:11:07.482379 containerd[1752]: time="2025-07-06T23:11:07.482231694Z" level=info msg="StartContainer for \"b4607988051f5250e421d387f07fc1ba9ae7bac2d9cbc2f86fe43164ef192dbc\" returns successfully" Jul 6 23:11:07.490096 systemd[1]: cri-containerd-b4607988051f5250e421d387f07fc1ba9ae7bac2d9cbc2f86fe43164ef192dbc.scope: Deactivated successfully. Jul 6 23:11:07.531816 containerd[1752]: time="2025-07-06T23:11:07.531481643Z" level=info msg="shim disconnected" id=b4607988051f5250e421d387f07fc1ba9ae7bac2d9cbc2f86fe43164ef192dbc namespace=k8s.io Jul 6 23:11:07.531816 containerd[1752]: time="2025-07-06T23:11:07.531625203Z" level=warning msg="cleaning up after shim disconnected" id=b4607988051f5250e421d387f07fc1ba9ae7bac2d9cbc2f86fe43164ef192dbc namespace=k8s.io Jul 6 23:11:07.531816 containerd[1752]: time="2025-07-06T23:11:07.531634563Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:11:07.657029 sshd[5259]: Connection closed by 10.200.16.10 port 36918 Jul 6 23:11:07.656273 sshd-session[5148]: pam_unix(sshd:session): session closed for user core Jul 6 23:11:07.660279 systemd[1]: sshd@26-10.200.20.36:22-10.200.16.10:36918.service: Deactivated successfully. Jul 6 23:11:07.662825 systemd[1]: session-29.scope: Deactivated successfully. Jul 6 23:11:07.664077 systemd-logind[1708]: Session 29 logged out. Waiting for processes to exit. Jul 6 23:11:07.664994 systemd-logind[1708]: Removed session 29. Jul 6 23:11:07.748261 systemd[1]: Started sshd@27-10.200.20.36:22-10.200.16.10:36926.service - OpenSSH per-connection server daemon (10.200.16.10:36926). Jul 6 23:11:08.224991 sshd[5326]: Accepted publickey for core from 10.200.16.10 port 36926 ssh2: RSA SHA256:BYvOZLTfueOxq93dYKJbaYxARQqOBHJqeUMgtMpy+gQ Jul 6 23:11:08.226329 sshd-session[5326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:11:08.231903 systemd-logind[1708]: New session 30 of user core. 
Jul 6 23:11:08.238408 systemd[1]: Started session-30.scope - Session 30 of User core. Jul 6 23:11:08.392121 containerd[1752]: time="2025-07-06T23:11:08.391862133Z" level=info msg="CreateContainer within sandbox \"9faffcb1899c7422b768a6d8c212f4570c3032d2ffecb1258b553bf37378897f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 6 23:11:08.435249 containerd[1752]: time="2025-07-06T23:11:08.435074880Z" level=info msg="CreateContainer within sandbox \"9faffcb1899c7422b768a6d8c212f4570c3032d2ffecb1258b553bf37378897f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"87a9ffde3eaad169a0f373cbfbb0eb2d49dcac944b2de7af7a5f49749f2e2b9c\"" Jul 6 23:11:08.436526 containerd[1752]: time="2025-07-06T23:11:08.435789521Z" level=info msg="StartContainer for \"87a9ffde3eaad169a0f373cbfbb0eb2d49dcac944b2de7af7a5f49749f2e2b9c\"" Jul 6 23:11:08.467262 systemd[1]: Started cri-containerd-87a9ffde3eaad169a0f373cbfbb0eb2d49dcac944b2de7af7a5f49749f2e2b9c.scope - libcontainer container 87a9ffde3eaad169a0f373cbfbb0eb2d49dcac944b2de7af7a5f49749f2e2b9c. Jul 6 23:11:08.508972 containerd[1752]: time="2025-07-06T23:11:08.508293686Z" level=info msg="StartContainer for \"87a9ffde3eaad169a0f373cbfbb0eb2d49dcac944b2de7af7a5f49749f2e2b9c\" returns successfully" Jul 6 23:11:08.512643 systemd[1]: cri-containerd-87a9ffde3eaad169a0f373cbfbb0eb2d49dcac944b2de7af7a5f49749f2e2b9c.scope: Deactivated successfully. 
Jul 6 23:11:08.571550 containerd[1752]: time="2025-07-06T23:11:08.571266326Z" level=info msg="shim disconnected" id=87a9ffde3eaad169a0f373cbfbb0eb2d49dcac944b2de7af7a5f49749f2e2b9c namespace=k8s.io Jul 6 23:11:08.571550 containerd[1752]: time="2025-07-06T23:11:08.571333926Z" level=warning msg="cleaning up after shim disconnected" id=87a9ffde3eaad169a0f373cbfbb0eb2d49dcac944b2de7af7a5f49749f2e2b9c namespace=k8s.io Jul 6 23:11:08.571550 containerd[1752]: time="2025-07-06T23:11:08.571342046Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:11:08.882585 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87a9ffde3eaad169a0f373cbfbb0eb2d49dcac944b2de7af7a5f49749f2e2b9c-rootfs.mount: Deactivated successfully. Jul 6 23:11:09.399721 containerd[1752]: time="2025-07-06T23:11:09.399647888Z" level=info msg="CreateContainer within sandbox \"9faffcb1899c7422b768a6d8c212f4570c3032d2ffecb1258b553bf37378897f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 6 23:11:09.440651 containerd[1752]: time="2025-07-06T23:11:09.440548953Z" level=info msg="CreateContainer within sandbox \"9faffcb1899c7422b768a6d8c212f4570c3032d2ffecb1258b553bf37378897f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cbd7f4a8c68f5f95fedc4f2c23272faa2968f932870aa954ca687454e9136b89\"" Jul 6 23:11:09.441735 containerd[1752]: time="2025-07-06T23:11:09.441549314Z" level=info msg="StartContainer for \"cbd7f4a8c68f5f95fedc4f2c23272faa2968f932870aa954ca687454e9136b89\"" Jul 6 23:11:09.473269 systemd[1]: Started cri-containerd-cbd7f4a8c68f5f95fedc4f2c23272faa2968f932870aa954ca687454e9136b89.scope - libcontainer container cbd7f4a8c68f5f95fedc4f2c23272faa2968f932870aa954ca687454e9136b89. Jul 6 23:11:09.503812 systemd[1]: cri-containerd-cbd7f4a8c68f5f95fedc4f2c23272faa2968f932870aa954ca687454e9136b89.scope: Deactivated successfully. 
Jul 6 23:11:09.509289 containerd[1752]: time="2025-07-06T23:11:09.509205317Z" level=info msg="StartContainer for \"cbd7f4a8c68f5f95fedc4f2c23272faa2968f932870aa954ca687454e9136b89\" returns successfully" Jul 6 23:11:09.547431 containerd[1752]: time="2025-07-06T23:11:09.547326941Z" level=info msg="shim disconnected" id=cbd7f4a8c68f5f95fedc4f2c23272faa2968f932870aa954ca687454e9136b89 namespace=k8s.io Jul 6 23:11:09.547431 containerd[1752]: time="2025-07-06T23:11:09.547380301Z" level=warning msg="cleaning up after shim disconnected" id=cbd7f4a8c68f5f95fedc4f2c23272faa2968f932870aa954ca687454e9136b89 namespace=k8s.io Jul 6 23:11:09.547431 containerd[1752]: time="2025-07-06T23:11:09.547393621Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:11:09.744605 kubelet[3325]: I0706 23:11:09.743456 3325 setters.go:618] "Node became not ready" node="ci-4230.2.1-a-dc1fa1989d" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-06T23:11:09Z","lastTransitionTime":"2025-07-06T23:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 6 23:11:09.882624 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbd7f4a8c68f5f95fedc4f2c23272faa2968f932870aa954ca687454e9136b89-rootfs.mount: Deactivated successfully. 
Jul 6 23:11:10.066721 kubelet[3325]: E0706 23:11:10.066586 3325 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 6 23:11:10.401447 containerd[1752]: time="2025-07-06T23:11:10.400996878Z" level=info msg="CreateContainer within sandbox \"9faffcb1899c7422b768a6d8c212f4570c3032d2ffecb1258b553bf37378897f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 6 23:11:10.453215 containerd[1752]: time="2025-07-06T23:11:10.453161471Z" level=info msg="CreateContainer within sandbox \"9faffcb1899c7422b768a6d8c212f4570c3032d2ffecb1258b553bf37378897f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5067768e2deab80283c7d36c38b7ca9cdd1d1bd28568e85ed69e14ebd8773e26\"" Jul 6 23:11:10.454234 containerd[1752]: time="2025-07-06T23:11:10.454196231Z" level=info msg="StartContainer for \"5067768e2deab80283c7d36c38b7ca9cdd1d1bd28568e85ed69e14ebd8773e26\"" Jul 6 23:11:10.483120 systemd[1]: Started cri-containerd-5067768e2deab80283c7d36c38b7ca9cdd1d1bd28568e85ed69e14ebd8773e26.scope - libcontainer container 5067768e2deab80283c7d36c38b7ca9cdd1d1bd28568e85ed69e14ebd8773e26. 
Jul 6 23:11:10.523821 containerd[1752]: time="2025-07-06T23:11:10.523757955Z" level=info msg="StartContainer for \"5067768e2deab80283c7d36c38b7ca9cdd1d1bd28568e85ed69e14ebd8773e26\" returns successfully" Jul 6 23:11:10.954547 kubelet[3325]: E0706 23:11:10.954218 3325 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-qhxt2" podUID="13a793ed-0264-4308-9465-5138130ea326" Jul 6 23:11:10.996978 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jul 6 23:11:11.421274 kubelet[3325]: I0706 23:11:11.421067 3325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hzvm4" podStartSLOduration=5.421045 podStartE2EDuration="5.421045s" podCreationTimestamp="2025-07-06 23:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:11:11.42058996 +0000 UTC m=+176.584154465" watchObservedRunningTime="2025-07-06 23:11:11.421045 +0000 UTC m=+176.584609465" Jul 6 23:11:12.955555 kubelet[3325]: E0706 23:11:12.954511 3325 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-qhxt2" podUID="13a793ed-0264-4308-9465-5138130ea326" Jul 6 23:11:13.947071 systemd-networkd[1607]: lxc_health: Link UP Jul 6 23:11:13.962109 systemd-networkd[1607]: lxc_health: Gained carrier Jul 6 23:11:14.956232 kubelet[3325]: E0706 23:11:14.956171 3325 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-qhxt2" podUID="13a793ed-0264-4308-9465-5138130ea326" Jul 6 23:11:14.963548 containerd[1752]: time="2025-07-06T23:11:14.963499270Z" level=info msg="StopPodSandbox for \"c1161092f42f9af3e53d22cfd700290d5605941ff415521c8f0897f1f92e13ea\"" Jul 6 23:11:14.968089 containerd[1752]: time="2025-07-06T23:11:14.963604270Z" level=info msg="TearDown network for sandbox \"c1161092f42f9af3e53d22cfd700290d5605941ff415521c8f0897f1f92e13ea\" successfully" Jul 6 23:11:14.968089 containerd[1752]: time="2025-07-06T23:11:14.963615270Z" level=info msg="StopPodSandbox for \"c1161092f42f9af3e53d22cfd700290d5605941ff415521c8f0897f1f92e13ea\" returns successfully" Jul 6 23:11:14.968089 containerd[1752]: time="2025-07-06T23:11:14.965424232Z" level=info msg="RemovePodSandbox for \"c1161092f42f9af3e53d22cfd700290d5605941ff415521c8f0897f1f92e13ea\"" Jul 6 23:11:14.968089 containerd[1752]: time="2025-07-06T23:11:14.965462392Z" level=info msg="Forcibly stopping sandbox \"c1161092f42f9af3e53d22cfd700290d5605941ff415521c8f0897f1f92e13ea\"" Jul 6 23:11:14.968089 containerd[1752]: time="2025-07-06T23:11:14.965521192Z" level=info msg="TearDown network for sandbox \"c1161092f42f9af3e53d22cfd700290d5605941ff415521c8f0897f1f92e13ea\" successfully" Jul 6 23:11:14.978028 containerd[1752]: time="2025-07-06T23:11:14.977690599Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c1161092f42f9af3e53d22cfd700290d5605941ff415521c8f0897f1f92e13ea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 6 23:11:14.978254 containerd[1752]: time="2025-07-06T23:11:14.978229440Z" level=info msg="RemovePodSandbox \"c1161092f42f9af3e53d22cfd700290d5605941ff415521c8f0897f1f92e13ea\" returns successfully" Jul 6 23:11:14.979190 containerd[1752]: time="2025-07-06T23:11:14.979140880Z" level=info msg="StopPodSandbox for \"2169c37e7adefb2cd57c6340629b8975e9e7a7027cdbde6cc1436de1a66db03d\"" Jul 6 23:11:14.979291 containerd[1752]: time="2025-07-06T23:11:14.979240560Z" level=info msg="TearDown network for sandbox \"2169c37e7adefb2cd57c6340629b8975e9e7a7027cdbde6cc1436de1a66db03d\" successfully" Jul 6 23:11:14.979291 containerd[1752]: time="2025-07-06T23:11:14.979253920Z" level=info msg="StopPodSandbox for \"2169c37e7adefb2cd57c6340629b8975e9e7a7027cdbde6cc1436de1a66db03d\" returns successfully" Jul 6 23:11:14.980352 containerd[1752]: time="2025-07-06T23:11:14.980311241Z" level=info msg="RemovePodSandbox for \"2169c37e7adefb2cd57c6340629b8975e9e7a7027cdbde6cc1436de1a66db03d\"" Jul 6 23:11:14.980352 containerd[1752]: time="2025-07-06T23:11:14.980347321Z" level=info msg="Forcibly stopping sandbox \"2169c37e7adefb2cd57c6340629b8975e9e7a7027cdbde6cc1436de1a66db03d\"" Jul 6 23:11:14.980465 containerd[1752]: time="2025-07-06T23:11:14.980401801Z" level=info msg="TearDown network for sandbox \"2169c37e7adefb2cd57c6340629b8975e9e7a7027cdbde6cc1436de1a66db03d\" successfully" Jul 6 23:11:14.992148 containerd[1752]: time="2025-07-06T23:11:14.992066368Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2169c37e7adefb2cd57c6340629b8975e9e7a7027cdbde6cc1436de1a66db03d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 6 23:11:14.992148 containerd[1752]: time="2025-07-06T23:11:14.992141128Z" level=info msg="RemovePodSandbox \"2169c37e7adefb2cd57c6340629b8975e9e7a7027cdbde6cc1436de1a66db03d\" returns successfully" Jul 6 23:11:15.842083 systemd-networkd[1607]: lxc_health: Gained IPv6LL Jul 6 23:11:17.198140 systemd[1]: run-containerd-runc-k8s.io-5067768e2deab80283c7d36c38b7ca9cdd1d1bd28568e85ed69e14ebd8773e26-runc.umgdvy.mount: Deactivated successfully. Jul 6 23:11:19.403079 kubelet[3325]: E0706 23:11:19.403005 3325 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:59216->127.0.0.1:42359: write tcp 127.0.0.1:59216->127.0.0.1:42359: write: broken pipe Jul 6 23:11:19.506130 sshd[5330]: Connection closed by 10.200.16.10 port 36926 Jul 6 23:11:19.506785 sshd-session[5326]: pam_unix(sshd:session): session closed for user core Jul 6 23:11:19.510596 systemd[1]: sshd@27-10.200.20.36:22-10.200.16.10:36926.service: Deactivated successfully. Jul 6 23:11:19.512431 systemd[1]: session-30.scope: Deactivated successfully. Jul 6 23:11:19.513290 systemd-logind[1708]: Session 30 logged out. Waiting for processes to exit. Jul 6 23:11:19.514630 systemd-logind[1708]: Removed session 30.