Jul 6 23:32:11.347462 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 6 23:32:11.347484 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Sun Jul 6 21:51:54 -00 2025 Jul 6 23:32:11.347492 kernel: KASLR enabled Jul 6 23:32:11.347497 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jul 6 23:32:11.347505 kernel: printk: bootconsole [pl11] enabled Jul 6 23:32:11.347510 kernel: efi: EFI v2.7 by EDK II Jul 6 23:32:11.347517 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598 Jul 6 23:32:11.347523 kernel: random: crng init done Jul 6 23:32:11.347529 kernel: secureboot: Secure boot disabled Jul 6 23:32:11.347534 kernel: ACPI: Early table checksum verification disabled Jul 6 23:32:11.347540 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Jul 6 23:32:11.347545 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 6 23:32:11.347551 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 6 23:32:11.347559 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jul 6 23:32:11.347566 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 6 23:32:11.347572 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 6 23:32:11.347578 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 6 23:32:11.347586 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 6 23:32:11.347592 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 6 23:32:11.349643 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 
00000001 MSFT 00000001) Jul 6 23:32:11.349666 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jul 6 23:32:11.349673 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 6 23:32:11.349680 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jul 6 23:32:11.349691 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jul 6 23:32:11.349698 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jul 6 23:32:11.349705 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jul 6 23:32:11.349711 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jul 6 23:32:11.349717 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jul 6 23:32:11.349729 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jul 6 23:32:11.349736 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jul 6 23:32:11.349742 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jul 6 23:32:11.349748 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jul 6 23:32:11.349755 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jul 6 23:32:11.349761 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jul 6 23:32:11.349767 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jul 6 23:32:11.349774 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Jul 6 23:32:11.349780 kernel: Zone ranges: Jul 6 23:32:11.349786 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jul 6 23:32:11.349792 kernel: DMA32 empty Jul 6 23:32:11.349799 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jul 6 23:32:11.349809 kernel: Movable zone start for each node Jul 6 23:32:11.349815 kernel: Early memory node ranges Jul 6 23:32:11.349822 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jul 6 23:32:11.349829 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff] Jul 6 
23:32:11.349836 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff] Jul 6 23:32:11.349844 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff] Jul 6 23:32:11.349850 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Jul 6 23:32:11.349857 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Jul 6 23:32:11.349864 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Jul 6 23:32:11.349871 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Jul 6 23:32:11.349877 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jul 6 23:32:11.349884 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jul 6 23:32:11.349891 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jul 6 23:32:11.349898 kernel: psci: probing for conduit method from ACPI. Jul 6 23:32:11.349904 kernel: psci: PSCIv1.1 detected in firmware. Jul 6 23:32:11.349911 kernel: psci: Using standard PSCI v0.2 function IDs Jul 6 23:32:11.349918 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Jul 6 23:32:11.349926 kernel: psci: SMC Calling Convention v1.4 Jul 6 23:32:11.349932 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jul 6 23:32:11.349939 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jul 6 23:32:11.349946 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jul 6 23:32:11.349953 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jul 6 23:32:11.349960 kernel: pcpu-alloc: [0] 0 [0] 1 Jul 6 23:32:11.349966 kernel: Detected PIPT I-cache on CPU0 Jul 6 23:32:11.349973 kernel: CPU features: detected: GIC system register CPU interface Jul 6 23:32:11.349980 kernel: CPU features: detected: Hardware dirty bit management Jul 6 23:32:11.349986 kernel: CPU features: detected: Spectre-BHB Jul 6 23:32:11.349993 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 6 23:32:11.350001 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 6 23:32:11.350007 kernel: CPU features: detected: ARM erratum 1418040 Jul 6 23:32:11.350014 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jul 6 23:32:11.350020 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 6 23:32:11.350027 kernel: alternatives: applying boot alternatives Jul 6 23:32:11.350035 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=ca8feb1f79a67c117068f051b5f829d3e40170c022cd5834bd6789cba9641479 Jul 6 23:32:11.350042 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jul 6 23:32:11.350049 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 6 23:32:11.350056 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 6 23:32:11.350062 kernel: Fallback order for Node 0: 0 Jul 6 23:32:11.350069 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Jul 6 23:32:11.350077 kernel: Policy zone: Normal Jul 6 23:32:11.350084 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 6 23:32:11.350090 kernel: software IO TLB: area num 2. Jul 6 23:32:11.350097 kernel: software IO TLB: mapped [mem 0x0000000036540000-0x000000003a540000] (64MB) Jul 6 23:32:11.350104 kernel: Memory: 3983592K/4194160K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38336K init, 897K bss, 210568K reserved, 0K cma-reserved) Jul 6 23:32:11.350110 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 6 23:32:11.350117 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 6 23:32:11.350125 kernel: rcu: RCU event tracing is enabled. Jul 6 23:32:11.350131 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 6 23:32:11.350138 kernel: Trampoline variant of Tasks RCU enabled. Jul 6 23:32:11.350145 kernel: Tracing variant of Tasks RCU enabled. Jul 6 23:32:11.350153 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 6 23:32:11.350160 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 6 23:32:11.350167 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 6 23:32:11.350173 kernel: GICv3: 960 SPIs implemented Jul 6 23:32:11.350180 kernel: GICv3: 0 Extended SPIs implemented Jul 6 23:32:11.350186 kernel: Root IRQ handler: gic_handle_irq Jul 6 23:32:11.350193 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jul 6 23:32:11.350199 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jul 6 23:32:11.350206 kernel: ITS: No ITS available, not enabling LPIs Jul 6 23:32:11.350212 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 6 23:32:11.350219 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:32:11.350226 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 6 23:32:11.350234 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 6 23:32:11.350241 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 6 23:32:11.350248 kernel: Console: colour dummy device 80x25 Jul 6 23:32:11.350255 kernel: printk: console [tty1] enabled Jul 6 23:32:11.350262 kernel: ACPI: Core revision 20230628 Jul 6 23:32:11.350269 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 6 23:32:11.350275 kernel: pid_max: default: 32768 minimum: 301 Jul 6 23:32:11.350282 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 6 23:32:11.350289 kernel: landlock: Up and running. Jul 6 23:32:11.350297 kernel: SELinux: Initializing. 
Jul 6 23:32:11.350304 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 6 23:32:11.350311 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 6 23:32:11.350318 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 6 23:32:11.350325 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 6 23:32:11.350332 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Jul 6 23:32:11.350339 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Jul 6 23:32:11.350353 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jul 6 23:32:11.350360 kernel: rcu: Hierarchical SRCU implementation. Jul 6 23:32:11.350368 kernel: rcu: Max phase no-delay instances is 400. Jul 6 23:32:11.350375 kernel: Remapping and enabling EFI services. Jul 6 23:32:11.350382 kernel: smp: Bringing up secondary CPUs ... Jul 6 23:32:11.350390 kernel: Detected PIPT I-cache on CPU1 Jul 6 23:32:11.350398 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jul 6 23:32:11.350405 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:32:11.350412 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 6 23:32:11.350419 kernel: smp: Brought up 1 node, 2 CPUs Jul 6 23:32:11.350428 kernel: SMP: Total of 2 processors activated. 
Jul 6 23:32:11.350435 kernel: CPU features: detected: 32-bit EL0 Support Jul 6 23:32:11.350442 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jul 6 23:32:11.350450 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 6 23:32:11.350457 kernel: CPU features: detected: CRC32 instructions Jul 6 23:32:11.350464 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 6 23:32:11.350472 kernel: CPU features: detected: LSE atomic instructions Jul 6 23:32:11.350479 kernel: CPU features: detected: Privileged Access Never Jul 6 23:32:11.350486 kernel: CPU: All CPU(s) started at EL1 Jul 6 23:32:11.350494 kernel: alternatives: applying system-wide alternatives Jul 6 23:32:11.350502 kernel: devtmpfs: initialized Jul 6 23:32:11.350509 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 6 23:32:11.350516 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 6 23:32:11.350523 kernel: pinctrl core: initialized pinctrl subsystem Jul 6 23:32:11.350531 kernel: SMBIOS 3.1.0 present. 
Jul 6 23:32:11.350538 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jul 6 23:32:11.350545 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 6 23:32:11.350552 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 6 23:32:11.350561 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 6 23:32:11.350569 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 6 23:32:11.350576 kernel: audit: initializing netlink subsys (disabled) Jul 6 23:32:11.350583 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jul 6 23:32:11.350590 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 6 23:32:11.350607 kernel: cpuidle: using governor menu Jul 6 23:32:11.350615 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 6 23:32:11.350623 kernel: ASID allocator initialised with 32768 entries Jul 6 23:32:11.350630 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 6 23:32:11.350639 kernel: Serial: AMBA PL011 UART driver Jul 6 23:32:11.350646 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 6 23:32:11.350653 kernel: Modules: 0 pages in range for non-PLT usage Jul 6 23:32:11.350661 kernel: Modules: 509264 pages in range for PLT usage Jul 6 23:32:11.350668 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 6 23:32:11.350675 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 6 23:32:11.350682 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 6 23:32:11.350690 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 6 23:32:11.350697 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 6 23:32:11.350705 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 6 23:32:11.350713 kernel: HugeTLB: registered 64.0 KiB page 
size, pre-allocated 0 pages Jul 6 23:32:11.350720 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 6 23:32:11.350727 kernel: ACPI: Added _OSI(Module Device) Jul 6 23:32:11.350734 kernel: ACPI: Added _OSI(Processor Device) Jul 6 23:32:11.350741 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 6 23:32:11.350749 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 6 23:32:11.350756 kernel: ACPI: Interpreter enabled Jul 6 23:32:11.350763 kernel: ACPI: Using GIC for interrupt routing Jul 6 23:32:11.350772 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jul 6 23:32:11.350779 kernel: printk: console [ttyAMA0] enabled Jul 6 23:32:11.350786 kernel: printk: bootconsole [pl11] disabled Jul 6 23:32:11.350793 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jul 6 23:32:11.350800 kernel: iommu: Default domain type: Translated Jul 6 23:32:11.350808 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 6 23:32:11.350815 kernel: efivars: Registered efivars operations Jul 6 23:32:11.350822 kernel: vgaarb: loaded Jul 6 23:32:11.350829 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 6 23:32:11.350838 kernel: VFS: Disk quotas dquot_6.6.0 Jul 6 23:32:11.350845 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 6 23:32:11.350852 kernel: pnp: PnP ACPI init Jul 6 23:32:11.350860 kernel: pnp: PnP ACPI: found 0 devices Jul 6 23:32:11.350867 kernel: NET: Registered PF_INET protocol family Jul 6 23:32:11.350874 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 6 23:32:11.350881 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 6 23:32:11.350889 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 6 23:32:11.350896 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 6 23:32:11.350905 
kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 6 23:32:11.350912 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 6 23:32:11.350920 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 6 23:32:11.350927 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 6 23:32:11.350934 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 6 23:32:11.350941 kernel: PCI: CLS 0 bytes, default 64 Jul 6 23:32:11.350948 kernel: kvm [1]: HYP mode not available Jul 6 23:32:11.350956 kernel: Initialise system trusted keyrings Jul 6 23:32:11.350963 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 6 23:32:11.350971 kernel: Key type asymmetric registered Jul 6 23:32:11.350979 kernel: Asymmetric key parser 'x509' registered Jul 6 23:32:11.350986 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 6 23:32:11.350993 kernel: io scheduler mq-deadline registered Jul 6 23:32:11.351000 kernel: io scheduler kyber registered Jul 6 23:32:11.351007 kernel: io scheduler bfq registered Jul 6 23:32:11.351015 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 6 23:32:11.351022 kernel: thunder_xcv, ver 1.0 Jul 6 23:32:11.351028 kernel: thunder_bgx, ver 1.0 Jul 6 23:32:11.351037 kernel: nicpf, ver 1.0 Jul 6 23:32:11.351044 kernel: nicvf, ver 1.0 Jul 6 23:32:11.351225 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 6 23:32:11.351301 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-06T23:32:10 UTC (1751844730) Jul 6 23:32:11.351311 kernel: efifb: probing for efifb Jul 6 23:32:11.351319 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jul 6 23:32:11.351326 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jul 6 23:32:11.351333 kernel: efifb: scrolling: redraw Jul 6 23:32:11.351343 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 6 23:32:11.351351 kernel: Console: switching to colour 
frame buffer device 128x48 Jul 6 23:32:11.351358 kernel: fb0: EFI VGA frame buffer device Jul 6 23:32:11.351365 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jul 6 23:32:11.351372 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 6 23:32:11.351379 kernel: No ACPI PMU IRQ for CPU0 Jul 6 23:32:11.351386 kernel: No ACPI PMU IRQ for CPU1 Jul 6 23:32:11.351393 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Jul 6 23:32:11.351401 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 6 23:32:11.351409 kernel: watchdog: Hard watchdog permanently disabled Jul 6 23:32:11.351417 kernel: NET: Registered PF_INET6 protocol family Jul 6 23:32:11.351425 kernel: Segment Routing with IPv6 Jul 6 23:32:11.351432 kernel: In-situ OAM (IOAM) with IPv6 Jul 6 23:32:11.351439 kernel: NET: Registered PF_PACKET protocol family Jul 6 23:32:11.351446 kernel: Key type dns_resolver registered Jul 6 23:32:11.351453 kernel: registered taskstats version 1 Jul 6 23:32:11.351460 kernel: Loading compiled-in X.509 certificates Jul 6 23:32:11.351467 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: b86e6d3bec2e587f2e5c37def91c4582416a83e3' Jul 6 23:32:11.351476 kernel: Key type .fscrypt registered Jul 6 23:32:11.351483 kernel: Key type fscrypt-provisioning registered Jul 6 23:32:11.351491 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 6 23:32:11.351498 kernel: ima: Allocated hash algorithm: sha1 Jul 6 23:32:11.351505 kernel: ima: No architecture policies found Jul 6 23:32:11.351512 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 6 23:32:11.351520 kernel: clk: Disabling unused clocks Jul 6 23:32:11.351527 kernel: Freeing unused kernel memory: 38336K Jul 6 23:32:11.351534 kernel: Run /init as init process Jul 6 23:32:11.351543 kernel: with arguments: Jul 6 23:32:11.351550 kernel: /init Jul 6 23:32:11.351557 kernel: with environment: Jul 6 23:32:11.351564 kernel: HOME=/ Jul 6 23:32:11.351571 kernel: TERM=linux Jul 6 23:32:11.351578 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 6 23:32:11.351587 systemd[1]: Successfully made /usr/ read-only. Jul 6 23:32:11.353635 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 6 23:32:11.353671 systemd[1]: Detected virtualization microsoft. Jul 6 23:32:11.353680 systemd[1]: Detected architecture arm64. Jul 6 23:32:11.353687 systemd[1]: Running in initrd. Jul 6 23:32:11.353695 systemd[1]: No hostname configured, using default hostname. Jul 6 23:32:11.353703 systemd[1]: Hostname set to . Jul 6 23:32:11.353711 systemd[1]: Initializing machine ID from random generator. Jul 6 23:32:11.353718 systemd[1]: Queued start job for default target initrd.target. Jul 6 23:32:11.353726 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:32:11.353736 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:32:11.353744 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Jul 6 23:32:11.353753 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:32:11.353760 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 6 23:32:11.353769 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 6 23:32:11.353778 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 6 23:32:11.353788 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 6 23:32:11.353796 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:32:11.353803 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:32:11.353811 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:32:11.353819 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:32:11.353827 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:32:11.353834 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:32:11.353842 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:32:11.353850 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:32:11.353859 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 6 23:32:11.353867 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 6 23:32:11.353875 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:32:11.353883 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:32:11.353890 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:32:11.353898 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:32:11.353906 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Jul 6 23:32:11.353914 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:32:11.353923 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 6 23:32:11.353931 systemd[1]: Starting systemd-fsck-usr.service... Jul 6 23:32:11.353938 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:32:11.353975 systemd-journald[217]: Collecting audit messages is disabled. Jul 6 23:32:11.353997 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:32:11.354007 systemd-journald[217]: Journal started Jul 6 23:32:11.354025 systemd-journald[217]: Runtime Journal (/run/log/journal/4f1e21f1698644d3b66a8cb27c9cb314) is 8M, max 78.5M, 70.5M free. Jul 6 23:32:11.365544 systemd-modules-load[219]: Inserted module 'overlay' Jul 6 23:32:11.376939 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:32:11.396624 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 6 23:32:11.409866 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:32:11.413459 systemd-modules-load[219]: Inserted module 'br_netfilter' Jul 6 23:32:11.419163 kernel: Bridge firewalling registered Jul 6 23:32:11.414245 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 6 23:32:11.425553 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:32:11.440916 systemd[1]: Finished systemd-fsck-usr.service. Jul 6 23:32:11.452237 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:32:11.463262 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:32:11.488918 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:32:11.505293 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jul 6 23:32:11.528844 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:32:11.546112 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:32:11.572273 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:32:11.580860 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:32:11.588208 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:32:11.607115 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:32:11.638907 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 6 23:32:11.647128 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:32:11.672336 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:32:11.688115 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:32:11.710931 dracut-cmdline[249]: dracut-dracut-053 Jul 6 23:32:11.710931 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=ca8feb1f79a67c117068f051b5f829d3e40170c022cd5834bd6789cba9641479 Jul 6 23:32:11.757213 systemd-resolved[251]: Positive Trust Anchors: Jul 6 23:32:11.757234 systemd-resolved[251]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:32:11.757266 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:32:11.759570 systemd-resolved[251]: Defaulting to hostname 'linux'. Jul 6 23:32:11.762171 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:32:11.771296 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:32:11.896628 kernel: SCSI subsystem initialized Jul 6 23:32:11.905624 kernel: Loading iSCSI transport class v2.0-870. Jul 6 23:32:11.915752 kernel: iscsi: registered transport (tcp) Jul 6 23:32:11.933854 kernel: iscsi: registered transport (qla4xxx) Jul 6 23:32:11.933908 kernel: QLogic iSCSI HBA Driver Jul 6 23:32:11.973336 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 6 23:32:11.994904 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 6 23:32:12.028638 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jul 6 23:32:12.028735 kernel: device-mapper: uevent: version 1.0.3 Jul 6 23:32:12.036286 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 6 23:32:12.088631 kernel: raid6: neonx8 gen() 15747 MB/s Jul 6 23:32:12.109626 kernel: raid6: neonx4 gen() 15826 MB/s Jul 6 23:32:12.129614 kernel: raid6: neonx2 gen() 13212 MB/s Jul 6 23:32:12.150613 kernel: raid6: neonx1 gen() 10539 MB/s Jul 6 23:32:12.170641 kernel: raid6: int64x8 gen() 6793 MB/s Jul 6 23:32:12.190630 kernel: raid6: int64x4 gen() 7347 MB/s Jul 6 23:32:12.211641 kernel: raid6: int64x2 gen() 6112 MB/s Jul 6 23:32:12.235224 kernel: raid6: int64x1 gen() 5059 MB/s Jul 6 23:32:12.235291 kernel: raid6: using algorithm neonx4 gen() 15826 MB/s Jul 6 23:32:12.259752 kernel: raid6: .... xor() 12461 MB/s, rmw enabled Jul 6 23:32:12.259765 kernel: raid6: using neon recovery algorithm Jul 6 23:32:12.272569 kernel: xor: measuring software checksum speed Jul 6 23:32:12.272614 kernel: 8regs : 21573 MB/sec Jul 6 23:32:12.276489 kernel: 32regs : 21630 MB/sec Jul 6 23:32:12.280374 kernel: arm64_neon : 27898 MB/sec Jul 6 23:32:12.285110 kernel: xor: using function: arm64_neon (27898 MB/sec) Jul 6 23:32:12.336635 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 6 23:32:12.348446 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:32:12.369796 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:32:12.399738 systemd-udevd[437]: Using default interface naming scheme 'v255'. Jul 6 23:32:12.409437 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:32:12.442842 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 6 23:32:12.467992 dracut-pre-trigger[447]: rd.md=0: removing MD RAID activation Jul 6 23:32:12.498626 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 6 23:32:12.517898 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:32:12.559502 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:32:12.581857 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 6 23:32:12.605046 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 6 23:32:12.625069 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:32:12.649392 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:32:12.664174 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:32:12.687798 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 6 23:32:12.716198 kernel: hv_vmbus: Vmbus version:5.3 Jul 6 23:32:12.716552 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:32:12.724203 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:32:12.742976 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:32:12.758692 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:32:12.760545 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:32:12.814091 kernel: hv_vmbus: registering driver hv_netvsc Jul 6 23:32:12.814115 kernel: hv_vmbus: registering driver hv_storvsc Jul 6 23:32:12.814139 kernel: hv_vmbus: registering driver hid_hyperv Jul 6 23:32:12.814148 kernel: hv_vmbus: registering driver hyperv_keyboard Jul 6 23:32:12.814157 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jul 6 23:32:12.814166 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 6 23:32:12.800762 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 6 23:32:12.882504 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 6 23:32:12.882530 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jul 6 23:32:12.882541 kernel: scsi host0: storvsc_host_t
Jul 6 23:32:12.882760 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 6 23:32:12.882772 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jul 6 23:32:12.882793 kernel: scsi host1: storvsc_host_t
Jul 6 23:32:12.882885 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jul 6 23:32:12.857312 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:32:12.890078 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 6 23:32:12.890532 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:32:12.918244 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:32:12.964324 kernel: hv_netvsc 0022487b-fa12-0022-487b-fa120022487b eth0: VF slot 1 added
Jul 6 23:32:12.964503 kernel: PTP clock support registered
Jul 6 23:32:12.918349 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:32:12.946322 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 6 23:32:12.954779 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:32:13.029026 kernel: hv_utils: Registering HyperV Utility Driver
Jul 6 23:32:13.029051 kernel: hv_vmbus: registering driver hv_pci
Jul 6 23:32:13.029061 kernel: hv_vmbus: registering driver hv_utils
Jul 6 23:32:13.029070 kernel: hv_utils: Heartbeat IC version 3.0
Jul 6 23:32:13.029079 kernel: hv_utils: Shutdown IC version 3.2
Jul 6 23:32:13.029088 kernel: hv_pci 8bca2930-ecdb-462a-a650-f61adab24e0d: PCI VMBus probing: Using version 0x10004
Jul 6 23:32:12.823423 kernel: hv_utils: TimeSync IC version 4.0
Jul 6 23:32:12.831077 systemd-journald[217]: Time jumped backwards, rotating.
Jul 6 23:32:12.838232 kernel: hv_pci 8bca2930-ecdb-462a-a650-f61adab24e0d: PCI host bridge to bus ecdb:00
Jul 6 23:32:13.014892 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:32:12.870591 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jul 6 23:32:12.870795 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 6 23:32:12.870806 kernel: pci_bus ecdb:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jul 6 23:32:12.870920 kernel: pci_bus ecdb:00: No busn resource found for root bus, will use [bus 00-ff]
Jul 6 23:32:12.871007 kernel: pci ecdb:00:02.0: [15b3:1018] type 00 class 0x020000
Jul 6 23:32:12.814751 systemd-resolved[251]: Clock change detected. Flushing caches.
Jul 6 23:32:12.890247 kernel: pci ecdb:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 6 23:32:12.870482 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:32:12.910451 kernel: pci ecdb:00:02.0: enabling Extended Tags
Jul 6 23:32:12.913264 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jul 6 23:32:12.939402 kernel: pci ecdb:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at ecdb:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jul 6 23:32:12.942785 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:32:12.983064 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jul 6 23:32:12.983323 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jul 6 23:32:12.983414 kernel: pci_bus ecdb:00: busn_res: [bus 00-ff] end is updated to 00
Jul 6 23:32:12.983502 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 6 23:32:12.983585 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jul 6 23:32:12.983667 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jul 6 23:32:12.998099 kernel: pci ecdb:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 6 23:32:13.008235 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:32:13.008285 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 6 23:32:13.051389 kernel: mlx5_core ecdb:00:02.0: enabling device (0000 -> 0002)
Jul 6 23:32:13.059149 kernel: mlx5_core ecdb:00:02.0: firmware version: 16.30.1284
Jul 6 23:32:13.262153 kernel: hv_netvsc 0022487b-fa12-0022-487b-fa120022487b eth0: VF registering: eth1
Jul 6 23:32:13.262480 kernel: mlx5_core ecdb:00:02.0 eth1: joined to eth0
Jul 6 23:32:13.269606 kernel: mlx5_core ecdb:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jul 6 23:32:13.282677 kernel: mlx5_core ecdb:00:02.0 enP60635s1: renamed from eth1
Jul 6 23:32:13.373395 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jul 6 23:32:13.462384 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (486)
Jul 6 23:32:13.478156 kernel: BTRFS: device fsid 990dd864-0c88-4d4d-9797-49057844458a devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (495)
Jul 6 23:32:13.487932 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 6 23:32:13.522497 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jul 6 23:32:13.546169 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jul 6 23:32:13.553897 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jul 6 23:32:13.588307 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 6 23:32:13.620185 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:32:13.629138 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:32:14.643152 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:32:14.643213 disk-uuid[604]: The operation has completed successfully.
Jul 6 23:32:14.706278 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 6 23:32:14.707368 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 6 23:32:14.755264 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 6 23:32:14.768462 sh[690]: Success
Jul 6 23:32:14.795239 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 6 23:32:14.930793 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 6 23:32:14.949856 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 6 23:32:14.957150 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 6 23:32:14.988976 kernel: BTRFS info (device dm-0): first mount of filesystem 990dd864-0c88-4d4d-9797-49057844458a
Jul 6 23:32:14.989037 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:32:14.996222 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 6 23:32:15.001349 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 6 23:32:15.005508 kernel: BTRFS info (device dm-0): using free space tree
Jul 6 23:32:15.280963 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 6 23:32:15.287092 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 6 23:32:15.308410 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 6 23:32:15.317357 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 6 23:32:15.364834 kernel: BTRFS info (device sda6): first mount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b
Jul 6 23:32:15.364891 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:32:15.369305 kernel: BTRFS info (device sda6): using free space tree
Jul 6 23:32:15.389186 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 6 23:32:15.401161 kernel: BTRFS info (device sda6): last unmount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b
Jul 6 23:32:15.405465 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 6 23:32:15.418362 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 6 23:32:15.476970 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:32:15.496291 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:32:15.527685 systemd-networkd[875]: lo: Link UP
Jul 6 23:32:15.527697 systemd-networkd[875]: lo: Gained carrier
Jul 6 23:32:15.530549 systemd-networkd[875]: Enumeration completed
Jul 6 23:32:15.530800 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:32:15.531981 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:32:15.531985 systemd-networkd[875]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:32:15.542191 systemd[1]: Reached target network.target - Network.
Jul 6 23:32:15.601154 kernel: mlx5_core ecdb:00:02.0 enP60635s1: Link up
Jul 6 23:32:15.647144 kernel: hv_netvsc 0022487b-fa12-0022-487b-fa120022487b eth0: Data path switched to VF: enP60635s1
Jul 6 23:32:15.647947 systemd-networkd[875]: enP60635s1: Link UP
Jul 6 23:32:15.648034 systemd-networkd[875]: eth0: Link UP
Jul 6 23:32:15.648145 systemd-networkd[875]: eth0: Gained carrier
Jul 6 23:32:15.648154 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:32:15.659755 systemd-networkd[875]: enP60635s1: Gained carrier
Jul 6 23:32:15.689187 systemd-networkd[875]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 6 23:32:16.140366 ignition[805]: Ignition 2.20.0
Jul 6 23:32:16.140377 ignition[805]: Stage: fetch-offline
Jul 6 23:32:16.140424 ignition[805]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:32:16.148276 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:32:16.140432 ignition[805]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:32:16.163414 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 6 23:32:16.144161 ignition[805]: parsed url from cmdline: ""
Jul 6 23:32:16.144167 ignition[805]: no config URL provided
Jul 6 23:32:16.144177 ignition[805]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:32:16.144196 ignition[805]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:32:16.144203 ignition[805]: failed to fetch config: resource requires networking
Jul 6 23:32:16.144590 ignition[805]: Ignition finished successfully
Jul 6 23:32:16.193996 ignition[886]: Ignition 2.20.0
Jul 6 23:32:16.194002 ignition[886]: Stage: fetch
Jul 6 23:32:16.194197 ignition[886]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:32:16.194207 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:32:16.194291 ignition[886]: parsed url from cmdline: ""
Jul 6 23:32:16.194294 ignition[886]: no config URL provided
Jul 6 23:32:16.194299 ignition[886]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:32:16.194306 ignition[886]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:32:16.194331 ignition[886]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jul 6 23:32:16.362348 ignition[886]: GET result: OK
Jul 6 23:32:16.362432 ignition[886]: config has been read from IMDS userdata
Jul 6 23:32:16.362474 ignition[886]: parsing config with SHA512: c71003a3282feeb91201b9a6075436a65dad8e57f5a42d0d95e7b3b65cde1142de552b079482a9a0e4d8bef73c8b09578af952839d894b081f62264e6fd745fb
Jul 6 23:32:16.366962 unknown[886]: fetched base config from "system"
Jul 6 23:32:16.370726 ignition[886]: fetch: fetch complete
Jul 6 23:32:16.366970 unknown[886]: fetched base config from "system"
Jul 6 23:32:16.370736 ignition[886]: fetch: fetch passed
Jul 6 23:32:16.366976 unknown[886]: fetched user config from "azure"
Jul 6 23:32:16.370814 ignition[886]: Ignition finished successfully
Jul 6 23:32:16.373431 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 6 23:32:16.394931 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 6 23:32:16.423140 ignition[893]: Ignition 2.20.0
Jul 6 23:32:16.423147 ignition[893]: Stage: kargs
Jul 6 23:32:16.433744 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 6 23:32:16.423321 ignition[893]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:32:16.423330 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:32:16.424383 ignition[893]: kargs: kargs passed
Jul 6 23:32:16.424432 ignition[893]: Ignition finished successfully
Jul 6 23:32:16.463435 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 6 23:32:16.482286 ignition[899]: Ignition 2.20.0
Jul 6 23:32:16.489557 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 6 23:32:16.482293 ignition[899]: Stage: disks
Jul 6 23:32:16.496333 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 6 23:32:16.482510 ignition[899]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:32:16.505750 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 6 23:32:16.482520 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:32:16.521250 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:32:16.483539 ignition[899]: disks: disks passed
Jul 6 23:32:16.533893 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:32:16.483590 ignition[899]: Ignition finished successfully
Jul 6 23:32:16.546799 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:32:16.577434 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 6 23:32:16.653218 systemd-fsck[907]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jul 6 23:32:16.663614 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 6 23:32:16.688312 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 6 23:32:16.750149 kernel: EXT4-fs (sda9): mounted filesystem efd38a90-a3d5-48a9-85e4-1ea6162daba0 r/w with ordered data mode. Quota mode: none.
Jul 6 23:32:16.751619 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 6 23:32:16.759021 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:32:16.793240 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:32:16.803679 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 6 23:32:16.811345 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 6 23:32:16.826490 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 6 23:32:16.826531 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:32:16.883146 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (918)
Jul 6 23:32:16.844014 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 6 23:32:16.883091 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 6 23:32:16.920925 kernel: BTRFS info (device sda6): first mount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b
Jul 6 23:32:16.920969 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:32:16.920988 kernel: BTRFS info (device sda6): using free space tree
Jul 6 23:32:16.934158 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 6 23:32:16.934099 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:32:16.970380 systemd-networkd[875]: enP60635s1: Gained IPv6LL
Jul 6 23:32:17.257768 coreos-metadata[920]: Jul 06 23:32:17.257 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 6 23:32:17.268729 coreos-metadata[920]: Jul 06 23:32:17.268 INFO Fetch successful
Jul 6 23:32:17.274281 coreos-metadata[920]: Jul 06 23:32:17.273 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jul 6 23:32:17.285737 coreos-metadata[920]: Jul 06 23:32:17.285 INFO Fetch successful
Jul 6 23:32:17.290875 coreos-metadata[920]: Jul 06 23:32:17.290 INFO wrote hostname ci-4230.2.1-a-3b9b3bec0f to /sysroot/etc/hostname
Jul 6 23:32:17.300249 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 6 23:32:17.482458 systemd-networkd[875]: eth0: Gained IPv6LL
Jul 6 23:32:17.521310 initrd-setup-root[949]: cut: /sysroot/etc/passwd: No such file or directory
Jul 6 23:32:17.589697 initrd-setup-root[956]: cut: /sysroot/etc/group: No such file or directory
Jul 6 23:32:17.596504 initrd-setup-root[963]: cut: /sysroot/etc/shadow: No such file or directory
Jul 6 23:32:17.603601 initrd-setup-root[970]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 6 23:32:18.312967 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 6 23:32:18.331335 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 6 23:32:18.350568 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 6 23:32:18.363374 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 6 23:32:18.377190 kernel: BTRFS info (device sda6): last unmount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b
Jul 6 23:32:18.386185 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 6 23:32:18.413892 ignition[1040]: INFO : Ignition 2.20.0
Jul 6 23:32:18.419356 ignition[1040]: INFO : Stage: mount
Jul 6 23:32:18.419356 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:32:18.419356 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:32:18.419356 ignition[1040]: INFO : mount: mount passed
Jul 6 23:32:18.419356 ignition[1040]: INFO : Ignition finished successfully
Jul 6 23:32:18.419866 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 6 23:32:18.448305 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 6 23:32:18.479820 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:32:18.511022 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (1050)
Jul 6 23:32:18.511088 kernel: BTRFS info (device sda6): first mount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b
Jul 6 23:32:18.516145 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:32:18.521470 kernel: BTRFS info (device sda6): using free space tree
Jul 6 23:32:18.529164 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 6 23:32:18.530821 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:32:18.558145 ignition[1067]: INFO : Ignition 2.20.0
Jul 6 23:32:18.558145 ignition[1067]: INFO : Stage: files
Jul 6 23:32:18.558145 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:32:18.558145 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:32:18.580220 ignition[1067]: DEBUG : files: compiled without relabeling support, skipping
Jul 6 23:32:18.580220 ignition[1067]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 6 23:32:18.580220 ignition[1067]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 6 23:32:18.613741 ignition[1067]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 6 23:32:18.621650 ignition[1067]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 6 23:32:18.630449 ignition[1067]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 6 23:32:18.621796 unknown[1067]: wrote ssh authorized keys file for user: core
Jul 6 23:32:18.644596 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 6 23:32:18.644596 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jul 6 23:32:18.687746 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 6 23:32:18.760548 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 6 23:32:18.773475 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 6 23:32:18.773475 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 6 23:32:19.193536 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 6 23:32:19.271555 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 6 23:32:19.281673 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 6 23:32:19.281673 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 6 23:32:19.281673 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:32:19.281673 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:32:19.323063 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:32:19.323063 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:32:19.323063 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:32:19.323063 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:32:19.323063 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:32:19.323063 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:32:19.323063 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 6 23:32:19.323063 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 6 23:32:19.323063 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 6 23:32:19.323063 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jul 6 23:32:19.922012 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 6 23:32:20.113208 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 6 23:32:20.113208 ignition[1067]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 6 23:32:20.133371 ignition[1067]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:32:20.144451 ignition[1067]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:32:20.144451 ignition[1067]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 6 23:32:20.144451 ignition[1067]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jul 6 23:32:20.144451 ignition[1067]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jul 6 23:32:20.144451 ignition[1067]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:32:20.144451 ignition[1067]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:32:20.144451 ignition[1067]: INFO : files: files passed
Jul 6 23:32:20.144451 ignition[1067]: INFO : Ignition finished successfully
Jul 6 23:32:20.145363 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 6 23:32:20.180890 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 6 23:32:20.195323 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 6 23:32:20.222516 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 6 23:32:20.272390 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:32:20.272390 initrd-setup-root-after-ignition[1095]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:32:20.222629 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 6 23:32:20.302997 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:32:20.259199 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:32:20.267674 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 6 23:32:20.303359 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 6 23:32:20.346530 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 6 23:32:20.346659 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 6 23:32:20.359725 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 6 23:32:20.372824 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 6 23:32:20.384530 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 6 23:32:20.404396 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 6 23:32:20.429361 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:32:20.446329 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 6 23:32:20.467031 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 6 23:32:20.469149 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 6 23:32:20.483305 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:32:20.497439 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:32:20.513988 systemd[1]: Stopped target timers.target - Timer Units.
Jul 6 23:32:20.527237 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 6 23:32:20.527336 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:32:20.546324 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 6 23:32:20.552388 systemd[1]: Stopped target basic.target - Basic System.
Jul 6 23:32:20.564018 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 6 23:32:20.576301 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:32:20.589160 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 6 23:32:20.602236 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 6 23:32:20.614845 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:32:20.628483 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 6 23:32:20.643926 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 6 23:32:20.657075 systemd[1]: Stopped target swap.target - Swaps.
Jul 6 23:32:20.668063 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 6 23:32:20.668159 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:32:20.686217 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:32:20.694606 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:32:20.706836 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 6 23:32:20.712572 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:32:20.720231 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 6 23:32:20.720326 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:32:20.741463 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 6 23:32:20.741535 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:32:20.754249 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 6 23:32:20.754331 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 6 23:32:20.768811 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 6 23:32:20.838977 ignition[1120]: INFO : Ignition 2.20.0
Jul 6 23:32:20.838977 ignition[1120]: INFO : Stage: umount
Jul 6 23:32:20.838977 ignition[1120]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:32:20.838977 ignition[1120]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:32:20.838977 ignition[1120]: INFO : umount: umount passed
Jul 6 23:32:20.838977 ignition[1120]: INFO : Ignition finished successfully
Jul 6 23:32:20.768882 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 6 23:32:20.804341 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 6 23:32:20.821729 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 6 23:32:20.821827 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:32:20.833310 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 6 23:32:20.854904 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 6 23:32:20.854993 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:32:20.868281 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 6 23:32:20.868349 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:32:20.888428 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 6 23:32:20.888520 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 6 23:32:20.900891 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 6 23:32:20.901735 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 6 23:32:20.901837 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 6 23:32:20.913880 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 6 23:32:20.913950 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 6 23:32:20.920356 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 6 23:32:20.920413 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 6 23:32:20.932653 systemd[1]: Stopped target network.target - Network.
Jul 6 23:32:20.943716 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 6 23:32:20.943789 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:32:20.956935 systemd[1]: Stopped target paths.target - Path Units.
Jul 6 23:32:20.967053 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 6 23:32:20.972144 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:32:20.981720 systemd[1]: Stopped target slices.target - Slice Units.
Jul 6 23:32:20.993483 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 6 23:32:21.003761 systemd[1]: iscsid.socket: Deactivated successfully. Jul 6 23:32:21.003824 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:32:21.014917 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 6 23:32:21.014963 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:32:21.026842 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 6 23:32:21.026902 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 6 23:32:21.038768 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 6 23:32:21.038822 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 6 23:32:21.049794 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 6 23:32:21.060489 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 6 23:32:21.078981 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 6 23:32:21.079093 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 6 23:32:21.097740 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 6 23:32:21.098008 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 6 23:32:21.323297 kernel: hv_netvsc 0022487b-fa12-0022-487b-fa120022487b eth0: Data path switched from VF: enP60635s1 Jul 6 23:32:21.098241 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:32:21.114181 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 6 23:32:21.114411 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 6 23:32:21.114489 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 6 23:32:21.126320 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:32:21.126384 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:32:21.139328 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Jul 6 23:32:21.139404 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 6 23:32:21.171326 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 6 23:32:21.180737 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 6 23:32:21.180830 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:32:21.194469 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:32:21.194527 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:32:21.209994 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:32:21.210048 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 6 23:32:21.216321 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 6 23:32:21.216366 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:32:21.234815 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:32:21.246857 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 6 23:32:21.246935 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:32:21.286167 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 6 23:32:21.286368 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:32:21.298397 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 6 23:32:21.298459 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 6 23:32:21.318001 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 6 23:32:21.318039 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:32:21.331398 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Jul 6 23:32:21.331481 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:32:21.352240 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 6 23:32:21.352310 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 6 23:32:21.368932 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:32:21.368999 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:32:21.414390 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 6 23:32:21.434189 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 6 23:32:21.434270 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:32:21.451809 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:32:21.640631 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Jul 6 23:32:21.451875 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:32:21.465423 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 6 23:32:21.465497 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:32:21.465812 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 6 23:32:21.465897 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 6 23:32:21.485289 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 6 23:32:21.485430 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 6 23:32:21.497776 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 6 23:32:21.530405 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 6 23:32:21.562505 systemd[1]: Switching root. 
Jul 6 23:32:21.703523 systemd-journald[217]: Journal stopped Jul 6 23:32:26.550541 kernel: SELinux: policy capability network_peer_controls=1 Jul 6 23:32:26.550590 kernel: SELinux: policy capability open_perms=1 Jul 6 23:32:26.550601 kernel: SELinux: policy capability extended_socket_class=1 Jul 6 23:32:26.550610 kernel: SELinux: policy capability always_check_network=0 Jul 6 23:32:26.550625 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 6 23:32:26.550634 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 6 23:32:26.550644 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 6 23:32:26.550654 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 6 23:32:26.550662 kernel: audit: type=1403 audit(1751844743.009:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 6 23:32:26.550673 systemd[1]: Successfully loaded SELinux policy in 183.419ms. Jul 6 23:32:26.550687 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.360ms. Jul 6 23:32:26.550699 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 6 23:32:26.550709 systemd[1]: Detected virtualization microsoft. Jul 6 23:32:26.550717 systemd[1]: Detected architecture arm64. Jul 6 23:32:26.550728 systemd[1]: Detected first boot. Jul 6 23:32:26.550740 systemd[1]: Hostname set to . Jul 6 23:32:26.550753 systemd[1]: Initializing machine ID from random generator. Jul 6 23:32:26.550763 zram_generator::config[1164]: No configuration found. Jul 6 23:32:26.550773 kernel: NET: Registered PF_VSOCK protocol family Jul 6 23:32:26.550782 systemd[1]: Populated /etc with preset unit settings. 
Jul 6 23:32:26.550793 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 6 23:32:26.550804 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 6 23:32:26.550816 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 6 23:32:26.550826 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 6 23:32:26.550836 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 6 23:32:26.550847 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 6 23:32:26.550856 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 6 23:32:26.550867 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 6 23:32:26.550876 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 6 23:32:26.550887 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 6 23:32:26.550898 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 6 23:32:26.550908 systemd[1]: Created slice user.slice - User and Session Slice. Jul 6 23:32:26.550919 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:32:26.550929 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:32:26.550938 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 6 23:32:26.550949 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 6 23:32:26.550959 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 6 23:32:26.550972 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jul 6 23:32:26.550982 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 6 23:32:26.550991 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:32:26.551004 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 6 23:32:26.551015 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 6 23:32:26.551026 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 6 23:32:26.551036 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 6 23:32:26.551045 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:32:26.551058 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:32:26.551068 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:32:26.551079 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:32:26.551089 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 6 23:32:26.551099 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 6 23:32:26.551109 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 6 23:32:26.551142 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:32:26.551155 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:32:26.551166 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:32:26.551181 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 6 23:32:26.551192 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 6 23:32:26.551202 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 6 23:32:26.551212 systemd[1]: Mounting media.mount - External Media Directory... Jul 6 23:32:26.551227 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Jul 6 23:32:26.551239 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 6 23:32:26.551249 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 6 23:32:26.551260 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 6 23:32:26.551270 systemd[1]: Reached target machines.target - Containers. Jul 6 23:32:26.551280 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 6 23:32:26.551292 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:32:26.551303 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:32:26.551315 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 6 23:32:26.551325 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:32:26.551337 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:32:26.551347 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:32:26.551358 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 6 23:32:26.551368 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:32:26.551378 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 6 23:32:26.551388 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 6 23:32:26.551401 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 6 23:32:26.551411 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 6 23:32:26.551421 kernel: fuse: init (API version 7.39) Jul 6 23:32:26.551430 systemd[1]: Stopped systemd-fsck-usr.service. 
Jul 6 23:32:26.551440 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:32:26.551453 kernel: loop: module loaded Jul 6 23:32:26.551463 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:32:26.551473 kernel: ACPI: bus type drm_connector registered Jul 6 23:32:26.551482 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:32:26.551537 systemd-journald[1268]: Collecting audit messages is disabled. Jul 6 23:32:26.551560 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 6 23:32:26.551573 systemd-journald[1268]: Journal started Jul 6 23:32:26.551597 systemd-journald[1268]: Runtime Journal (/run/log/journal/334c6c5116f541f485df3192b9b3d0f9) is 8M, max 78.5M, 70.5M free. Jul 6 23:32:25.568222 systemd[1]: Queued start job for default target multi-user.target. Jul 6 23:32:25.575961 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 6 23:32:25.576372 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 6 23:32:25.576704 systemd[1]: systemd-journald.service: Consumed 3.618s CPU time. Jul 6 23:32:26.579388 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 6 23:32:26.599158 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 6 23:32:26.614960 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:32:26.624040 systemd[1]: verity-setup.service: Deactivated successfully. Jul 6 23:32:26.624114 systemd[1]: Stopped verity-setup.service. Jul 6 23:32:26.641947 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:32:26.642799 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jul 6 23:32:26.648968 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 6 23:32:26.655481 systemd[1]: Mounted media.mount - External Media Directory. Jul 6 23:32:26.661588 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 6 23:32:26.668051 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 6 23:32:26.674614 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 6 23:32:26.680388 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 6 23:32:26.687316 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:32:26.694667 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 6 23:32:26.694828 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 6 23:32:26.702019 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:32:26.702246 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:32:26.709095 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:32:26.711149 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:32:26.717370 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:32:26.717524 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:32:26.724457 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 6 23:32:26.724601 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 6 23:32:26.731423 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:32:26.731592 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:32:26.738071 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:32:26.744832 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Jul 6 23:32:26.752047 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 6 23:32:26.759746 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 6 23:32:26.768295 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:32:26.787188 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 6 23:32:26.804232 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 6 23:32:26.811597 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 6 23:32:26.818422 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 6 23:32:26.818464 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:32:26.825369 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 6 23:32:26.842306 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 6 23:32:26.850163 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 6 23:32:26.855972 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:32:26.857296 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 6 23:32:26.865336 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 6 23:32:26.875633 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:32:26.877609 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 6 23:32:26.884092 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jul 6 23:32:26.885664 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:32:26.901160 systemd-journald[1268]: Time spent on flushing to /var/log/journal/334c6c5116f541f485df3192b9b3d0f9 is 82.850ms for 913 entries. Jul 6 23:32:26.901160 systemd-journald[1268]: System Journal (/var/log/journal/334c6c5116f541f485df3192b9b3d0f9) is 11.8M, max 2.6G, 2.6G free. Jul 6 23:32:27.074414 systemd-journald[1268]: Received client request to flush runtime journal. Jul 6 23:32:27.074477 systemd-journald[1268]: /var/log/journal/334c6c5116f541f485df3192b9b3d0f9/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Jul 6 23:32:27.074502 systemd-journald[1268]: Rotating system journal. Jul 6 23:32:27.074524 kernel: loop0: detected capacity change from 0 to 113512 Jul 6 23:32:26.899368 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 6 23:32:26.915784 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 6 23:32:26.924317 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 6 23:32:26.937407 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 6 23:32:26.944315 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 6 23:32:26.951685 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 6 23:32:26.959869 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 6 23:32:26.985600 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 6 23:32:27.009185 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 6 23:32:27.019005 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 6 23:32:27.029689 udevadm[1307]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 6 23:32:27.072501 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 6 23:32:27.075632 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 6 23:32:27.084413 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 6 23:32:27.093765 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 6 23:32:27.109331 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:32:27.197354 systemd-tmpfiles[1321]: ACLs are not supported, ignoring. Jul 6 23:32:27.197374 systemd-tmpfiles[1321]: ACLs are not supported, ignoring. Jul 6 23:32:27.202296 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:32:27.355940 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 6 23:32:27.412162 kernel: loop1: detected capacity change from 0 to 28720 Jul 6 23:32:27.797325 kernel: loop2: detected capacity change from 0 to 123192 Jul 6 23:32:27.930173 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 6 23:32:27.941319 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:32:27.970582 systemd-udevd[1328]: Using default interface naming scheme 'v255'. Jul 6 23:32:28.111240 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:32:28.132354 kernel: loop3: detected capacity change from 0 to 211168 Jul 6 23:32:28.137495 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:32:28.188890 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 6 23:32:28.200316 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Jul 6 23:32:28.224181 kernel: loop4: detected capacity change from 0 to 113512 Jul 6 23:32:28.253143 kernel: loop5: detected capacity change from 0 to 28720 Jul 6 23:32:28.281147 kernel: loop6: detected capacity change from 0 to 123192 Jul 6 23:32:28.283210 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 6 23:32:28.308332 kernel: mousedev: PS/2 mouse device common for all mice Jul 6 23:32:28.308416 kernel: loop7: detected capacity change from 0 to 211168 Jul 6 23:32:28.323571 (sd-merge)[1354]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jul 6 23:32:28.324382 (sd-merge)[1354]: Merged extensions into '/usr'. Jul 6 23:32:28.344547 systemd[1]: Reload requested from client PID 1304 ('systemd-sysext') (unit systemd-sysext.service)... Jul 6 23:32:28.344570 systemd[1]: Reloading... Jul 6 23:32:28.400492 kernel: hv_vmbus: registering driver hyperv_fb Jul 6 23:32:28.419175 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jul 6 23:32:28.419284 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jul 6 23:32:28.455153 kernel: Console: switching to colour dummy device 80x25 Jul 6 23:32:28.471848 kernel: hv_vmbus: registering driver hv_balloon Jul 6 23:32:28.495800 kernel: Console: switching to colour frame buffer device 128x48 Jul 6 23:32:28.520617 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jul 6 23:32:28.520730 zram_generator::config[1424]: No configuration found. Jul 6 23:32:28.520767 kernel: hv_balloon: Memory hot add disabled on ARM64 Jul 6 23:32:28.526461 systemd-networkd[1343]: lo: Link UP Jul 6 23:32:28.526473 systemd-networkd[1343]: lo: Gained carrier Jul 6 23:32:28.536371 systemd-networkd[1343]: Enumeration completed Jul 6 23:32:28.538595 systemd-networkd[1343]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 6 23:32:28.538604 systemd-networkd[1343]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:32:28.567331 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1337) Jul 6 23:32:28.608321 kernel: mlx5_core ecdb:00:02.0 enP60635s1: Link up Jul 6 23:32:28.640148 kernel: hv_netvsc 0022487b-fa12-0022-487b-fa120022487b eth0: Data path switched to VF: enP60635s1 Jul 6 23:32:28.642017 systemd-networkd[1343]: enP60635s1: Link UP Jul 6 23:32:28.643369 systemd-networkd[1343]: eth0: Link UP Jul 6 23:32:28.643682 systemd-networkd[1343]: eth0: Gained carrier Jul 6 23:32:28.643704 systemd-networkd[1343]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:32:28.651758 systemd-networkd[1343]: enP60635s1: Gained carrier Jul 6 23:32:28.664216 systemd-networkd[1343]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 6 23:32:28.719895 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:32:28.816277 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 6 23:32:28.823460 systemd[1]: Reloading finished in 478 ms. Jul 6 23:32:28.842139 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:32:28.848825 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 6 23:32:28.894496 systemd[1]: Starting ensure-sysext.service... Jul 6 23:32:28.908351 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 6 23:32:28.920292 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Jul 6 23:32:28.929330 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 6 23:32:28.938096 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:32:28.950645 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:32:28.965056 systemd-tmpfiles[1527]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 6 23:32:28.965458 systemd-tmpfiles[1527]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 6 23:32:28.966100 systemd-tmpfiles[1527]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 6 23:32:28.966329 systemd-tmpfiles[1527]: ACLs are not supported, ignoring. Jul 6 23:32:28.966371 systemd-tmpfiles[1527]: ACLs are not supported, ignoring. Jul 6 23:32:28.978246 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 6 23:32:28.988629 systemd-tmpfiles[1527]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:32:28.988640 systemd-tmpfiles[1527]: Skipping /boot Jul 6 23:32:28.994172 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 6 23:32:28.997769 systemd-tmpfiles[1527]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:32:28.997788 systemd-tmpfiles[1527]: Skipping /boot Jul 6 23:32:29.009645 systemd[1]: Reload requested from client PID 1523 ('systemctl') (unit ensure-sysext.service)... Jul 6 23:32:29.009883 systemd[1]: Reloading... Jul 6 23:32:29.106184 zram_generator::config[1576]: No configuration found. Jul 6 23:32:29.211416 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 6 23:32:29.313495 systemd[1]: Reloading finished in 303 ms. Jul 6 23:32:29.325435 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 6 23:32:29.346157 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:32:29.357164 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:32:29.381510 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 6 23:32:29.403706 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 6 23:32:29.413523 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 6 23:32:29.423240 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 6 23:32:29.435244 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:32:29.446251 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 6 23:32:29.459010 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:32:29.467451 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:32:29.476963 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:32:29.494902 lvm[1630]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:32:29.497418 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:32:29.507246 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 6 23:32:29.507589 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:32:29.510809 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:32:29.512171 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:32:29.519911 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:32:29.520306 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:32:29.529457 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:32:29.529773 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:32:29.540323 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 6 23:32:29.557022 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:32:29.565871 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:32:29.574443 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Jul 6 23:32:29.584524 systemd-resolved[1632]: Positive Trust Anchors: Jul 6 23:32:29.584541 systemd-resolved[1632]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:32:29.584571 systemd-resolved[1632]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:32:29.587462 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:32:29.589136 lvm[1656]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:32:29.605989 systemd-resolved[1632]: Using system hostname 'ci-4230.2.1-a-3b9b3bec0f'. Jul 6 23:32:29.607458 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:32:29.616419 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:32:29.623426 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:32:29.623572 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:32:29.625015 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:32:29.634134 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 6 23:32:29.635793 augenrules[1664]: No rules Jul 6 23:32:29.644701 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:32:29.644890 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jul 6 23:32:29.651961 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 6 23:32:29.660889 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 6 23:32:29.669177 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:32:29.669492 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:32:29.677074 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:32:29.678276 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:32:29.686438 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:32:29.686663 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:32:29.703201 systemd[1]: Reached target network.target - Network. Jul 6 23:32:29.708603 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:32:29.724792 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 6 23:32:29.730687 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:32:29.734868 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:32:29.746296 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:32:29.755239 augenrules[1676]: /sbin/augenrules: No change Jul 6 23:32:29.757414 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:32:29.770428 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:32:29.776736 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 6 23:32:29.776873 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:32:29.777003 systemd[1]: Reached target time-set.target - System Time Set. Jul 6 23:32:29.784883 augenrules[1697]: No rules Jul 6 23:32:29.786443 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:32:29.786652 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:32:29.793076 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:32:29.793444 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:32:29.801060 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:32:29.801291 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:32:29.808376 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:32:29.808532 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:32:29.816559 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:32:29.816711 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:32:29.827966 systemd[1]: Finished ensure-sysext.service. Jul 6 23:32:29.834295 systemd-networkd[1343]: eth0: Gained IPv6LL Jul 6 23:32:29.836878 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 6 23:32:29.846279 systemd[1]: Reached target network-online.target - Network is Online. Jul 6 23:32:29.853075 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:32:29.853190 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jul 6 23:32:29.970797 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 6 23:32:29.979224 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:32:30.410298 systemd-networkd[1343]: enP60635s1: Gained IPv6LL Jul 6 23:32:30.784367 ldconfig[1299]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 6 23:32:30.796807 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 6 23:32:30.808377 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 6 23:32:30.825643 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 6 23:32:30.832252 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:32:30.839098 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 6 23:32:30.846153 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 6 23:32:30.854761 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 6 23:32:30.861279 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 6 23:32:30.868615 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 6 23:32:30.876058 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 6 23:32:30.876100 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:32:30.881271 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:32:30.897518 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
Jul 6 23:32:30.905903 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 6 23:32:30.913406 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 6 23:32:30.921167 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 6 23:32:30.928403 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 6 23:32:30.946964 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 6 23:32:30.953077 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 6 23:32:30.960249 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 6 23:32:30.966387 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:32:30.971572 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:32:30.976898 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:32:30.976926 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:32:30.982275 systemd[1]: Starting chronyd.service - NTP client/server... Jul 6 23:32:30.990260 systemd[1]: Starting containerd.service - containerd container runtime... Jul 6 23:32:30.999310 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 6 23:32:31.008362 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 6 23:32:31.021286 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 6 23:32:31.029191 (chronyd)[1715]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jul 6 23:32:31.032998 jq[1722]: false Jul 6 23:32:31.034268 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jul 6 23:32:31.039994 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 6 23:32:31.040097 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jul 6 23:32:31.042307 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jul 6 23:32:31.049932 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jul 6 23:32:31.052281 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:32:31.054563 KVP[1724]: KVP starting; pid is:1724 Jul 6 23:32:31.064392 KVP[1724]: KVP LIC Version: 3.1 Jul 6 23:32:31.065199 kernel: hv_utils: KVP IC version 4.0 Jul 6 23:32:31.069732 chronyd[1729]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jul 6 23:32:31.074347 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 6 23:32:31.088184 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jul 6 23:32:31.096826 extend-filesystems[1723]: Found loop4 Jul 6 23:32:31.096826 extend-filesystems[1723]: Found loop5 Jul 6 23:32:31.096826 extend-filesystems[1723]: Found loop6 Jul 6 23:32:31.096826 extend-filesystems[1723]: Found loop7 Jul 6 23:32:31.096826 extend-filesystems[1723]: Found sda Jul 6 23:32:31.096826 extend-filesystems[1723]: Found sda1 Jul 6 23:32:31.096826 extend-filesystems[1723]: Found sda2 Jul 6 23:32:31.096826 extend-filesystems[1723]: Found sda3 Jul 6 23:32:31.096826 extend-filesystems[1723]: Found usr Jul 6 23:32:31.096826 extend-filesystems[1723]: Found sda4 Jul 6 23:32:31.176588 extend-filesystems[1723]: Found sda6 Jul 6 23:32:31.176588 extend-filesystems[1723]: Found sda7 Jul 6 23:32:31.176588 extend-filesystems[1723]: Found sda9 Jul 6 23:32:31.176588 extend-filesystems[1723]: Checking size of /dev/sda9 Jul 6 23:32:31.099262 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 6 23:32:31.105753 chronyd[1729]: Timezone right/UTC failed leap second check, ignoring Jul 6 23:32:31.226724 extend-filesystems[1723]: Old size kept for /dev/sda9 Jul 6 23:32:31.226724 extend-filesystems[1723]: Found sr0 Jul 6 23:32:31.128075 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jul 6 23:32:31.105948 chronyd[1729]: Loaded seccomp filter (level 2) Jul 6 23:32:31.256286 coreos-metadata[1717]: Jul 06 23:32:31.234 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 6 23:32:31.256286 coreos-metadata[1717]: Jul 06 23:32:31.242 INFO Fetch successful Jul 6 23:32:31.256286 coreos-metadata[1717]: Jul 06 23:32:31.242 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jul 6 23:32:31.256286 coreos-metadata[1717]: Jul 06 23:32:31.250 INFO Fetch successful Jul 6 23:32:31.256286 coreos-metadata[1717]: Jul 06 23:32:31.250 INFO Fetching http://168.63.129.16/machine/c88bf47b-76f6-4edf-acb6-28b4bee44c87/2915b3d6%2D5542%2D4eda%2D8ed0%2D9271356ba50f.%5Fci%2D4230.2.1%2Da%2D3b9b3bec0f?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jul 6 23:32:31.145365 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 6 23:32:31.147617 dbus-daemon[1718]: [system] SELinux support is enabled Jul 6 23:32:31.184316 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 6 23:32:31.256894 coreos-metadata[1717]: Jul 06 23:32:31.256 INFO Fetch successful Jul 6 23:32:31.256894 coreos-metadata[1717]: Jul 06 23:32:31.256 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jul 6 23:32:31.190639 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 6 23:32:31.194623 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 6 23:32:31.195974 systemd[1]: Starting update-engine.service - Update Engine... Jul 6 23:32:31.260378 jq[1757]: true Jul 6 23:32:31.216261 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 6 23:32:31.249591 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jul 6 23:32:31.262746 systemd[1]: Started chronyd.service - NTP client/server. Jul 6 23:32:31.272666 coreos-metadata[1717]: Jul 06 23:32:31.270 INFO Fetch successful Jul 6 23:32:31.275694 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 6 23:32:31.275884 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 6 23:32:31.276171 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 6 23:32:31.276338 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 6 23:32:31.287225 update_engine[1754]: I20250706 23:32:31.287096 1754 main.cc:92] Flatcar Update Engine starting Jul 6 23:32:31.289778 update_engine[1754]: I20250706 23:32:31.289698 1754 update_check_scheduler.cc:74] Next update check in 10m37s Jul 6 23:32:31.291151 systemd[1]: motdgen.service: Deactivated successfully. Jul 6 23:32:31.291351 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 6 23:32:31.305159 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 6 23:32:31.320792 systemd-logind[1752]: New seat seat0. Jul 6 23:32:31.323574 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 6 23:32:31.323898 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 6 23:32:31.324746 systemd-logind[1752]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jul 6 23:32:31.337385 systemd[1]: Started systemd-logind.service - User Login Management. Jul 6 23:32:31.370203 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1775) Jul 6 23:32:31.382947 jq[1779]: true Jul 6 23:32:31.394646 (ntainerd)[1785]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 6 23:32:31.402639 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Jul 6 23:32:31.416832 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 6 23:32:31.417109 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 6 23:32:31.431307 tar[1770]: linux-arm64/LICENSE Jul 6 23:32:31.431307 tar[1770]: linux-arm64/helm Jul 6 23:32:31.421246 dbus-daemon[1718]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 6 23:32:31.417544 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 6 23:32:31.429020 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 6 23:32:31.429040 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 6 23:32:31.439605 systemd[1]: Started update-engine.service - Update Engine. Jul 6 23:32:31.450305 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 6 23:32:31.569147 bash[1839]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:32:31.578393 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 6 23:32:31.587543 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 6 23:32:31.714334 locksmithd[1813]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 6 23:32:31.874913 containerd[1785]: time="2025-07-06T23:32:31.874761500Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jul 6 23:32:31.951400 containerd[1785]: time="2025-07-06T23:32:31.951343260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jul 6 23:32:31.958133 containerd[1785]: time="2025-07-06T23:32:31.955114420Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:32:31.958133 containerd[1785]: time="2025-07-06T23:32:31.955586100Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 6 23:32:31.958133 containerd[1785]: time="2025-07-06T23:32:31.955608700Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 6 23:32:31.958133 containerd[1785]: time="2025-07-06T23:32:31.956514780Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 6 23:32:31.958133 containerd[1785]: time="2025-07-06T23:32:31.956544100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 6 23:32:31.958133 containerd[1785]: time="2025-07-06T23:32:31.956613620Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:32:31.958133 containerd[1785]: time="2025-07-06T23:32:31.956627340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:32:31.958133 containerd[1785]: time="2025-07-06T23:32:31.958035700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:32:31.958133 containerd[1785]: time="2025-07-06T23:32:31.958057300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 6 23:32:31.958133 containerd[1785]: time="2025-07-06T23:32:31.958078540Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:32:31.958133 containerd[1785]: time="2025-07-06T23:32:31.958088980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 6 23:32:31.958424 containerd[1785]: time="2025-07-06T23:32:31.958201420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:32:31.958444 containerd[1785]: time="2025-07-06T23:32:31.958422140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:32:31.958584 containerd[1785]: time="2025-07-06T23:32:31.958555980Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:32:31.958584 containerd[1785]: time="2025-07-06T23:32:31.958577900Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 6 23:32:31.959573 containerd[1785]: time="2025-07-06T23:32:31.959542500Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jul 6 23:32:31.959648 containerd[1785]: time="2025-07-06T23:32:31.959627540Z" level=info msg="metadata content store policy set" policy=shared Jul 6 23:32:31.978155 containerd[1785]: time="2025-07-06T23:32:31.978086340Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 6 23:32:31.978292 containerd[1785]: time="2025-07-06T23:32:31.978184940Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 6 23:32:31.978292 containerd[1785]: time="2025-07-06T23:32:31.978210580Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 6 23:32:31.978292 containerd[1785]: time="2025-07-06T23:32:31.978232700Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 6 23:32:31.978292 containerd[1785]: time="2025-07-06T23:32:31.978251740Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 6 23:32:31.978504 containerd[1785]: time="2025-07-06T23:32:31.978478340Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 6 23:32:31.978767 containerd[1785]: time="2025-07-06T23:32:31.978744860Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 6 23:32:31.978877 containerd[1785]: time="2025-07-06T23:32:31.978853180Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 6 23:32:31.978912 containerd[1785]: time="2025-07-06T23:32:31.978877940Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 6 23:32:31.978912 containerd[1785]: time="2025-07-06T23:32:31.978895700Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jul 6 23:32:31.978991 containerd[1785]: time="2025-07-06T23:32:31.978909980Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 6 23:32:31.978991 containerd[1785]: time="2025-07-06T23:32:31.978924100Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 6 23:32:31.978991 containerd[1785]: time="2025-07-06T23:32:31.978939060Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 6 23:32:31.978991 containerd[1785]: time="2025-07-06T23:32:31.978954900Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 6 23:32:31.978991 containerd[1785]: time="2025-07-06T23:32:31.978971620Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 6 23:32:31.978991 containerd[1785]: time="2025-07-06T23:32:31.978985620Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 6 23:32:31.979103 containerd[1785]: time="2025-07-06T23:32:31.979001020Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 6 23:32:31.979103 containerd[1785]: time="2025-07-06T23:32:31.979013220Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 6 23:32:31.979103 containerd[1785]: time="2025-07-06T23:32:31.979048700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 6 23:32:31.979103 containerd[1785]: time="2025-07-06T23:32:31.979063140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Jul 6 23:32:31.979103 containerd[1785]: time="2025-07-06T23:32:31.979078780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 6 23:32:31.979103 containerd[1785]: time="2025-07-06T23:32:31.979093420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 6 23:32:31.979258 containerd[1785]: time="2025-07-06T23:32:31.979107140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 6 23:32:31.979258 containerd[1785]: time="2025-07-06T23:32:31.979144740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 6 23:32:31.979258 containerd[1785]: time="2025-07-06T23:32:31.979159780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 6 23:32:31.979258 containerd[1785]: time="2025-07-06T23:32:31.979173300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 6 23:32:31.979258 containerd[1785]: time="2025-07-06T23:32:31.979187140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 6 23:32:31.979258 containerd[1785]: time="2025-07-06T23:32:31.979202860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 6 23:32:31.979258 containerd[1785]: time="2025-07-06T23:32:31.979217900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 6 23:32:31.979258 containerd[1785]: time="2025-07-06T23:32:31.979230060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 6 23:32:31.979258 containerd[1785]: time="2025-07-06T23:32:31.979245140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jul 6 23:32:31.979424 containerd[1785]: time="2025-07-06T23:32:31.979269660Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 6 23:32:31.979424 containerd[1785]: time="2025-07-06T23:32:31.979292580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 6 23:32:31.979424 containerd[1785]: time="2025-07-06T23:32:31.979309820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 6 23:32:31.979424 containerd[1785]: time="2025-07-06T23:32:31.979322940Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 6 23:32:31.979424 containerd[1785]: time="2025-07-06T23:32:31.979374220Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 6 23:32:31.979424 containerd[1785]: time="2025-07-06T23:32:31.979393260Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 6 23:32:31.979424 containerd[1785]: time="2025-07-06T23:32:31.979405060Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 6 23:32:31.979424 containerd[1785]: time="2025-07-06T23:32:31.979419180Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 6 23:32:31.979568 containerd[1785]: time="2025-07-06T23:32:31.979430700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 6 23:32:31.979568 containerd[1785]: time="2025-07-06T23:32:31.979444460Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jul 6 23:32:31.979568 containerd[1785]: time="2025-07-06T23:32:31.979456980Z" level=info msg="NRI interface is disabled by configuration." Jul 6 23:32:31.979568 containerd[1785]: time="2025-07-06T23:32:31.979468380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 6 23:32:31.981150 containerd[1785]: time="2025-07-06T23:32:31.979773860Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 6 23:32:31.981150 containerd[1785]: time="2025-07-06T23:32:31.979833700Z" level=info msg="Connect containerd service" Jul 6 23:32:31.981150 containerd[1785]: time="2025-07-06T23:32:31.979878820Z" level=info msg="using legacy CRI server" Jul 6 23:32:31.981150 containerd[1785]: time="2025-07-06T23:32:31.979886860Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 6 23:32:31.981150 containerd[1785]: time="2025-07-06T23:32:31.980011140Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 6 23:32:31.984142 containerd[1785]: time="2025-07-06T23:32:31.982775580Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:32:31.984142 containerd[1785]: time="2025-07-06T23:32:31.983043460Z" level=info msg="Start subscribing containerd event" Jul 6 
23:32:31.984142 containerd[1785]: time="2025-07-06T23:32:31.983094820Z" level=info msg="Start recovering state" Jul 6 23:32:31.984142 containerd[1785]: time="2025-07-06T23:32:31.983591460Z" level=info msg="Start event monitor" Jul 6 23:32:31.984142 containerd[1785]: time="2025-07-06T23:32:31.983611060Z" level=info msg="Start snapshots syncer" Jul 6 23:32:31.984142 containerd[1785]: time="2025-07-06T23:32:31.983621100Z" level=info msg="Start cni network conf syncer for default" Jul 6 23:32:31.984142 containerd[1785]: time="2025-07-06T23:32:31.983632820Z" level=info msg="Start streaming server" Jul 6 23:32:31.984142 containerd[1785]: time="2025-07-06T23:32:31.984102980Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 6 23:32:31.984345 containerd[1785]: time="2025-07-06T23:32:31.984167300Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 6 23:32:31.989140 systemd[1]: Started containerd.service - containerd container runtime. Jul 6 23:32:31.995908 containerd[1785]: time="2025-07-06T23:32:31.995866140Z" level=info msg="containerd successfully booted in 0.126279s" Jul 6 23:32:32.257393 tar[1770]: linux-arm64/README.md Jul 6 23:32:32.270174 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 6 23:32:32.413295 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:32:32.420338 (kubelet)[1883]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:32:32.596614 sshd_keygen[1749]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 6 23:32:32.617762 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 6 23:32:32.636454 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 6 23:32:32.646591 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jul 6 23:32:32.656264 systemd[1]: issuegen.service: Deactivated successfully. 
Jul 6 23:32:32.657380 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 6 23:32:32.674810 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 6 23:32:32.687312 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jul 6 23:32:32.701066 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 6 23:32:32.717541 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 6 23:32:32.729436 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 6 23:32:32.736332 systemd[1]: Reached target getty.target - Login Prompts. Jul 6 23:32:32.741824 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 6 23:32:32.750493 systemd[1]: Startup finished in 717ms (kernel) + 12.275s (initrd) + 9.922s (userspace) = 22.915s. Jul 6 23:32:32.899956 kubelet[1883]: E0706 23:32:32.899852 1883 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:32:32.902694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:32:32.902841 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:32:32.903160 systemd[1]: kubelet.service: Consumed 736ms CPU time, 257M memory peak. Jul 6 23:32:32.911847 login[1911]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:32:32.912945 login[1912]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:32:32.936069 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 6 23:32:32.944405 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 6 23:32:32.953620 systemd-logind[1752]: New session 2 of user core. 
Jul 6 23:32:32.957170 systemd-logind[1752]: New session 1 of user core. Jul 6 23:32:32.963211 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 6 23:32:32.969494 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 6 23:32:32.973725 (systemd)[1920]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 6 23:32:32.976346 systemd-logind[1752]: New session c1 of user core. Jul 6 23:32:33.126485 systemd[1920]: Queued start job for default target default.target. Jul 6 23:32:33.136258 systemd[1920]: Created slice app.slice - User Application Slice. Jul 6 23:32:33.136291 systemd[1920]: Reached target paths.target - Paths. Jul 6 23:32:33.136331 systemd[1920]: Reached target timers.target - Timers. Jul 6 23:32:33.137604 systemd[1920]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:32:33.149012 systemd[1920]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 6 23:32:33.149164 systemd[1920]: Reached target sockets.target - Sockets. Jul 6 23:32:33.149219 systemd[1920]: Reached target basic.target - Basic System. Jul 6 23:32:33.149248 systemd[1920]: Reached target default.target - Main User Target. Jul 6 23:32:33.149274 systemd[1920]: Startup finished in 166ms. Jul 6 23:32:33.149611 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:32:33.152012 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 6 23:32:33.153173 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 6 23:32:34.044142 waagent[1907]: 2025-07-06T23:32:34.043758Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jul 6 23:32:34.049640 waagent[1907]: 2025-07-06T23:32:34.049572Z INFO Daemon Daemon OS: flatcar 4230.2.1 Jul 6 23:32:34.054141 waagent[1907]: 2025-07-06T23:32:34.054078Z INFO Daemon Daemon Python: 3.11.11 Jul 6 23:32:34.059198 waagent[1907]: 2025-07-06T23:32:34.058938Z INFO Daemon Daemon Run daemon Jul 6 23:32:34.064257 waagent[1907]: 2025-07-06T23:32:34.064193Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.2.1' Jul 6 23:32:34.074617 waagent[1907]: 2025-07-06T23:32:34.074547Z INFO Daemon Daemon Using waagent for provisioning Jul 6 23:32:34.080675 waagent[1907]: 2025-07-06T23:32:34.080624Z INFO Daemon Daemon Activate resource disk Jul 6 23:32:34.085648 waagent[1907]: 2025-07-06T23:32:34.085597Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 6 23:32:34.099098 waagent[1907]: 2025-07-06T23:32:34.099030Z INFO Daemon Daemon Found device: None Jul 6 23:32:34.103970 waagent[1907]: 2025-07-06T23:32:34.103909Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 6 23:32:34.112909 waagent[1907]: 2025-07-06T23:32:34.112847Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 6 23:32:34.125454 waagent[1907]: 2025-07-06T23:32:34.125392Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 6 23:32:34.131638 waagent[1907]: 2025-07-06T23:32:34.131582Z INFO Daemon Daemon Running default provisioning handler Jul 6 23:32:34.143174 waagent[1907]: 2025-07-06T23:32:34.143090Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Jul 6 23:32:34.157612 waagent[1907]: 2025-07-06T23:32:34.157533Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 6 23:32:34.168908 waagent[1907]: 2025-07-06T23:32:34.168838Z INFO Daemon Daemon cloud-init is enabled: False Jul 6 23:32:34.174690 waagent[1907]: 2025-07-06T23:32:34.174598Z INFO Daemon Daemon Copying ovf-env.xml Jul 6 23:32:34.250724 waagent[1907]: 2025-07-06T23:32:34.250622Z INFO Daemon Daemon Successfully mounted dvd Jul 6 23:32:34.280691 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 6 23:32:34.283485 waagent[1907]: 2025-07-06T23:32:34.283406Z INFO Daemon Daemon Detect protocol endpoint Jul 6 23:32:34.289979 waagent[1907]: 2025-07-06T23:32:34.289897Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 6 23:32:34.295964 waagent[1907]: 2025-07-06T23:32:34.295819Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jul 6 23:32:34.302881 waagent[1907]: 2025-07-06T23:32:34.302815Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 6 23:32:34.308424 waagent[1907]: 2025-07-06T23:32:34.308342Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 6 23:32:34.313435 waagent[1907]: 2025-07-06T23:32:34.313377Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 6 23:32:34.350551 waagent[1907]: 2025-07-06T23:32:34.350487Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 6 23:32:34.358002 waagent[1907]: 2025-07-06T23:32:34.357954Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 6 23:32:34.365031 waagent[1907]: 2025-07-06T23:32:34.364939Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 6 23:32:34.681212 waagent[1907]: 2025-07-06T23:32:34.680347Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 6 23:32:34.687785 waagent[1907]: 2025-07-06T23:32:34.687704Z INFO Daemon Daemon Forcing an update of the goal state. 
Jul 6 23:32:34.697273 waagent[1907]: 2025-07-06T23:32:34.697211Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 6 23:32:34.719820 waagent[1907]: 2025-07-06T23:32:34.719770Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jul 6 23:32:34.726870 waagent[1907]: 2025-07-06T23:32:34.726816Z INFO Daemon Jul 6 23:32:34.730005 waagent[1907]: 2025-07-06T23:32:34.729954Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: e1ac551e-1742-4ead-ba3a-374eeea405ed eTag: 14592056296283033228 source: Fabric] Jul 6 23:32:34.744523 waagent[1907]: 2025-07-06T23:32:34.744431Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jul 6 23:32:34.753790 waagent[1907]: 2025-07-06T23:32:34.753673Z INFO Daemon Jul 6 23:32:34.757413 waagent[1907]: 2025-07-06T23:32:34.757351Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 6 23:32:34.769194 waagent[1907]: 2025-07-06T23:32:34.769146Z INFO Daemon Daemon Downloading artifacts profile blob Jul 6 23:32:34.857726 waagent[1907]: 2025-07-06T23:32:34.857627Z INFO Daemon Downloaded certificate {'thumbprint': '85F6B69789D7F3B4CD2BE2E7EBA7CF79C037463E', 'hasPrivateKey': True} Jul 6 23:32:34.867797 waagent[1907]: 2025-07-06T23:32:34.867736Z INFO Daemon Downloaded certificate {'thumbprint': '77C037343708DACAD58CE47E63849AD203B40458', 'hasPrivateKey': False} Jul 6 23:32:34.877767 waagent[1907]: 2025-07-06T23:32:34.877706Z INFO Daemon Fetch goal state completed Jul 6 23:32:34.928459 waagent[1907]: 2025-07-06T23:32:34.928398Z INFO Daemon Daemon Starting provisioning Jul 6 23:32:34.933548 waagent[1907]: 2025-07-06T23:32:34.933442Z INFO Daemon Daemon Handle ovf-env.xml. 
Jul 6 23:32:34.938189 waagent[1907]: 2025-07-06T23:32:34.938114Z INFO Daemon Daemon Set hostname [ci-4230.2.1-a-3b9b3bec0f] Jul 6 23:32:34.962933 waagent[1907]: 2025-07-06T23:32:34.962850Z INFO Daemon Daemon Publish hostname [ci-4230.2.1-a-3b9b3bec0f] Jul 6 23:32:34.970068 waagent[1907]: 2025-07-06T23:32:34.969997Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 6 23:32:34.976711 waagent[1907]: 2025-07-06T23:32:34.976641Z INFO Daemon Daemon Primary interface is [eth0] Jul 6 23:32:34.989427 systemd-networkd[1343]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:32:34.989437 systemd-networkd[1343]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:32:34.989465 systemd-networkd[1343]: eth0: DHCP lease lost Jul 6 23:32:34.990794 waagent[1907]: 2025-07-06T23:32:34.990709Z INFO Daemon Daemon Create user account if not exists Jul 6 23:32:34.997004 waagent[1907]: 2025-07-06T23:32:34.996938Z INFO Daemon Daemon User core already exists, skip useradd Jul 6 23:32:35.002981 waagent[1907]: 2025-07-06T23:32:35.002914Z INFO Daemon Daemon Configure sudoer Jul 6 23:32:35.007921 waagent[1907]: 2025-07-06T23:32:35.007853Z INFO Daemon Daemon Configure sshd Jul 6 23:32:35.012897 waagent[1907]: 2025-07-06T23:32:35.012827Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 6 23:32:35.026502 waagent[1907]: 2025-07-06T23:32:35.026420Z INFO Daemon Daemon Deploy ssh public key. 
Jul 6 23:32:35.039324 systemd-networkd[1343]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 6 23:32:36.134187 waagent[1907]: 2025-07-06T23:32:36.134086Z INFO Daemon Daemon Provisioning complete Jul 6 23:32:36.153181 waagent[1907]: 2025-07-06T23:32:36.152989Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 6 23:32:36.159526 waagent[1907]: 2025-07-06T23:32:36.159448Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jul 6 23:32:36.169251 waagent[1907]: 2025-07-06T23:32:36.169174Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jul 6 23:32:36.308850 waagent[1976]: 2025-07-06T23:32:36.308734Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jul 6 23:32:36.309960 waagent[1976]: 2025-07-06T23:32:36.309314Z INFO ExtHandler ExtHandler OS: flatcar 4230.2.1 Jul 6 23:32:36.309960 waagent[1976]: 2025-07-06T23:32:36.309385Z INFO ExtHandler ExtHandler Python: 3.11.11 Jul 6 23:32:36.362185 waagent[1976]: 2025-07-06T23:32:36.361354Z INFO ExtHandler ExtHandler Distro: flatcar-4230.2.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jul 6 23:32:36.362185 waagent[1976]: 2025-07-06T23:32:36.361606Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 6 23:32:36.362185 waagent[1976]: 2025-07-06T23:32:36.361666Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 6 23:32:36.370770 waagent[1976]: 2025-07-06T23:32:36.370682Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 6 23:32:36.376998 waagent[1976]: 2025-07-06T23:32:36.376948Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 6 23:32:36.377578 waagent[1976]: 2025-07-06T23:32:36.377530Z INFO ExtHandler Jul 6 23:32:36.377655 waagent[1976]: 2025-07-06T23:32:36.377622Z 
INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: bae6bdef-9b15-4c1a-ac1f-9d152edfe337 eTag: 14592056296283033228 source: Fabric] Jul 6 23:32:36.377952 waagent[1976]: 2025-07-06T23:32:36.377909Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jul 6 23:32:36.378554 waagent[1976]: 2025-07-06T23:32:36.378504Z INFO ExtHandler Jul 6 23:32:36.378619 waagent[1976]: 2025-07-06T23:32:36.378590Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 6 23:32:36.383264 waagent[1976]: 2025-07-06T23:32:36.383194Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 6 23:32:36.732163 waagent[1976]: 2025-07-06T23:32:36.731063Z INFO ExtHandler Downloaded certificate {'thumbprint': '85F6B69789D7F3B4CD2BE2E7EBA7CF79C037463E', 'hasPrivateKey': True} Jul 6 23:32:36.732163 waagent[1976]: 2025-07-06T23:32:36.731671Z INFO ExtHandler Downloaded certificate {'thumbprint': '77C037343708DACAD58CE47E63849AD203B40458', 'hasPrivateKey': False} Jul 6 23:32:36.732163 waagent[1976]: 2025-07-06T23:32:36.732067Z INFO ExtHandler Fetch goal state completed Jul 6 23:32:36.754285 waagent[1976]: 2025-07-06T23:32:36.754216Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1976 Jul 6 23:32:36.754451 waagent[1976]: 2025-07-06T23:32:36.754414Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 6 23:32:36.756177 waagent[1976]: 2025-07-06T23:32:36.756111Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.2.1', '', 'Flatcar Container Linux by Kinvolk'] Jul 6 23:32:36.756568 waagent[1976]: 2025-07-06T23:32:36.756531Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 6 23:32:37.300871 waagent[1976]: 2025-07-06T23:32:37.300815Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 6 23:32:37.301093 waagent[1976]: 
2025-07-06T23:32:37.301048Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 6 23:32:37.307937 waagent[1976]: 2025-07-06T23:32:37.307882Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 6 23:32:37.314634 systemd[1]: Reload requested from client PID 1991 ('systemctl') (unit waagent.service)... Jul 6 23:32:37.314649 systemd[1]: Reloading... Jul 6 23:32:37.401047 zram_generator::config[2026]: No configuration found. Jul 6 23:32:37.517781 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:32:37.616773 systemd[1]: Reloading finished in 301 ms. Jul 6 23:32:37.637171 waagent[1976]: 2025-07-06T23:32:37.631494Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jul 6 23:32:37.638350 systemd[1]: Reload requested from client PID 2084 ('systemctl') (unit waagent.service)... Jul 6 23:32:37.638492 systemd[1]: Reloading... Jul 6 23:32:37.733189 zram_generator::config[2124]: No configuration found. Jul 6 23:32:37.844481 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:32:37.943719 systemd[1]: Reloading finished in 304 ms. 
Jul 6 23:32:37.964333 waagent[1976]: 2025-07-06T23:32:37.959489Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 6 23:32:37.964333 waagent[1976]: 2025-07-06T23:32:37.959686Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 6 23:32:38.239016 waagent[1976]: 2025-07-06T23:32:38.238871Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 6 23:32:38.239596 waagent[1976]: 2025-07-06T23:32:38.239524Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jul 6 23:32:38.240407 waagent[1976]: 2025-07-06T23:32:38.240319Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 6 23:32:38.240823 waagent[1976]: 2025-07-06T23:32:38.240723Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 6 23:32:38.241233 waagent[1976]: 2025-07-06T23:32:38.241105Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 6 23:32:38.241322 waagent[1976]: 2025-07-06T23:32:38.241220Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 6 23:32:38.241878 waagent[1976]: 2025-07-06T23:32:38.241766Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 6 23:32:38.242074 waagent[1976]: 2025-07-06T23:32:38.242019Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
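The log-collection check above gates on three booleans and reports [False] because cgroup monitoring is unsupported on this Flatcar build. A minimal sketch of that conjunction (the dictionary keys are hypothetical names for illustration, not waagent's internal identifiers):

```python
# Hypothetical condition names; the values mirror the waagent log line above:
# configuration enabled [True], cgroups enabled [False], python supported [True].
conditions = {
    "configuration_enabled": True,   # log collection enabled in config
    "cgroups_enabled": False,        # cgroup monitoring unsupported on this distro
    "python_supported": True,        # interpreter version is acceptable
}

# All three conditions must be met, so a single False blocks collection.
log_collection_allowed = all(conditions.values())
print(log_collection_allowed)  # False
```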
Jul 6 23:32:38.242268 waagent[1976]: 2025-07-06T23:32:38.242231Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 6 23:32:38.242268 waagent[1976]: 2025-07-06T23:32:38.242116Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 6 23:32:38.243069 waagent[1976]: 2025-07-06T23:32:38.243032Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 6 23:32:38.243332 waagent[1976]: 2025-07-06T23:32:38.243280Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jul 6 23:32:38.243588 waagent[1976]: 2025-07-06T23:32:38.243493Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 6 23:32:38.243588 waagent[1976]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 6 23:32:38.243588 waagent[1976]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jul 6 23:32:38.243588 waagent[1976]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 6 23:32:38.243588 waagent[1976]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 6 23:32:38.243588 waagent[1976]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 6 23:32:38.243588 waagent[1976]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 6 23:32:38.243913 waagent[1976]: 2025-07-06T23:32:38.243580Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 6 23:32:38.248091 waagent[1976]: 2025-07-06T23:32:38.248032Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 6 23:32:38.248819 waagent[1976]: 2025-07-06T23:32:38.248772Z INFO ExtHandler ExtHandler Jul 6 23:32:38.249370 waagent[1976]: 2025-07-06T23:32:38.249309Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: e0909bd7-4ff6-4f55-b522-af8330fa1b43 correlation 94e511bb-b70c-43ab-aec1-18d97a060775 created: 2025-07-06T23:31:30.034225Z] Jul 6 23:32:38.249723 waagent[1976]: 
2025-07-06T23:32:38.249662Z INFO EnvHandler ExtHandler Configure routes Jul 6 23:32:38.250066 waagent[1976]: 2025-07-06T23:32:38.250020Z INFO EnvHandler ExtHandler Gateway:None Jul 6 23:32:38.250999 waagent[1976]: 2025-07-06T23:32:38.250941Z INFO EnvHandler ExtHandler Routes:None Jul 6 23:32:38.251872 waagent[1976]: 2025-07-06T23:32:38.251829Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jul 6 23:32:38.254159 waagent[1976]: 2025-07-06T23:32:38.253904Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 5 ms] Jul 6 23:32:38.275389 waagent[1976]: 2025-07-06T23:32:38.275303Z INFO MonitorHandler ExtHandler Network interfaces: Jul 6 23:32:38.275389 waagent[1976]: Executing ['ip', '-a', '-o', 'link']: Jul 6 23:32:38.275389 waagent[1976]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 6 23:32:38.275389 waagent[1976]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7b:fa:12 brd ff:ff:ff:ff:ff:ff Jul 6 23:32:38.275389 waagent[1976]: 3: enP60635s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7b:fa:12 brd ff:ff:ff:ff:ff:ff\ altname enP60635p0s2 Jul 6 23:32:38.275389 waagent[1976]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 6 23:32:38.275389 waagent[1976]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 6 23:32:38.275389 waagent[1976]: 2: eth0 inet 10.200.20.11/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 6 23:32:38.275389 waagent[1976]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 6 23:32:38.275389 waagent[1976]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jul 6 23:32:38.275389 waagent[1976]: 2: eth0 inet6 fe80::222:48ff:fe7b:fa12/64 scope link proto kernel_ll \ valid_lft forever 
preferred_lft forever Jul 6 23:32:38.275389 waagent[1976]: 3: enP60635s1 inet6 fe80::222:48ff:fe7b:fa12/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 6 23:32:38.303982 waagent[1976]: 2025-07-06T23:32:38.303913Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: CDDDDF3B-741F-424A-A7A1-0193DD60A888;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jul 6 23:32:38.328827 waagent[1976]: 2025-07-06T23:32:38.328740Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Jul 6 23:32:38.328827 waagent[1976]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:32:38.328827 waagent[1976]: pkts bytes target prot opt in out source destination Jul 6 23:32:38.328827 waagent[1976]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:32:38.328827 waagent[1976]: pkts bytes target prot opt in out source destination Jul 6 23:32:38.328827 waagent[1976]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:32:38.328827 waagent[1976]: pkts bytes target prot opt in out source destination Jul 6 23:32:38.328827 waagent[1976]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 6 23:32:38.328827 waagent[1976]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 6 23:32:38.328827 waagent[1976]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 6 23:32:38.332972 waagent[1976]: 2025-07-06T23:32:38.332896Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 6 23:32:38.332972 waagent[1976]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:32:38.332972 waagent[1976]: pkts bytes target prot opt in out source destination Jul 6 23:32:38.332972 waagent[1976]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:32:38.332972 waagent[1976]: pkts bytes target prot opt in out source destination Jul 6 23:32:38.332972 waagent[1976]: Chain OUTPUT (policy ACCEPT 0 packets, 0 
bytes) Jul 6 23:32:38.332972 waagent[1976]: pkts bytes target prot opt in out source destination Jul 6 23:32:38.332972 waagent[1976]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 6 23:32:38.332972 waagent[1976]: 1 60 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 6 23:32:38.332972 waagent[1976]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 6 23:32:38.333292 waagent[1976]: 2025-07-06T23:32:38.333230Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 6 23:32:43.153661 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 6 23:32:43.162375 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:32:43.287337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:32:43.288734 (kubelet)[2216]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:32:43.413192 kubelet[2216]: E0706 23:32:43.412019 2216 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:32:43.414809 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:32:43.414943 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:32:43.415401 systemd[1]: kubelet.service: Consumed 137ms CPU time, 104.5M memory peak. Jul 6 23:32:47.014549 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 6 23:32:47.015750 systemd[1]: Started sshd@0-10.200.20.11:22-10.200.16.10:34422.service - OpenSSH per-connection server daemon (10.200.16.10:34422). 
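The routing table that waagent dumped from /proc/net/route above encodes IPv4 addresses as little-endian hexadecimal. As an illustrative sketch (the helper name is an assumption, not part of waagent), those fields can be decoded back to dotted-quad form:

```python
import socket
import struct

def decode_proc_route_addr(hexaddr: str) -> str:
    """Decode a /proc/net/route address field (little-endian hex) to dotted quad."""
    return socket.inet_ntoa(struct.pack("<I", int(hexaddr, 16)))

# Fields taken from the routing table logged above.
print(decode_proc_route_addr("0114C80A"))  # default gateway -> 10.200.20.1
print(decode_proc_route_addr("0014C80A"))  # local network  -> 10.200.20.0
print(decode_proc_route_addr("00FFFFFF"))  # netmask        -> 255.255.255.0
```

The decoded values agree with the DHCPv4 lease reported earlier by systemd-networkd (10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16).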
Jul 6 23:32:47.569628 sshd[2224]: Accepted publickey for core from 10.200.16.10 port 34422 ssh2: RSA SHA256:3OqJfzY7AVg+q5elAE97NXiEux0mANjRjysP1FUtbfY Jul 6 23:32:47.570919 sshd-session[2224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:32:47.575446 systemd-logind[1752]: New session 3 of user core. Jul 6 23:32:47.582312 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 6 23:32:48.006133 systemd[1]: Started sshd@1-10.200.20.11:22-10.200.16.10:34424.service - OpenSSH per-connection server daemon (10.200.16.10:34424). Jul 6 23:32:48.481548 sshd[2229]: Accepted publickey for core from 10.200.16.10 port 34424 ssh2: RSA SHA256:3OqJfzY7AVg+q5elAE97NXiEux0mANjRjysP1FUtbfY Jul 6 23:32:48.482884 sshd-session[2229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:32:48.487439 systemd-logind[1752]: New session 4 of user core. Jul 6 23:32:48.496287 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 6 23:32:48.828471 sshd[2231]: Connection closed by 10.200.16.10 port 34424 Jul 6 23:32:48.829160 sshd-session[2229]: pam_unix(sshd:session): session closed for user core Jul 6 23:32:48.832358 systemd[1]: sshd@1-10.200.20.11:22-10.200.16.10:34424.service: Deactivated successfully. Jul 6 23:32:48.833900 systemd[1]: session-4.scope: Deactivated successfully. Jul 6 23:32:48.834595 systemd-logind[1752]: Session 4 logged out. Waiting for processes to exit. Jul 6 23:32:48.835756 systemd-logind[1752]: Removed session 4. Jul 6 23:32:48.926432 systemd[1]: Started sshd@2-10.200.20.11:22-10.200.16.10:34430.service - OpenSSH per-connection server daemon (10.200.16.10:34430). 
Jul 6 23:32:49.401476 sshd[2237]: Accepted publickey for core from 10.200.16.10 port 34430 ssh2: RSA SHA256:3OqJfzY7AVg+q5elAE97NXiEux0mANjRjysP1FUtbfY Jul 6 23:32:49.402730 sshd-session[2237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:32:49.408001 systemd-logind[1752]: New session 5 of user core. Jul 6 23:32:49.413285 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 6 23:32:49.743678 sshd[2239]: Connection closed by 10.200.16.10 port 34430 Jul 6 23:32:49.744345 sshd-session[2237]: pam_unix(sshd:session): session closed for user core Jul 6 23:32:49.747493 systemd[1]: sshd@2-10.200.20.11:22-10.200.16.10:34430.service: Deactivated successfully. Jul 6 23:32:49.749025 systemd[1]: session-5.scope: Deactivated successfully. Jul 6 23:32:49.749711 systemd-logind[1752]: Session 5 logged out. Waiting for processes to exit. Jul 6 23:32:49.750708 systemd-logind[1752]: Removed session 5. Jul 6 23:32:49.840367 systemd[1]: Started sshd@3-10.200.20.11:22-10.200.16.10:60412.service - OpenSSH per-connection server daemon (10.200.16.10:60412). Jul 6 23:32:50.329724 sshd[2245]: Accepted publickey for core from 10.200.16.10 port 60412 ssh2: RSA SHA256:3OqJfzY7AVg+q5elAE97NXiEux0mANjRjysP1FUtbfY Jul 6 23:32:50.331039 sshd-session[2245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:32:50.336896 systemd-logind[1752]: New session 6 of user core. Jul 6 23:32:50.342287 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 6 23:32:50.683687 sshd[2247]: Connection closed by 10.200.16.10 port 60412 Jul 6 23:32:50.682799 sshd-session[2245]: pam_unix(sshd:session): session closed for user core Jul 6 23:32:50.686292 systemd-logind[1752]: Session 6 logged out. Waiting for processes to exit. Jul 6 23:32:50.686493 systemd[1]: sshd@3-10.200.20.11:22-10.200.16.10:60412.service: Deactivated successfully. Jul 6 23:32:50.688035 systemd[1]: session-6.scope: Deactivated successfully. 
Jul 6 23:32:50.690791 systemd-logind[1752]: Removed session 6. Jul 6 23:32:50.772846 systemd[1]: Started sshd@4-10.200.20.11:22-10.200.16.10:60418.service - OpenSSH per-connection server daemon (10.200.16.10:60418). Jul 6 23:32:51.253633 sshd[2253]: Accepted publickey for core from 10.200.16.10 port 60418 ssh2: RSA SHA256:3OqJfzY7AVg+q5elAE97NXiEux0mANjRjysP1FUtbfY Jul 6 23:32:51.254904 sshd-session[2253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:32:51.258924 systemd-logind[1752]: New session 7 of user core. Jul 6 23:32:51.266276 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 6 23:32:51.602788 sudo[2256]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 6 23:32:51.603056 sudo[2256]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:32:51.623953 sudo[2256]: pam_unix(sudo:session): session closed for user root Jul 6 23:32:51.698785 sshd[2255]: Connection closed by 10.200.16.10 port 60418 Jul 6 23:32:51.697955 sshd-session[2253]: pam_unix(sshd:session): session closed for user core Jul 6 23:32:51.701030 systemd[1]: sshd@4-10.200.20.11:22-10.200.16.10:60418.service: Deactivated successfully. Jul 6 23:32:51.702702 systemd[1]: session-7.scope: Deactivated successfully. Jul 6 23:32:51.703978 systemd-logind[1752]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:32:51.704900 systemd-logind[1752]: Removed session 7. Jul 6 23:32:51.785863 systemd[1]: Started sshd@5-10.200.20.11:22-10.200.16.10:60434.service - OpenSSH per-connection server daemon (10.200.16.10:60434). Jul 6 23:32:52.279242 sshd[2262]: Accepted publickey for core from 10.200.16.10 port 60434 ssh2: RSA SHA256:3OqJfzY7AVg+q5elAE97NXiEux0mANjRjysP1FUtbfY Jul 6 23:32:52.280513 sshd-session[2262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:32:52.284393 systemd-logind[1752]: New session 8 of user core. 
Jul 6 23:32:52.292278 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 6 23:32:52.555150 sudo[2266]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 6 23:32:52.555413 sudo[2266]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:32:52.560698 sudo[2266]: pam_unix(sudo:session): session closed for user root Jul 6 23:32:52.565333 sudo[2265]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 6 23:32:52.565587 sudo[2265]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:32:52.577631 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 6 23:32:52.599851 augenrules[2288]: No rules Jul 6 23:32:52.601189 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:32:52.602205 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:32:52.603567 sudo[2265]: pam_unix(sudo:session): session closed for user root Jul 6 23:32:52.681302 sshd[2264]: Connection closed by 10.200.16.10 port 60434 Jul 6 23:32:52.681703 sshd-session[2262]: pam_unix(sshd:session): session closed for user core Jul 6 23:32:52.685952 systemd-logind[1752]: Session 8 logged out. Waiting for processes to exit. Jul 6 23:32:52.686202 systemd[1]: sshd@5-10.200.20.11:22-10.200.16.10:60434.service: Deactivated successfully. Jul 6 23:32:52.687786 systemd[1]: session-8.scope: Deactivated successfully. Jul 6 23:32:52.689800 systemd-logind[1752]: Removed session 8. Jul 6 23:32:52.767308 systemd[1]: Started sshd@6-10.200.20.11:22-10.200.16.10:60438.service - OpenSSH per-connection server daemon (10.200.16.10:60438). 
Jul 6 23:32:53.245028 sshd[2297]: Accepted publickey for core from 10.200.16.10 port 60438 ssh2: RSA SHA256:3OqJfzY7AVg+q5elAE97NXiEux0mANjRjysP1FUtbfY Jul 6 23:32:53.246312 sshd-session[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:32:53.250358 systemd-logind[1752]: New session 9 of user core. Jul 6 23:32:53.258352 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 6 23:32:53.513825 sudo[2300]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 6 23:32:53.514099 sudo[2300]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:32:53.515032 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 6 23:32:53.522390 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:32:54.058922 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:32:54.069512 (kubelet)[2317]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:32:54.110538 kubelet[2317]: E0706 23:32:54.110490 2317 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:32:54.113381 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:32:54.113661 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:32:54.114190 systemd[1]: kubelet.service: Consumed 130ms CPU time, 105.3M memory peak. 
Jul 6 23:32:54.870617 (dockerd)[2333]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 6 23:32:54.871148 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 6 23:32:54.908994 chronyd[1729]: Selected source PHC0 Jul 6 23:32:55.431407 dockerd[2333]: time="2025-07-06T23:32:55.431331240Z" level=info msg="Starting up" Jul 6 23:32:55.766954 dockerd[2333]: time="2025-07-06T23:32:55.766655958Z" level=info msg="Loading containers: start." Jul 6 23:32:55.932362 kernel: Initializing XFRM netlink socket Jul 6 23:32:56.070640 systemd-networkd[1343]: docker0: Link UP Jul 6 23:32:56.131686 dockerd[2333]: time="2025-07-06T23:32:56.131579982Z" level=info msg="Loading containers: done." Jul 6 23:32:56.156989 dockerd[2333]: time="2025-07-06T23:32:56.156860885Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 6 23:32:56.156989 dockerd[2333]: time="2025-07-06T23:32:56.156979365Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jul 6 23:32:56.157212 dockerd[2333]: time="2025-07-06T23:32:56.157112245Z" level=info msg="Daemon has completed initialization" Jul 6 23:32:56.218039 dockerd[2333]: time="2025-07-06T23:32:56.217623819Z" level=info msg="API listen on /run/docker.sock" Jul 6 23:32:56.217868 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 6 23:32:57.043210 containerd[1785]: time="2025-07-06T23:32:57.042863793Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 6 23:32:57.990935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount34883445.mount: Deactivated successfully. 
Jul 6 23:32:59.498785 containerd[1785]: time="2025-07-06T23:32:59.498717468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:32:59.504474 containerd[1785]: time="2025-07-06T23:32:59.504287593Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351716" Jul 6 23:32:59.510443 containerd[1785]: time="2025-07-06T23:32:59.510404598Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:32:59.516742 containerd[1785]: time="2025-07-06T23:32:59.516621204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:32:59.518432 containerd[1785]: time="2025-07-06T23:32:59.518242125Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 2.475334012s" Jul 6 23:32:59.518432 containerd[1785]: time="2025-07-06T23:32:59.518286365Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\"" Jul 6 23:32:59.520170 containerd[1785]: time="2025-07-06T23:32:59.520131207Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 6 23:33:01.020491 containerd[1785]: time="2025-07-06T23:33:01.020437387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:01.025137 containerd[1785]: time="2025-07-06T23:33:01.025065392Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537623" Jul 6 23:33:01.030025 containerd[1785]: time="2025-07-06T23:33:01.029971396Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:01.037768 containerd[1785]: time="2025-07-06T23:33:01.037709083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:01.038935 containerd[1785]: time="2025-07-06T23:33:01.038791404Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.518618277s" Jul 6 23:33:01.038935 containerd[1785]: time="2025-07-06T23:33:01.038831004Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\"" Jul 6 23:33:01.039483 containerd[1785]: time="2025-07-06T23:33:01.039453244Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 6 23:33:02.336189 containerd[1785]: time="2025-07-06T23:33:02.335612363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:02.340310 containerd[1785]: time="2025-07-06T23:33:02.340271127Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293515" Jul 6 23:33:02.346110 containerd[1785]: time="2025-07-06T23:33:02.346066732Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:02.355244 containerd[1785]: time="2025-07-06T23:33:02.355189500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:02.356922 containerd[1785]: time="2025-07-06T23:33:02.356872862Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.317379658s" Jul 6 23:33:02.357063 containerd[1785]: time="2025-07-06T23:33:02.357048062Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\"" Jul 6 23:33:02.359198 containerd[1785]: time="2025-07-06T23:33:02.358949543Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 6 23:33:03.569829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1709931637.mount: Deactivated successfully. 
Jul 6 23:33:03.938237 containerd[1785]: time="2025-07-06T23:33:03.938086326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:03.941847 containerd[1785]: time="2025-07-06T23:33:03.941784257Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199472" Jul 6 23:33:03.950682 containerd[1785]: time="2025-07-06T23:33:03.950615363Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:03.957316 containerd[1785]: time="2025-07-06T23:33:03.957238503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:03.958045 containerd[1785]: time="2025-07-06T23:33:03.957889785Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 1.598901282s" Jul 6 23:33:03.958045 containerd[1785]: time="2025-07-06T23:33:03.957925625Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\"" Jul 6 23:33:03.958789 containerd[1785]: time="2025-07-06T23:33:03.958554787Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 6 23:33:04.254585 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 6 23:33:04.262395 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 6 23:33:04.387896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:33:04.400496 (kubelet)[2596]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:33:04.437879 kubelet[2596]: E0706 23:33:04.437821 2596 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:33:04.440974 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:33:04.441278 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:33:04.441803 systemd[1]: kubelet.service: Consumed 130ms CPU time, 107M memory peak. Jul 6 23:33:05.129710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1141041202.mount: Deactivated successfully. 
Jul 6 23:33:07.174915 containerd[1785]: time="2025-07-06T23:33:07.174851638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:07.181688 containerd[1785]: time="2025-07-06T23:33:07.181429525Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Jul 6 23:33:07.195276 containerd[1785]: time="2025-07-06T23:33:07.195216381Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:07.202376 containerd[1785]: time="2025-07-06T23:33:07.202299149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:07.203585 containerd[1785]: time="2025-07-06T23:33:07.203552310Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 3.244967683s" Jul 6 23:33:07.203633 containerd[1785]: time="2025-07-06T23:33:07.203587990Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jul 6 23:33:07.204592 containerd[1785]: time="2025-07-06T23:33:07.204520511Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 6 23:33:07.863388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1138176147.mount: Deactivated successfully. 
Jul 6 23:33:07.904168 containerd[1785]: time="2025-07-06T23:33:07.903780670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:07.909645 containerd[1785]: time="2025-07-06T23:33:07.909349276Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jul 6 23:33:07.917094 containerd[1785]: time="2025-07-06T23:33:07.917020405Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:07.925726 containerd[1785]: time="2025-07-06T23:33:07.925666335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:07.926701 containerd[1785]: time="2025-07-06T23:33:07.926352175Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 721.671023ms" Jul 6 23:33:07.926701 containerd[1785]: time="2025-07-06T23:33:07.926387375Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 6 23:33:07.926885 containerd[1785]: time="2025-07-06T23:33:07.926851776Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 6 23:33:08.769002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2579154256.mount: Deactivated successfully. 
Jul 6 23:33:12.157021 containerd[1785]: time="2025-07-06T23:33:12.155799705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:12.162202 containerd[1785]: time="2025-07-06T23:33:12.162111809Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334599" Jul 6 23:33:12.168305 containerd[1785]: time="2025-07-06T23:33:12.168245553Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:12.179712 containerd[1785]: time="2025-07-06T23:33:12.179651757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:33:12.181308 containerd[1785]: time="2025-07-06T23:33:12.181141962Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 4.254236986s" Jul 6 23:33:12.181308 containerd[1785]: time="2025-07-06T23:33:12.181179283Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jul 6 23:33:14.504843 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 6 23:33:14.512326 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:33:14.623255 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 6 23:33:14.625230 (kubelet)[2743]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:33:14.691063 kubelet[2743]: E0706 23:33:14.691023 2743 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:33:14.693842 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:33:14.694084 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:33:14.694619 systemd[1]: kubelet.service: Consumed 120ms CPU time, 107.4M memory peak. Jul 6 23:33:16.664144 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jul 6 23:33:16.857687 update_engine[1754]: I20250706 23:33:16.857115 1754 update_attempter.cc:509] Updating boot flags... Jul 6 23:33:16.948273 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2766) Jul 6 23:33:17.138208 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2768) Jul 6 23:33:17.293930 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:33:17.294985 systemd[1]: kubelet.service: Consumed 120ms CPU time, 107.4M memory peak. Jul 6 23:33:17.303435 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:33:17.341686 systemd[1]: Reload requested from client PID 2872 ('systemctl') (unit session-9.scope)... Jul 6 23:33:17.341706 systemd[1]: Reloading... Jul 6 23:33:17.468163 zram_generator::config[2931]: No configuration found. 
Jul 6 23:33:17.568965 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:33:17.673263 systemd[1]: Reloading finished in 331 ms. Jul 6 23:33:17.719094 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:33:17.725160 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:33:17.730809 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:33:17.731093 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:33:17.731164 systemd[1]: kubelet.service: Consumed 93ms CPU time, 94.9M memory peak. Jul 6 23:33:17.733053 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:33:17.858176 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:33:17.862398 (kubelet)[2988]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:33:17.900685 kubelet[2988]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:33:17.900685 kubelet[2988]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:33:17.900685 kubelet[2988]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 6 23:33:17.901037 kubelet[2988]: I0706 23:33:17.900733 2988 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:33:18.718989 kubelet[2988]: I0706 23:33:18.717874 2988 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 6 23:33:18.718989 kubelet[2988]: I0706 23:33:18.717909 2988 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:33:18.718989 kubelet[2988]: I0706 23:33:18.718312 2988 server.go:956] "Client rotation is on, will bootstrap in background" Jul 6 23:33:18.733885 kubelet[2988]: E0706 23:33:18.733848 2988 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 6 23:33:18.735806 kubelet[2988]: I0706 23:33:18.735787 2988 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:33:18.745366 kubelet[2988]: E0706 23:33:18.745320 2988 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:33:18.745526 kubelet[2988]: I0706 23:33:18.745513 2988 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:33:18.748700 kubelet[2988]: I0706 23:33:18.748671 2988 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:33:18.749986 kubelet[2988]: I0706 23:33:18.749947 2988 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:33:18.750275 kubelet[2988]: I0706 23:33:18.750090 2988 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.1-a-3b9b3bec0f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:33:18.750420 kubelet[2988]: I0706 23:33:18.750407 2988 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 
23:33:18.750476 kubelet[2988]: I0706 23:33:18.750468 2988 container_manager_linux.go:303] "Creating device plugin manager" Jul 6 23:33:18.750644 kubelet[2988]: I0706 23:33:18.750632 2988 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:33:18.753328 kubelet[2988]: I0706 23:33:18.753307 2988 kubelet.go:480] "Attempting to sync node with API server" Jul 6 23:33:18.753437 kubelet[2988]: I0706 23:33:18.753424 2988 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:33:18.753503 kubelet[2988]: I0706 23:33:18.753495 2988 kubelet.go:386] "Adding apiserver pod source" Jul 6 23:33:18.754772 kubelet[2988]: I0706 23:33:18.754757 2988 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:33:18.757589 kubelet[2988]: E0706 23:33:18.757547 2988 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.1-a-3b9b3bec0f&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 6 23:33:18.757674 kubelet[2988]: I0706 23:33:18.757662 2988 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 6 23:33:18.758278 kubelet[2988]: I0706 23:33:18.758254 2988 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 6 23:33:18.758341 kubelet[2988]: W0706 23:33:18.758321 2988 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 6 23:33:18.761059 kubelet[2988]: I0706 23:33:18.760838 2988 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 6 23:33:18.761059 kubelet[2988]: I0706 23:33:18.760889 2988 server.go:1289] "Started kubelet"
Jul 6 23:33:18.766618 kubelet[2988]: I0706 23:33:18.766463 2988 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 6 23:33:18.769787 kubelet[2988]: E0706 23:33:18.768517 2988 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.11:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.1-a-3b9b3bec0f.184fcd9013c1abd3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.1-a-3b9b3bec0f,UID:ci-4230.2.1-a-3b9b3bec0f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.1-a-3b9b3bec0f,},FirstTimestamp:2025-07-06 23:33:18.760856531 +0000 UTC m=+0.895193667,LastTimestamp:2025-07-06 23:33:18.760856531 +0000 UTC m=+0.895193667,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.1-a-3b9b3bec0f,}"
Jul 6 23:33:18.771184 kubelet[2988]: E0706 23:33:18.770923 2988 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 6 23:33:18.773236 kubelet[2988]: I0706 23:33:18.773109 2988 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 6 23:33:18.773839 kubelet[2988]: E0706 23:33:18.773794 2988 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 6 23:33:18.774156 kubelet[2988]: I0706 23:33:18.773960 2988 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 6 23:33:18.774291 kubelet[2988]: I0706 23:33:18.774265 2988 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 6 23:33:18.774424 kubelet[2988]: I0706 23:33:18.774409 2988 server.go:317] "Adding debug handlers to kubelet server"
Jul 6 23:33:18.775022 kubelet[2988]: I0706 23:33:18.775001 2988 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 6 23:33:18.775410 kubelet[2988]: E0706 23:33:18.775385 2988 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-3b9b3bec0f\" not found"
Jul 6 23:33:18.776799 kubelet[2988]: I0706 23:33:18.776531 2988 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 6 23:33:18.776968 kubelet[2988]: I0706 23:33:18.776943 2988 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 6 23:33:18.779053 kubelet[2988]: I0706 23:33:18.779014 2988 factory.go:223] Registration of the systemd container factory successfully
Jul 6 23:33:18.779163 kubelet[2988]: I0706 23:33:18.779133 2988 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 6 23:33:18.779608 kubelet[2988]: E0706 23:33:18.779558 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.1-a-3b9b3bec0f?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="200ms"
Jul 6 23:33:18.779938 kubelet[2988]: I0706 23:33:18.779903 2988 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 6 23:33:18.779991 kubelet[2988]: I0706 23:33:18.779971 2988 reconciler.go:26] "Reconciler: start to sync state"
Jul 6 23:33:18.781269 kubelet[2988]: E0706 23:33:18.780928 2988 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 6 23:33:18.781782 kubelet[2988]: I0706 23:33:18.781738 2988 factory.go:223] Registration of the containerd container factory successfully
Jul 6 23:33:18.792848 kubelet[2988]: I0706 23:33:18.792817 2988 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 6 23:33:18.793297 kubelet[2988]: I0706 23:33:18.792993 2988 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 6 23:33:18.793297 kubelet[2988]: I0706 23:33:18.793024 2988 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 6 23:33:18.793297 kubelet[2988]: I0706 23:33:18.793032 2988 kubelet.go:2436] "Starting kubelet main sync loop"
Jul 6 23:33:18.793297 kubelet[2988]: E0706 23:33:18.793072 2988 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 6 23:33:18.796972 kubelet[2988]: E0706 23:33:18.796937 2988 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jul 6 23:33:18.806752 kubelet[2988]: I0706 23:33:18.806727 2988 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 6 23:33:18.806947 kubelet[2988]: I0706 23:33:18.806934 2988 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 6 23:33:18.807017 kubelet[2988]: I0706 23:33:18.807009 2988 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:33:18.813878 kubelet[2988]: I0706 23:33:18.813851 2988 policy_none.go:49] "None policy: Start"
Jul 6 23:33:18.814013 kubelet[2988]: I0706 23:33:18.814002 2988 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 6 23:33:18.814162 kubelet[2988]: I0706 23:33:18.814061 2988 state_mem.go:35] "Initializing new in-memory state store"
Jul 6 23:33:18.824506 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 6 23:33:18.838011 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 6 23:33:18.841175 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 6 23:33:18.852931 kubelet[2988]: E0706 23:33:18.852902 2988 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jul 6 23:33:18.853276 kubelet[2988]: I0706 23:33:18.853262 2988 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 6 23:33:18.853386 kubelet[2988]: I0706 23:33:18.853353 2988 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 6 23:33:18.854563 kubelet[2988]: I0706 23:33:18.854290 2988 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 6 23:33:18.856090 kubelet[2988]: E0706 23:33:18.856074 2988 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 6 23:33:18.856235 kubelet[2988]: E0706 23:33:18.856222 2988 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.2.1-a-3b9b3bec0f\" not found"
Jul 6 23:33:18.908563 systemd[1]: Created slice kubepods-burstable-podfe565bc3a356744b9ef6ba5613e79a93.slice - libcontainer container kubepods-burstable-podfe565bc3a356744b9ef6ba5613e79a93.slice.
Jul 6 23:33:18.916241 kubelet[2988]: E0706 23:33:18.915889 2988 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-3b9b3bec0f\" not found" node="ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:18.921050 systemd[1]: Created slice kubepods-burstable-podaeff9577cb2a0fbca6033ec6ea6e0d3e.slice - libcontainer container kubepods-burstable-podaeff9577cb2a0fbca6033ec6ea6e0d3e.slice.
Jul 6 23:33:18.931545 kubelet[2988]: E0706 23:33:18.931508 2988 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-3b9b3bec0f\" not found" node="ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:18.934509 systemd[1]: Created slice kubepods-burstable-poda28136306231d79e98762bab5879d115.slice - libcontainer container kubepods-burstable-poda28136306231d79e98762bab5879d115.slice.
Jul 6 23:33:18.936419 kubelet[2988]: E0706 23:33:18.936378 2988 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-3b9b3bec0f\" not found" node="ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:18.955209 kubelet[2988]: I0706 23:33:18.955183 2988 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:18.955589 kubelet[2988]: E0706 23:33:18.955552 2988 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:18.980392 kubelet[2988]: E0706 23:33:18.980293 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.1-a-3b9b3bec0f?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="400ms"
Jul 6 23:33:18.981614 kubelet[2988]: I0706 23:33:18.981588 2988 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a28136306231d79e98762bab5879d115-k8s-certs\") pod \"kube-apiserver-ci-4230.2.1-a-3b9b3bec0f\" (UID: \"a28136306231d79e98762bab5879d115\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:18.981673 kubelet[2988]: I0706 23:33:18.981619 2988 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe565bc3a356744b9ef6ba5613e79a93-ca-certs\") pod \"kube-controller-manager-ci-4230.2.1-a-3b9b3bec0f\" (UID: \"fe565bc3a356744b9ef6ba5613e79a93\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:18.981673 kubelet[2988]: I0706 23:33:18.981637 2988 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fe565bc3a356744b9ef6ba5613e79a93-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.1-a-3b9b3bec0f\" (UID: \"fe565bc3a356744b9ef6ba5613e79a93\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:18.981673 kubelet[2988]: I0706 23:33:18.981652 2988 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fe565bc3a356744b9ef6ba5613e79a93-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.1-a-3b9b3bec0f\" (UID: \"fe565bc3a356744b9ef6ba5613e79a93\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:18.981673 kubelet[2988]: I0706 23:33:18.981666 2988 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe565bc3a356744b9ef6ba5613e79a93-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.1-a-3b9b3bec0f\" (UID: \"fe565bc3a356744b9ef6ba5613e79a93\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:18.981760 kubelet[2988]: I0706 23:33:18.981723 2988 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fe565bc3a356744b9ef6ba5613e79a93-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.1-a-3b9b3bec0f\" (UID: \"fe565bc3a356744b9ef6ba5613e79a93\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:18.981760 kubelet[2988]: I0706 23:33:18.981739 2988 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a28136306231d79e98762bab5879d115-ca-certs\") pod \"kube-apiserver-ci-4230.2.1-a-3b9b3bec0f\" (UID: \"a28136306231d79e98762bab5879d115\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:18.981760 kubelet[2988]: I0706 23:33:18.981755 2988 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a28136306231d79e98762bab5879d115-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.1-a-3b9b3bec0f\" (UID: \"a28136306231d79e98762bab5879d115\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:18.981858 kubelet[2988]: I0706 23:33:18.981770 2988 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aeff9577cb2a0fbca6033ec6ea6e0d3e-kubeconfig\") pod \"kube-scheduler-ci-4230.2.1-a-3b9b3bec0f\" (UID: \"aeff9577cb2a0fbca6033ec6ea6e0d3e\") " pod="kube-system/kube-scheduler-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:19.157690 kubelet[2988]: I0706 23:33:19.157606 2988 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:19.157991 kubelet[2988]: E0706 23:33:19.157953 2988 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:19.217540 containerd[1785]: time="2025-07-06T23:33:19.217459694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.1-a-3b9b3bec0f,Uid:fe565bc3a356744b9ef6ba5613e79a93,Namespace:kube-system,Attempt:0,}"
Jul 6 23:33:19.233264 containerd[1785]: time="2025-07-06T23:33:19.233099196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.1-a-3b9b3bec0f,Uid:aeff9577cb2a0fbca6033ec6ea6e0d3e,Namespace:kube-system,Attempt:0,}"
Jul 6 23:33:19.238469 containerd[1785]: time="2025-07-06T23:33:19.238301324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.1-a-3b9b3bec0f,Uid:a28136306231d79e98762bab5879d115,Namespace:kube-system,Attempt:0,}"
Jul 6 23:33:19.381664 kubelet[2988]: E0706 23:33:19.381609 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.1-a-3b9b3bec0f?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="800ms"
Jul 6 23:33:19.560920 kubelet[2988]: I0706 23:33:19.560870 2988 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:19.561361 kubelet[2988]: E0706 23:33:19.561317 2988 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:19.664790 kubelet[2988]: E0706 23:33:19.664744 2988 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 6 23:33:19.988225 kubelet[2988]: E0706 23:33:19.988084 2988 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jul 6 23:33:20.046692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3588559173.mount: Deactivated successfully.
Jul 6 23:33:20.084623 containerd[1785]: time="2025-07-06T23:33:20.084232257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:33:20.110065 containerd[1785]: time="2025-07-06T23:33:20.110000534Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Jul 6 23:33:20.121936 containerd[1785]: time="2025-07-06T23:33:20.121877391Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:33:20.137157 containerd[1785]: time="2025-07-06T23:33:20.136626853Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:33:20.144091 containerd[1785]: time="2025-07-06T23:33:20.144056023Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 6 23:33:20.151895 kubelet[2988]: E0706 23:33:20.151857 2988 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 6 23:33:20.152227 containerd[1785]: time="2025-07-06T23:33:20.152185315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:33:20.153066 containerd[1785]: time="2025-07-06T23:33:20.152997756Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 935.456542ms"
Jul 6 23:33:20.160329 containerd[1785]: time="2025-07-06T23:33:20.160250726Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:33:20.177158 containerd[1785]: time="2025-07-06T23:33:20.176961870Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 6 23:33:20.178010 containerd[1785]: time="2025-07-06T23:33:20.177972352Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 944.776475ms"
Jul 6 23:33:20.183065 kubelet[2988]: E0706 23:33:20.182974 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.1-a-3b9b3bec0f?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="1.6s"
Jul 6 23:33:20.186453 kubelet[2988]: E0706 23:33:20.186420 2988 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.1-a-3b9b3bec0f&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 6 23:33:20.214886 containerd[1785]: time="2025-07-06T23:33:20.214830805Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 976.451721ms"
Jul 6 23:33:20.363520 kubelet[2988]: I0706 23:33:20.363478 2988 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:20.363884 kubelet[2988]: E0706 23:33:20.363855 2988 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:20.658802 containerd[1785]: time="2025-07-06T23:33:20.658570841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:33:20.658802 containerd[1785]: time="2025-07-06T23:33:20.658664961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:33:20.658802 containerd[1785]: time="2025-07-06T23:33:20.658731401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:33:20.660257 containerd[1785]: time="2025-07-06T23:33:20.660195844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:33:20.663496 containerd[1785]: time="2025-07-06T23:33:20.662156366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:33:20.663496 containerd[1785]: time="2025-07-06T23:33:20.663464568Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:33:20.663821 containerd[1785]: time="2025-07-06T23:33:20.663477528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:33:20.663821 containerd[1785]: time="2025-07-06T23:33:20.663615848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:33:20.670018 containerd[1785]: time="2025-07-06T23:33:20.669707777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:33:20.670018 containerd[1785]: time="2025-07-06T23:33:20.669866497Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:33:20.670018 containerd[1785]: time="2025-07-06T23:33:20.669879377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:33:20.670411 containerd[1785]: time="2025-07-06T23:33:20.670258258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:33:20.697384 systemd[1]: Started cri-containerd-1c7bada416047e3572d65e3c3c695aa1591ffb96d832fce3b39f5da3b902c03b.scope - libcontainer container 1c7bada416047e3572d65e3c3c695aa1591ffb96d832fce3b39f5da3b902c03b.
Jul 6 23:33:20.702894 systemd[1]: Started cri-containerd-7435bdb8319278c8118cb2e09ace2f4599364908540d1c1348fb44b5f43d9110.scope - libcontainer container 7435bdb8319278c8118cb2e09ace2f4599364908540d1c1348fb44b5f43d9110.
Jul 6 23:33:20.704111 systemd[1]: Started cri-containerd-de0a62141923a6c097c0f9661884ca0038bf6065286101222b8a5a0034347d23.scope - libcontainer container de0a62141923a6c097c0f9661884ca0038bf6065286101222b8a5a0034347d23.
Jul 6 23:33:20.746854 containerd[1785]: time="2025-07-06T23:33:20.746803928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.1-a-3b9b3bec0f,Uid:a28136306231d79e98762bab5879d115,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c7bada416047e3572d65e3c3c695aa1591ffb96d832fce3b39f5da3b902c03b\""
Jul 6 23:33:20.762189 containerd[1785]: time="2025-07-06T23:33:20.762112830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.1-a-3b9b3bec0f,Uid:fe565bc3a356744b9ef6ba5613e79a93,Namespace:kube-system,Attempt:0,} returns sandbox id \"de0a62141923a6c097c0f9661884ca0038bf6065286101222b8a5a0034347d23\""
Jul 6 23:33:20.766856 containerd[1785]: time="2025-07-06T23:33:20.766751756Z" level=info msg="CreateContainer within sandbox \"1c7bada416047e3572d65e3c3c695aa1591ffb96d832fce3b39f5da3b902c03b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 6 23:33:20.767830 containerd[1785]: time="2025-07-06T23:33:20.767398197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.1-a-3b9b3bec0f,Uid:aeff9577cb2a0fbca6033ec6ea6e0d3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"7435bdb8319278c8118cb2e09ace2f4599364908540d1c1348fb44b5f43d9110\""
Jul 6 23:33:20.773828 containerd[1785]: time="2025-07-06T23:33:20.773778486Z" level=info msg="CreateContainer within sandbox \"de0a62141923a6c097c0f9661884ca0038bf6065286101222b8a5a0034347d23\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 6 23:33:20.782653 containerd[1785]: time="2025-07-06T23:33:20.782506019Z" level=info msg="CreateContainer within sandbox \"7435bdb8319278c8118cb2e09ace2f4599364908540d1c1348fb44b5f43d9110\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 6 23:33:20.883789 kubelet[2988]: E0706 23:33:20.883742 2988 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jul 6 23:33:20.902151 containerd[1785]: time="2025-07-06T23:33:20.901937270Z" level=info msg="CreateContainer within sandbox \"1c7bada416047e3572d65e3c3c695aa1591ffb96d832fce3b39f5da3b902c03b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2ed5657996e2d2c63c96c3c80e0ece0f68f0320cb07aee248b12810fe9688871\""
Jul 6 23:33:20.902658 containerd[1785]: time="2025-07-06T23:33:20.902633471Z" level=info msg="StartContainer for \"2ed5657996e2d2c63c96c3c80e0ece0f68f0320cb07aee248b12810fe9688871\""
Jul 6 23:33:20.917182 containerd[1785]: time="2025-07-06T23:33:20.916750371Z" level=info msg="CreateContainer within sandbox \"7435bdb8319278c8118cb2e09ace2f4599364908540d1c1348fb44b5f43d9110\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ca57759567b8cda19144fac14a111dde2dc0d2f869c525ca2dbde03c0dd27a2e\""
Jul 6 23:33:20.918265 containerd[1785]: time="2025-07-06T23:33:20.918009973Z" level=info msg="StartContainer for \"ca57759567b8cda19144fac14a111dde2dc0d2f869c525ca2dbde03c0dd27a2e\""
Jul 6 23:33:20.921179 containerd[1785]: time="2025-07-06T23:33:20.921037858Z" level=info msg="CreateContainer within sandbox \"de0a62141923a6c097c0f9661884ca0038bf6065286101222b8a5a0034347d23\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"00a0b197c1939ae3dd1ce94234c7ec93d527a79f2dbae6f5509232f352ab8c5e\""
Jul 6 23:33:20.922382 containerd[1785]: time="2025-07-06T23:33:20.922350300Z" level=info msg="StartContainer for \"00a0b197c1939ae3dd1ce94234c7ec93d527a79f2dbae6f5509232f352ab8c5e\""
Jul 6 23:33:20.931637 systemd[1]: Started cri-containerd-2ed5657996e2d2c63c96c3c80e0ece0f68f0320cb07aee248b12810fe9688871.scope - libcontainer container 2ed5657996e2d2c63c96c3c80e0ece0f68f0320cb07aee248b12810fe9688871.
Jul 6 23:33:20.951340 systemd[1]: Started cri-containerd-00a0b197c1939ae3dd1ce94234c7ec93d527a79f2dbae6f5509232f352ab8c5e.scope - libcontainer container 00a0b197c1939ae3dd1ce94234c7ec93d527a79f2dbae6f5509232f352ab8c5e.
Jul 6 23:33:20.973488 systemd[1]: Started cri-containerd-ca57759567b8cda19144fac14a111dde2dc0d2f869c525ca2dbde03c0dd27a2e.scope - libcontainer container ca57759567b8cda19144fac14a111dde2dc0d2f869c525ca2dbde03c0dd27a2e.
Jul 6 23:33:21.341834 containerd[1785]: time="2025-07-06T23:33:21.341600381Z" level=info msg="StartContainer for \"00a0b197c1939ae3dd1ce94234c7ec93d527a79f2dbae6f5509232f352ab8c5e\" returns successfully"
Jul 6 23:33:21.341834 containerd[1785]: time="2025-07-06T23:33:21.341622741Z" level=info msg="StartContainer for \"2ed5657996e2d2c63c96c3c80e0ece0f68f0320cb07aee248b12810fe9688871\" returns successfully"
Jul 6 23:33:21.341834 containerd[1785]: time="2025-07-06T23:33:21.341626781Z" level=info msg="StartContainer for \"ca57759567b8cda19144fac14a111dde2dc0d2f869c525ca2dbde03c0dd27a2e\" returns successfully"
Jul 6 23:33:21.816228 kubelet[2988]: E0706 23:33:21.816006 2988 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-3b9b3bec0f\" not found" node="ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:21.818926 kubelet[2988]: E0706 23:33:21.818895 2988 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-3b9b3bec0f\" not found" node="ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:21.822604 kubelet[2988]: E0706 23:33:21.822578 2988 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-3b9b3bec0f\" not found" node="ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:21.969562 kubelet[2988]: I0706 23:33:21.969508 2988 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:22.825311 kubelet[2988]: E0706 23:33:22.825275 2988 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-3b9b3bec0f\" not found" node="ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:22.825641 kubelet[2988]: E0706 23:33:22.825606 2988 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-3b9b3bec0f\" not found" node="ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:22.827781 kubelet[2988]: E0706 23:33:22.827749 2988 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-3b9b3bec0f\" not found" node="ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:23.418228 kubelet[2988]: I0706 23:33:23.418193 2988 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:23.476509 kubelet[2988]: I0706 23:33:23.476447 2988 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:23.505778 kubelet[2988]: E0706 23:33:23.505490 2988 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.2.1-a-3b9b3bec0f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:23.505778 kubelet[2988]: I0706 23:33:23.505530 2988 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:23.512513 kubelet[2988]: E0706 23:33:23.512267 2988 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.1-a-3b9b3bec0f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:23.512513 kubelet[2988]: I0706 23:33:23.512302 2988 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:23.518458 kubelet[2988]: E0706 23:33:23.518417 2988 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.1-a-3b9b3bec0f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:23.765210 kubelet[2988]: I0706 23:33:23.765149 2988 apiserver.go:52] "Watching apiserver"
Jul 6 23:33:23.780832 kubelet[2988]: I0706 23:33:23.780791 2988 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 6 23:33:23.824923 kubelet[2988]: I0706 23:33:23.824616 2988 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:23.824923 kubelet[2988]: I0706 23:33:23.824814 2988 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:23.828982 kubelet[2988]: E0706 23:33:23.828932 2988 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.1-a-3b9b3bec0f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:23.829384 kubelet[2988]: E0706 23:33:23.829175 2988 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.1-a-3b9b3bec0f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:25.197685 kubelet[2988]: I0706 23:33:25.196605 2988 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:25.205649 kubelet[2988]: I0706 23:33:25.205611 2988 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jul 6 23:33:25.339903 kubelet[2988]: I0706 23:33:25.339860 2988 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:25.348870 kubelet[2988]: I0706 23:33:25.348831 2988 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jul 6 23:33:25.411812 systemd[1]: Reload requested from client PID 3273 ('systemctl') (unit session-9.scope)...
Jul 6 23:33:25.411828 systemd[1]: Reloading...
Jul 6 23:33:25.519167 zram_generator::config[3323]: No configuration found.
Jul 6 23:33:25.638271 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:33:25.753465 systemd[1]: Reloading finished in 341 ms.
Jul 6 23:33:25.774711 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:33:25.790842 systemd[1]: kubelet.service: Deactivated successfully.
Jul 6 23:33:25.791091 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:33:25.791161 systemd[1]: kubelet.service: Consumed 1.293s CPU time, 127.6M memory peak.
Jul 6 23:33:25.798397 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:33:25.910412 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:33:25.915803 (kubelet)[3384]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 6 23:33:25.954158 kubelet[3384]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:33:25.954158 kubelet[3384]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 6 23:33:25.954158 kubelet[3384]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:33:25.954158 kubelet[3384]: I0706 23:33:25.953614 3384 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 6 23:33:25.962023 kubelet[3384]: I0706 23:33:25.961977 3384 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 6 23:33:25.962023 kubelet[3384]: I0706 23:33:25.962014 3384 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 6 23:33:25.962339 kubelet[3384]: I0706 23:33:25.962316 3384 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 6 23:33:25.963852 kubelet[3384]: I0706 23:33:25.963826 3384 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jul 6 23:33:25.966370 kubelet[3384]: I0706 23:33:25.966183 3384 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 6 23:33:25.970058 kubelet[3384]: E0706 23:33:25.969976 3384 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 6 23:33:25.970058 kubelet[3384]: I0706 23:33:25.970056 3384 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 6 23:33:25.973136 kubelet[3384]: I0706 23:33:25.973096 3384 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 6 23:33:25.975153 kubelet[3384]: I0706 23:33:25.973451 3384 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 6 23:33:25.975153 kubelet[3384]: I0706 23:33:25.973481 3384 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.1-a-3b9b3bec0f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 6 23:33:25.975153 kubelet[3384]: I0706 23:33:25.973808 3384 topology_manager.go:138] "Creating topology manager with none policy"
Jul 6 23:33:25.975153 kubelet[3384]: I0706 23:33:25.973819 3384 container_manager_linux.go:303] "Creating device plugin manager"
Jul 6 23:33:25.975153 kubelet[3384]: I0706 23:33:25.973880 3384 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:33:25.975371 kubelet[3384]: I0706 23:33:25.974034 3384 kubelet.go:480] "Attempting to sync node with API server"
Jul 6 23:33:25.975371 kubelet[3384]: I0706 23:33:25.974046 3384 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 6 23:33:25.975371 kubelet[3384]: I0706 23:33:25.974069 3384 kubelet.go:386] "Adding apiserver pod source"
Jul 6 23:33:25.975371 kubelet[3384]: I0706 23:33:25.974081 3384 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 6 23:33:25.976926 kubelet[3384]: I0706 23:33:25.976908 3384 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jul 6 23:33:25.977690 kubelet[3384]: I0706 23:33:25.977674 3384 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 6 23:33:25.982666 kubelet[3384]: I0706 23:33:25.982647 3384 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 6 23:33:25.982840 kubelet[3384]: I0706 23:33:25.982830 3384 server.go:1289] "Started kubelet"
Jul 6 23:33:25.985309 kubelet[3384]: I0706 23:33:25.985290 3384 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 6 23:33:25.988056 kubelet[3384]: I0706 23:33:25.988015 3384 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 6 23:33:25.988846 kubelet[3384]: I0706 23:33:25.988817 3384 server.go:317] "Adding debug handlers to kubelet server"
Jul 6 23:33:26.000302 kubelet[3384]: I0706 23:33:25.999764 3384 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 6 23:33:26.000302 kubelet[3384]: I0706 23:33:25.999975 3384 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 6 23:33:26.000538 kubelet[3384]: I0706 23:33:26.000494 3384 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 6 23:33:26.011636 kubelet[3384]: I0706 23:33:26.011598 3384 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 6 23:33:26.012129 kubelet[3384]: E0706 23:33:26.011846 3384 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-3b9b3bec0f\" not found"
Jul 6 23:33:26.014854 kubelet[3384]: I0706 23:33:26.014824 3384 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 6 23:33:26.014988 kubelet[3384]: I0706 23:33:26.014970 3384 reconciler.go:26] "Reconciler: start to sync state"
Jul 6 23:33:26.018653 kubelet[3384]: I0706 23:33:26.017882 3384 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 6 23:33:26.019208 kubelet[3384]: I0706 23:33:26.019173 3384 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 6 23:33:26.019258 kubelet[3384]: I0706 23:33:26.019201 3384 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 6 23:33:26.019258 kubelet[3384]: I0706 23:33:26.019236 3384 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 6 23:33:26.019258 kubelet[3384]: I0706 23:33:26.019243 3384 kubelet.go:2436] "Starting kubelet main sync loop"
Jul 6 23:33:26.019316 kubelet[3384]: E0706 23:33:26.019283 3384 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 6 23:33:26.027082 kubelet[3384]: I0706 23:33:26.023937 3384 factory.go:223] Registration of the systemd container factory successfully
Jul 6 23:33:26.028922 kubelet[3384]: I0706 23:33:26.027284 3384 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 6 23:33:26.031360 kubelet[3384]: I0706 23:33:26.031308 3384 factory.go:223] Registration of the containerd container factory successfully
Jul 6 23:33:26.089608 kubelet[3384]: I0706 23:33:26.089566 3384 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 6 23:33:26.090310 kubelet[3384]: I0706 23:33:26.089684 3384 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 6 23:33:26.090310 kubelet[3384]: I0706 23:33:26.089707 3384 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:33:26.090310 kubelet[3384]: I0706 23:33:26.089843 3384 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 6 23:33:26.090310 kubelet[3384]: I0706 23:33:26.089853 3384 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 6 23:33:26.090310 kubelet[3384]: I0706 23:33:26.089869 3384 policy_none.go:49] "None policy: Start"
Jul 6 23:33:26.090310 kubelet[3384]: I0706 23:33:26.089877 3384 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 6 23:33:26.090310 kubelet[3384]: I0706 23:33:26.089885 3384 state_mem.go:35] "Initializing new in-memory state store"
Jul 6 23:33:26.090310 kubelet[3384]: I0706 23:33:26.089979 3384 state_mem.go:75] "Updated machine memory state"
Jul 6 23:33:26.095588 kubelet[3384]: E0706 23:33:26.095565 3384 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jul 6 23:33:26.097237 kubelet[3384]: I0706 23:33:26.096841 3384 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 6 23:33:26.097237 kubelet[3384]: I0706 23:33:26.096856 3384 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 6 23:33:26.098384 kubelet[3384]: I0706 23:33:26.097597 3384 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 6 23:33:26.100017 kubelet[3384]: E0706 23:33:26.099995 3384 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 6 23:33:26.120035 kubelet[3384]: I0706 23:33:26.119996 3384 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:26.120826 kubelet[3384]: I0706 23:33:26.120338 3384 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:26.120826 kubelet[3384]: I0706 23:33:26.120069 3384 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:26.134709 kubelet[3384]: I0706 23:33:26.134570 3384 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jul 6 23:33:26.135258 kubelet[3384]: I0706 23:33:26.135146 3384 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jul 6 23:33:26.135258 kubelet[3384]: E0706 23:33:26.135173 3384 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.1-a-3b9b3bec0f\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:26.135258 kubelet[3384]: E0706 23:33:26.135206 3384 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.2.1-a-3b9b3bec0f\" already exists" pod="kube-system/kube-controller-manager-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:26.135258 kubelet[3384]: I0706 23:33:26.134915 3384 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jul 6 23:33:26.199518 kubelet[3384]: I0706 23:33:26.199445 3384 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:26.217283 kubelet[3384]: I0706 23:33:26.217234 3384 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:26.217595 kubelet[3384]: I0706 23:33:26.217521 3384 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:26.316788 kubelet[3384]: I0706 23:33:26.316659 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a28136306231d79e98762bab5879d115-k8s-certs\") pod \"kube-apiserver-ci-4230.2.1-a-3b9b3bec0f\" (UID: \"a28136306231d79e98762bab5879d115\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:26.316788 kubelet[3384]: I0706 23:33:26.316732 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fe565bc3a356744b9ef6ba5613e79a93-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.1-a-3b9b3bec0f\" (UID: \"fe565bc3a356744b9ef6ba5613e79a93\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:26.316918 kubelet[3384]: I0706 23:33:26.316825 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fe565bc3a356744b9ef6ba5613e79a93-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.1-a-3b9b3bec0f\" (UID: \"fe565bc3a356744b9ef6ba5613e79a93\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:26.316918 kubelet[3384]: I0706 23:33:26.316848 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a28136306231d79e98762bab5879d115-ca-certs\") pod \"kube-apiserver-ci-4230.2.1-a-3b9b3bec0f\" (UID: \"a28136306231d79e98762bab5879d115\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:26.316918 kubelet[3384]: I0706 23:33:26.316873 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a28136306231d79e98762bab5879d115-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.1-a-3b9b3bec0f\" (UID: \"a28136306231d79e98762bab5879d115\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:26.316918 kubelet[3384]: I0706 23:33:26.316890 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe565bc3a356744b9ef6ba5613e79a93-ca-certs\") pod \"kube-controller-manager-ci-4230.2.1-a-3b9b3bec0f\" (UID: \"fe565bc3a356744b9ef6ba5613e79a93\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:26.316918 kubelet[3384]: I0706 23:33:26.316910 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fe565bc3a356744b9ef6ba5613e79a93-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.1-a-3b9b3bec0f\" (UID: \"fe565bc3a356744b9ef6ba5613e79a93\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:26.317028 kubelet[3384]: I0706 23:33:26.316929 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe565bc3a356744b9ef6ba5613e79a93-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.1-a-3b9b3bec0f\" (UID: \"fe565bc3a356744b9ef6ba5613e79a93\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:26.317028 kubelet[3384]: I0706 23:33:26.316946 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aeff9577cb2a0fbca6033ec6ea6e0d3e-kubeconfig\") pod \"kube-scheduler-ci-4230.2.1-a-3b9b3bec0f\" (UID: \"aeff9577cb2a0fbca6033ec6ea6e0d3e\") " pod="kube-system/kube-scheduler-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:26.451718 sudo[3421]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jul 6 23:33:26.451990 sudo[3421]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jul 6 23:33:26.891991 sudo[3421]: pam_unix(sudo:session): session closed for user root
Jul 6 23:33:26.984080 kubelet[3384]: I0706 23:33:26.984021 3384 apiserver.go:52] "Watching apiserver"
Jul 6 23:33:27.015410 kubelet[3384]: I0706 23:33:27.015347 3384 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 6 23:33:27.066580 kubelet[3384]: I0706 23:33:27.065454 3384 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:27.066580 kubelet[3384]: I0706 23:33:27.065722 3384 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:27.082153 kubelet[3384]: I0706 23:33:27.080593 3384 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jul 6 23:33:27.082485 kubelet[3384]: E0706 23:33:27.082376 3384 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.1-a-3b9b3bec0f\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:27.084095 kubelet[3384]: I0706 23:33:27.084055 3384 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jul 6 23:33:27.084201 kubelet[3384]: E0706 23:33:27.084109 3384 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.1-a-3b9b3bec0f\" already exists" pod="kube-system/kube-scheduler-ci-4230.2.1-a-3b9b3bec0f"
Jul 6 23:33:27.133335 kubelet[3384]: I0706 23:33:27.133211 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.2.1-a-3b9b3bec0f" podStartSLOduration=2.133193642 podStartE2EDuration="2.133193642s" podCreationTimestamp="2025-07-06 23:33:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:33:27.110993291 +0000 UTC m=+1.191971945" watchObservedRunningTime="2025-07-06 23:33:27.133193642 +0000 UTC m=+1.214172296"
Jul 6 23:33:27.156600 kubelet[3384]: I0706 23:33:27.156395 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.2.1-a-3b9b3bec0f" podStartSLOduration=1.156374194 podStartE2EDuration="1.156374194s" podCreationTimestamp="2025-07-06 23:33:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:33:27.133866283 +0000 UTC m=+1.214844937" watchObservedRunningTime="2025-07-06 23:33:27.156374194 +0000 UTC m=+1.237352848"
Jul 6 23:33:27.186429 kubelet[3384]: I0706 23:33:27.186183 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.2.1-a-3b9b3bec0f" podStartSLOduration=2.186162676 podStartE2EDuration="2.186162676s" podCreationTimestamp="2025-07-06 23:33:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:33:27.157201316 +0000 UTC m=+1.238179930" watchObservedRunningTime="2025-07-06 23:33:27.186162676 +0000 UTC m=+1.267141330"
Jul 6 23:33:28.562603 sudo[2300]: pam_unix(sudo:session): session closed for user root
Jul 6 23:33:28.647931 sshd[2299]: Connection closed by 10.200.16.10 port 60438
Jul 6 23:33:28.648613 sshd-session[2297]: pam_unix(sshd:session): session closed for user core
Jul 6 23:33:28.653360 systemd[1]: sshd@6-10.200.20.11:22-10.200.16.10:60438.service: Deactivated successfully.
Jul 6 23:33:28.656382 systemd[1]: session-9.scope: Deactivated successfully.
Jul 6 23:33:28.656699 systemd[1]: session-9.scope: Consumed 6.826s CPU time, 261.9M memory peak.
Jul 6 23:33:28.658224 systemd-logind[1752]: Session 9 logged out. Waiting for processes to exit.
Jul 6 23:33:28.659734 systemd-logind[1752]: Removed session 9.
Jul 6 23:33:31.799886 kubelet[3384]: I0706 23:33:31.799802 3384 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 6 23:33:31.800323 containerd[1785]: time="2025-07-06T23:33:31.800257771Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 6 23:33:31.801716 kubelet[3384]: I0706 23:33:31.800928 3384 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 6 23:33:32.758419 kubelet[3384]: I0706 23:33:32.757799 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/265cf7e9-19b5-49e4-8c7e-042a204beeb8-hubble-tls\") pod \"cilium-lj9w6\" (UID: \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\") " pod="kube-system/cilium-lj9w6"
Jul 6 23:33:32.758419 kubelet[3384]: I0706 23:33:32.757843 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7212687f-07c9-495e-b799-48ed78b29bec-kube-proxy\") pod \"kube-proxy-j9gv5\" (UID: \"7212687f-07c9-495e-b799-48ed78b29bec\") " pod="kube-system/kube-proxy-j9gv5"
Jul 6 23:33:32.758419 kubelet[3384]: I0706 23:33:32.757865 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7212687f-07c9-495e-b799-48ed78b29bec-xtables-lock\") pod \"kube-proxy-j9gv5\" (UID: \"7212687f-07c9-495e-b799-48ed78b29bec\") " pod="kube-system/kube-proxy-j9gv5"
Jul 6 23:33:32.758419 kubelet[3384]: I0706 23:33:32.757881 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7212687f-07c9-495e-b799-48ed78b29bec-lib-modules\") pod \"kube-proxy-j9gv5\" (UID: \"7212687f-07c9-495e-b799-48ed78b29bec\") " pod="kube-system/kube-proxy-j9gv5"
Jul 6 23:33:32.758419 kubelet[3384]: I0706 23:33:32.757895 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-cilium-cgroup\") pod \"cilium-lj9w6\" (UID: \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\") " pod="kube-system/cilium-lj9w6"
Jul 6 23:33:32.758419 kubelet[3384]: I0706 23:33:32.757910 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-cni-path\") pod \"cilium-lj9w6\" (UID: \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\") " pod="kube-system/cilium-lj9w6"
Jul 6 23:33:32.758656 kubelet[3384]: I0706 23:33:32.757924 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-lib-modules\") pod \"cilium-lj9w6\" (UID: \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\") " pod="kube-system/cilium-lj9w6"
Jul 6 23:33:32.758656 kubelet[3384]: I0706 23:33:32.757941 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj88g\" (UniqueName: \"kubernetes.io/projected/7212687f-07c9-495e-b799-48ed78b29bec-kube-api-access-pj88g\") pod \"kube-proxy-j9gv5\" (UID: \"7212687f-07c9-495e-b799-48ed78b29bec\") " pod="kube-system/kube-proxy-j9gv5"
Jul 6 23:33:32.758656 kubelet[3384]: I0706 23:33:32.757960 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-cilium-run\") pod \"cilium-lj9w6\" (UID: \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\") " pod="kube-system/cilium-lj9w6"
Jul 6 23:33:32.758656 kubelet[3384]: I0706 23:33:32.757976 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-bpf-maps\") pod \"cilium-lj9w6\" (UID: \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\") " pod="kube-system/cilium-lj9w6"
Jul 6 23:33:32.758656 kubelet[3384]: I0706 23:33:32.757992 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/265cf7e9-19b5-49e4-8c7e-042a204beeb8-clustermesh-secrets\") pod \"cilium-lj9w6\" (UID: \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\") " pod="kube-system/cilium-lj9w6"
Jul 6 23:33:32.758656 kubelet[3384]: I0706 23:33:32.758008 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/265cf7e9-19b5-49e4-8c7e-042a204beeb8-cilium-config-path\") pod \"cilium-lj9w6\" (UID: \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\") " pod="kube-system/cilium-lj9w6"
Jul 6 23:33:32.758777 kubelet[3384]: I0706 23:33:32.758024 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-host-proc-sys-net\") pod \"cilium-lj9w6\" (UID: \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\") " pod="kube-system/cilium-lj9w6"
Jul 6 23:33:32.758777 kubelet[3384]: I0706 23:33:32.758038 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-host-proc-sys-kernel\") pod \"cilium-lj9w6\" (UID: \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\") " pod="kube-system/cilium-lj9w6"
Jul 6 23:33:32.758777 kubelet[3384]: I0706 23:33:32.758052 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xg2p\" (UniqueName: \"kubernetes.io/projected/265cf7e9-19b5-49e4-8c7e-042a204beeb8-kube-api-access-8xg2p\") pod \"cilium-lj9w6\" (UID: \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\") " pod="kube-system/cilium-lj9w6"
Jul 6 23:33:32.758777 kubelet[3384]: I0706 23:33:32.758065 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-hostproc\") pod \"cilium-lj9w6\" (UID: \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\") " pod="kube-system/cilium-lj9w6"
Jul 6 23:33:32.758777 kubelet[3384]: I0706 23:33:32.758080 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-etc-cni-netd\") pod \"cilium-lj9w6\" (UID: \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\") " pod="kube-system/cilium-lj9w6"
Jul 6 23:33:32.758777 kubelet[3384]: I0706 23:33:32.758098 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-xtables-lock\") pod \"cilium-lj9w6\" (UID: \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\") " pod="kube-system/cilium-lj9w6"
Jul 6 23:33:32.760963 systemd[1]: Created slice kubepods-burstable-pod265cf7e9_19b5_49e4_8c7e_042a204beeb8.slice - libcontainer container kubepods-burstable-pod265cf7e9_19b5_49e4_8c7e_042a204beeb8.slice.
Jul 6 23:33:32.767476 systemd[1]: Created slice kubepods-besteffort-pod7212687f_07c9_495e_b799_48ed78b29bec.slice - libcontainer container kubepods-besteffort-pod7212687f_07c9_495e_b799_48ed78b29bec.slice.
Jul 6 23:33:32.880969 kubelet[3384]: E0706 23:33:32.880919 3384 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jul 6 23:33:32.880969 kubelet[3384]: E0706 23:33:32.880959 3384 projected.go:194] Error preparing data for projected volume kube-api-access-pj88g for pod kube-system/kube-proxy-j9gv5: configmap "kube-root-ca.crt" not found
Jul 6 23:33:32.881490 kubelet[3384]: E0706 23:33:32.881035 3384 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7212687f-07c9-495e-b799-48ed78b29bec-kube-api-access-pj88g podName:7212687f-07c9-495e-b799-48ed78b29bec nodeName:}" failed. No retries permitted until 2025-07-06 23:33:33.381013523 +0000 UTC m=+7.461992137 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pj88g" (UniqueName: "kubernetes.io/projected/7212687f-07c9-495e-b799-48ed78b29bec-kube-api-access-pj88g") pod "kube-proxy-j9gv5" (UID: "7212687f-07c9-495e-b799-48ed78b29bec") : configmap "kube-root-ca.crt" not found
Jul 6 23:33:32.883545 kubelet[3384]: E0706 23:33:32.883505 3384 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jul 6 23:33:32.883760 kubelet[3384]: E0706 23:33:32.883672 3384 projected.go:194] Error preparing data for projected volume kube-api-access-8xg2p for pod kube-system/cilium-lj9w6: configmap "kube-root-ca.crt" not found
Jul 6 23:33:32.883760 kubelet[3384]: E0706 23:33:32.883738 3384 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/265cf7e9-19b5-49e4-8c7e-042a204beeb8-kube-api-access-8xg2p podName:265cf7e9-19b5-49e4-8c7e-042a204beeb8 nodeName:}" failed. No retries permitted until 2025-07-06 23:33:33.383720207 +0000 UTC m=+7.464698861 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8xg2p" (UniqueName: "kubernetes.io/projected/265cf7e9-19b5-49e4-8c7e-042a204beeb8-kube-api-access-8xg2p") pod "cilium-lj9w6" (UID: "265cf7e9-19b5-49e4-8c7e-042a204beeb8") : configmap "kube-root-ca.crt" not found
Jul 6 23:33:33.041331 systemd[1]: Created slice kubepods-besteffort-pod19333f96_b03d_4855_8269_abeb59c584fd.slice - libcontainer container kubepods-besteffort-pod19333f96_b03d_4855_8269_abeb59c584fd.slice.
Jul 6 23:33:33.060114 kubelet[3384]: I0706 23:33:33.060066 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19333f96-b03d-4855-8269-abeb59c584fd-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-tdrks\" (UID: \"19333f96-b03d-4855-8269-abeb59c584fd\") " pod="kube-system/cilium-operator-6c4d7847fc-tdrks"
Jul 6 23:33:33.060283 kubelet[3384]: I0706 23:33:33.060157 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4qz4\" (UniqueName: \"kubernetes.io/projected/19333f96-b03d-4855-8269-abeb59c584fd-kube-api-access-f4qz4\") pod \"cilium-operator-6c4d7847fc-tdrks\" (UID: \"19333f96-b03d-4855-8269-abeb59c584fd\") " pod="kube-system/cilium-operator-6c4d7847fc-tdrks"
Jul 6 23:33:33.344458 containerd[1785]: time="2025-07-06T23:33:33.344323811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-tdrks,Uid:19333f96-b03d-4855-8269-abeb59c584fd,Namespace:kube-system,Attempt:0,}"
Jul 6 23:33:33.422578 containerd[1785]: time="2025-07-06T23:33:33.422194560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:33:33.422578 containerd[1785]: time="2025-07-06T23:33:33.422255560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:33:33.422578 containerd[1785]: time="2025-07-06T23:33:33.422266680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:33:33.422578 containerd[1785]: time="2025-07-06T23:33:33.422348320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:33:33.447362 systemd[1]: Started cri-containerd-c330d21b992db05508ebef095d62f732bbe6943cd92c671abf20fade4457fed9.scope - libcontainer container c330d21b992db05508ebef095d62f732bbe6943cd92c671abf20fade4457fed9.
Jul 6 23:33:33.480145 containerd[1785]: time="2025-07-06T23:33:33.480072361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-tdrks,Uid:19333f96-b03d-4855-8269-abeb59c584fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"c330d21b992db05508ebef095d62f732bbe6943cd92c671abf20fade4457fed9\""
Jul 6 23:33:33.482557 containerd[1785]: time="2025-07-06T23:33:33.482493164Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 6 23:33:33.665365 containerd[1785]: time="2025-07-06T23:33:33.664914379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lj9w6,Uid:265cf7e9-19b5-49e4-8c7e-042a204beeb8,Namespace:kube-system,Attempt:0,}"
Jul 6 23:33:33.677582 containerd[1785]: time="2025-07-06T23:33:33.677331997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j9gv5,Uid:7212687f-07c9-495e-b799-48ed78b29bec,Namespace:kube-system,Attempt:0,}"
Jul 6 23:33:33.746913 containerd[1785]: time="2025-07-06T23:33:33.746670214Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:33:33.746913 containerd[1785]: time="2025-07-06T23:33:33.746744334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:33:33.746913 containerd[1785]: time="2025-07-06T23:33:33.746759134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:33:33.747899 containerd[1785]: time="2025-07-06T23:33:33.747403575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:33:33.754156 containerd[1785]: time="2025-07-06T23:33:33.753722784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:33:33.754562 containerd[1785]: time="2025-07-06T23:33:33.754404825Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:33:33.754562 containerd[1785]: time="2025-07-06T23:33:33.754435785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:33:33.757438 containerd[1785]: time="2025-07-06T23:33:33.757316109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:33:33.772381 systemd[1]: Started cri-containerd-05f172a8d9abacd7044702e1ab8dd58dd744cabc5f45edf268afde4fc3e796a8.scope - libcontainer container 05f172a8d9abacd7044702e1ab8dd58dd744cabc5f45edf268afde4fc3e796a8.
Jul 6 23:33:33.789354 systemd[1]: Started cri-containerd-7488e97d0e804bce8fd97d4d3c75d8ef80ea162943be31fd8e2bf2e29d6d1f24.scope - libcontainer container 7488e97d0e804bce8fd97d4d3c75d8ef80ea162943be31fd8e2bf2e29d6d1f24.
Jul 6 23:33:33.810910 containerd[1785]: time="2025-07-06T23:33:33.810778703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j9gv5,Uid:7212687f-07c9-495e-b799-48ed78b29bec,Namespace:kube-system,Attempt:0,} returns sandbox id \"05f172a8d9abacd7044702e1ab8dd58dd744cabc5f45edf268afde4fc3e796a8\""
Jul 6 23:33:33.823591 containerd[1785]: time="2025-07-06T23:33:33.823465801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lj9w6,Uid:265cf7e9-19b5-49e4-8c7e-042a204beeb8,Namespace:kube-system,Attempt:0,} returns sandbox id \"7488e97d0e804bce8fd97d4d3c75d8ef80ea162943be31fd8e2bf2e29d6d1f24\""
Jul 6 23:33:33.825735 containerd[1785]: time="2025-07-06T23:33:33.824902843Z" level=info msg="CreateContainer within sandbox \"05f172a8d9abacd7044702e1ab8dd58dd744cabc5f45edf268afde4fc3e796a8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 6 23:33:33.905749 containerd[1785]: time="2025-07-06T23:33:33.905657356Z" level=info msg="CreateContainer within sandbox \"05f172a8d9abacd7044702e1ab8dd58dd744cabc5f45edf268afde4fc3e796a8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7f5e58a6bfc7398fd865ad4829cc2ce88abaf3aba664332a61057b6488b5c91a\""
Jul 6 23:33:33.906599 containerd[1785]: time="2025-07-06T23:33:33.906557877Z" level=info msg="StartContainer for \"7f5e58a6bfc7398fd865ad4829cc2ce88abaf3aba664332a61057b6488b5c91a\""
Jul 6 23:33:33.942332 systemd[1]: Started cri-containerd-7f5e58a6bfc7398fd865ad4829cc2ce88abaf3aba664332a61057b6488b5c91a.scope - libcontainer container 7f5e58a6bfc7398fd865ad4829cc2ce88abaf3aba664332a61057b6488b5c91a.
Jul 6 23:33:33.976251 containerd[1785]: time="2025-07-06T23:33:33.976198695Z" level=info msg="StartContainer for \"7f5e58a6bfc7398fd865ad4829cc2ce88abaf3aba664332a61057b6488b5c91a\" returns successfully"
Jul 6 23:33:34.828834 kubelet[3384]: I0706 23:33:34.828498 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j9gv5" podStartSLOduration=2.828479647 podStartE2EDuration="2.828479647s" podCreationTimestamp="2025-07-06 23:33:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:33:34.100024948 +0000 UTC m=+8.181003602" watchObservedRunningTime="2025-07-06 23:33:34.828479647 +0000 UTC m=+8.909458301"
Jul 6 23:33:34.986638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1029582599.mount: Deactivated successfully.
Jul 6 23:33:35.616171 containerd[1785]: time="2025-07-06T23:33:35.615729982Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:33:35.622719 containerd[1785]: time="2025-07-06T23:33:35.622658672Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Jul 6 23:33:35.629452 containerd[1785]: time="2025-07-06T23:33:35.629408481Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:33:35.631294 containerd[1785]: time="2025-07-06T23:33:35.631155723Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.148452759s"
Jul 6 23:33:35.631294 containerd[1785]: time="2025-07-06T23:33:35.631192643Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jul 6 23:33:35.633985 containerd[1785]: time="2025-07-06T23:33:35.632564005Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 6 23:33:35.640085 containerd[1785]: time="2025-07-06T23:33:35.640043016Z" level=info msg="CreateContainer within sandbox \"c330d21b992db05508ebef095d62f732bbe6943cd92c671abf20fade4457fed9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 6 23:33:35.681602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4084385227.mount: Deactivated successfully.
Jul 6 23:33:35.697443 containerd[1785]: time="2025-07-06T23:33:35.697392655Z" level=info msg="CreateContainer within sandbox \"c330d21b992db05508ebef095d62f732bbe6943cd92c671abf20fade4457fed9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"645f604784caff7873d97ca59cc27eb165963a3673117da33a05eb8589f66a07\""
Jul 6 23:33:35.698164 containerd[1785]: time="2025-07-06T23:33:35.698065336Z" level=info msg="StartContainer for \"645f604784caff7873d97ca59cc27eb165963a3673117da33a05eb8589f66a07\""
Jul 6 23:33:35.722320 systemd[1]: Started cri-containerd-645f604784caff7873d97ca59cc27eb165963a3673117da33a05eb8589f66a07.scope - libcontainer container 645f604784caff7873d97ca59cc27eb165963a3673117da33a05eb8589f66a07.
Jul 6 23:33:35.751796 containerd[1785]: time="2025-07-06T23:33:35.751736251Z" level=info msg="StartContainer for \"645f604784caff7873d97ca59cc27eb165963a3673117da33a05eb8589f66a07\" returns successfully"
Jul 6 23:33:39.582930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount578075635.mount: Deactivated successfully.
Jul 6 23:33:40.790483 kubelet[3384]: I0706 23:33:40.790418 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-tdrks" podStartSLOduration=5.640023448 podStartE2EDuration="7.79040337s" podCreationTimestamp="2025-07-06 23:33:33 +0000 UTC" firstStartedPulling="2025-07-06 23:33:33.481767443 +0000 UTC m=+7.562746057" lastFinishedPulling="2025-07-06 23:33:35.632147285 +0000 UTC m=+9.713125979" observedRunningTime="2025-07-06 23:33:36.119898602 +0000 UTC m=+10.200877256" watchObservedRunningTime="2025-07-06 23:33:40.79040337 +0000 UTC m=+14.871381984"
Jul 6 23:33:41.535177 containerd[1785]: time="2025-07-06T23:33:41.533852202Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:33:41.542403 containerd[1785]: time="2025-07-06T23:33:41.542324294Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Jul 6 23:33:41.547821 containerd[1785]: time="2025-07-06T23:33:41.547601021Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:33:41.550003 containerd[1785]: time="2025-07-06T23:33:41.549862985Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.91725938s"
Jul 6 23:33:41.550003 containerd[1785]: time="2025-07-06T23:33:41.549901305Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jul 6 23:33:41.560713 containerd[1785]: time="2025-07-06T23:33:41.560651360Z" level=info msg="CreateContainer within sandbox \"7488e97d0e804bce8fd97d4d3c75d8ef80ea162943be31fd8e2bf2e29d6d1f24\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 6 23:33:41.614532 containerd[1785]: time="2025-07-06T23:33:41.614476354Z" level=info msg="CreateContainer within sandbox \"7488e97d0e804bce8fd97d4d3c75d8ef80ea162943be31fd8e2bf2e29d6d1f24\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4e7e268b0002a0176b513d4fb82588839fafa6c3b44760f4b46fcc1236ae3b75\""
Jul 6 23:33:41.616194 containerd[1785]: time="2025-07-06T23:33:41.616110997Z" level=info msg="StartContainer for \"4e7e268b0002a0176b513d4fb82588839fafa6c3b44760f4b46fcc1236ae3b75\""
Jul 6 23:33:41.643316 systemd[1]: Started cri-containerd-4e7e268b0002a0176b513d4fb82588839fafa6c3b44760f4b46fcc1236ae3b75.scope - libcontainer container 4e7e268b0002a0176b513d4fb82588839fafa6c3b44760f4b46fcc1236ae3b75.
Jul 6 23:33:41.683286 containerd[1785]: time="2025-07-06T23:33:41.683229530Z" level=info msg="StartContainer for \"4e7e268b0002a0176b513d4fb82588839fafa6c3b44760f4b46fcc1236ae3b75\" returns successfully"
Jul 6 23:33:41.690771 systemd[1]: cri-containerd-4e7e268b0002a0176b513d4fb82588839fafa6c3b44760f4b46fcc1236ae3b75.scope: Deactivated successfully.
Jul 6 23:33:42.594279 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e7e268b0002a0176b513d4fb82588839fafa6c3b44760f4b46fcc1236ae3b75-rootfs.mount: Deactivated successfully.
Jul 6 23:33:43.354155 containerd[1785]: time="2025-07-06T23:33:43.354060967Z" level=info msg="shim disconnected" id=4e7e268b0002a0176b513d4fb82588839fafa6c3b44760f4b46fcc1236ae3b75 namespace=k8s.io
Jul 6 23:33:43.354155 containerd[1785]: time="2025-07-06T23:33:43.354117527Z" level=warning msg="cleaning up after shim disconnected" id=4e7e268b0002a0176b513d4fb82588839fafa6c3b44760f4b46fcc1236ae3b75 namespace=k8s.io
Jul 6 23:33:43.354155 containerd[1785]: time="2025-07-06T23:33:43.354151127Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:33:44.119542 containerd[1785]: time="2025-07-06T23:33:44.119488782Z" level=info msg="CreateContainer within sandbox \"7488e97d0e804bce8fd97d4d3c75d8ef80ea162943be31fd8e2bf2e29d6d1f24\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 6 23:33:44.178894 containerd[1785]: time="2025-07-06T23:33:44.178835144Z" level=info msg="CreateContainer within sandbox \"7488e97d0e804bce8fd97d4d3c75d8ef80ea162943be31fd8e2bf2e29d6d1f24\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1602ad2c89a65c10ff67bee7e172f92073a6072777ab069fd91f3d0b6c325d98\""
Jul 6 23:33:44.180338 containerd[1785]: time="2025-07-06T23:33:44.179463985Z" level=info msg="StartContainer for \"1602ad2c89a65c10ff67bee7e172f92073a6072777ab069fd91f3d0b6c325d98\""
Jul 6 23:33:44.209359 systemd[1]: Started cri-containerd-1602ad2c89a65c10ff67bee7e172f92073a6072777ab069fd91f3d0b6c325d98.scope - libcontainer container 1602ad2c89a65c10ff67bee7e172f92073a6072777ab069fd91f3d0b6c325d98.
Jul 6 23:33:44.243249 containerd[1785]: time="2025-07-06T23:33:44.243200033Z" level=info msg="StartContainer for \"1602ad2c89a65c10ff67bee7e172f92073a6072777ab069fd91f3d0b6c325d98\" returns successfully"
Jul 6 23:33:44.251140 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 6 23:33:44.251368 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:33:44.251539 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:33:44.259333 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:33:44.261658 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 6 23:33:44.263000 systemd[1]: cri-containerd-1602ad2c89a65c10ff67bee7e172f92073a6072777ab069fd91f3d0b6c325d98.scope: Deactivated successfully.
Jul 6 23:33:44.279190 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:33:44.304693 containerd[1785]: time="2025-07-06T23:33:44.304619997Z" level=info msg="shim disconnected" id=1602ad2c89a65c10ff67bee7e172f92073a6072777ab069fd91f3d0b6c325d98 namespace=k8s.io
Jul 6 23:33:44.304693 containerd[1785]: time="2025-07-06T23:33:44.304681917Z" level=warning msg="cleaning up after shim disconnected" id=1602ad2c89a65c10ff67bee7e172f92073a6072777ab069fd91f3d0b6c325d98 namespace=k8s.io
Jul 6 23:33:44.304693 containerd[1785]: time="2025-07-06T23:33:44.304691397Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:33:45.126537 containerd[1785]: time="2025-07-06T23:33:45.126486650Z" level=info msg="CreateContainer within sandbox \"7488e97d0e804bce8fd97d4d3c75d8ef80ea162943be31fd8e2bf2e29d6d1f24\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 6 23:33:45.151708 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1602ad2c89a65c10ff67bee7e172f92073a6072777ab069fd91f3d0b6c325d98-rootfs.mount: Deactivated successfully.
Jul 6 23:33:45.168103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2380847203.mount: Deactivated successfully.
Jul 6 23:33:45.187226 containerd[1785]: time="2025-07-06T23:33:45.187176374Z" level=info msg="CreateContainer within sandbox \"7488e97d0e804bce8fd97d4d3c75d8ef80ea162943be31fd8e2bf2e29d6d1f24\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2a4a5a37e441842a281e2d239541891be78a9816a5b1ae32897952eb5d205cf0\""
Jul 6 23:33:45.189055 containerd[1785]: time="2025-07-06T23:33:45.187920215Z" level=info msg="StartContainer for \"2a4a5a37e441842a281e2d239541891be78a9816a5b1ae32897952eb5d205cf0\""
Jul 6 23:33:45.218323 systemd[1]: Started cri-containerd-2a4a5a37e441842a281e2d239541891be78a9816a5b1ae32897952eb5d205cf0.scope - libcontainer container 2a4a5a37e441842a281e2d239541891be78a9816a5b1ae32897952eb5d205cf0.
Jul 6 23:33:45.252813 systemd[1]: cri-containerd-2a4a5a37e441842a281e2d239541891be78a9816a5b1ae32897952eb5d205cf0.scope: Deactivated successfully.
Jul 6 23:33:45.256067 containerd[1785]: time="2025-07-06T23:33:45.255471028Z" level=info msg="StartContainer for \"2a4a5a37e441842a281e2d239541891be78a9816a5b1ae32897952eb5d205cf0\" returns successfully"
Jul 6 23:33:45.290810 containerd[1785]: time="2025-07-06T23:33:45.290722957Z" level=info msg="shim disconnected" id=2a4a5a37e441842a281e2d239541891be78a9816a5b1ae32897952eb5d205cf0 namespace=k8s.io
Jul 6 23:33:45.290810 containerd[1785]: time="2025-07-06T23:33:45.290778597Z" level=warning msg="cleaning up after shim disconnected" id=2a4a5a37e441842a281e2d239541891be78a9816a5b1ae32897952eb5d205cf0 namespace=k8s.io
Jul 6 23:33:45.290810 containerd[1785]: time="2025-07-06T23:33:45.290788517Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:33:46.130169 containerd[1785]: time="2025-07-06T23:33:46.129530473Z" level=info msg="CreateContainer within sandbox \"7488e97d0e804bce8fd97d4d3c75d8ef80ea162943be31fd8e2bf2e29d6d1f24\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 6 23:33:46.153538 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a4a5a37e441842a281e2d239541891be78a9816a5b1ae32897952eb5d205cf0-rootfs.mount: Deactivated successfully.
Jul 6 23:33:46.201956 containerd[1785]: time="2025-07-06T23:33:46.201801933Z" level=info msg="CreateContainer within sandbox \"7488e97d0e804bce8fd97d4d3c75d8ef80ea162943be31fd8e2bf2e29d6d1f24\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2720f7d13cbc4f308552ed2449856c3f56b48b5a931a3a88c56e5f519a9ab78f\""
Jul 6 23:33:46.202827 containerd[1785]: time="2025-07-06T23:33:46.202803294Z" level=info msg="StartContainer for \"2720f7d13cbc4f308552ed2449856c3f56b48b5a931a3a88c56e5f519a9ab78f\""
Jul 6 23:33:46.226322 systemd[1]: Started cri-containerd-2720f7d13cbc4f308552ed2449856c3f56b48b5a931a3a88c56e5f519a9ab78f.scope - libcontainer container 2720f7d13cbc4f308552ed2449856c3f56b48b5a931a3a88c56e5f519a9ab78f.
Jul 6 23:33:46.251143 systemd[1]: cri-containerd-2720f7d13cbc4f308552ed2449856c3f56b48b5a931a3a88c56e5f519a9ab78f.scope: Deactivated successfully.
Jul 6 23:33:46.258107 containerd[1785]: time="2025-07-06T23:33:46.257976090Z" level=info msg="StartContainer for \"2720f7d13cbc4f308552ed2449856c3f56b48b5a931a3a88c56e5f519a9ab78f\" returns successfully"
Jul 6 23:33:46.288595 containerd[1785]: time="2025-07-06T23:33:46.288324532Z" level=info msg="shim disconnected" id=2720f7d13cbc4f308552ed2449856c3f56b48b5a931a3a88c56e5f519a9ab78f namespace=k8s.io
Jul 6 23:33:46.288595 containerd[1785]: time="2025-07-06T23:33:46.288389732Z" level=warning msg="cleaning up after shim disconnected" id=2720f7d13cbc4f308552ed2449856c3f56b48b5a931a3a88c56e5f519a9ab78f namespace=k8s.io
Jul 6 23:33:46.288595 containerd[1785]: time="2025-07-06T23:33:46.288397652Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:33:47.132774 containerd[1785]: time="2025-07-06T23:33:47.132722936Z" level=info msg="CreateContainer within sandbox \"7488e97d0e804bce8fd97d4d3c75d8ef80ea162943be31fd8e2bf2e29d6d1f24\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 6 23:33:47.153560 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2720f7d13cbc4f308552ed2449856c3f56b48b5a931a3a88c56e5f519a9ab78f-rootfs.mount: Deactivated successfully.
Jul 6 23:33:47.207030 containerd[1785]: time="2025-07-06T23:33:47.206971798Z" level=info msg="CreateContainer within sandbox \"7488e97d0e804bce8fd97d4d3c75d8ef80ea162943be31fd8e2bf2e29d6d1f24\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b7ba3a42cfcb308745fedd8e2f1c12cf813e1b4c7051251dbdf9dca00455837d\""
Jul 6 23:33:47.207961 containerd[1785]: time="2025-07-06T23:33:47.207898880Z" level=info msg="StartContainer for \"b7ba3a42cfcb308745fedd8e2f1c12cf813e1b4c7051251dbdf9dca00455837d\""
Jul 6 23:33:47.240360 systemd[1]: Started cri-containerd-b7ba3a42cfcb308745fedd8e2f1c12cf813e1b4c7051251dbdf9dca00455837d.scope - libcontainer container b7ba3a42cfcb308745fedd8e2f1c12cf813e1b4c7051251dbdf9dca00455837d.
Jul 6 23:33:47.276288 containerd[1785]: time="2025-07-06T23:33:47.276235934Z" level=info msg="StartContainer for \"b7ba3a42cfcb308745fedd8e2f1c12cf813e1b4c7051251dbdf9dca00455837d\" returns successfully"
Jul 6 23:33:47.425345 kubelet[3384]: I0706 23:33:47.425225 3384 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 6 23:33:47.493555 systemd[1]: Created slice kubepods-burstable-pod6d683346_272b_4413_b8e5_f5c5ed79067d.slice - libcontainer container kubepods-burstable-pod6d683346_272b_4413_b8e5_f5c5ed79067d.slice.
Jul 6 23:33:47.505242 systemd[1]: Created slice kubepods-burstable-podbc018e9f_90bb_4e02_bdd0_38ae34fa04cc.slice - libcontainer container kubepods-burstable-podbc018e9f_90bb_4e02_bdd0_38ae34fa04cc.slice.
Jul 6 23:33:47.555402 kubelet[3384]: I0706 23:33:47.555344 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvzjb\" (UniqueName: \"kubernetes.io/projected/6d683346-272b-4413-b8e5-f5c5ed79067d-kube-api-access-wvzjb\") pod \"coredns-674b8bbfcf-w87gq\" (UID: \"6d683346-272b-4413-b8e5-f5c5ed79067d\") " pod="kube-system/coredns-674b8bbfcf-w87gq"
Jul 6 23:33:47.555402 kubelet[3384]: I0706 23:33:47.555405 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnbmq\" (UniqueName: \"kubernetes.io/projected/bc018e9f-90bb-4e02-bdd0-38ae34fa04cc-kube-api-access-tnbmq\") pod \"coredns-674b8bbfcf-qfblc\" (UID: \"bc018e9f-90bb-4e02-bdd0-38ae34fa04cc\") " pod="kube-system/coredns-674b8bbfcf-qfblc"
Jul 6 23:33:47.555712 kubelet[3384]: I0706 23:33:47.555427 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc018e9f-90bb-4e02-bdd0-38ae34fa04cc-config-volume\") pod \"coredns-674b8bbfcf-qfblc\" (UID: \"bc018e9f-90bb-4e02-bdd0-38ae34fa04cc\") " pod="kube-system/coredns-674b8bbfcf-qfblc"
Jul 6 23:33:47.555712 kubelet[3384]: I0706 23:33:47.555451 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d683346-272b-4413-b8e5-f5c5ed79067d-config-volume\") pod \"coredns-674b8bbfcf-w87gq\" (UID: \"6d683346-272b-4413-b8e5-f5c5ed79067d\") " pod="kube-system/coredns-674b8bbfcf-w87gq"
Jul 6 23:33:47.802622 containerd[1785]: time="2025-07-06T23:33:47.802211739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w87gq,Uid:6d683346-272b-4413-b8e5-f5c5ed79067d,Namespace:kube-system,Attempt:0,}"
Jul 6 23:33:47.813757 containerd[1785]: time="2025-07-06T23:33:47.813311194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qfblc,Uid:bc018e9f-90bb-4e02-bdd0-38ae34fa04cc,Namespace:kube-system,Attempt:0,}"
Jul 6 23:33:49.447821 systemd-networkd[1343]: cilium_host: Link UP
Jul 6 23:33:49.447936 systemd-networkd[1343]: cilium_net: Link UP
Jul 6 23:33:49.447940 systemd-networkd[1343]: cilium_net: Gained carrier
Jul 6 23:33:49.448078 systemd-networkd[1343]: cilium_host: Gained carrier
Jul 6 23:33:49.617348 systemd-networkd[1343]: cilium_vxlan: Link UP
Jul 6 23:33:49.617356 systemd-networkd[1343]: cilium_vxlan: Gained carrier
Jul 6 23:33:49.914163 kernel: NET: Registered PF_ALG protocol family
Jul 6 23:33:50.410367 systemd-networkd[1343]: cilium_net: Gained IPv6LL
Jul 6 23:33:50.410632 systemd-networkd[1343]: cilium_host: Gained IPv6LL
Jul 6 23:33:50.632248 systemd-networkd[1343]: lxc_health: Link UP
Jul 6 23:33:50.642623 systemd-networkd[1343]: lxc_health: Gained carrier
Jul 6 23:33:50.666478 systemd-networkd[1343]: cilium_vxlan: Gained IPv6LL
Jul 6 23:33:50.931220 kernel: eth0: renamed from tmpb4445
Jul 6 23:33:50.935857 systemd-networkd[1343]: lxc715f271b8e17: Link UP
Jul 6 23:33:50.937574 systemd-networkd[1343]: lxc715f271b8e17: Gained carrier
Jul 6 23:33:50.961517 systemd-networkd[1343]: lxc9d195d6e009a: Link UP
Jul 6 23:33:50.971159 kernel: eth0: renamed from tmp50ece
Jul 6 23:33:50.977591 systemd-networkd[1343]: lxc9d195d6e009a: Gained carrier
Jul 6 23:33:51.690259 kubelet[3384]: I0706 23:33:51.689854 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lj9w6" podStartSLOduration=11.964922415 podStartE2EDuration="19.689836836s" podCreationTimestamp="2025-07-06 23:33:32 +0000 UTC" firstStartedPulling="2025-07-06 23:33:33.826254645 +0000 UTC m=+7.907233299" lastFinishedPulling="2025-07-06 23:33:41.551169066 +0000 UTC m=+15.632147720" observedRunningTime="2025-07-06 23:33:48.146201853 +0000 UTC m=+22.227180507" watchObservedRunningTime="2025-07-06 23:33:51.689836836 +0000 UTC m=+25.770815490"
Jul 6 23:33:52.138306 systemd-networkd[1343]: lxc_health: Gained IPv6LL
Jul 6 23:33:52.523282 systemd-networkd[1343]: lxc9d195d6e009a: Gained IPv6LL
Jul 6 23:33:52.971231 systemd-networkd[1343]: lxc715f271b8e17: Gained IPv6LL
Jul 6 23:33:54.814741 containerd[1785]: time="2025-07-06T23:33:54.810909701Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:33:54.814741 containerd[1785]: time="2025-07-06T23:33:54.810988102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:33:54.814741 containerd[1785]: time="2025-07-06T23:33:54.811006942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:33:54.814741 containerd[1785]: time="2025-07-06T23:33:54.811111662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:33:54.834252 containerd[1785]: time="2025-07-06T23:33:54.833993291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:33:54.834252 containerd[1785]: time="2025-07-06T23:33:54.834179851Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:33:54.834252 containerd[1785]: time="2025-07-06T23:33:54.834241132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:33:54.835458 containerd[1785]: time="2025-07-06T23:33:54.834813212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:33:54.863339 systemd[1]: Started cri-containerd-b44454b42cd02161b8b10a9eb74768d0e52cba635ba55b39a2fac09bbbfee907.scope - libcontainer container b44454b42cd02161b8b10a9eb74768d0e52cba635ba55b39a2fac09bbbfee907.
Jul 6 23:33:54.868657 systemd[1]: Started cri-containerd-50ece4ef5d30f2d23d5ef457ec647ca96a2b14af20d1bcbe25ed6a838688f531.scope - libcontainer container 50ece4ef5d30f2d23d5ef457ec647ca96a2b14af20d1bcbe25ed6a838688f531.
Jul 6 23:33:54.921376 containerd[1785]: time="2025-07-06T23:33:54.921320284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w87gq,Uid:6d683346-272b-4413-b8e5-f5c5ed79067d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b44454b42cd02161b8b10a9eb74768d0e52cba635ba55b39a2fac09bbbfee907\""
Jul 6 23:33:54.937200 containerd[1785]: time="2025-07-06T23:33:54.937143944Z" level=info msg="CreateContainer within sandbox \"b44454b42cd02161b8b10a9eb74768d0e52cba635ba55b39a2fac09bbbfee907\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 6 23:33:54.951460 containerd[1785]: time="2025-07-06T23:33:54.951398723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qfblc,Uid:bc018e9f-90bb-4e02-bdd0-38ae34fa04cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"50ece4ef5d30f2d23d5ef457ec647ca96a2b14af20d1bcbe25ed6a838688f531\""
Jul 6 23:33:54.963399 containerd[1785]: time="2025-07-06T23:33:54.963336178Z" level=info msg="CreateContainer within sandbox \"50ece4ef5d30f2d23d5ef457ec647ca96a2b14af20d1bcbe25ed6a838688f531\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 6 23:33:55.026462 containerd[1785]: time="2025-07-06T23:33:55.026309539Z" level=info msg="CreateContainer within sandbox \"b44454b42cd02161b8b10a9eb74768d0e52cba635ba55b39a2fac09bbbfee907\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4402e2ab4dffd1c5509262e0465df8eb9465d529dfc0acbfdc6e6d6e2cd04ed3\""
Jul 6 23:33:55.028190 containerd[1785]: time="2025-07-06T23:33:55.027114380Z" level=info msg="StartContainer for \"4402e2ab4dffd1c5509262e0465df8eb9465d529dfc0acbfdc6e6d6e2cd04ed3\""
Jul 6 23:33:55.053343 systemd[1]: Started cri-containerd-4402e2ab4dffd1c5509262e0465df8eb9465d529dfc0acbfdc6e6d6e2cd04ed3.scope - libcontainer container 4402e2ab4dffd1c5509262e0465df8eb9465d529dfc0acbfdc6e6d6e2cd04ed3.
Jul 6 23:33:55.076981 containerd[1785]: time="2025-07-06T23:33:55.076804684Z" level=info msg="CreateContainer within sandbox \"50ece4ef5d30f2d23d5ef457ec647ca96a2b14af20d1bcbe25ed6a838688f531\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f1bda31a10a7dbc85d3adff2c26df60ce7f3e8cf91bcc77a3e85ae34efd0a100\""
Jul 6 23:33:55.080220 containerd[1785]: time="2025-07-06T23:33:55.079815488Z" level=info msg="StartContainer for \"f1bda31a10a7dbc85d3adff2c26df60ce7f3e8cf91bcc77a3e85ae34efd0a100\""
Jul 6 23:33:55.087019 containerd[1785]: time="2025-07-06T23:33:55.086967577Z" level=info msg="StartContainer for \"4402e2ab4dffd1c5509262e0465df8eb9465d529dfc0acbfdc6e6d6e2cd04ed3\" returns successfully"
Jul 6 23:33:55.116569 systemd[1]: Started cri-containerd-f1bda31a10a7dbc85d3adff2c26df60ce7f3e8cf91bcc77a3e85ae34efd0a100.scope - libcontainer container f1bda31a10a7dbc85d3adff2c26df60ce7f3e8cf91bcc77a3e85ae34efd0a100.
Jul 6 23:33:55.175696 containerd[1785]: time="2025-07-06T23:33:55.175560452Z" level=info msg="StartContainer for \"f1bda31a10a7dbc85d3adff2c26df60ce7f3e8cf91bcc77a3e85ae34efd0a100\" returns successfully"
Jul 6 23:33:55.176476 kubelet[3384]: I0706 23:33:55.176312 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-w87gq" podStartSLOduration=22.176291333 podStartE2EDuration="22.176291333s" podCreationTimestamp="2025-07-06 23:33:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:33:55.172143207 +0000 UTC m=+29.253121861" watchObservedRunningTime="2025-07-06 23:33:55.176291333 +0000 UTC m=+29.257269987"
Jul 6 23:33:56.206159 kubelet[3384]: I0706 23:33:56.205814 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-qfblc" podStartSLOduration=23.20579594 podStartE2EDuration="23.20579594s" podCreationTimestamp="2025-07-06 23:33:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:33:56.184448793 +0000 UTC m=+30.265427487" watchObservedRunningTime="2025-07-06 23:33:56.20579594 +0000 UTC m=+30.286774554"
Jul 6 23:35:07.727607 systemd[1]: Started sshd@7-10.200.20.11:22-10.200.16.10:51648.service - OpenSSH per-connection server daemon (10.200.16.10:51648).
Jul 6 23:35:08.220809 sshd[4780]: Accepted publickey for core from 10.200.16.10 port 51648 ssh2: RSA SHA256:3OqJfzY7AVg+q5elAE97NXiEux0mANjRjysP1FUtbfY
Jul 6 23:35:08.222178 sshd-session[4780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:35:08.226632 systemd-logind[1752]: New session 10 of user core.
Jul 6 23:35:08.230288 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 6 23:35:08.680402 sshd[4782]: Connection closed by 10.200.16.10 port 51648 Jul 6 23:35:08.680998 sshd-session[4780]: pam_unix(sshd:session): session closed for user core Jul 6 23:35:08.684554 systemd[1]: sshd@7-10.200.20.11:22-10.200.16.10:51648.service: Deactivated successfully. Jul 6 23:35:08.687056 systemd[1]: session-10.scope: Deactivated successfully. Jul 6 23:35:08.689301 systemd-logind[1752]: Session 10 logged out. Waiting for processes to exit. Jul 6 23:35:08.691170 systemd-logind[1752]: Removed session 10. Jul 6 23:35:13.779815 systemd[1]: Started sshd@8-10.200.20.11:22-10.200.16.10:38748.service - OpenSSH per-connection server daemon (10.200.16.10:38748). Jul 6 23:35:14.258684 sshd[4795]: Accepted publickey for core from 10.200.16.10 port 38748 ssh2: RSA SHA256:3OqJfzY7AVg+q5elAE97NXiEux0mANjRjysP1FUtbfY Jul 6 23:35:14.259957 sshd-session[4795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:35:14.264363 systemd-logind[1752]: New session 11 of user core. Jul 6 23:35:14.269295 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 6 23:35:14.664156 sshd[4797]: Connection closed by 10.200.16.10 port 38748 Jul 6 23:35:14.663337 sshd-session[4795]: pam_unix(sshd:session): session closed for user core Jul 6 23:35:14.666636 systemd-logind[1752]: Session 11 logged out. Waiting for processes to exit. Jul 6 23:35:14.667281 systemd[1]: sshd@8-10.200.20.11:22-10.200.16.10:38748.service: Deactivated successfully. Jul 6 23:35:14.669935 systemd[1]: session-11.scope: Deactivated successfully. Jul 6 23:35:14.672700 systemd-logind[1752]: Removed session 11. Jul 6 23:35:19.756661 systemd[1]: Started sshd@9-10.200.20.11:22-10.200.16.10:42208.service - OpenSSH per-connection server daemon (10.200.16.10:42208). 
Jul 6 23:35:20.253733 sshd[4809]: Accepted publickey for core from 10.200.16.10 port 42208 ssh2: RSA SHA256:3OqJfzY7AVg+q5elAE97NXiEux0mANjRjysP1FUtbfY Jul 6 23:35:20.255475 sshd-session[4809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:35:20.260010 systemd-logind[1752]: New session 12 of user core. Jul 6 23:35:20.267334 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 6 23:35:20.668468 sshd[4811]: Connection closed by 10.200.16.10 port 42208 Jul 6 23:35:20.667976 sshd-session[4809]: pam_unix(sshd:session): session closed for user core Jul 6 23:35:20.671264 systemd-logind[1752]: Session 12 logged out. Waiting for processes to exit. Jul 6 23:35:20.671819 systemd[1]: sshd@9-10.200.20.11:22-10.200.16.10:42208.service: Deactivated successfully. Jul 6 23:35:20.673870 systemd[1]: session-12.scope: Deactivated successfully. Jul 6 23:35:20.676188 systemd-logind[1752]: Removed session 12. Jul 6 23:35:25.768452 systemd[1]: Started sshd@10-10.200.20.11:22-10.200.16.10:42216.service - OpenSSH per-connection server daemon (10.200.16.10:42216). Jul 6 23:35:26.255014 sshd[4823]: Accepted publickey for core from 10.200.16.10 port 42216 ssh2: RSA SHA256:3OqJfzY7AVg+q5elAE97NXiEux0mANjRjysP1FUtbfY Jul 6 23:35:26.256334 sshd-session[4823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:35:26.261615 systemd-logind[1752]: New session 13 of user core. Jul 6 23:35:26.264389 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 6 23:35:26.670461 sshd[4827]: Connection closed by 10.200.16.10 port 42216 Jul 6 23:35:26.670920 sshd-session[4823]: pam_unix(sshd:session): session closed for user core Jul 6 23:35:26.674880 systemd[1]: sshd@10-10.200.20.11:22-10.200.16.10:42216.service: Deactivated successfully. Jul 6 23:35:26.676666 systemd[1]: session-13.scope: Deactivated successfully. Jul 6 23:35:26.677599 systemd-logind[1752]: Session 13 logged out. 
Waiting for processes to exit. Jul 6 23:35:26.679169 systemd-logind[1752]: Removed session 13. Jul 6 23:35:26.764811 systemd[1]: Started sshd@11-10.200.20.11:22-10.200.16.10:42222.service - OpenSSH per-connection server daemon (10.200.16.10:42222). Jul 6 23:35:27.253254 sshd[4840]: Accepted publickey for core from 10.200.16.10 port 42222 ssh2: RSA SHA256:3OqJfzY7AVg+q5elAE97NXiEux0mANjRjysP1FUtbfY Jul 6 23:35:27.254515 sshd-session[4840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:35:27.259081 systemd-logind[1752]: New session 14 of user core. Jul 6 23:35:27.262278 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 6 23:35:27.692519 sshd[4842]: Connection closed by 10.200.16.10 port 42222 Jul 6 23:35:27.692370 sshd-session[4840]: pam_unix(sshd:session): session closed for user core Jul 6 23:35:27.696340 systemd[1]: sshd@11-10.200.20.11:22-10.200.16.10:42222.service: Deactivated successfully. Jul 6 23:35:27.698011 systemd[1]: session-14.scope: Deactivated successfully. Jul 6 23:35:27.698790 systemd-logind[1752]: Session 14 logged out. Waiting for processes to exit. Jul 6 23:35:27.699982 systemd-logind[1752]: Removed session 14. Jul 6 23:35:27.788432 systemd[1]: Started sshd@12-10.200.20.11:22-10.200.16.10:42238.service - OpenSSH per-connection server daemon (10.200.16.10:42238). Jul 6 23:35:28.278955 sshd[4851]: Accepted publickey for core from 10.200.16.10 port 42238 ssh2: RSA SHA256:3OqJfzY7AVg+q5elAE97NXiEux0mANjRjysP1FUtbfY Jul 6 23:35:28.280289 sshd-session[4851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:35:28.284754 systemd-logind[1752]: New session 15 of user core. Jul 6 23:35:28.290278 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jul 6 23:35:28.694344 sshd[4853]: Connection closed by 10.200.16.10 port 42238 Jul 6 23:35:28.695080 sshd-session[4851]: pam_unix(sshd:session): session closed for user core Jul 6 23:35:28.698644 systemd[1]: sshd@12-10.200.20.11:22-10.200.16.10:42238.service: Deactivated successfully. Jul 6 23:35:28.702787 systemd[1]: session-15.scope: Deactivated successfully. Jul 6 23:35:28.703888 systemd-logind[1752]: Session 15 logged out. Waiting for processes to exit. Jul 6 23:35:28.705245 systemd-logind[1752]: Removed session 15. Jul 6 23:35:33.782837 systemd[1]: Started sshd@13-10.200.20.11:22-10.200.16.10:47054.service - OpenSSH per-connection server daemon (10.200.16.10:47054). Jul 6 23:35:34.270531 sshd[4865]: Accepted publickey for core from 10.200.16.10 port 47054 ssh2: RSA SHA256:3OqJfzY7AVg+q5elAE97NXiEux0mANjRjysP1FUtbfY Jul 6 23:35:34.271803 sshd-session[4865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:35:34.276606 systemd-logind[1752]: New session 16 of user core. Jul 6 23:35:34.282278 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 6 23:35:34.677757 sshd[4869]: Connection closed by 10.200.16.10 port 47054 Jul 6 23:35:34.677590 sshd-session[4865]: pam_unix(sshd:session): session closed for user core Jul 6 23:35:34.680763 systemd-logind[1752]: Session 16 logged out. Waiting for processes to exit. Jul 6 23:35:34.680942 systemd[1]: sshd@13-10.200.20.11:22-10.200.16.10:47054.service: Deactivated successfully. Jul 6 23:35:34.683071 systemd[1]: session-16.scope: Deactivated successfully. Jul 6 23:35:34.685591 systemd-logind[1752]: Removed session 16. Jul 6 23:35:34.770423 systemd[1]: Started sshd@14-10.200.20.11:22-10.200.16.10:47068.service - OpenSSH per-connection server daemon (10.200.16.10:47068). 
Jul 6 23:35:35.248189 sshd[4881]: Accepted publickey for core from 10.200.16.10 port 47068 ssh2: RSA SHA256:3OqJfzY7AVg+q5elAE97NXiEux0mANjRjysP1FUtbfY Jul 6 23:35:35.249471 sshd-session[4881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:35:35.254691 systemd-logind[1752]: New session 17 of user core. Jul 6 23:35:35.262305 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 6 23:35:35.690157 sshd[4883]: Connection closed by 10.200.16.10 port 47068 Jul 6 23:35:35.690693 sshd-session[4881]: pam_unix(sshd:session): session closed for user core Jul 6 23:35:35.693953 systemd[1]: sshd@14-10.200.20.11:22-10.200.16.10:47068.service: Deactivated successfully. Jul 6 23:35:35.695793 systemd[1]: session-17.scope: Deactivated successfully. Jul 6 23:35:35.696673 systemd-logind[1752]: Session 17 logged out. Waiting for processes to exit. Jul 6 23:35:35.697603 systemd-logind[1752]: Removed session 17. Jul 6 23:35:35.783389 systemd[1]: Started sshd@15-10.200.20.11:22-10.200.16.10:47084.service - OpenSSH per-connection server daemon (10.200.16.10:47084). Jul 6 23:35:36.262903 sshd[4893]: Accepted publickey for core from 10.200.16.10 port 47084 ssh2: RSA SHA256:3OqJfzY7AVg+q5elAE97NXiEux0mANjRjysP1FUtbfY Jul 6 23:35:36.264539 sshd-session[4893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:35:36.268938 systemd-logind[1752]: New session 18 of user core. Jul 6 23:35:36.273316 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 6 23:35:37.452612 sshd[4895]: Connection closed by 10.200.16.10 port 47084 Jul 6 23:35:37.453321 sshd-session[4893]: pam_unix(sshd:session): session closed for user core Jul 6 23:35:37.456924 systemd-logind[1752]: Session 18 logged out. Waiting for processes to exit. Jul 6 23:35:37.457779 systemd[1]: sshd@15-10.200.20.11:22-10.200.16.10:47084.service: Deactivated successfully. 
Jul 6 23:35:37.460292 systemd[1]: session-18.scope: Deactivated successfully. Jul 6 23:35:37.461544 systemd-logind[1752]: Removed session 18. Jul 6 23:35:37.548425 systemd[1]: Started sshd@16-10.200.20.11:22-10.200.16.10:47096.service - OpenSSH per-connection server daemon (10.200.16.10:47096). Jul 6 23:35:38.024884 sshd[4911]: Accepted publickey for core from 10.200.16.10 port 47096 ssh2: RSA SHA256:3OqJfzY7AVg+q5elAE97NXiEux0mANjRjysP1FUtbfY Jul 6 23:35:38.026366 sshd-session[4911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:35:38.031365 systemd-logind[1752]: New session 19 of user core. Jul 6 23:35:38.036521 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 6 23:35:38.555883 sshd[4913]: Connection closed by 10.200.16.10 port 47096 Jul 6 23:35:38.556506 sshd-session[4911]: pam_unix(sshd:session): session closed for user core Jul 6 23:35:38.559972 systemd[1]: sshd@16-10.200.20.11:22-10.200.16.10:47096.service: Deactivated successfully. Jul 6 23:35:38.562783 systemd[1]: session-19.scope: Deactivated successfully. Jul 6 23:35:38.564805 systemd-logind[1752]: Session 19 logged out. Waiting for processes to exit. Jul 6 23:35:38.566572 systemd-logind[1752]: Removed session 19. Jul 6 23:35:38.650371 systemd[1]: Started sshd@17-10.200.20.11:22-10.200.16.10:47102.service - OpenSSH per-connection server daemon (10.200.16.10:47102). Jul 6 23:35:39.141233 sshd[4923]: Accepted publickey for core from 10.200.16.10 port 47102 ssh2: RSA SHA256:3OqJfzY7AVg+q5elAE97NXiEux0mANjRjysP1FUtbfY Jul 6 23:35:39.143392 sshd-session[4923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:35:39.147582 systemd-logind[1752]: New session 20 of user core. Jul 6 23:35:39.156456 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jul 6 23:35:39.555761 sshd[4925]: Connection closed by 10.200.16.10 port 47102 Jul 6 23:35:39.556456 sshd-session[4923]: pam_unix(sshd:session): session closed for user core Jul 6 23:35:39.560252 systemd[1]: sshd@17-10.200.20.11:22-10.200.16.10:47102.service: Deactivated successfully. Jul 6 23:35:39.562468 systemd[1]: session-20.scope: Deactivated successfully. Jul 6 23:35:39.563527 systemd-logind[1752]: Session 20 logged out. Waiting for processes to exit. Jul 6 23:35:39.564553 systemd-logind[1752]: Removed session 20. Jul 6 23:35:44.649599 systemd[1]: Started sshd@18-10.200.20.11:22-10.200.16.10:33962.service - OpenSSH per-connection server daemon (10.200.16.10:33962). Jul 6 23:35:45.127044 sshd[4939]: Accepted publickey for core from 10.200.16.10 port 33962 ssh2: RSA SHA256:3OqJfzY7AVg+q5elAE97NXiEux0mANjRjysP1FUtbfY Jul 6 23:35:45.128460 sshd-session[4939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:35:45.133103 systemd-logind[1752]: New session 21 of user core. Jul 6 23:35:45.135330 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 6 23:35:45.539322 sshd[4941]: Connection closed by 10.200.16.10 port 33962 Jul 6 23:35:45.539945 sshd-session[4939]: pam_unix(sshd:session): session closed for user core Jul 6 23:35:45.543618 systemd[1]: sshd@18-10.200.20.11:22-10.200.16.10:33962.service: Deactivated successfully. Jul 6 23:35:45.545609 systemd[1]: session-21.scope: Deactivated successfully. Jul 6 23:35:45.546531 systemd-logind[1752]: Session 21 logged out. Waiting for processes to exit. Jul 6 23:35:45.547740 systemd-logind[1752]: Removed session 21. Jul 6 23:35:50.634413 systemd[1]: Started sshd@19-10.200.20.11:22-10.200.16.10:36010.service - OpenSSH per-connection server daemon (10.200.16.10:36010). 
Jul 6 23:35:51.113621 sshd[4952]: Accepted publickey for core from 10.200.16.10 port 36010 ssh2: RSA SHA256:3OqJfzY7AVg+q5elAE97NXiEux0mANjRjysP1FUtbfY Jul 6 23:35:51.115047 sshd-session[4952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:35:51.119570 systemd-logind[1752]: New session 22 of user core. Jul 6 23:35:51.125349 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 6 23:35:51.522977 sshd[4954]: Connection closed by 10.200.16.10 port 36010 Jul 6 23:35:51.522869 sshd-session[4952]: pam_unix(sshd:session): session closed for user core Jul 6 23:35:51.526844 systemd-logind[1752]: Session 22 logged out. Waiting for processes to exit. Jul 6 23:35:51.527945 systemd[1]: sshd@19-10.200.20.11:22-10.200.16.10:36010.service: Deactivated successfully. Jul 6 23:35:51.530085 systemd[1]: session-22.scope: Deactivated successfully. Jul 6 23:35:51.531838 systemd-logind[1752]: Removed session 22. Jul 6 23:35:51.617504 systemd[1]: Started sshd@20-10.200.20.11:22-10.200.16.10:36012.service - OpenSSH per-connection server daemon (10.200.16.10:36012). Jul 6 23:35:52.096849 sshd[4966]: Accepted publickey for core from 10.200.16.10 port 36012 ssh2: RSA SHA256:3OqJfzY7AVg+q5elAE97NXiEux0mANjRjysP1FUtbfY Jul 6 23:35:52.098357 sshd-session[4966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:35:52.103658 systemd-logind[1752]: New session 23 of user core. Jul 6 23:35:52.109302 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 6 23:35:54.143151 containerd[1785]: time="2025-07-06T23:35:54.142035970Z" level=info msg="StopContainer for \"645f604784caff7873d97ca59cc27eb165963a3673117da33a05eb8589f66a07\" with timeout 30 (s)" Jul 6 23:35:54.144561 systemd[1]: run-containerd-runc-k8s.io-b7ba3a42cfcb308745fedd8e2f1c12cf813e1b4c7051251dbdf9dca00455837d-runc.2hlrRi.mount: Deactivated successfully. 
Jul 6 23:35:54.149164 containerd[1785]: time="2025-07-06T23:35:54.148637539Z" level=info msg="Stop container \"645f604784caff7873d97ca59cc27eb165963a3673117da33a05eb8589f66a07\" with signal terminated" Jul 6 23:35:54.159797 containerd[1785]: time="2025-07-06T23:35:54.159743875Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:35:54.166689 containerd[1785]: time="2025-07-06T23:35:54.166640204Z" level=info msg="StopContainer for \"b7ba3a42cfcb308745fedd8e2f1c12cf813e1b4c7051251dbdf9dca00455837d\" with timeout 2 (s)" Jul 6 23:35:54.166963 containerd[1785]: time="2025-07-06T23:35:54.166928124Z" level=info msg="Stop container \"b7ba3a42cfcb308745fedd8e2f1c12cf813e1b4c7051251dbdf9dca00455837d\" with signal terminated" Jul 6 23:35:54.171414 systemd[1]: cri-containerd-645f604784caff7873d97ca59cc27eb165963a3673117da33a05eb8589f66a07.scope: Deactivated successfully. Jul 6 23:35:54.179566 systemd-networkd[1343]: lxc_health: Link DOWN Jul 6 23:35:54.179676 systemd-networkd[1343]: lxc_health: Lost carrier Jul 6 23:35:54.200115 systemd[1]: cri-containerd-b7ba3a42cfcb308745fedd8e2f1c12cf813e1b4c7051251dbdf9dca00455837d.scope: Deactivated successfully. Jul 6 23:35:54.200797 systemd[1]: cri-containerd-b7ba3a42cfcb308745fedd8e2f1c12cf813e1b4c7051251dbdf9dca00455837d.scope: Consumed 6.689s CPU time, 124.4M memory peak, 136K read from disk, 12.9M written to disk. Jul 6 23:35:54.207413 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-645f604784caff7873d97ca59cc27eb165963a3673117da33a05eb8589f66a07-rootfs.mount: Deactivated successfully. Jul 6 23:35:54.228042 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7ba3a42cfcb308745fedd8e2f1c12cf813e1b4c7051251dbdf9dca00455837d-rootfs.mount: Deactivated successfully. 
Jul 6 23:35:54.315671 containerd[1785]: time="2025-07-06T23:35:54.315525048Z" level=info msg="shim disconnected" id=b7ba3a42cfcb308745fedd8e2f1c12cf813e1b4c7051251dbdf9dca00455837d namespace=k8s.io Jul 6 23:35:54.315671 containerd[1785]: time="2025-07-06T23:35:54.315581968Z" level=warning msg="cleaning up after shim disconnected" id=b7ba3a42cfcb308745fedd8e2f1c12cf813e1b4c7051251dbdf9dca00455837d namespace=k8s.io Jul 6 23:35:54.316001 containerd[1785]: time="2025-07-06T23:35:54.315597728Z" level=info msg="shim disconnected" id=645f604784caff7873d97ca59cc27eb165963a3673117da33a05eb8589f66a07 namespace=k8s.io Jul 6 23:35:54.316001 containerd[1785]: time="2025-07-06T23:35:54.315745169Z" level=warning msg="cleaning up after shim disconnected" id=645f604784caff7873d97ca59cc27eb165963a3673117da33a05eb8589f66a07 namespace=k8s.io Jul 6 23:35:54.316001 containerd[1785]: time="2025-07-06T23:35:54.315757009Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:35:54.316001 containerd[1785]: time="2025-07-06T23:35:54.315602368Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:35:54.331467 containerd[1785]: time="2025-07-06T23:35:54.331411430Z" level=warning msg="cleanup warnings time=\"2025-07-06T23:35:54Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 6 23:35:54.340714 containerd[1785]: time="2025-07-06T23:35:54.340657203Z" level=info msg="StopContainer for \"645f604784caff7873d97ca59cc27eb165963a3673117da33a05eb8589f66a07\" returns successfully" Jul 6 23:35:54.341404 containerd[1785]: time="2025-07-06T23:35:54.341369324Z" level=info msg="StopPodSandbox for \"c330d21b992db05508ebef095d62f732bbe6943cd92c671abf20fade4457fed9\"" Jul 6 23:35:54.343181 containerd[1785]: time="2025-07-06T23:35:54.341420564Z" level=info msg="Container to stop \"645f604784caff7873d97ca59cc27eb165963a3673117da33a05eb8589f66a07\" must be in 
running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:35:54.343603 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c330d21b992db05508ebef095d62f732bbe6943cd92c671abf20fade4457fed9-shm.mount: Deactivated successfully. Jul 6 23:35:54.343737 containerd[1785]: time="2025-07-06T23:35:54.343642327Z" level=info msg="StopContainer for \"b7ba3a42cfcb308745fedd8e2f1c12cf813e1b4c7051251dbdf9dca00455837d\" returns successfully" Jul 6 23:35:54.345514 containerd[1785]: time="2025-07-06T23:35:54.345473009Z" level=info msg="StopPodSandbox for \"7488e97d0e804bce8fd97d4d3c75d8ef80ea162943be31fd8e2bf2e29d6d1f24\"" Jul 6 23:35:54.345613 containerd[1785]: time="2025-07-06T23:35:54.345559130Z" level=info msg="Container to stop \"4e7e268b0002a0176b513d4fb82588839fafa6c3b44760f4b46fcc1236ae3b75\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:35:54.345613 containerd[1785]: time="2025-07-06T23:35:54.345574610Z" level=info msg="Container to stop \"b7ba3a42cfcb308745fedd8e2f1c12cf813e1b4c7051251dbdf9dca00455837d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:35:54.345613 containerd[1785]: time="2025-07-06T23:35:54.345592890Z" level=info msg="Container to stop \"1602ad2c89a65c10ff67bee7e172f92073a6072777ab069fd91f3d0b6c325d98\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:35:54.345690 containerd[1785]: time="2025-07-06T23:35:54.345621090Z" level=info msg="Container to stop \"2a4a5a37e441842a281e2d239541891be78a9816a5b1ae32897952eb5d205cf0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:35:54.345690 containerd[1785]: time="2025-07-06T23:35:54.345634930Z" level=info msg="Container to stop \"2720f7d13cbc4f308552ed2449856c3f56b48b5a931a3a88c56e5f519a9ab78f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:35:54.351955 systemd[1]: 
cri-containerd-7488e97d0e804bce8fd97d4d3c75d8ef80ea162943be31fd8e2bf2e29d6d1f24.scope: Deactivated successfully. Jul 6 23:35:54.363378 systemd[1]: cri-containerd-c330d21b992db05508ebef095d62f732bbe6943cd92c671abf20fade4457fed9.scope: Deactivated successfully. Jul 6 23:35:54.403475 containerd[1785]: time="2025-07-06T23:35:54.403223449Z" level=info msg="shim disconnected" id=7488e97d0e804bce8fd97d4d3c75d8ef80ea162943be31fd8e2bf2e29d6d1f24 namespace=k8s.io Jul 6 23:35:54.403475 containerd[1785]: time="2025-07-06T23:35:54.403303209Z" level=warning msg="cleaning up after shim disconnected" id=7488e97d0e804bce8fd97d4d3c75d8ef80ea162943be31fd8e2bf2e29d6d1f24 namespace=k8s.io Jul 6 23:35:54.403475 containerd[1785]: time="2025-07-06T23:35:54.403313729Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:35:54.403902 containerd[1785]: time="2025-07-06T23:35:54.403662449Z" level=info msg="shim disconnected" id=c330d21b992db05508ebef095d62f732bbe6943cd92c671abf20fade4457fed9 namespace=k8s.io Jul 6 23:35:54.403902 containerd[1785]: time="2025-07-06T23:35:54.403707729Z" level=warning msg="cleaning up after shim disconnected" id=c330d21b992db05508ebef095d62f732bbe6943cd92c671abf20fade4457fed9 namespace=k8s.io Jul 6 23:35:54.403902 containerd[1785]: time="2025-07-06T23:35:54.403715729Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:35:54.419244 containerd[1785]: time="2025-07-06T23:35:54.419195111Z" level=info msg="TearDown network for sandbox \"c330d21b992db05508ebef095d62f732bbe6943cd92c671abf20fade4457fed9\" successfully" Jul 6 23:35:54.419637 containerd[1785]: time="2025-07-06T23:35:54.419398191Z" level=info msg="StopPodSandbox for \"c330d21b992db05508ebef095d62f732bbe6943cd92c671abf20fade4457fed9\" returns successfully" Jul 6 23:35:54.421970 containerd[1785]: time="2025-07-06T23:35:54.421860794Z" level=info msg="TearDown network for sandbox \"7488e97d0e804bce8fd97d4d3c75d8ef80ea162943be31fd8e2bf2e29d6d1f24\" successfully" Jul 6 23:35:54.421970 
containerd[1785]: time="2025-07-06T23:35:54.421896274Z" level=info msg="StopPodSandbox for \"7488e97d0e804bce8fd97d4d3c75d8ef80ea162943be31fd8e2bf2e29d6d1f24\" returns successfully" Jul 6 23:35:54.488575 kubelet[3384]: I0706 23:35:54.488403 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-lib-modules\") pod \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\" (UID: \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\") " Jul 6 23:35:54.488575 kubelet[3384]: I0706 23:35:54.488447 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-etc-cni-netd\") pod \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\" (UID: \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\") " Jul 6 23:35:54.488575 kubelet[3384]: I0706 23:35:54.488464 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-cilium-cgroup\") pod \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\" (UID: \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\") " Jul 6 23:35:54.488575 kubelet[3384]: I0706 23:35:54.488481 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-cilium-run\") pod \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\" (UID: \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\") " Jul 6 23:35:54.488575 kubelet[3384]: I0706 23:35:54.488496 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-host-proc-sys-net\") pod \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\" (UID: \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\") " Jul 6 23:35:54.488575 kubelet[3384]: I0706 23:35:54.488503 3384 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "265cf7e9-19b5-49e4-8c7e-042a204beeb8" (UID: "265cf7e9-19b5-49e4-8c7e-042a204beeb8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:35:54.489057 kubelet[3384]: I0706 23:35:54.488537 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-host-proc-sys-kernel\") pod \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\" (UID: \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\") " Jul 6 23:35:54.489057 kubelet[3384]: I0706 23:35:54.488556 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "265cf7e9-19b5-49e4-8c7e-042a204beeb8" (UID: "265cf7e9-19b5-49e4-8c7e-042a204beeb8"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:35:54.489713 kubelet[3384]: I0706 23:35:54.488559 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19333f96-b03d-4855-8269-abeb59c584fd-cilium-config-path\") pod \"19333f96-b03d-4855-8269-abeb59c584fd\" (UID: \"19333f96-b03d-4855-8269-abeb59c584fd\") " Jul 6 23:35:54.489713 kubelet[3384]: I0706 23:35:54.489220 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-cni-path\") pod \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\" (UID: \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\") " Jul 6 23:35:54.489713 kubelet[3384]: I0706 23:35:54.489263 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/265cf7e9-19b5-49e4-8c7e-042a204beeb8-hubble-tls\") pod \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\" (UID: \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\") " Jul 6 23:35:54.489713 kubelet[3384]: I0706 23:35:54.489281 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/265cf7e9-19b5-49e4-8c7e-042a204beeb8-clustermesh-secrets\") pod \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\" (UID: \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\") " Jul 6 23:35:54.489713 kubelet[3384]: I0706 23:35:54.489299 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xg2p\" (UniqueName: \"kubernetes.io/projected/265cf7e9-19b5-49e4-8c7e-042a204beeb8-kube-api-access-8xg2p\") pod \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\" (UID: \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\") " Jul 6 23:35:54.489713 kubelet[3384]: I0706 23:35:54.489327 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-bpf-maps\") pod \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\" (UID: \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\") " Jul 6 23:35:54.489893 kubelet[3384]: I0706 23:35:54.489344 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/265cf7e9-19b5-49e4-8c7e-042a204beeb8-cilium-config-path\") pod \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\" (UID: \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\") " Jul 6 23:35:54.489893 kubelet[3384]: I0706 23:35:54.489357 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-hostproc\") pod \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\" (UID: \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\") " Jul 6 23:35:54.489893 kubelet[3384]: I0706 23:35:54.489375 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-xtables-lock\") pod \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\" (UID: \"265cf7e9-19b5-49e4-8c7e-042a204beeb8\") " Jul 6 23:35:54.489893 kubelet[3384]: I0706 23:35:54.489405 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4qz4\" (UniqueName: \"kubernetes.io/projected/19333f96-b03d-4855-8269-abeb59c584fd-kube-api-access-f4qz4\") pod \"19333f96-b03d-4855-8269-abeb59c584fd\" (UID: \"19333f96-b03d-4855-8269-abeb59c584fd\") " Jul 6 23:35:54.489893 kubelet[3384]: I0706 23:35:54.489450 3384 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-lib-modules\") on node \"ci-4230.2.1-a-3b9b3bec0f\" DevicePath \"\"" Jul 6 23:35:54.489893 kubelet[3384]: I0706 23:35:54.489459 3384 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" 
(UniqueName: \"kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-cilium-cgroup\") on node \"ci-4230.2.1-a-3b9b3bec0f\" DevicePath \"\"" Jul 6 23:35:54.491155 kubelet[3384]: I0706 23:35:54.490458 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19333f96-b03d-4855-8269-abeb59c584fd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "19333f96-b03d-4855-8269-abeb59c584fd" (UID: "19333f96-b03d-4855-8269-abeb59c584fd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:35:54.491155 kubelet[3384]: I0706 23:35:54.490474 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "265cf7e9-19b5-49e4-8c7e-042a204beeb8" (UID: "265cf7e9-19b5-49e4-8c7e-042a204beeb8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:35:54.491155 kubelet[3384]: I0706 23:35:54.490510 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-cni-path" (OuterVolumeSpecName: "cni-path") pod "265cf7e9-19b5-49e4-8c7e-042a204beeb8" (UID: "265cf7e9-19b5-49e4-8c7e-042a204beeb8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:35:54.491155 kubelet[3384]: I0706 23:35:54.490543 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "265cf7e9-19b5-49e4-8c7e-042a204beeb8" (UID: "265cf7e9-19b5-49e4-8c7e-042a204beeb8"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:35:54.491155 kubelet[3384]: I0706 23:35:54.490560 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "265cf7e9-19b5-49e4-8c7e-042a204beeb8" (UID: "265cf7e9-19b5-49e4-8c7e-042a204beeb8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:35:54.491343 kubelet[3384]: I0706 23:35:54.490575 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "265cf7e9-19b5-49e4-8c7e-042a204beeb8" (UID: "265cf7e9-19b5-49e4-8c7e-042a204beeb8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:35:54.491343 kubelet[3384]: I0706 23:35:54.490591 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "265cf7e9-19b5-49e4-8c7e-042a204beeb8" (UID: "265cf7e9-19b5-49e4-8c7e-042a204beeb8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:35:54.493681 kubelet[3384]: I0706 23:35:54.493544 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-hostproc" (OuterVolumeSpecName: "hostproc") pod "265cf7e9-19b5-49e4-8c7e-042a204beeb8" (UID: "265cf7e9-19b5-49e4-8c7e-042a204beeb8"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:35:54.494900 kubelet[3384]: I0706 23:35:54.494866 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "265cf7e9-19b5-49e4-8c7e-042a204beeb8" (UID: "265cf7e9-19b5-49e4-8c7e-042a204beeb8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:35:54.496026 kubelet[3384]: I0706 23:35:54.495994 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/265cf7e9-19b5-49e4-8c7e-042a204beeb8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "265cf7e9-19b5-49e4-8c7e-042a204beeb8" (UID: "265cf7e9-19b5-49e4-8c7e-042a204beeb8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:35:54.497034 kubelet[3384]: I0706 23:35:54.496959 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/265cf7e9-19b5-49e4-8c7e-042a204beeb8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "265cf7e9-19b5-49e4-8c7e-042a204beeb8" (UID: "265cf7e9-19b5-49e4-8c7e-042a204beeb8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 6 23:35:54.497177 kubelet[3384]: I0706 23:35:54.497072 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19333f96-b03d-4855-8269-abeb59c584fd-kube-api-access-f4qz4" (OuterVolumeSpecName: "kube-api-access-f4qz4") pod "19333f96-b03d-4855-8269-abeb59c584fd" (UID: "19333f96-b03d-4855-8269-abeb59c584fd"). InnerVolumeSpecName "kube-api-access-f4qz4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:35:54.497326 kubelet[3384]: I0706 23:35:54.497199 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/265cf7e9-19b5-49e4-8c7e-042a204beeb8-kube-api-access-8xg2p" (OuterVolumeSpecName: "kube-api-access-8xg2p") pod "265cf7e9-19b5-49e4-8c7e-042a204beeb8" (UID: "265cf7e9-19b5-49e4-8c7e-042a204beeb8"). InnerVolumeSpecName "kube-api-access-8xg2p". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:35:54.497426 kubelet[3384]: I0706 23:35:54.497384 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/265cf7e9-19b5-49e4-8c7e-042a204beeb8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "265cf7e9-19b5-49e4-8c7e-042a204beeb8" (UID: "265cf7e9-19b5-49e4-8c7e-042a204beeb8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:35:54.590609 kubelet[3384]: I0706 23:35:54.590571 3384 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-etc-cni-netd\") on node \"ci-4230.2.1-a-3b9b3bec0f\" DevicePath \"\"" Jul 6 23:35:54.590907 kubelet[3384]: I0706 23:35:54.590773 3384 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-cilium-run\") on node \"ci-4230.2.1-a-3b9b3bec0f\" DevicePath \"\"" Jul 6 23:35:54.590907 kubelet[3384]: I0706 23:35:54.590789 3384 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-host-proc-sys-net\") on node \"ci-4230.2.1-a-3b9b3bec0f\" DevicePath \"\"" Jul 6 23:35:54.590907 kubelet[3384]: I0706 23:35:54.590798 3384 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-host-proc-sys-kernel\") on node \"ci-4230.2.1-a-3b9b3bec0f\" DevicePath \"\"" Jul 6 23:35:54.590907 kubelet[3384]: I0706 23:35:54.590809 3384 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19333f96-b03d-4855-8269-abeb59c584fd-cilium-config-path\") on node \"ci-4230.2.1-a-3b9b3bec0f\" DevicePath \"\"" Jul 6 23:35:54.590907 kubelet[3384]: I0706 23:35:54.590820 3384 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-cni-path\") on node \"ci-4230.2.1-a-3b9b3bec0f\" DevicePath \"\"" Jul 6 23:35:54.590907 kubelet[3384]: I0706 23:35:54.590829 3384 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/265cf7e9-19b5-49e4-8c7e-042a204beeb8-hubble-tls\") on node \"ci-4230.2.1-a-3b9b3bec0f\" DevicePath \"\"" Jul 6 23:35:54.590907 kubelet[3384]: I0706 23:35:54.590839 3384 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/265cf7e9-19b5-49e4-8c7e-042a204beeb8-clustermesh-secrets\") on node \"ci-4230.2.1-a-3b9b3bec0f\" DevicePath \"\"" Jul 6 23:35:54.590907 kubelet[3384]: I0706 23:35:54.590847 3384 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8xg2p\" (UniqueName: \"kubernetes.io/projected/265cf7e9-19b5-49e4-8c7e-042a204beeb8-kube-api-access-8xg2p\") on node \"ci-4230.2.1-a-3b9b3bec0f\" DevicePath \"\"" Jul 6 23:35:54.591091 kubelet[3384]: I0706 23:35:54.590855 3384 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-bpf-maps\") on node \"ci-4230.2.1-a-3b9b3bec0f\" DevicePath \"\"" Jul 6 23:35:54.591091 kubelet[3384]: I0706 23:35:54.590864 3384 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/265cf7e9-19b5-49e4-8c7e-042a204beeb8-cilium-config-path\") on node \"ci-4230.2.1-a-3b9b3bec0f\" DevicePath \"\"" Jul 6 23:35:54.591091 kubelet[3384]: I0706 23:35:54.590873 3384 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-hostproc\") on node \"ci-4230.2.1-a-3b9b3bec0f\" DevicePath \"\"" Jul 6 23:35:54.591091 kubelet[3384]: I0706 23:35:54.590883 3384 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/265cf7e9-19b5-49e4-8c7e-042a204beeb8-xtables-lock\") on node \"ci-4230.2.1-a-3b9b3bec0f\" DevicePath \"\"" Jul 6 23:35:54.591091 kubelet[3384]: I0706 23:35:54.590890 3384 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f4qz4\" (UniqueName: \"kubernetes.io/projected/19333f96-b03d-4855-8269-abeb59c584fd-kube-api-access-f4qz4\") on node \"ci-4230.2.1-a-3b9b3bec0f\" DevicePath \"\"" Jul 6 23:35:55.136814 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7488e97d0e804bce8fd97d4d3c75d8ef80ea162943be31fd8e2bf2e29d6d1f24-rootfs.mount: Deactivated successfully. Jul 6 23:35:55.137158 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7488e97d0e804bce8fd97d4d3c75d8ef80ea162943be31fd8e2bf2e29d6d1f24-shm.mount: Deactivated successfully. Jul 6 23:35:55.137323 systemd[1]: var-lib-kubelet-pods-265cf7e9\x2d19b5\x2d49e4\x2d8c7e\x2d042a204beeb8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8xg2p.mount: Deactivated successfully. Jul 6 23:35:55.137479 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c330d21b992db05508ebef095d62f732bbe6943cd92c671abf20fade4457fed9-rootfs.mount: Deactivated successfully. Jul 6 23:35:55.137607 systemd[1]: var-lib-kubelet-pods-19333f96\x2db03d\x2d4855\x2d8269\x2dabeb59c584fd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df4qz4.mount: Deactivated successfully. 
Jul 6 23:35:55.137662 systemd[1]: var-lib-kubelet-pods-265cf7e9\x2d19b5\x2d49e4\x2d8c7e\x2d042a204beeb8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 6 23:35:55.137712 systemd[1]: var-lib-kubelet-pods-265cf7e9\x2d19b5\x2d49e4\x2d8c7e\x2d042a204beeb8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 6 23:35:55.390268 kubelet[3384]: I0706 23:35:55.389867 3384 scope.go:117] "RemoveContainer" containerID="b7ba3a42cfcb308745fedd8e2f1c12cf813e1b4c7051251dbdf9dca00455837d" Jul 6 23:35:55.393937 containerd[1785]: time="2025-07-06T23:35:55.393160007Z" level=info msg="RemoveContainer for \"b7ba3a42cfcb308745fedd8e2f1c12cf813e1b4c7051251dbdf9dca00455837d\"" Jul 6 23:35:55.405911 systemd[1]: Removed slice kubepods-burstable-pod265cf7e9_19b5_49e4_8c7e_042a204beeb8.slice - libcontainer container kubepods-burstable-pod265cf7e9_19b5_49e4_8c7e_042a204beeb8.slice. Jul 6 23:35:55.406058 systemd[1]: kubepods-burstable-pod265cf7e9_19b5_49e4_8c7e_042a204beeb8.slice: Consumed 6.765s CPU time, 124.8M memory peak, 136K read from disk, 12.9M written to disk. Jul 6 23:35:55.413752 systemd[1]: Removed slice kubepods-besteffort-pod19333f96_b03d_4855_8269_abeb59c584fd.slice - libcontainer container kubepods-besteffort-pod19333f96_b03d_4855_8269_abeb59c584fd.slice. 
Jul 6 23:35:55.417413 containerd[1785]: time="2025-07-06T23:35:55.417375120Z" level=info msg="RemoveContainer for \"b7ba3a42cfcb308745fedd8e2f1c12cf813e1b4c7051251dbdf9dca00455837d\" returns successfully" Jul 6 23:35:55.417822 kubelet[3384]: I0706 23:35:55.417790 3384 scope.go:117] "RemoveContainer" containerID="2720f7d13cbc4f308552ed2449856c3f56b48b5a931a3a88c56e5f519a9ab78f" Jul 6 23:35:55.423424 containerd[1785]: time="2025-07-06T23:35:55.423329728Z" level=info msg="RemoveContainer for \"2720f7d13cbc4f308552ed2449856c3f56b48b5a931a3a88c56e5f519a9ab78f\"" Jul 6 23:35:55.445431 containerd[1785]: time="2025-07-06T23:35:55.445380479Z" level=info msg="RemoveContainer for \"2720f7d13cbc4f308552ed2449856c3f56b48b5a931a3a88c56e5f519a9ab78f\" returns successfully" Jul 6 23:35:55.445937 kubelet[3384]: I0706 23:35:55.445803 3384 scope.go:117] "RemoveContainer" containerID="2a4a5a37e441842a281e2d239541891be78a9816a5b1ae32897952eb5d205cf0" Jul 6 23:35:55.449237 containerd[1785]: time="2025-07-06T23:35:55.449098404Z" level=info msg="RemoveContainer for \"2a4a5a37e441842a281e2d239541891be78a9816a5b1ae32897952eb5d205cf0\"" Jul 6 23:35:55.463616 containerd[1785]: time="2025-07-06T23:35:55.463566984Z" level=info msg="RemoveContainer for \"2a4a5a37e441842a281e2d239541891be78a9816a5b1ae32897952eb5d205cf0\" returns successfully" Jul 6 23:35:55.464115 kubelet[3384]: I0706 23:35:55.463904 3384 scope.go:117] "RemoveContainer" containerID="1602ad2c89a65c10ff67bee7e172f92073a6072777ab069fd91f3d0b6c325d98" Jul 6 23:35:55.465581 containerd[1785]: time="2025-07-06T23:35:55.465526986Z" level=info msg="RemoveContainer for \"1602ad2c89a65c10ff67bee7e172f92073a6072777ab069fd91f3d0b6c325d98\"" Jul 6 23:35:55.480068 containerd[1785]: time="2025-07-06T23:35:55.480019646Z" level=info msg="RemoveContainer for \"1602ad2c89a65c10ff67bee7e172f92073a6072777ab069fd91f3d0b6c325d98\" returns successfully" Jul 6 23:35:55.480384 kubelet[3384]: I0706 23:35:55.480331 3384 scope.go:117] "RemoveContainer" 
containerID="4e7e268b0002a0176b513d4fb82588839fafa6c3b44760f4b46fcc1236ae3b75" Jul 6 23:35:55.481640 containerd[1785]: time="2025-07-06T23:35:55.481599728Z" level=info msg="RemoveContainer for \"4e7e268b0002a0176b513d4fb82588839fafa6c3b44760f4b46fcc1236ae3b75\"" Jul 6 23:35:55.498435 containerd[1785]: time="2025-07-06T23:35:55.498352831Z" level=info msg="RemoveContainer for \"4e7e268b0002a0176b513d4fb82588839fafa6c3b44760f4b46fcc1236ae3b75\" returns successfully" Jul 6 23:35:55.498761 kubelet[3384]: I0706 23:35:55.498716 3384 scope.go:117] "RemoveContainer" containerID="645f604784caff7873d97ca59cc27eb165963a3673117da33a05eb8589f66a07" Jul 6 23:35:55.500295 containerd[1785]: time="2025-07-06T23:35:55.500240274Z" level=info msg="RemoveContainer for \"645f604784caff7873d97ca59cc27eb165963a3673117da33a05eb8589f66a07\"" Jul 6 23:35:55.514800 containerd[1785]: time="2025-07-06T23:35:55.514756214Z" level=info msg="RemoveContainer for \"645f604784caff7873d97ca59cc27eb165963a3673117da33a05eb8589f66a07\" returns successfully" Jul 6 23:35:56.022401 kubelet[3384]: I0706 23:35:56.022352 3384 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19333f96-b03d-4855-8269-abeb59c584fd" path="/var/lib/kubelet/pods/19333f96-b03d-4855-8269-abeb59c584fd/volumes" Jul 6 23:35:56.022768 kubelet[3384]: I0706 23:35:56.022739 3384 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="265cf7e9-19b5-49e4-8c7e-042a204beeb8" path="/var/lib/kubelet/pods/265cf7e9-19b5-49e4-8c7e-042a204beeb8/volumes" Jul 6 23:35:56.130165 sshd[4968]: Connection closed by 10.200.16.10 port 36012 Jul 6 23:35:56.130808 sshd-session[4966]: pam_unix(sshd:session): session closed for user core Jul 6 23:35:56.135440 systemd[1]: sshd@20-10.200.20.11:22-10.200.16.10:36012.service: Deactivated successfully. Jul 6 23:35:56.137636 systemd[1]: session-23.scope: Deactivated successfully. Jul 6 23:35:56.137862 systemd[1]: session-23.scope: Consumed 1.119s CPU time, 23.7M memory peak. 
Jul 6 23:35:56.138681 kubelet[3384]: E0706 23:35:56.138322 3384 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 6 23:35:56.138968 systemd-logind[1752]: Session 23 logged out. Waiting for processes to exit. Jul 6 23:35:56.140640 systemd-logind[1752]: Removed session 23. Jul 6 23:35:56.228964 systemd[1]: Started sshd@21-10.200.20.11:22-10.200.16.10:36018.service - OpenSSH per-connection server daemon (10.200.16.10:36018). Jul 6 23:35:56.708328 sshd[5126]: Accepted publickey for core from 10.200.16.10 port 36018 ssh2: RSA SHA256:3OqJfzY7AVg+q5elAE97NXiEux0mANjRjysP1FUtbfY Jul 6 23:35:56.709795 sshd-session[5126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:35:56.714559 systemd-logind[1752]: New session 24 of user core. Jul 6 23:35:56.721754 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 6 23:35:58.312469 kubelet[3384]: E0706 23:35:58.312411 3384 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4230.2.1-a-3b9b3bec0f\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.1-a-3b9b3bec0f' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-clustermesh\"" type="*v1.Secret" Jul 6 23:35:58.315038 systemd[1]: Created slice kubepods-burstable-pod17a3956a_d73a_4d7d_a175_0487946385fa.slice - libcontainer container kubepods-burstable-pod17a3956a_d73a_4d7d_a175_0487946385fa.slice. Jul 6 23:35:58.355786 sshd[5128]: Connection closed by 10.200.16.10 port 36018 Jul 6 23:35:58.356450 sshd-session[5126]: pam_unix(sshd:session): session closed for user core Jul 6 23:35:58.362609 systemd[1]: sshd@21-10.200.20.11:22-10.200.16.10:36018.service: Deactivated successfully. 
Jul 6 23:35:58.369073 systemd[1]: session-24.scope: Deactivated successfully. Jul 6 23:35:58.369892 systemd[1]: session-24.scope: Consumed 1.187s CPU time, 25.6M memory peak. Jul 6 23:35:58.371528 systemd-logind[1752]: Session 24 logged out. Waiting for processes to exit. Jul 6 23:35:58.374594 systemd-logind[1752]: Removed session 24. Jul 6 23:35:58.414686 kubelet[3384]: I0706 23:35:58.414636 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/17a3956a-d73a-4d7d-a175-0487946385fa-cilium-cgroup\") pod \"cilium-jjhpm\" (UID: \"17a3956a-d73a-4d7d-a175-0487946385fa\") " pod="kube-system/cilium-jjhpm" Jul 6 23:35:58.414686 kubelet[3384]: I0706 23:35:58.414685 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/17a3956a-d73a-4d7d-a175-0487946385fa-etc-cni-netd\") pod \"cilium-jjhpm\" (UID: \"17a3956a-d73a-4d7d-a175-0487946385fa\") " pod="kube-system/cilium-jjhpm" Jul 6 23:35:58.415057 kubelet[3384]: I0706 23:35:58.414707 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/17a3956a-d73a-4d7d-a175-0487946385fa-host-proc-sys-kernel\") pod \"cilium-jjhpm\" (UID: \"17a3956a-d73a-4d7d-a175-0487946385fa\") " pod="kube-system/cilium-jjhpm" Jul 6 23:35:58.415057 kubelet[3384]: I0706 23:35:58.414749 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/17a3956a-d73a-4d7d-a175-0487946385fa-xtables-lock\") pod \"cilium-jjhpm\" (UID: \"17a3956a-d73a-4d7d-a175-0487946385fa\") " pod="kube-system/cilium-jjhpm" Jul 6 23:35:58.415057 kubelet[3384]: I0706 23:35:58.414768 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/17a3956a-d73a-4d7d-a175-0487946385fa-clustermesh-secrets\") pod \"cilium-jjhpm\" (UID: \"17a3956a-d73a-4d7d-a175-0487946385fa\") " pod="kube-system/cilium-jjhpm" Jul 6 23:35:58.415057 kubelet[3384]: I0706 23:35:58.414783 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqwsf\" (UniqueName: \"kubernetes.io/projected/17a3956a-d73a-4d7d-a175-0487946385fa-kube-api-access-hqwsf\") pod \"cilium-jjhpm\" (UID: \"17a3956a-d73a-4d7d-a175-0487946385fa\") " pod="kube-system/cilium-jjhpm" Jul 6 23:35:58.415057 kubelet[3384]: I0706 23:35:58.414799 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/17a3956a-d73a-4d7d-a175-0487946385fa-cilium-run\") pod \"cilium-jjhpm\" (UID: \"17a3956a-d73a-4d7d-a175-0487946385fa\") " pod="kube-system/cilium-jjhpm" Jul 6 23:35:58.415057 kubelet[3384]: I0706 23:35:58.414813 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17a3956a-d73a-4d7d-a175-0487946385fa-lib-modules\") pod \"cilium-jjhpm\" (UID: \"17a3956a-d73a-4d7d-a175-0487946385fa\") " pod="kube-system/cilium-jjhpm" Jul 6 23:35:58.415234 kubelet[3384]: I0706 23:35:58.414855 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/17a3956a-d73a-4d7d-a175-0487946385fa-hubble-tls\") pod \"cilium-jjhpm\" (UID: \"17a3956a-d73a-4d7d-a175-0487946385fa\") " pod="kube-system/cilium-jjhpm" Jul 6 23:35:58.415234 kubelet[3384]: I0706 23:35:58.414891 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/17a3956a-d73a-4d7d-a175-0487946385fa-cni-path\") pod \"cilium-jjhpm\" (UID: 
\"17a3956a-d73a-4d7d-a175-0487946385fa\") " pod="kube-system/cilium-jjhpm" Jul 6 23:35:58.415234 kubelet[3384]: I0706 23:35:58.414912 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/17a3956a-d73a-4d7d-a175-0487946385fa-hostproc\") pod \"cilium-jjhpm\" (UID: \"17a3956a-d73a-4d7d-a175-0487946385fa\") " pod="kube-system/cilium-jjhpm" Jul 6 23:35:58.415234 kubelet[3384]: I0706 23:35:58.414934 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/17a3956a-d73a-4d7d-a175-0487946385fa-cilium-config-path\") pod \"cilium-jjhpm\" (UID: \"17a3956a-d73a-4d7d-a175-0487946385fa\") " pod="kube-system/cilium-jjhpm" Jul 6 23:35:58.415234 kubelet[3384]: I0706 23:35:58.414964 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/17a3956a-d73a-4d7d-a175-0487946385fa-cilium-ipsec-secrets\") pod \"cilium-jjhpm\" (UID: \"17a3956a-d73a-4d7d-a175-0487946385fa\") " pod="kube-system/cilium-jjhpm" Jul 6 23:35:58.415234 kubelet[3384]: I0706 23:35:58.414985 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/17a3956a-d73a-4d7d-a175-0487946385fa-host-proc-sys-net\") pod \"cilium-jjhpm\" (UID: \"17a3956a-d73a-4d7d-a175-0487946385fa\") " pod="kube-system/cilium-jjhpm" Jul 6 23:35:58.415362 kubelet[3384]: I0706 23:35:58.415003 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/17a3956a-d73a-4d7d-a175-0487946385fa-bpf-maps\") pod \"cilium-jjhpm\" (UID: \"17a3956a-d73a-4d7d-a175-0487946385fa\") " pod="kube-system/cilium-jjhpm" Jul 6 23:35:58.455457 systemd[1]: Started 
sshd@22-10.200.20.11:22-10.200.16.10:36032.service - OpenSSH per-connection server daemon (10.200.16.10:36032). Jul 6 23:35:58.931488 sshd[5138]: Accepted publickey for core from 10.200.16.10 port 36032 ssh2: RSA SHA256:3OqJfzY7AVg+q5elAE97NXiEux0mANjRjysP1FUtbfY Jul 6 23:35:58.933068 sshd-session[5138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:35:58.938059 systemd-logind[1752]: New session 25 of user core. Jul 6 23:35:58.947408 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 6 23:35:59.280892 sshd[5143]: Connection closed by 10.200.16.10 port 36032 Jul 6 23:35:59.281579 sshd-session[5138]: pam_unix(sshd:session): session closed for user core Jul 6 23:35:59.285467 systemd[1]: sshd@22-10.200.20.11:22-10.200.16.10:36032.service: Deactivated successfully. Jul 6 23:35:59.289093 systemd[1]: session-25.scope: Deactivated successfully. Jul 6 23:35:59.290026 systemd-logind[1752]: Session 25 logged out. Waiting for processes to exit. Jul 6 23:35:59.292499 systemd-logind[1752]: Removed session 25. Jul 6 23:35:59.361883 kubelet[3384]: I0706 23:35:59.361701 3384 setters.go:618] "Node became not ready" node="ci-4230.2.1-a-3b9b3bec0f" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-06T23:35:59Z","lastTransitionTime":"2025-07-06T23:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 6 23:35:59.374768 systemd[1]: Started sshd@23-10.200.20.11:22-10.200.16.10:36040.service - OpenSSH per-connection server daemon (10.200.16.10:36040). 
Jul 6 23:35:59.525955 kubelet[3384]: E0706 23:35:59.525901 3384 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Jul 6 23:35:59.526107 kubelet[3384]: E0706 23:35:59.526012 3384 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17a3956a-d73a-4d7d-a175-0487946385fa-clustermesh-secrets podName:17a3956a-d73a-4d7d-a175-0487946385fa nodeName:}" failed. No retries permitted until 2025-07-06 23:36:00.025991764 +0000 UTC m=+154.106970378 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/17a3956a-d73a-4d7d-a175-0487946385fa-clustermesh-secrets") pod "cilium-jjhpm" (UID: "17a3956a-d73a-4d7d-a175-0487946385fa") : failed to sync secret cache: timed out waiting for the condition Jul 6 23:35:59.861567 sshd[5150]: Accepted publickey for core from 10.200.16.10 port 36040 ssh2: RSA SHA256:3OqJfzY7AVg+q5elAE97NXiEux0mANjRjysP1FUtbfY Jul 6 23:35:59.862932 sshd-session[5150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:35:59.867589 systemd-logind[1752]: New session 26 of user core. Jul 6 23:35:59.874388 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 6 23:36:00.419593 containerd[1785]: time="2025-07-06T23:36:00.419545526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jjhpm,Uid:17a3956a-d73a-4d7d-a175-0487946385fa,Namespace:kube-system,Attempt:0,}" Jul 6 23:36:00.475875 containerd[1785]: time="2025-07-06T23:36:00.475733724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:36:00.475875 containerd[1785]: time="2025-07-06T23:36:00.475791884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:36:00.475875 containerd[1785]: time="2025-07-06T23:36:00.475847764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:36:00.476235 containerd[1785]: time="2025-07-06T23:36:00.475972725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:36:00.501338 systemd[1]: Started cri-containerd-9c91686dd411d6375dc18f1c8b0de1e5a416126baec9cdd9093bd086e7892b46.scope - libcontainer container 9c91686dd411d6375dc18f1c8b0de1e5a416126baec9cdd9093bd086e7892b46. Jul 6 23:36:00.526153 containerd[1785]: time="2025-07-06T23:36:00.526061594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jjhpm,Uid:17a3956a-d73a-4d7d-a175-0487946385fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c91686dd411d6375dc18f1c8b0de1e5a416126baec9cdd9093bd086e7892b46\"" Jul 6 23:36:00.537323 containerd[1785]: time="2025-07-06T23:36:00.537270090Z" level=info msg="CreateContainer within sandbox \"9c91686dd411d6375dc18f1c8b0de1e5a416126baec9cdd9093bd086e7892b46\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 6 23:36:00.587018 containerd[1785]: time="2025-07-06T23:36:00.586957639Z" level=info msg="CreateContainer within sandbox \"9c91686dd411d6375dc18f1c8b0de1e5a416126baec9cdd9093bd086e7892b46\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6b23fe00bf07e21dc04ae858cdf39e8127c3c9f199e8a0650a97c6d6b736a8e7\"" Jul 6 23:36:00.588359 containerd[1785]: time="2025-07-06T23:36:00.588297801Z" level=info msg="StartContainer for \"6b23fe00bf07e21dc04ae858cdf39e8127c3c9f199e8a0650a97c6d6b736a8e7\"" Jul 6 23:36:00.615386 systemd[1]: Started cri-containerd-6b23fe00bf07e21dc04ae858cdf39e8127c3c9f199e8a0650a97c6d6b736a8e7.scope - libcontainer container 6b23fe00bf07e21dc04ae858cdf39e8127c3c9f199e8a0650a97c6d6b736a8e7. 
Jul 6 23:36:00.647383 containerd[1785]: time="2025-07-06T23:36:00.647326923Z" level=info msg="StartContainer for \"6b23fe00bf07e21dc04ae858cdf39e8127c3c9f199e8a0650a97c6d6b736a8e7\" returns successfully" Jul 6 23:36:00.649903 systemd[1]: cri-containerd-6b23fe00bf07e21dc04ae858cdf39e8127c3c9f199e8a0650a97c6d6b736a8e7.scope: Deactivated successfully. Jul 6 23:36:00.745227 containerd[1785]: time="2025-07-06T23:36:00.744999819Z" level=info msg="shim disconnected" id=6b23fe00bf07e21dc04ae858cdf39e8127c3c9f199e8a0650a97c6d6b736a8e7 namespace=k8s.io Jul 6 23:36:00.745764 containerd[1785]: time="2025-07-06T23:36:00.745458139Z" level=warning msg="cleaning up after shim disconnected" id=6b23fe00bf07e21dc04ae858cdf39e8127c3c9f199e8a0650a97c6d6b736a8e7 namespace=k8s.io Jul 6 23:36:00.745764 containerd[1785]: time="2025-07-06T23:36:00.745479059Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:36:01.140103 kubelet[3384]: E0706 23:36:01.140057 3384 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 6 23:36:01.435568 containerd[1785]: time="2025-07-06T23:36:01.435446618Z" level=info msg="CreateContainer within sandbox \"9c91686dd411d6375dc18f1c8b0de1e5a416126baec9cdd9093bd086e7892b46\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 6 23:36:01.469610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3531063706.mount: Deactivated successfully. 
Jul 6 23:36:01.485874 containerd[1785]: time="2025-07-06T23:36:01.485736088Z" level=info msg="CreateContainer within sandbox \"9c91686dd411d6375dc18f1c8b0de1e5a416126baec9cdd9093bd086e7892b46\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e32571cb6900ea227b1759560b77d0c66fbd828c00629b2347a223718b8f1529\"" Jul 6 23:36:01.486683 containerd[1785]: time="2025-07-06T23:36:01.486483769Z" level=info msg="StartContainer for \"e32571cb6900ea227b1759560b77d0c66fbd828c00629b2347a223718b8f1529\"" Jul 6 23:36:01.516337 systemd[1]: Started cri-containerd-e32571cb6900ea227b1759560b77d0c66fbd828c00629b2347a223718b8f1529.scope - libcontainer container e32571cb6900ea227b1759560b77d0c66fbd828c00629b2347a223718b8f1529. Jul 6 23:36:01.553195 containerd[1785]: time="2025-07-06T23:36:01.552969261Z" level=info msg="StartContainer for \"e32571cb6900ea227b1759560b77d0c66fbd828c00629b2347a223718b8f1529\" returns successfully" Jul 6 23:36:01.560934 systemd[1]: cri-containerd-e32571cb6900ea227b1759560b77d0c66fbd828c00629b2347a223718b8f1529.scope: Deactivated successfully. Jul 6 23:36:01.605803 containerd[1785]: time="2025-07-06T23:36:01.605719455Z" level=info msg="shim disconnected" id=e32571cb6900ea227b1759560b77d0c66fbd828c00629b2347a223718b8f1529 namespace=k8s.io Jul 6 23:36:01.605803 containerd[1785]: time="2025-07-06T23:36:01.605796375Z" level=warning msg="cleaning up after shim disconnected" id=e32571cb6900ea227b1759560b77d0c66fbd828c00629b2347a223718b8f1529 namespace=k8s.io Jul 6 23:36:01.605803 containerd[1785]: time="2025-07-06T23:36:01.605806775Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:36:02.130649 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e32571cb6900ea227b1759560b77d0c66fbd828c00629b2347a223718b8f1529-rootfs.mount: Deactivated successfully. 
Jul 6 23:36:02.439226 containerd[1785]: time="2025-07-06T23:36:02.438847213Z" level=info msg="CreateContainer within sandbox \"9c91686dd411d6375dc18f1c8b0de1e5a416126baec9cdd9093bd086e7892b46\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 6 23:36:02.500204 containerd[1785]: time="2025-07-06T23:36:02.500148498Z" level=info msg="CreateContainer within sandbox \"9c91686dd411d6375dc18f1c8b0de1e5a416126baec9cdd9093bd086e7892b46\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"33230c38401a1c9750783fcc3f021bc1418885233b26997344b7578278329399\"" Jul 6 23:36:02.500761 containerd[1785]: time="2025-07-06T23:36:02.500733819Z" level=info msg="StartContainer for \"33230c38401a1c9750783fcc3f021bc1418885233b26997344b7578278329399\"" Jul 6 23:36:02.539454 systemd[1]: Started cri-containerd-33230c38401a1c9750783fcc3f021bc1418885233b26997344b7578278329399.scope - libcontainer container 33230c38401a1c9750783fcc3f021bc1418885233b26997344b7578278329399. Jul 6 23:36:02.579609 systemd[1]: cri-containerd-33230c38401a1c9750783fcc3f021bc1418885233b26997344b7578278329399.scope: Deactivated successfully. 
Jul 6 23:36:02.581983 containerd[1785]: time="2025-07-06T23:36:02.581831331Z" level=info msg="StartContainer for \"33230c38401a1c9750783fcc3f021bc1418885233b26997344b7578278329399\" returns successfully" Jul 6 23:36:02.624703 containerd[1785]: time="2025-07-06T23:36:02.624587711Z" level=info msg="shim disconnected" id=33230c38401a1c9750783fcc3f021bc1418885233b26997344b7578278329399 namespace=k8s.io Jul 6 23:36:02.624703 containerd[1785]: time="2025-07-06T23:36:02.624647071Z" level=warning msg="cleaning up after shim disconnected" id=33230c38401a1c9750783fcc3f021bc1418885233b26997344b7578278329399 namespace=k8s.io Jul 6 23:36:02.624703 containerd[1785]: time="2025-07-06T23:36:02.624655711Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:36:03.130783 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33230c38401a1c9750783fcc3f021bc1418885233b26997344b7578278329399-rootfs.mount: Deactivated successfully. Jul 6 23:36:03.446575 containerd[1785]: time="2025-07-06T23:36:03.446200213Z" level=info msg="CreateContainer within sandbox \"9c91686dd411d6375dc18f1c8b0de1e5a416126baec9cdd9093bd086e7892b46\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 6 23:36:03.505300 containerd[1785]: time="2025-07-06T23:36:03.505246215Z" level=info msg="CreateContainer within sandbox \"9c91686dd411d6375dc18f1c8b0de1e5a416126baec9cdd9093bd086e7892b46\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c4bb24ed952240bb54d030a0117af33ed770aa950cd59c3c440f4915b3cfa823\"" Jul 6 23:36:03.506863 containerd[1785]: time="2025-07-06T23:36:03.505863576Z" level=info msg="StartContainer for \"c4bb24ed952240bb54d030a0117af33ed770aa950cd59c3c440f4915b3cfa823\"" Jul 6 23:36:03.540378 systemd[1]: Started cri-containerd-c4bb24ed952240bb54d030a0117af33ed770aa950cd59c3c440f4915b3cfa823.scope - libcontainer container c4bb24ed952240bb54d030a0117af33ed770aa950cd59c3c440f4915b3cfa823. 
Jul 6 23:36:03.569919 systemd[1]: cri-containerd-c4bb24ed952240bb54d030a0117af33ed770aa950cd59c3c440f4915b3cfa823.scope: Deactivated successfully.
Jul 6 23:36:03.576351 containerd[1785]: time="2025-07-06T23:36:03.576214073Z" level=info msg="StartContainer for \"c4bb24ed952240bb54d030a0117af33ed770aa950cd59c3c440f4915b3cfa823\" returns successfully"
Jul 6 23:36:03.612754 containerd[1785]: time="2025-07-06T23:36:03.612501764Z" level=info msg="shim disconnected" id=c4bb24ed952240bb54d030a0117af33ed770aa950cd59c3c440f4915b3cfa823 namespace=k8s.io
Jul 6 23:36:03.612754 containerd[1785]: time="2025-07-06T23:36:03.612562884Z" level=warning msg="cleaning up after shim disconnected" id=c4bb24ed952240bb54d030a0117af33ed770aa950cd59c3c440f4915b3cfa823 namespace=k8s.io
Jul 6 23:36:03.612754 containerd[1785]: time="2025-07-06T23:36:03.612571284Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:36:04.132321 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4bb24ed952240bb54d030a0117af33ed770aa950cd59c3c440f4915b3cfa823-rootfs.mount: Deactivated successfully.
Jul 6 23:36:04.455021 containerd[1785]: time="2025-07-06T23:36:04.454872454Z" level=info msg="CreateContainer within sandbox \"9c91686dd411d6375dc18f1c8b0de1e5a416126baec9cdd9093bd086e7892b46\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 6 23:36:04.516912 containerd[1785]: time="2025-07-06T23:36:04.516795341Z" level=info msg="CreateContainer within sandbox \"9c91686dd411d6375dc18f1c8b0de1e5a416126baec9cdd9093bd086e7892b46\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6be3dfb768e37a2e04cddd4895d29633f9d294193b7fc56a4fd7f1397f3ec61c\""
Jul 6 23:36:04.517910 containerd[1785]: time="2025-07-06T23:36:04.517849822Z" level=info msg="StartContainer for \"6be3dfb768e37a2e04cddd4895d29633f9d294193b7fc56a4fd7f1397f3ec61c\""
Jul 6 23:36:04.553395 systemd[1]: Started cri-containerd-6be3dfb768e37a2e04cddd4895d29633f9d294193b7fc56a4fd7f1397f3ec61c.scope - libcontainer container 6be3dfb768e37a2e04cddd4895d29633f9d294193b7fc56a4fd7f1397f3ec61c.
Jul 6 23:36:04.588696 containerd[1785]: time="2025-07-06T23:36:04.588355280Z" level=info msg="StartContainer for \"6be3dfb768e37a2e04cddd4895d29633f9d294193b7fc56a4fd7f1397f3ec61c\" returns successfully"
Jul 6 23:36:05.084231 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 6 23:36:05.478271 kubelet[3384]: I0706 23:36:05.478094 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jjhpm" podStartSLOduration=7.478076117 podStartE2EDuration="7.478076117s" podCreationTimestamp="2025-07-06 23:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:36:05.476672795 +0000 UTC m=+159.557651449" watchObservedRunningTime="2025-07-06 23:36:05.478076117 +0000 UTC m=+159.559054731"
Jul 6 23:36:06.436962 systemd[1]: run-containerd-runc-k8s.io-6be3dfb768e37a2e04cddd4895d29633f9d294193b7fc56a4fd7f1397f3ec61c-runc.KhHhCi.mount: Deactivated successfully.
Jul 6 23:36:07.895110 systemd-networkd[1343]: lxc_health: Link UP
Jul 6 23:36:07.919324 systemd-networkd[1343]: lxc_health: Gained carrier
Jul 6 23:36:08.620419 systemd[1]: run-containerd-runc-k8s.io-6be3dfb768e37a2e04cddd4895d29633f9d294193b7fc56a4fd7f1397f3ec61c-runc.UHJ1kM.mount: Deactivated successfully.
Jul 6 23:36:09.546385 systemd-networkd[1343]: lxc_health: Gained IPv6LL
Jul 6 23:36:12.997597 kubelet[3384]: E0706 23:36:12.997385 3384 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:45742->127.0.0.1:46649: write tcp 127.0.0.1:45742->127.0.0.1:46649: write: broken pipe
Jul 6 23:36:13.070550 sshd[5152]: Connection closed by 10.200.16.10 port 36040
Jul 6 23:36:13.071218 sshd-session[5150]: pam_unix(sshd:session): session closed for user core
Jul 6 23:36:13.074819 systemd[1]: sshd@23-10.200.20.11:22-10.200.16.10:36040.service: Deactivated successfully.
Jul 6 23:36:13.076770 systemd[1]: session-26.scope: Deactivated successfully.
Jul 6 23:36:13.077620 systemd-logind[1752]: Session 26 logged out. Waiting for processes to exit.
Jul 6 23:36:13.078649 systemd-logind[1752]: Removed session 26.