Jul 6 23:20:47.296810 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 6 23:20:47.296832 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Sun Jul 6 21:51:54 -00 2025
Jul 6 23:20:47.296841 kernel: KASLR enabled
Jul 6 23:20:47.296847 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jul 6 23:20:47.296854 kernel: printk: bootconsole [pl11] enabled
Jul 6 23:20:47.296859 kernel: efi: EFI v2.7 by EDK II
Jul 6 23:20:47.296866 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Jul 6 23:20:47.296872 kernel: random: crng init done
Jul 6 23:20:47.296878 kernel: secureboot: Secure boot disabled
Jul 6 23:20:47.296884 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:20:47.296889 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jul 6 23:20:47.296895 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:20:47.296901 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:20:47.296908 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jul 6 23:20:47.296916 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:20:47.296922 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:20:47.296928 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:20:47.296936 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:20:47.296942 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:20:47.296948 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:20:47.296954 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jul 6 23:20:47.296961 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:20:47.296967 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jul 6 23:20:47.296973 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jul 6 23:20:47.296979 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jul 6 23:20:47.296986 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jul 6 23:20:47.296992 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jul 6 23:20:47.296998 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jul 6 23:20:47.297006 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jul 6 23:20:47.297012 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jul 6 23:20:47.297018 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jul 6 23:20:47.297024 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jul 6 23:20:47.297030 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jul 6 23:20:47.297037 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jul 6 23:20:47.297043 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jul 6 23:20:47.297049 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Jul 6 23:20:47.297055 kernel: Zone ranges:
Jul 6 23:20:47.297061 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jul 6 23:20:47.297067 kernel: DMA32 empty
Jul 6 23:20:47.297073 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jul 6 23:20:47.297083 kernel: Movable zone start for each node
Jul 6 23:20:47.297090 kernel: Early memory node ranges
Jul 6 23:20:47.297097 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jul 6 23:20:47.297103 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Jul 6 23:20:47.297110 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Jul 6 23:20:47.299167 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Jul 6 23:20:47.299188 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jul 6 23:20:47.299195 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jul 6 23:20:47.299202 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jul 6 23:20:47.299208 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jul 6 23:20:47.299215 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jul 6 23:20:47.299222 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jul 6 23:20:47.299229 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jul 6 23:20:47.299236 kernel: psci: probing for conduit method from ACPI.
Jul 6 23:20:47.299242 kernel: psci: PSCIv1.1 detected in firmware.
Jul 6 23:20:47.299249 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 6 23:20:47.299255 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jul 6 23:20:47.299266 kernel: psci: SMC Calling Convention v1.4
Jul 6 23:20:47.299273 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jul 6 23:20:47.299280 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jul 6 23:20:47.299286 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 6 23:20:47.299293 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 6 23:20:47.299300 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 6 23:20:47.299306 kernel: Detected PIPT I-cache on CPU0
Jul 6 23:20:47.299313 kernel: CPU features: detected: GIC system register CPU interface
Jul 6 23:20:47.299320 kernel: CPU features: detected: Hardware dirty bit management
Jul 6 23:20:47.299326 kernel: CPU features: detected: Spectre-BHB
Jul 6 23:20:47.299333 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 6 23:20:47.299341 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 6 23:20:47.299351 kernel: CPU features: detected: ARM erratum 1418040
Jul 6 23:20:47.299358 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jul 6 23:20:47.299365 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 6 23:20:47.299371 kernel: alternatives: applying boot alternatives
Jul 6 23:20:47.299379 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=ca8feb1f79a67c117068f051b5f829d3e40170c022cd5834bd6789cba9641479
Jul 6 23:20:47.299387 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:20:47.299393 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 6 23:20:47.299400 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 6 23:20:47.299407 kernel: Fallback order for Node 0: 0
Jul 6 23:20:47.299413 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jul 6 23:20:47.299422 kernel: Policy zone: Normal
Jul 6 23:20:47.299428 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:20:47.299435 kernel: software IO TLB: area num 2.
Jul 6 23:20:47.299442 kernel: software IO TLB: mapped [mem 0x0000000036540000-0x000000003a540000] (64MB)
Jul 6 23:20:47.299449 kernel: Memory: 3983592K/4194160K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38336K init, 897K bss, 210568K reserved, 0K cma-reserved)
Jul 6 23:20:47.299456 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 6 23:20:47.299462 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:20:47.299469 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:20:47.299476 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 6 23:20:47.299483 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:20:47.299490 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:20:47.299498 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:20:47.299505 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 6 23:20:47.299511 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 6 23:20:47.299518 kernel: GICv3: 960 SPIs implemented
Jul 6 23:20:47.299524 kernel: GICv3: 0 Extended SPIs implemented
Jul 6 23:20:47.299531 kernel: Root IRQ handler: gic_handle_irq
Jul 6 23:20:47.299537 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 6 23:20:47.299544 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jul 6 23:20:47.299550 kernel: ITS: No ITS available, not enabling LPIs
Jul 6 23:20:47.299557 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:20:47.299564 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:20:47.299571 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 6 23:20:47.299579 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 6 23:20:47.299586 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 6 23:20:47.299592 kernel: Console: colour dummy device 80x25
Jul 6 23:20:47.299599 kernel: printk: console [tty1] enabled
Jul 6 23:20:47.299606 kernel: ACPI: Core revision 20230628
Jul 6 23:20:47.299614 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 6 23:20:47.299621 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:20:47.299627 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 6 23:20:47.299634 kernel: landlock: Up and running.
Jul 6 23:20:47.299642 kernel: SELinux: Initializing.
Jul 6 23:20:47.299649 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:20:47.299656 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:20:47.299663 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:20:47.299670 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:20:47.299677 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Jul 6 23:20:47.299684 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Jul 6 23:20:47.299697 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 6 23:20:47.299704 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:20:47.299711 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:20:47.299719 kernel: Remapping and enabling EFI services.
Jul 6 23:20:47.299726 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:20:47.299734 kernel: Detected PIPT I-cache on CPU1
Jul 6 23:20:47.299741 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jul 6 23:20:47.299748 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:20:47.299755 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 6 23:20:47.299763 kernel: smp: Brought up 1 node, 2 CPUs
Jul 6 23:20:47.299771 kernel: SMP: Total of 2 processors activated.
Jul 6 23:20:47.299778 kernel: CPU features: detected: 32-bit EL0 Support
Jul 6 23:20:47.299785 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jul 6 23:20:47.299793 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 6 23:20:47.299800 kernel: CPU features: detected: CRC32 instructions
Jul 6 23:20:47.299807 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 6 23:20:47.299814 kernel: CPU features: detected: LSE atomic instructions
Jul 6 23:20:47.299821 kernel: CPU features: detected: Privileged Access Never
Jul 6 23:20:47.299828 kernel: CPU: All CPU(s) started at EL1
Jul 6 23:20:47.299837 kernel: alternatives: applying system-wide alternatives
Jul 6 23:20:47.299844 kernel: devtmpfs: initialized
Jul 6 23:20:47.299851 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:20:47.299859 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 6 23:20:47.299866 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:20:47.299873 kernel: SMBIOS 3.1.0 present.
Jul 6 23:20:47.299880 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jul 6 23:20:47.299887 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:20:47.299895 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 6 23:20:47.299903 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 6 23:20:47.299911 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 6 23:20:47.299918 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:20:47.299925 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jul 6 23:20:47.299933 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:20:47.299940 kernel: cpuidle: using governor menu
Jul 6 23:20:47.299947 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 6 23:20:47.299954 kernel: ASID allocator initialised with 32768 entries
Jul 6 23:20:47.299961 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:20:47.299970 kernel: Serial: AMBA PL011 UART driver
Jul 6 23:20:47.299978 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 6 23:20:47.299985 kernel: Modules: 0 pages in range for non-PLT usage
Jul 6 23:20:47.299992 kernel: Modules: 509264 pages in range for PLT usage
Jul 6 23:20:47.299999 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 6 23:20:47.300007 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 6 23:20:47.300014 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 6 23:20:47.300021 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 6 23:20:47.300028 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:20:47.300037 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:20:47.300044 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 6 23:20:47.300051 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 6 23:20:47.300058 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:20:47.300065 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:20:47.300072 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:20:47.300079 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 6 23:20:47.300086 kernel: ACPI: Interpreter enabled
Jul 6 23:20:47.300094 kernel: ACPI: Using GIC for interrupt routing
Jul 6 23:20:47.300102 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jul 6 23:20:47.300110 kernel: printk: console [ttyAMA0] enabled
Jul 6 23:20:47.305186 kernel: printk: bootconsole [pl11] disabled
Jul 6 23:20:47.305212 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jul 6 23:20:47.305220 kernel: iommu: Default domain type: Translated
Jul 6 23:20:47.305228 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 6 23:20:47.305235 kernel: efivars: Registered efivars operations
Jul 6 23:20:47.305243 kernel: vgaarb: loaded
Jul 6 23:20:47.305251 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 6 23:20:47.305264 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:20:47.305272 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:20:47.305279 kernel: pnp: PnP ACPI init
Jul 6 23:20:47.305286 kernel: pnp: PnP ACPI: found 0 devices
Jul 6 23:20:47.305293 kernel: NET: Registered PF_INET protocol family
Jul 6 23:20:47.305301 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 6 23:20:47.305308 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 6 23:20:47.305315 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:20:47.305322 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 6 23:20:47.305332 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 6 23:20:47.305339 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 6 23:20:47.305346 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:20:47.305354 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:20:47.305361 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:20:47.305368 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:20:47.305375 kernel: kvm [1]: HYP mode not available
Jul 6 23:20:47.305382 kernel: Initialise system trusted keyrings
Jul 6 23:20:47.305390 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 6 23:20:47.305399 kernel: Key type asymmetric registered
Jul 6 23:20:47.305406 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:20:47.305413 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 6 23:20:47.305420 kernel: io scheduler mq-deadline registered
Jul 6 23:20:47.305427 kernel: io scheduler kyber registered
Jul 6 23:20:47.305434 kernel: io scheduler bfq registered
Jul 6 23:20:47.305441 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:20:47.305449 kernel: thunder_xcv, ver 1.0
Jul 6 23:20:47.305456 kernel: thunder_bgx, ver 1.0
Jul 6 23:20:47.305464 kernel: nicpf, ver 1.0
Jul 6 23:20:47.305472 kernel: nicvf, ver 1.0
Jul 6 23:20:47.305627 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 6 23:20:47.305699 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-06T23:20:46 UTC (1751844046)
Jul 6 23:20:47.305709 kernel: efifb: probing for efifb
Jul 6 23:20:47.305717 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 6 23:20:47.305724 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 6 23:20:47.305746 kernel: efifb: scrolling: redraw
Jul 6 23:20:47.305757 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 6 23:20:47.305764 kernel: Console: switching to colour frame buffer device 128x48
Jul 6 23:20:47.305771 kernel: fb0: EFI VGA frame buffer device
Jul 6 23:20:47.305779 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jul 6 23:20:47.305786 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 6 23:20:47.305793 kernel: No ACPI PMU IRQ for CPU0
Jul 6 23:20:47.305800 kernel: No ACPI PMU IRQ for CPU1
Jul 6 23:20:47.305807 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Jul 6 23:20:47.305814 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 6 23:20:47.305823 kernel: watchdog: Hard watchdog permanently disabled
Jul 6 23:20:47.305830 kernel: NET: Registered PF_INET6 protocol family
Jul 6 23:20:47.305838 kernel: Segment Routing with IPv6
Jul 6 23:20:47.305845 kernel: In-situ OAM (IOAM) with IPv6
Jul 6 23:20:47.305852 kernel: NET: Registered PF_PACKET protocol family
Jul 6 23:20:47.305859 kernel: Key type dns_resolver registered
Jul 6 23:20:47.305866 kernel: registered taskstats version 1
Jul 6 23:20:47.305873 kernel: Loading compiled-in X.509 certificates
Jul 6 23:20:47.305880 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: b86e6d3bec2e587f2e5c37def91c4582416a83e3'
Jul 6 23:20:47.305889 kernel: Key type .fscrypt registered
Jul 6 23:20:47.305896 kernel: Key type fscrypt-provisioning registered
Jul 6 23:20:47.305903 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 6 23:20:47.305910 kernel: ima: Allocated hash algorithm: sha1
Jul 6 23:20:47.305917 kernel: ima: No architecture policies found
Jul 6 23:20:47.305925 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 6 23:20:47.305932 kernel: clk: Disabling unused clocks
Jul 6 23:20:47.305939 kernel: Freeing unused kernel memory: 38336K
Jul 6 23:20:47.305946 kernel: Run /init as init process
Jul 6 23:20:47.305956 kernel: with arguments:
Jul 6 23:20:47.305963 kernel: /init
Jul 6 23:20:47.305970 kernel: with environment:
Jul 6 23:20:47.305984 kernel: HOME=/
Jul 6 23:20:47.305991 kernel: TERM=linux
Jul 6 23:20:47.305998 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 6 23:20:47.306006 systemd[1]: Successfully made /usr/ read-only.
Jul 6 23:20:47.306017 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 6 23:20:47.306027 systemd[1]: Detected virtualization microsoft.
Jul 6 23:20:47.306034 systemd[1]: Detected architecture arm64.
Jul 6 23:20:47.306042 systemd[1]: Running in initrd.
Jul 6 23:20:47.306049 systemd[1]: No hostname configured, using default hostname.
Jul 6 23:20:47.306057 systemd[1]: Hostname set to .
Jul 6 23:20:47.306065 systemd[1]: Initializing machine ID from random generator.
Jul 6 23:20:47.306072 systemd[1]: Queued start job for default target initrd.target.
Jul 6 23:20:47.306080 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:20:47.306089 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:20:47.306098 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 6 23:20:47.306106 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:20:47.306114 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 6 23:20:47.306141 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 6 23:20:47.306150 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 6 23:20:47.306161 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 6 23:20:47.306169 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:20:47.306177 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:20:47.306184 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:20:47.306192 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:20:47.306200 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:20:47.306207 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:20:47.306215 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:20:47.306223 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:20:47.306232 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 6 23:20:47.306240 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 6 23:20:47.306248 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:20:47.306256 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:20:47.306264 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:20:47.306272 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:20:47.306280 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 6 23:20:47.306288 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:20:47.306297 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 6 23:20:47.306305 systemd[1]: Starting systemd-fsck-usr.service...
Jul 6 23:20:47.306313 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:20:47.306320 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:20:47.306350 systemd-journald[218]: Collecting audit messages is disabled.
Jul 6 23:20:47.306372 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:20:47.306381 systemd-journald[218]: Journal started
Jul 6 23:20:47.306400 systemd-journald[218]: Runtime Journal (/run/log/journal/a551890cdef7465f85205a5809785974) is 8M, max 78.5M, 70.5M free.
Jul 6 23:20:47.307051 systemd-modules-load[220]: Inserted module 'overlay'
Jul 6 23:20:47.322461 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:20:47.328003 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 6 23:20:47.354220 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 6 23:20:47.354244 kernel: Bridge firewalling registered
Jul 6 23:20:47.347234 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:20:47.352407 systemd-modules-load[220]: Inserted module 'br_netfilter'
Jul 6 23:20:47.360061 systemd[1]: Finished systemd-fsck-usr.service.
Jul 6 23:20:47.370594 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:20:47.380762 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:20:47.402592 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:20:47.415979 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:20:47.432021 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:20:47.445338 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:20:47.455857 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:20:47.468948 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:20:47.481811 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:20:47.488633 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:20:47.518431 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 6 23:20:47.533296 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:20:47.559708 dracut-cmdline[251]: dracut-dracut-053
Jul 6 23:20:47.559708 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=ca8feb1f79a67c117068f051b5f829d3e40170c022cd5834bd6789cba9641479
Jul 6 23:20:47.553516 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:20:47.638994 kernel: SCSI subsystem initialized
Jul 6 23:20:47.639027 kernel: Loading iSCSI transport class v2.0-870.
Jul 6 23:20:47.614326 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:20:47.662367 kernel: iscsi: registered transport (tcp)
Jul 6 23:20:47.626575 systemd-resolved[256]: Positive Trust Anchors:
Jul 6 23:20:47.626585 systemd-resolved[256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:20:47.626617 systemd-resolved[256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:20:47.730703 kernel: iscsi: registered transport (qla4xxx)
Jul 6 23:20:47.730728 kernel: QLogic iSCSI HBA Driver
Jul 6 23:20:47.628860 systemd-resolved[256]: Defaulting to hostname 'linux'.
Jul 6 23:20:47.632104 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:20:47.651631 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:20:47.773898 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:20:47.789370 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 6 23:20:47.822424 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 6 23:20:47.822476 kernel: device-mapper: uevent: version 1.0.3 Jul 6 23:20:47.829034 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 6 23:20:47.879148 kernel: raid6: neonx8 gen() 15767 MB/s Jul 6 23:20:47.897131 kernel: raid6: neonx4 gen() 15824 MB/s Jul 6 23:20:47.917135 kernel: raid6: neonx2 gen() 13246 MB/s Jul 6 23:20:47.938134 kernel: raid6: neonx1 gen() 10520 MB/s Jul 6 23:20:47.958128 kernel: raid6: int64x8 gen() 6796 MB/s Jul 6 23:20:47.978128 kernel: raid6: int64x4 gen() 7350 MB/s Jul 6 23:20:47.999134 kernel: raid6: int64x2 gen() 6115 MB/s Jul 6 23:20:48.022768 kernel: raid6: int64x1 gen() 5062 MB/s Jul 6 23:20:48.022788 kernel: raid6: using algorithm neonx4 gen() 15824 MB/s Jul 6 23:20:48.049862 kernel: raid6: .... xor() 12457 MB/s, rmw enabled Jul 6 23:20:48.049874 kernel: raid6: using neon recovery algorithm Jul 6 23:20:48.061740 kernel: xor: measuring software checksum speed Jul 6 23:20:48.061769 kernel: 8regs : 21618 MB/sec Jul 6 23:20:48.065132 kernel: 32regs : 21653 MB/sec Jul 6 23:20:48.068528 kernel: arm64_neon : 27965 MB/sec Jul 6 23:20:48.072540 kernel: xor: using function: arm64_neon (27965 MB/sec) Jul 6 23:20:48.123136 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 6 23:20:48.132707 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:20:48.155350 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:20:48.181319 systemd-udevd[438]: Using default interface naming scheme 'v255'. Jul 6 23:20:48.187145 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:20:48.207235 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 6 23:20:48.231013 dracut-pre-trigger[450]: rd.md=0: removing MD RAID activation Jul 6 23:20:48.263833 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 6 23:20:48.287282 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:20:48.320474 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:20:48.344052 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 6 23:20:48.377084 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 6 23:20:48.396098 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:20:48.410323 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:20:48.425760 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:20:48.445943 kernel: hv_vmbus: Vmbus version:5.3 Jul 6 23:20:48.449434 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 6 23:20:48.468482 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:20:48.491163 kernel: hv_vmbus: registering driver hid_hyperv Jul 6 23:20:48.491186 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 6 23:20:48.491196 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 6 23:20:48.507428 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jul 6 23:20:48.507487 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 6 23:20:48.507635 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:20:48.519055 kernel: PTP clock support registered Jul 6 23:20:48.507796 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:20:48.542244 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jul 6 23:20:48.627073 kernel: hv_vmbus: registering driver hyperv_keyboard
Jul 6 23:20:48.627108 kernel: hv_vmbus: registering driver hv_storvsc
Jul 6 23:20:48.627119 kernel: hv_utils: Registering HyperV Utility Driver
Jul 6 23:20:48.627128 kernel: hv_vmbus: registering driver hv_netvsc
Jul 6 23:20:48.627137 kernel: hv_vmbus: registering driver hv_utils
Jul 6 23:20:48.627146 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jul 6 23:20:48.627156 kernel: scsi host1: storvsc_host_t
Jul 6 23:20:48.627328 kernel: scsi host0: storvsc_host_t
Jul 6 23:20:48.627615 kernel: hv_utils: Heartbeat IC version 3.0
Jul 6 23:20:48.627631 kernel: hv_utils: Shutdown IC version 3.2
Jul 6 23:20:48.627641 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jul 6 23:20:48.627668 kernel: hv_utils: TimeSync IC version 4.0
Jul 6 23:20:48.580184 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:20:48.639517 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jul 6 23:20:48.580399 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:20:48.620461 systemd-resolved[256]: Clock change detected. Flushing caches.
Jul 6 23:20:48.639267 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:20:48.666299 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:20:48.673869 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 6 23:20:48.703877 kernel: hv_netvsc 00224879-9804-0022-4879-980400224879 eth0: VF slot 1 added
Jul 6 23:20:48.704034 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jul 6 23:20:48.682028 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:20:48.725284 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 6 23:20:48.725305 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jul 6 23:20:48.682147 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:20:48.725023 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:20:48.771775 kernel: hv_vmbus: registering driver hv_pci
Jul 6 23:20:48.771797 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jul 6 23:20:48.771960 kernel: hv_pci 06b62502-9938-4785-b15e-3a132c46c26d: PCI VMBus probing: Using version 0x10004
Jul 6 23:20:48.771916 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:20:48.787957 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jul 6 23:20:48.794758 kernel: hv_pci 06b62502-9938-4785-b15e-3a132c46c26d: PCI host bridge to bus 9938:00
Jul 6 23:20:48.799558 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 6 23:20:48.799747 kernel: pci_bus 9938:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jul 6 23:20:48.805660 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jul 6 23:20:48.805816 kernel: pci_bus 9938:00: No busn resource found for root bus, will use [bus 00-ff]
Jul 6 23:20:48.818794 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jul 6 23:20:48.818963 kernel: pci 9938:00:02.0: [15b3:1018] type 00 class 0x020000
Jul 6 23:20:48.812033 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:20:48.835250 kernel: pci 9938:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 6 23:20:48.853751 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:20:48.853802 kernel: pci 9938:00:02.0: enabling Extended Tags
Jul 6 23:20:48.853832 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 6 23:20:48.869956 kernel: pci 9938:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 9938:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jul 6 23:20:48.887325 kernel: pci_bus 9938:00: busn_res: [bus 00-ff] end is updated to 00
Jul 6 23:20:48.887529 kernel: pci 9938:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 6 23:20:48.899013 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:20:48.940422 kernel: mlx5_core 9938:00:02.0: enabling device (0000 -> 0002)
Jul 6 23:20:48.946749 kernel: mlx5_core 9938:00:02.0: firmware version: 16.30.1284
Jul 6 23:20:49.146110 kernel: hv_netvsc 00224879-9804-0022-4879-980400224879 eth0: VF registering: eth1
Jul 6 23:20:49.146317 kernel: mlx5_core 9938:00:02.0 eth1: joined to eth0
Jul 6 23:20:49.153851 kernel: mlx5_core 9938:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jul 6 23:20:49.165764 kernel: mlx5_core 9938:00:02.0 enP39224s1: renamed from eth1
Jul 6 23:20:49.368067 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jul 6 23:20:49.438760 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (487)
Jul 6 23:20:49.455609 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 6 23:20:49.479863 kernel: BTRFS: device fsid 990dd864-0c88-4d4d-9797-49057844458a devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (502)
Jul 6 23:20:49.489890 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jul 6 23:20:49.505641 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jul 6 23:20:49.527881 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jul 6 23:20:49.544846 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 6 23:20:49.565752 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:20:50.580805 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:20:50.581534 disk-uuid[605]: The operation has completed successfully.
Jul 6 23:20:50.642705 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 6 23:20:50.642845 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 6 23:20:50.687858 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 6 23:20:50.701008 sh[691]: Success
Jul 6 23:20:50.729779 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 6 23:20:50.908638 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 6 23:20:50.930672 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 6 23:20:50.940913 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 6 23:20:50.971315 kernel: BTRFS info (device dm-0): first mount of filesystem 990dd864-0c88-4d4d-9797-49057844458a
Jul 6 23:20:50.971365 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:20:50.971376 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 6 23:20:50.984138 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 6 23:20:50.988411 kernel: BTRFS info (device dm-0): using free space tree
Jul 6 23:20:51.269685 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 6 23:20:51.275025 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 6 23:20:51.295001 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 6 23:20:51.308570 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 6 23:20:51.341282 kernel: BTRFS info (device sda6): first mount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b
Jul 6 23:20:51.341341 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:20:51.345694 kernel: BTRFS info (device sda6): using free space tree
Jul 6 23:20:51.365788 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 6 23:20:51.376809 kernel: BTRFS info (device sda6): last unmount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b
Jul 6 23:20:51.382350 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 6 23:20:51.397010 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 6 23:20:51.446763 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:20:51.464899 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:20:51.495276 systemd-networkd[872]: lo: Link UP
Jul 6 23:20:51.495290 systemd-networkd[872]: lo: Gained carrier
Jul 6 23:20:51.497118 systemd-networkd[872]: Enumeration completed
Jul 6 23:20:51.498680 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:20:51.499085 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:20:51.499089 systemd-networkd[872]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:20:51.508808 systemd[1]: Reached target network.target - Network.
Jul 6 23:20:51.593769 kernel: mlx5_core 9938:00:02.0 enP39224s1: Link up
Jul 6 23:20:51.633969 kernel: hv_netvsc 00224879-9804-0022-4879-980400224879 eth0: Data path switched to VF: enP39224s1
Jul 6 23:20:51.634366 systemd-networkd[872]: enP39224s1: Link UP
Jul 6 23:20:51.638333 systemd-networkd[872]: eth0: Link UP
Jul 6 23:20:51.638484 systemd-networkd[872]: eth0: Gained carrier
Jul 6 23:20:51.638496 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:20:51.658567 systemd-networkd[872]: enP39224s1: Gained carrier
Jul 6 23:20:51.672789 systemd-networkd[872]: eth0: DHCPv4 address 10.200.20.19/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 6 23:20:52.094199 ignition[799]: Ignition 2.20.0
Jul 6 23:20:52.097427 ignition[799]: Stage: fetch-offline
Jul 6 23:20:52.097472 ignition[799]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:20:52.101475 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:20:52.097480 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:20:52.097582 ignition[799]: parsed url from cmdline: ""
Jul 6 23:20:52.097586 ignition[799]: no config URL provided
Jul 6 23:20:52.097591 ignition[799]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:20:52.097598 ignition[799]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:20:52.130012 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 6 23:20:52.097603 ignition[799]: failed to fetch config: resource requires networking
Jul 6 23:20:52.097803 ignition[799]: Ignition finished successfully
Jul 6 23:20:52.154330 ignition[885]: Ignition 2.20.0
Jul 6 23:20:52.154337 ignition[885]: Stage: fetch
Jul 6 23:20:52.154556 ignition[885]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:20:52.154569 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:20:52.154703 ignition[885]: parsed url from cmdline: ""
Jul 6 23:20:52.154707 ignition[885]: no config URL provided
Jul 6 23:20:52.154714 ignition[885]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:20:52.154724 ignition[885]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:20:52.154765 ignition[885]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jul 6 23:20:52.247971 ignition[885]: GET result: OK
Jul 6 23:20:52.248112 ignition[885]: config has been read from IMDS userdata
Jul 6 23:20:52.248157 ignition[885]: parsing config with SHA512: 5e0aba9899b11431b9d3bb9758a933c2ce1fec0ee1e4f3a851b0802b16001bb462b35567ad39ba4fd51211c0fcfcc4729c5f36e5fc70e6aad48123a4d88b8d21
Jul 6 23:20:52.253047 unknown[885]: fetched base config from "system"
Jul 6 23:20:52.253514 ignition[885]: fetch: fetch complete
Jul 6 23:20:52.253056 unknown[885]: fetched base config from "system"
Jul 6 23:20:52.253522 ignition[885]: fetch: fetch passed
Jul 6 23:20:52.253061 unknown[885]: fetched user config from "azure"
Jul 6 23:20:52.253570 ignition[885]: Ignition finished successfully
Jul 6 23:20:52.257374 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 6 23:20:52.298023 ignition[892]: Ignition 2.20.0
Jul 6 23:20:52.277031 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 6 23:20:52.298030 ignition[892]: Stage: kargs
Jul 6 23:20:52.311333 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 6 23:20:52.298243 ignition[892]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:20:52.298267 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:20:52.301847 ignition[892]: kargs: kargs passed
Jul 6 23:20:52.301904 ignition[892]: Ignition finished successfully
Jul 6 23:20:52.336986 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 6 23:20:52.359312 ignition[899]: Ignition 2.20.0
Jul 6 23:20:52.359322 ignition[899]: Stage: disks
Jul 6 23:20:52.359529 ignition[899]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:20:52.366003 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 6 23:20:52.359539 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:20:52.377799 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 6 23:20:52.360472 ignition[899]: disks: disks passed
Jul 6 23:20:52.388908 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 6 23:20:52.360514 ignition[899]: Ignition finished successfully
Jul 6 23:20:52.402013 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:20:52.414007 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:20:52.422907 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:20:52.451919 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 6 23:20:52.523256 systemd-fsck[908]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jul 6 23:20:52.533887 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 6 23:20:52.551937 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 6 23:20:52.611775 kernel: EXT4-fs (sda9): mounted filesystem efd38a90-a3d5-48a9-85e4-1ea6162daba0 r/w with ordered data mode. Quota mode: none.
Jul 6 23:20:52.612979 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 6 23:20:52.618503 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:20:52.659820 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:20:52.667871 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 6 23:20:52.680004 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 6 23:20:52.688153 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 6 23:20:52.688189 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:20:52.701389 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 6 23:20:52.743093 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (919)
Jul 6 23:20:52.743113 kernel: BTRFS info (device sda6): first mount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b
Jul 6 23:20:52.743294 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 6 23:20:52.769021 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:20:52.769045 kernel: BTRFS info (device sda6): using free space tree
Jul 6 23:20:52.769055 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 6 23:20:52.770457 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:20:52.783050 systemd-networkd[872]: enP39224s1: Gained IPv6LL
Jul 6 23:20:53.246215 coreos-metadata[921]: Jul 06 23:20:53.246 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 6 23:20:53.256257 coreos-metadata[921]: Jul 06 23:20:53.256 INFO Fetch successful
Jul 6 23:20:53.261931 coreos-metadata[921]: Jul 06 23:20:53.261 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jul 6 23:20:53.273813 coreos-metadata[921]: Jul 06 23:20:53.273 INFO Fetch successful
Jul 6 23:20:53.287347 coreos-metadata[921]: Jul 06 23:20:53.287 INFO wrote hostname ci-4230.2.1-a-cc9ddc1e95 to /sysroot/etc/hostname
Jul 6 23:20:53.296592 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 6 23:20:53.549933 systemd-networkd[872]: eth0: Gained IPv6LL
Jul 6 23:20:53.563349 initrd-setup-root[949]: cut: /sysroot/etc/passwd: No such file or directory
Jul 6 23:20:53.659929 initrd-setup-root[956]: cut: /sysroot/etc/group: No such file or directory
Jul 6 23:20:53.665980 initrd-setup-root[963]: cut: /sysroot/etc/shadow: No such file or directory
Jul 6 23:20:53.672017 initrd-setup-root[970]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 6 23:20:54.851777 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 6 23:20:54.867919 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 6 23:20:54.875911 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 6 23:20:54.892764 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 6 23:20:54.903860 kernel: BTRFS info (device sda6): last unmount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b
Jul 6 23:20:54.915056 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 6 23:20:54.930239 ignition[1041]: INFO : Ignition 2.20.0
Jul 6 23:20:54.930239 ignition[1041]: INFO : Stage: mount
Jul 6 23:20:54.938303 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:20:54.938303 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:20:54.938303 ignition[1041]: INFO : mount: mount passed
Jul 6 23:20:54.938303 ignition[1041]: INFO : Ignition finished successfully
Jul 6 23:20:54.935695 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 6 23:20:54.954944 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 6 23:20:54.974007 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:20:55.005185 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (1049)
Jul 6 23:20:55.005232 kernel: BTRFS info (device sda6): first mount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b
Jul 6 23:20:55.010931 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:20:55.014898 kernel: BTRFS info (device sda6): using free space tree
Jul 6 23:20:55.020754 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 6 23:20:55.022835 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:20:55.051405 ignition[1066]: INFO : Ignition 2.20.0
Jul 6 23:20:55.055387 ignition[1066]: INFO : Stage: files
Jul 6 23:20:55.055387 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:20:55.055387 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:20:55.055387 ignition[1066]: DEBUG : files: compiled without relabeling support, skipping
Jul 6 23:20:55.076596 ignition[1066]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 6 23:20:55.076596 ignition[1066]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 6 23:20:55.128805 ignition[1066]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 6 23:20:55.136055 ignition[1066]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 6 23:20:55.136055 ignition[1066]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 6 23:20:55.130238 unknown[1066]: wrote ssh authorized keys file for user: core
Jul 6 23:20:55.163830 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 6 23:20:55.174157 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jul 6 23:20:55.214661 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 6 23:20:55.281604 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 6 23:20:55.281604 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 6 23:20:55.281604 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 6 23:20:55.753252 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 6 23:20:55.827623 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 6 23:20:55.837051 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 6 23:20:55.837051 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 6 23:20:55.837051 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:20:55.837051 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:20:55.837051 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:20:55.837051 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:20:55.837051 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:20:55.837051 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:20:55.837051 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:20:55.837051 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:20:55.837051 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 6 23:20:55.837051 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 6 23:20:55.837051 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 6 23:20:55.837051 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jul 6 23:20:56.521740 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 6 23:20:56.744440 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 6 23:20:56.744440 ignition[1066]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 6 23:20:56.767773 ignition[1066]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:20:56.767773 ignition[1066]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:20:56.767773 ignition[1066]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 6 23:20:56.767773 ignition[1066]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jul 6 23:20:56.767773 ignition[1066]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jul 6 23:20:56.767773 ignition[1066]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:20:56.767773 ignition[1066]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:20:56.767773 ignition[1066]: INFO : files: files passed
Jul 6 23:20:56.767773 ignition[1066]: INFO : Ignition finished successfully
Jul 6 23:20:56.762523 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 6 23:20:56.790501 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 6 23:20:56.802930 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 6 23:20:56.845781 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 6 23:20:56.845877 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 6 23:20:56.883968 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:20:56.883968 initrd-setup-root-after-ignition[1094]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:20:56.881275 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:20:56.918627 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:20:56.890869 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 6 23:20:56.927034 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 6 23:20:56.966070 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 6 23:20:56.966197 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 6 23:20:56.977617 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 6 23:20:56.989118 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 6 23:20:56.999499 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 6 23:20:57.013977 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 6 23:20:57.035218 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:20:57.053006 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 6 23:20:57.072533 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 6 23:20:57.072655 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 6 23:20:57.084051 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:20:57.095930 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:20:57.108281 systemd[1]: Stopped target timers.target - Timer Units.
Jul 6 23:20:57.119980 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 6 23:20:57.120065 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:20:57.135256 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 6 23:20:57.146520 systemd[1]: Stopped target basic.target - Basic System.
Jul 6 23:20:57.155937 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 6 23:20:57.165998 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:20:57.177282 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 6 23:20:57.188717 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 6 23:20:57.199684 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:20:57.211379 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 6 23:20:57.223383 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 6 23:20:57.233535 systemd[1]: Stopped target swap.target - Swaps.
Jul 6 23:20:57.243046 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 6 23:20:57.243140 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:20:57.257213 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:20:57.268103 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:20:57.279436 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 6 23:20:57.285302 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:20:57.292101 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 6 23:20:57.292176 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:20:57.310154 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 6 23:20:57.310209 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:20:57.321368 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 6 23:20:57.321420 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 6 23:20:57.331788 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 6 23:20:57.331836 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 6 23:20:57.359923 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 6 23:20:57.367881 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 6 23:20:57.406931 ignition[1119]: INFO : Ignition 2.20.0
Jul 6 23:20:57.406931 ignition[1119]: INFO : Stage: umount
Jul 6 23:20:57.406931 ignition[1119]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:20:57.406931 ignition[1119]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:20:57.406931 ignition[1119]: INFO : umount: umount passed
Jul 6 23:20:57.406931 ignition[1119]: INFO : Ignition finished successfully
Jul 6 23:20:57.380997 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 6 23:20:57.381205 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:20:57.396617 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 6 23:20:57.396684 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:20:57.417168 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 6 23:20:57.419347 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 6 23:20:57.425968 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 6 23:20:57.426063 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 6 23:20:57.436300 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 6 23:20:57.436367 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 6 23:20:57.448680 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 6 23:20:57.448728 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 6 23:20:57.458397 systemd[1]: Stopped target network.target - Network.
Jul 6 23:20:57.468624 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 6 23:20:57.468702 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:20:57.479921 systemd[1]: Stopped target paths.target - Path Units.
Jul 6 23:20:57.490157 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 6 23:20:57.501766 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:20:57.513010 systemd[1]: Stopped target slices.target - Slice Units.
Jul 6 23:20:57.523154 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 6 23:20:57.533755 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 6 23:20:57.533821 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:20:57.544557 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 6 23:20:57.544589 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:20:57.555601 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 6 23:20:57.555677 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 6 23:20:57.566117 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 6 23:20:57.566169 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 6 23:20:57.576914 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 6 23:20:57.587426 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 6 23:20:57.598779 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 6 23:20:57.609863 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 6 23:20:57.610016 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 6 23:20:57.626297 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 6 23:20:57.626566 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 6 23:20:57.626686 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:20:57.643793 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 6 23:20:57.644018 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 6 23:20:57.644251 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 6 23:20:57.655370 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:20:57.655449 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:20:57.666108 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 6 23:20:57.851870 kernel: hv_netvsc 00224879-9804-0022-4879-980400224879 eth0: Data path switched from VF: enP39224s1 Jul 6 23:20:57.666183 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Jul 6 23:20:57.692940 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 6 23:20:57.702303 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 6 23:20:57.702388 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:20:57.714004 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:20:57.714062 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:20:57.729689 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:20:57.729756 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 6 23:20:57.735465 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 6 23:20:57.735511 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:20:57.751574 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:20:57.762688 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 6 23:20:57.762809 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:20:57.797023 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 6 23:20:57.797192 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:20:57.808137 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 6 23:20:57.808186 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 6 23:20:57.817847 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 6 23:20:57.817881 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:20:57.827950 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 6 23:20:57.828011 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Jul 6 23:20:57.851997 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 6 23:20:57.852092 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 6 23:20:57.863152 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:20:57.863220 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:20:57.893965 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 6 23:20:57.908788 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 6 23:20:57.908863 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:20:57.925696 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 6 23:20:57.926067 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:20:57.937633 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 6 23:20:57.937690 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:20:57.950168 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:20:57.950214 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:20:57.969435 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 6 23:20:57.969502 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:20:57.969964 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 6 23:20:58.146986 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). Jul 6 23:20:57.970079 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 6 23:20:57.982328 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Jul 6 23:20:57.982419 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 6 23:20:57.994212 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 6 23:20:58.021947 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 6 23:20:58.053323 systemd[1]: Switching root. Jul 6 23:20:58.177645 systemd-journald[218]: Journal stopped Jul 6 23:21:02.581831 kernel: SELinux: policy capability network_peer_controls=1 Jul 6 23:21:02.581855 kernel: SELinux: policy capability open_perms=1 Jul 6 23:21:02.581865 kernel: SELinux: policy capability extended_socket_class=1 Jul 6 23:21:02.581873 kernel: SELinux: policy capability always_check_network=0 Jul 6 23:21:02.581882 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 6 23:21:02.581890 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 6 23:21:02.581898 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 6 23:21:02.581906 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 6 23:21:02.581914 kernel: audit: type=1403 audit(1751844059.287:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 6 23:21:02.581924 systemd[1]: Successfully loaded SELinux policy in 176.599ms. Jul 6 23:21:02.581936 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.592ms. Jul 6 23:21:02.581945 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 6 23:21:02.581954 systemd[1]: Detected virtualization microsoft. Jul 6 23:21:02.581962 systemd[1]: Detected architecture arm64. Jul 6 23:21:02.581971 systemd[1]: Detected first boot. Jul 6 23:21:02.581982 systemd[1]: Hostname set to <ci-4230.2.1-a-cc9ddc1e95>.
Jul 6 23:21:02.581990 systemd[1]: Initializing machine ID from random generator. Jul 6 23:21:02.581999 zram_generator::config[1161]: No configuration found. Jul 6 23:21:02.582008 kernel: NET: Registered PF_VSOCK protocol family Jul 6 23:21:02.582018 systemd[1]: Populated /etc with preset unit settings. Jul 6 23:21:02.582027 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 6 23:21:02.582035 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 6 23:21:02.582046 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 6 23:21:02.582054 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 6 23:21:02.582063 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 6 23:21:02.582072 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 6 23:21:02.582081 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 6 23:21:02.582090 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 6 23:21:02.582099 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 6 23:21:02.582110 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 6 23:21:02.582119 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 6 23:21:02.582128 systemd[1]: Created slice user.slice - User and Session Slice. Jul 6 23:21:02.582137 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:21:02.582146 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:21:02.582155 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 6 23:21:02.582164 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Jul 6 23:21:02.582172 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 6 23:21:02.582183 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:21:02.582192 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 6 23:21:02.582201 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:21:02.582213 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 6 23:21:02.582222 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 6 23:21:02.582231 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 6 23:21:02.582240 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 6 23:21:02.582249 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:21:02.582260 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:21:02.582269 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:21:02.582278 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:21:02.582287 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 6 23:21:02.582296 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 6 23:21:02.582305 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 6 23:21:02.582316 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:21:02.582325 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:21:02.582334 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:21:02.582343 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 6 23:21:02.582358 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jul 6 23:21:02.582368 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 6 23:21:02.582377 systemd[1]: Mounting media.mount - External Media Directory... Jul 6 23:21:02.582388 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 6 23:21:02.582397 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 6 23:21:02.582406 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 6 23:21:02.582417 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 6 23:21:02.582426 systemd[1]: Reached target machines.target - Containers. Jul 6 23:21:02.582435 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 6 23:21:02.582445 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:21:02.582454 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:21:02.582464 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 6 23:21:02.582474 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:21:02.582483 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:21:02.582492 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:21:02.582502 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 6 23:21:02.582511 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:21:02.582520 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 6 23:21:02.582529 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Jul 6 23:21:02.582540 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 6 23:21:02.582549 kernel: fuse: init (API version 7.39) Jul 6 23:21:02.582557 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 6 23:21:02.582567 systemd[1]: Stopped systemd-fsck-usr.service. Jul 6 23:21:02.582576 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:21:02.582585 kernel: loop: module loaded Jul 6 23:21:02.582594 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:21:02.582603 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:21:02.582612 kernel: ACPI: bus type drm_connector registered Jul 6 23:21:02.582622 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 6 23:21:02.582648 systemd-journald[1265]: Collecting audit messages is disabled. Jul 6 23:21:02.582671 systemd-journald[1265]: Journal started Jul 6 23:21:02.582692 systemd-journald[1265]: Runtime Journal (/run/log/journal/896a4be7ff29421088c82681488d8e18) is 8M, max 78.5M, 70.5M free. Jul 6 23:21:01.688146 systemd[1]: Queued start job for default target multi-user.target. Jul 6 23:21:01.700679 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 6 23:21:01.701103 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 6 23:21:01.702968 systemd[1]: systemd-journald.service: Consumed 3.192s CPU time. Jul 6 23:21:02.600767 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 6 23:21:02.616525 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 6 23:21:02.632167 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jul 6 23:21:02.640776 systemd[1]: verity-setup.service: Deactivated successfully. Jul 6 23:21:02.640860 systemd[1]: Stopped verity-setup.service. Jul 6 23:21:02.657427 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:21:02.658244 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 6 23:21:02.664011 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 6 23:21:02.670110 systemd[1]: Mounted media.mount - External Media Directory. Jul 6 23:21:02.675205 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 6 23:21:02.681014 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 6 23:21:02.687194 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 6 23:21:02.693902 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 6 23:21:02.700436 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:21:02.707158 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 6 23:21:02.707315 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 6 23:21:02.713698 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:21:02.714082 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:21:02.720300 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:21:02.720451 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:21:02.726291 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:21:02.726439 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:21:02.733267 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 6 23:21:02.733422 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 6 23:21:02.739431 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jul 6 23:21:02.739590 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:21:02.746032 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:21:02.752310 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:21:02.759326 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 6 23:21:02.766452 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 6 23:21:02.773455 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:21:02.788901 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 6 23:21:02.803855 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 6 23:21:02.810483 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 6 23:21:02.816505 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 6 23:21:02.816552 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:21:02.822961 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 6 23:21:02.830575 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 6 23:21:02.839951 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 6 23:21:02.847328 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:21:02.848514 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 6 23:21:02.857930 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jul 6 23:21:02.865968 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:21:02.867034 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 6 23:21:02.872813 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:21:02.873849 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:21:02.880901 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 6 23:21:02.889722 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:21:02.894917 systemd-journald[1265]: Time spent on flushing to /var/log/journal/896a4be7ff29421088c82681488d8e18 is 22.748ms for 914 entries. Jul 6 23:21:02.894917 systemd-journald[1265]: System Journal (/var/log/journal/896a4be7ff29421088c82681488d8e18) is 8M, max 2.6G, 2.6G free. Jul 6 23:21:02.939449 systemd-journald[1265]: Received client request to flush runtime journal. Jul 6 23:21:02.910950 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 6 23:21:02.920166 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 6 23:21:02.926897 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 6 23:21:02.934793 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 6 23:21:02.943958 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 6 23:21:02.951949 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 6 23:21:02.964822 udevadm[1304]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
Jul 6 23:21:02.965807 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 6 23:21:02.981071 kernel: loop0: detected capacity change from 0 to 28720 Jul 6 23:21:02.980951 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 6 23:21:03.025612 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:21:03.046972 systemd-tmpfiles[1302]: ACLs are not supported, ignoring. Jul 6 23:21:03.046989 systemd-tmpfiles[1302]: ACLs are not supported, ignoring. Jul 6 23:21:03.053858 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:21:03.068943 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 6 23:21:03.081680 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 6 23:21:03.083810 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 6 23:21:03.140988 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 6 23:21:03.152009 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:21:03.173495 systemd-tmpfiles[1320]: ACLs are not supported, ignoring. Jul 6 23:21:03.173893 systemd-tmpfiles[1320]: ACLs are not supported, ignoring. Jul 6 23:21:03.178324 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 6 23:21:03.307771 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 6 23:21:03.420768 kernel: loop1: detected capacity change from 0 to 123192 Jul 6 23:21:03.712814 kernel: loop2: detected capacity change from 0 to 211168 Jul 6 23:21:03.758772 kernel: loop3: detected capacity change from 0 to 113512 Jul 6 23:21:04.032991 kernel: loop4: detected capacity change from 0 to 28720 Jul 6 23:21:04.041813 kernel: loop5: detected capacity change from 0 to 123192 Jul 6 23:21:04.065769 kernel: loop6: detected capacity change from 0 to 211168 Jul 6 23:21:04.080429 kernel: loop7: detected capacity change from 0 to 113512 Jul 6 23:21:04.082617 (sd-merge)[1328]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jul 6 23:21:04.083406 (sd-merge)[1328]: Merged extensions into '/usr'. Jul 6 23:21:04.087162 systemd[1]: Reload requested from client PID 1301 ('systemd-sysext') (unit systemd-sysext.service)... Jul 6 23:21:04.087179 systemd[1]: Reloading... Jul 6 23:21:04.176774 zram_generator::config[1357]: No configuration found. Jul 6 23:21:04.311011 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:21:04.382098 systemd[1]: Reloading finished in 294 ms. Jul 6 23:21:04.396789 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 6 23:21:04.405705 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 6 23:21:04.424120 systemd[1]: Starting ensure-sysext.service... Jul 6 23:21:04.431047 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:21:04.443646 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jul 6 23:21:04.473591 systemd[1]: Reload requested from client PID 1412 ('systemctl') (unit ensure-sysext.service)... Jul 6 23:21:04.473607 systemd[1]: Reloading... Jul 6 23:21:04.489917 systemd-udevd[1415]: Using default interface naming scheme 'v255'. Jul 6 23:21:04.498068 systemd-tmpfiles[1413]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 6 23:21:04.498316 systemd-tmpfiles[1413]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 6 23:21:04.499079 systemd-tmpfiles[1413]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 6 23:21:04.499341 systemd-tmpfiles[1413]: ACLs are not supported, ignoring. Jul 6 23:21:04.499388 systemd-tmpfiles[1413]: ACLs are not supported, ignoring. Jul 6 23:21:04.532100 systemd-tmpfiles[1413]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:21:04.532112 systemd-tmpfiles[1413]: Skipping /boot Jul 6 23:21:04.551387 systemd-tmpfiles[1413]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:21:04.551407 systemd-tmpfiles[1413]: Skipping /boot Jul 6 23:21:04.561824 zram_generator::config[1444]: No configuration found. Jul 6 23:21:04.694674 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:21:04.811754 kernel: mousedev: PS/2 mouse device common for all mice Jul 6 23:21:04.818319 systemd[1]: Reloading finished in 344 ms. Jul 6 23:21:04.828485 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:21:04.855163 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:21:04.877967 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
Jul 6 23:21:04.881943 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv... Jul 6 23:21:04.884781 kernel: hv_vmbus: registering driver hyperv_fb Jul 6 23:21:04.884854 kernel: hv_vmbus: registering driver hv_balloon Jul 6 23:21:04.892784 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jul 6 23:21:04.892870 kernel: hv_balloon: Memory hot add disabled on ARM64 Jul 6 23:21:04.901542 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jul 6 23:21:04.916205 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jul 6 23:21:04.923025 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 6 23:21:04.934383 kernel: Console: switching to colour dummy device 80x25 Jul 6 23:21:04.939756 kernel: Console: switching to colour frame buffer device 128x48 Jul 6 23:21:04.953249 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 6 23:21:04.964925 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:21:04.972025 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:21:04.987972 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:21:04.998828 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:21:05.011998 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:21:05.021573 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:21:05.021633 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jul 6 23:21:05.027240 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 6 23:21:05.047094 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:21:05.062869 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1502) Jul 6 23:21:05.073109 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:21:05.081857 systemd[1]: Reached target time-set.target - System Time Set. Jul 6 23:21:05.097046 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 6 23:21:05.110167 systemd[1]: Finished ensure-sysext.service. Jul 6 23:21:05.116182 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:21:05.118730 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:21:05.132461 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:21:05.132665 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:21:05.140521 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:21:05.140713 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:21:05.147883 augenrules[1615]: No rules Jul 6 23:21:05.152464 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:21:05.153082 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:21:05.162474 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:21:05.162848 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:21:05.171498 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped. Jul 6 23:21:05.197433 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 6 23:21:05.241015 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. 
Jul 6 23:21:05.257054 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 6 23:21:05.263598 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:21:05.263777 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:21:05.265519 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 6 23:21:05.273557 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:21:05.281845 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 6 23:21:05.305790 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 6 23:21:05.321804 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 6 23:21:05.333796 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 6 23:21:05.340920 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 6 23:21:05.429026 systemd-resolved[1576]: Positive Trust Anchors: Jul 6 23:21:05.429046 systemd-resolved[1576]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:21:05.429077 systemd-resolved[1576]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:21:05.432588 systemd-resolved[1576]: Using system hostname 'ci-4230.2.1-a-cc9ddc1e95'. Jul 6 23:21:05.434313 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:21:05.440562 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:21:05.481928 systemd-networkd[1564]: lo: Link UP Jul 6 23:21:05.483044 lvm[1647]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:21:05.481941 systemd-networkd[1564]: lo: Gained carrier Jul 6 23:21:05.484787 systemd-networkd[1564]: Enumeration completed Jul 6 23:21:05.484889 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:21:05.491441 systemd-networkd[1564]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:21:05.491449 systemd-networkd[1564]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:21:05.493270 systemd[1]: Reached target network.target - Network. Jul 6 23:21:05.502921 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 6 23:21:05.517912 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jul 6 23:21:05.526891 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 6 23:21:05.537064 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:21:05.553080 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 6 23:21:05.570633 lvm[1661]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:21:05.571905 kernel: mlx5_core 9938:00:02.0 enP39224s1: Link up Jul 6 23:21:05.603603 kernel: hv_netvsc 00224879-9804-0022-4879-980400224879 eth0: Data path switched to VF: enP39224s1 Jul 6 23:21:05.605321 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 6 23:21:05.605616 systemd-networkd[1564]: enP39224s1: Link UP Jul 6 23:21:05.605710 systemd-networkd[1564]: eth0: Link UP Jul 6 23:21:05.605713 systemd-networkd[1564]: eth0: Gained carrier Jul 6 23:21:05.605729 systemd-networkd[1564]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:21:05.613483 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 6 23:21:05.624838 systemd-networkd[1564]: enP39224s1: Gained carrier Jul 6 23:21:05.630836 systemd-networkd[1564]: eth0: DHCPv4 address 10.200.20.19/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 6 23:21:05.689111 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:21:06.243570 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 6 23:21:06.250724 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jul 6 23:21:07.117956 systemd-networkd[1564]: eth0: Gained IPv6LL Jul 6 23:21:07.120587 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 6 23:21:07.127936 systemd[1]: Reached target network-online.target - Network is Online. Jul 6 23:21:07.551509 ldconfig[1296]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 6 23:21:07.568807 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 6 23:21:07.578940 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 6 23:21:07.592875 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 6 23:21:07.599038 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:21:07.606306 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 6 23:21:07.612862 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 6 23:21:07.619987 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 6 23:21:07.625876 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 6 23:21:07.632428 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 6 23:21:07.638920 systemd-networkd[1564]: enP39224s1: Gained IPv6LL Jul 6 23:21:07.639109 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 6 23:21:07.639138 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:21:07.643891 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:21:07.650129 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 6 23:21:07.657797 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Jul 6 23:21:07.665805 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 6 23:21:07.673430 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 6 23:21:07.680316 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 6 23:21:07.697442 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 6 23:21:07.703531 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 6 23:21:07.710229 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 6 23:21:07.717574 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:21:07.722771 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:21:07.727798 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:21:07.727828 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:21:07.738847 systemd[1]: Starting chronyd.service - NTP client/server... Jul 6 23:21:07.746978 systemd[1]: Starting containerd.service - containerd container runtime... Jul 6 23:21:07.764296 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 6 23:21:07.782929 (chronyd)[1674]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jul 6 23:21:07.783960 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 6 23:21:07.790220 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 6 23:21:07.799998 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 6 23:21:07.806853 jq[1681]: false Jul 6 23:21:07.807857 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Jul 6 23:21:07.808020 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jul 6 23:21:07.809898 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jul 6 23:21:07.814858 chronyd[1685]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jul 6 23:21:07.819476 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jul 6 23:21:07.821072 KVP[1683]: KVP starting; pid is:1683 Jul 6 23:21:07.821504 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:21:07.830095 KVP[1683]: KVP LIC Version: 3.1 Jul 6 23:21:07.832434 kernel: hv_utils: KVP IC version 4.0 Jul 6 23:21:07.834658 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 6 23:21:07.843288 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 6 23:21:07.848285 chronyd[1685]: Timezone right/UTC failed leap second check, ignoring Jul 6 23:21:07.848516 chronyd[1685]: Loaded seccomp filter (level 2) Jul 6 23:21:07.851829 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Jul 6 23:21:07.863996 extend-filesystems[1682]: Found loop4 Jul 6 23:21:07.871542 extend-filesystems[1682]: Found loop5 Jul 6 23:21:07.871542 extend-filesystems[1682]: Found loop6 Jul 6 23:21:07.871542 extend-filesystems[1682]: Found loop7 Jul 6 23:21:07.871542 extend-filesystems[1682]: Found sda Jul 6 23:21:07.871542 extend-filesystems[1682]: Found sda1 Jul 6 23:21:07.871542 extend-filesystems[1682]: Found sda2 Jul 6 23:21:07.871542 extend-filesystems[1682]: Found sda3 Jul 6 23:21:07.871542 extend-filesystems[1682]: Found usr Jul 6 23:21:07.871542 extend-filesystems[1682]: Found sda4 Jul 6 23:21:07.871542 extend-filesystems[1682]: Found sda6 Jul 6 23:21:07.871542 extend-filesystems[1682]: Found sda7 Jul 6 23:21:07.871542 extend-filesystems[1682]: Found sda9 Jul 6 23:21:07.871542 extend-filesystems[1682]: Checking size of /dev/sda9 Jul 6 23:21:08.010148 extend-filesystems[1682]: Old size kept for /dev/sda9 Jul 6 23:21:08.010148 extend-filesystems[1682]: Found sr0 Jul 6 23:21:07.928205 dbus-daemon[1680]: [system] SELinux support is enabled Jul 6 23:21:07.876052 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 6 23:21:08.048723 coreos-metadata[1676]: Jul 06 23:21:08.036 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 6 23:21:08.048723 coreos-metadata[1676]: Jul 06 23:21:08.047 INFO Fetch successful Jul 6 23:21:08.048723 coreos-metadata[1676]: Jul 06 23:21:08.048 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jul 6 23:21:07.897078 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 6 23:21:07.929250 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 6 23:21:07.944887 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 6 23:21:07.950081 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Jul 6 23:21:07.958039 systemd[1]: Starting update-engine.service - Update Engine... Jul 6 23:21:08.049577 jq[1718]: true Jul 6 23:21:07.979875 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 6 23:21:08.000689 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 6 23:21:08.031985 systemd[1]: Started chronyd.service - NTP client/server. Jul 6 23:21:08.048253 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 6 23:21:08.050775 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 6 23:21:08.051260 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 6 23:21:08.052004 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 6 23:21:08.053464 update_engine[1715]: I20250706 23:21:08.053028 1715 main.cc:92] Flatcar Update Engine starting Jul 6 23:21:08.054680 update_engine[1715]: I20250706 23:21:08.054641 1715 update_check_scheduler.cc:74] Next update check in 11m51s Jul 6 23:21:08.057963 coreos-metadata[1676]: Jul 06 23:21:08.057 INFO Fetch successful Jul 6 23:21:08.059793 coreos-metadata[1676]: Jul 06 23:21:08.059 INFO Fetching http://168.63.129.16/machine/1267c692-3576-40f4-a149-c6b548eea5d5/430b33ce%2D7310%2D4fb7%2Dad61%2D4cbc7884d315.%5Fci%2D4230.2.1%2Da%2Dcc9ddc1e95?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jul 6 23:21:08.064919 coreos-metadata[1676]: Jul 06 23:21:08.064 INFO Fetch successful Jul 6 23:21:08.064919 coreos-metadata[1676]: Jul 06 23:21:08.064 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jul 6 23:21:08.066253 systemd[1]: motdgen.service: Deactivated successfully. Jul 6 23:21:08.066446 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jul 6 23:21:08.082304 coreos-metadata[1676]: Jul 06 23:21:08.081 INFO Fetch successful Jul 6 23:21:08.081664 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 6 23:21:08.090813 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1517) Jul 6 23:21:08.095245 systemd-logind[1707]: New seat seat0. Jul 6 23:21:08.100015 systemd-logind[1707]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jul 6 23:21:08.102981 systemd[1]: Started systemd-logind.service - User Login Management. Jul 6 23:21:08.115587 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 6 23:21:08.116845 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 6 23:21:08.143664 jq[1743]: true Jul 6 23:21:08.151152 (ntainerd)[1744]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 6 23:21:08.164835 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 6 23:21:08.201412 dbus-daemon[1680]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 6 23:21:08.218444 systemd[1]: Started update-engine.service - Update Engine. Jul 6 23:21:08.230344 tar[1734]: linux-arm64/LICENSE Jul 6 23:21:08.230344 tar[1734]: linux-arm64/helm Jul 6 23:21:08.231616 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 6 23:21:08.231848 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 6 23:21:08.231976 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jul 6 23:21:08.242962 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 6 23:21:08.243081 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 6 23:21:08.264565 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 6 23:21:08.378831 bash[1806]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:21:08.381605 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 6 23:21:08.394912 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 6 23:21:08.513906 locksmithd[1797]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 6 23:21:08.638818 containerd[1744]: time="2025-07-06T23:21:08.637235880Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jul 6 23:21:08.671729 containerd[1744]: time="2025-07-06T23:21:08.671670040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:21:08.673468 containerd[1744]: time="2025-07-06T23:21:08.673415960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:21:08.674419 containerd[1744]: time="2025-07-06T23:21:08.674382920Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 6 23:21:08.674569 containerd[1744]: time="2025-07-06T23:21:08.674552800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jul 6 23:21:08.674834 containerd[1744]: time="2025-07-06T23:21:08.674812000Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 6 23:21:08.675013 containerd[1744]: time="2025-07-06T23:21:08.674995240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 6 23:21:08.675184 containerd[1744]: time="2025-07-06T23:21:08.675163000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:21:08.675556 containerd[1744]: time="2025-07-06T23:21:08.675538600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:21:08.676384 containerd[1744]: time="2025-07-06T23:21:08.676357600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:21:08.676524 containerd[1744]: time="2025-07-06T23:21:08.676507040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 6 23:21:08.677684 containerd[1744]: time="2025-07-06T23:21:08.676836800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:21:08.677684 containerd[1744]: time="2025-07-06T23:21:08.676855120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 6 23:21:08.677684 containerd[1744]: time="2025-07-06T23:21:08.676971760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jul 6 23:21:08.677684 containerd[1744]: time="2025-07-06T23:21:08.677189480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:21:08.677684 containerd[1744]: time="2025-07-06T23:21:08.677361920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:21:08.677684 containerd[1744]: time="2025-07-06T23:21:08.677376160Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 6 23:21:08.677684 containerd[1744]: time="2025-07-06T23:21:08.677464960Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 6 23:21:08.677684 containerd[1744]: time="2025-07-06T23:21:08.677542080Z" level=info msg="metadata content store policy set" policy=shared Jul 6 23:21:08.694691 containerd[1744]: time="2025-07-06T23:21:08.694522000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 6 23:21:08.694691 containerd[1744]: time="2025-07-06T23:21:08.694604720Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 6 23:21:08.694691 containerd[1744]: time="2025-07-06T23:21:08.694625920Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 6 23:21:08.694691 containerd[1744]: time="2025-07-06T23:21:08.694642760Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 6 23:21:08.694691 containerd[1744]: time="2025-07-06T23:21:08.694658680Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jul 6 23:21:08.694977 containerd[1744]: time="2025-07-06T23:21:08.694903880Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 6 23:21:08.695820 containerd[1744]: time="2025-07-06T23:21:08.695199440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 6 23:21:08.695820 containerd[1744]: time="2025-07-06T23:21:08.695321200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 6 23:21:08.695820 containerd[1744]: time="2025-07-06T23:21:08.695337520Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 6 23:21:08.695820 containerd[1744]: time="2025-07-06T23:21:08.695352040Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 6 23:21:08.695820 containerd[1744]: time="2025-07-06T23:21:08.695367200Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 6 23:21:08.695820 containerd[1744]: time="2025-07-06T23:21:08.695381600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 6 23:21:08.695820 containerd[1744]: time="2025-07-06T23:21:08.695394680Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 6 23:21:08.695820 containerd[1744]: time="2025-07-06T23:21:08.695409800Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 6 23:21:08.695820 containerd[1744]: time="2025-07-06T23:21:08.695426560Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jul 6 23:21:08.695820 containerd[1744]: time="2025-07-06T23:21:08.695440720Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 6 23:21:08.695820 containerd[1744]: time="2025-07-06T23:21:08.695453800Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 6 23:21:08.695820 containerd[1744]: time="2025-07-06T23:21:08.695467120Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 6 23:21:08.695820 containerd[1744]: time="2025-07-06T23:21:08.695489440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 6 23:21:08.695820 containerd[1744]: time="2025-07-06T23:21:08.695503320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 6 23:21:08.696770 containerd[1744]: time="2025-07-06T23:21:08.695516240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 6 23:21:08.696770 containerd[1744]: time="2025-07-06T23:21:08.695529560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 6 23:21:08.696770 containerd[1744]: time="2025-07-06T23:21:08.695541840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 6 23:21:08.696770 containerd[1744]: time="2025-07-06T23:21:08.695555800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 6 23:21:08.696770 containerd[1744]: time="2025-07-06T23:21:08.695567640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 6 23:21:08.696770 containerd[1744]: time="2025-07-06T23:21:08.695582600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Jul 6 23:21:08.696770 containerd[1744]: time="2025-07-06T23:21:08.695595680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 6 23:21:08.696770 containerd[1744]: time="2025-07-06T23:21:08.695609600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 6 23:21:08.696770 containerd[1744]: time="2025-07-06T23:21:08.695624560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 6 23:21:08.696770 containerd[1744]: time="2025-07-06T23:21:08.695646680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 6 23:21:08.696770 containerd[1744]: time="2025-07-06T23:21:08.695659800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 6 23:21:08.696770 containerd[1744]: time="2025-07-06T23:21:08.695674480Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 6 23:21:08.696770 containerd[1744]: time="2025-07-06T23:21:08.695697640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 6 23:21:08.696770 containerd[1744]: time="2025-07-06T23:21:08.695711280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 6 23:21:08.696770 containerd[1744]: time="2025-07-06T23:21:08.695723720Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 6 23:21:08.699001 containerd[1744]: time="2025-07-06T23:21:08.697862960Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 6 23:21:08.699001 containerd[1744]: time="2025-07-06T23:21:08.697896040Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 6 23:21:08.699001 containerd[1744]: time="2025-07-06T23:21:08.697909920Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 6 23:21:08.699001 containerd[1744]: time="2025-07-06T23:21:08.697922960Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 6 23:21:08.699001 containerd[1744]: time="2025-07-06T23:21:08.697932400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 6 23:21:08.699001 containerd[1744]: time="2025-07-06T23:21:08.697947840Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 6 23:21:08.699001 containerd[1744]: time="2025-07-06T23:21:08.697965000Z" level=info msg="NRI interface is disabled by configuration." Jul 6 23:21:08.699001 containerd[1744]: time="2025-07-06T23:21:08.697979960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 6 23:21:08.699154 containerd[1744]: time="2025-07-06T23:21:08.698309560Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 6 23:21:08.699154 containerd[1744]: time="2025-07-06T23:21:08.698362760Z" level=info msg="Connect containerd service" Jul 6 23:21:08.699154 containerd[1744]: time="2025-07-06T23:21:08.698403760Z" level=info msg="using legacy CRI server" Jul 6 23:21:08.699154 containerd[1744]: time="2025-07-06T23:21:08.698411360Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 6 23:21:08.699154 containerd[1744]: time="2025-07-06T23:21:08.698545880Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 6 23:21:08.702257 containerd[1744]: time="2025-07-06T23:21:08.701349240Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:21:08.702257 containerd[1744]: time="2025-07-06T23:21:08.701795400Z" level=info msg="Start subscribing containerd event" Jul 6 23:21:08.702257 containerd[1744]: time="2025-07-06T23:21:08.701871160Z" level=info msg="Start recovering state" Jul 6 23:21:08.702257 containerd[1744]: time="2025-07-06T23:21:08.701952240Z" level=info msg="Start event monitor" Jul 6 23:21:08.702257 containerd[1744]: time="2025-07-06T23:21:08.701965840Z" level=info msg="Start snapshots 
syncer" Jul 6 23:21:08.702257 containerd[1744]: time="2025-07-06T23:21:08.701976640Z" level=info msg="Start cni network conf syncer for default" Jul 6 23:21:08.702257 containerd[1744]: time="2025-07-06T23:21:08.701985720Z" level=info msg="Start streaming server" Jul 6 23:21:08.703526 containerd[1744]: time="2025-07-06T23:21:08.702850920Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 6 23:21:08.703526 containerd[1744]: time="2025-07-06T23:21:08.702910120Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 6 23:21:08.703077 systemd[1]: Started containerd.service - containerd container runtime. Jul 6 23:21:08.710835 containerd[1744]: time="2025-07-06T23:21:08.709701800Z" level=info msg="containerd successfully booted in 0.074083s" Jul 6 23:21:09.081945 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:21:09.101162 (kubelet)[1826]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:21:09.200964 tar[1734]: linux-arm64/README.md Jul 6 23:21:09.221314 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 6 23:21:09.511346 sshd_keygen[1702]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 6 23:21:09.520766 kubelet[1826]: E0706 23:21:09.519404 1826 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:21:09.521604 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:21:09.521896 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:21:09.522298 systemd[1]: kubelet.service: Consumed 744ms CPU time, 256.9M memory peak. 
Jul 6 23:21:09.535620 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 6 23:21:09.549319 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 6 23:21:09.556521 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jul 6 23:21:09.564253 systemd[1]: issuegen.service: Deactivated successfully. Jul 6 23:21:09.564823 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 6 23:21:09.581452 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 6 23:21:09.590988 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jul 6 23:21:09.601953 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 6 23:21:09.613139 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 6 23:21:09.622038 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 6 23:21:09.630310 systemd[1]: Reached target getty.target - Login Prompts. Jul 6 23:21:09.636907 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 6 23:21:09.643090 systemd[1]: Startup finished in 662ms (kernel) + 12.355s (initrd) + 10.530s (userspace) = 23.548s. Jul 6 23:21:09.847210 login[1857]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jul 6 23:21:09.849193 login[1858]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:21:09.855522 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 6 23:21:09.862027 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 6 23:21:09.870713 systemd-logind[1707]: New session 2 of user core. Jul 6 23:21:09.876099 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 6 23:21:09.882010 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jul 6 23:21:09.894051 (systemd)[1865]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 6 23:21:09.896527 systemd-logind[1707]: New session c1 of user core. Jul 6 23:21:10.046653 systemd[1865]: Queued start job for default target default.target. Jul 6 23:21:10.053867 systemd[1865]: Created slice app.slice - User Application Slice. Jul 6 23:21:10.053899 systemd[1865]: Reached target paths.target - Paths. Jul 6 23:21:10.053942 systemd[1865]: Reached target timers.target - Timers. Jul 6 23:21:10.055177 systemd[1865]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:21:10.064496 systemd[1865]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 6 23:21:10.064557 systemd[1865]: Reached target sockets.target - Sockets. Jul 6 23:21:10.064602 systemd[1865]: Reached target basic.target - Basic System. Jul 6 23:21:10.064629 systemd[1865]: Reached target default.target - Main User Target. Jul 6 23:21:10.064655 systemd[1865]: Startup finished in 160ms. Jul 6 23:21:10.065003 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:21:10.067139 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 6 23:21:10.847654 login[1857]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:21:10.852545 systemd-logind[1707]: New session 1 of user core. Jul 6 23:21:10.856886 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jul 6 23:21:11.044386 waagent[1854]: 2025-07-06T23:21:11.044286Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jul 6 23:21:11.049631 waagent[1854]: 2025-07-06T23:21:11.049561Z INFO Daemon Daemon OS: flatcar 4230.2.1 Jul 6 23:21:11.053907 waagent[1854]: 2025-07-06T23:21:11.053853Z INFO Daemon Daemon Python: 3.11.11 Jul 6 23:21:11.058109 waagent[1854]: 2025-07-06T23:21:11.058049Z INFO Daemon Daemon Run daemon Jul 6 23:21:11.062085 waagent[1854]: 2025-07-06T23:21:11.061970Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.2.1' Jul 6 23:21:11.070930 waagent[1854]: 2025-07-06T23:21:11.070857Z INFO Daemon Daemon Using waagent for provisioning Jul 6 23:21:11.076223 waagent[1854]: 2025-07-06T23:21:11.076172Z INFO Daemon Daemon Activate resource disk Jul 6 23:21:11.080641 waagent[1854]: 2025-07-06T23:21:11.080593Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 6 23:21:11.092948 waagent[1854]: 2025-07-06T23:21:11.092889Z INFO Daemon Daemon Found device: None Jul 6 23:21:11.097118 waagent[1854]: 2025-07-06T23:21:11.097068Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 6 23:21:11.105259 waagent[1854]: 2025-07-06T23:21:11.105175Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 6 23:21:11.116517 waagent[1854]: 2025-07-06T23:21:11.116466Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 6 23:21:11.122541 waagent[1854]: 2025-07-06T23:21:11.122492Z INFO Daemon Daemon Running default provisioning handler Jul 6 23:21:11.133594 waagent[1854]: 2025-07-06T23:21:11.133528Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Jul 6 23:21:11.146567 waagent[1854]: 2025-07-06T23:21:11.146496Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 6 23:21:11.155810 waagent[1854]: 2025-07-06T23:21:11.155746Z INFO Daemon Daemon cloud-init is enabled: False Jul 6 23:21:11.160586 waagent[1854]: 2025-07-06T23:21:11.160532Z INFO Daemon Daemon Copying ovf-env.xml Jul 6 23:21:11.250580 waagent[1854]: 2025-07-06T23:21:11.250475Z INFO Daemon Daemon Successfully mounted dvd Jul 6 23:21:11.278333 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 6 23:21:11.279886 waagent[1854]: 2025-07-06T23:21:11.279810Z INFO Daemon Daemon Detect protocol endpoint Jul 6 23:21:11.284758 waagent[1854]: 2025-07-06T23:21:11.284690Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 6 23:21:11.290419 waagent[1854]: 2025-07-06T23:21:11.290359Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jul 6 23:21:11.297196 waagent[1854]: 2025-07-06T23:21:11.297146Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 6 23:21:11.302635 waagent[1854]: 2025-07-06T23:21:11.302586Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 6 23:21:11.307900 waagent[1854]: 2025-07-06T23:21:11.307855Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 6 23:21:11.395132 waagent[1854]: 2025-07-06T23:21:11.395021Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 6 23:21:11.401540 waagent[1854]: 2025-07-06T23:21:11.401510Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 6 23:21:11.406581 waagent[1854]: 2025-07-06T23:21:11.406533Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 6 23:21:11.630255 waagent[1854]: 2025-07-06T23:21:11.630153Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 6 23:21:11.636459 waagent[1854]: 2025-07-06T23:21:11.636387Z INFO Daemon Daemon Forcing an update of the goal state. 
Jul 6 23:21:11.645430 waagent[1854]: 2025-07-06T23:21:11.645342Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 6 23:21:11.664310 waagent[1854]: 2025-07-06T23:21:11.664259Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jul 6 23:21:11.669873 waagent[1854]: 2025-07-06T23:21:11.669819Z INFO Daemon Jul 6 23:21:11.672517 waagent[1854]: 2025-07-06T23:21:11.672471Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 4f426074-64fa-442a-8b46-87b736ca6861 eTag: 17616615767561507132 source: Fabric] Jul 6 23:21:11.683137 waagent[1854]: 2025-07-06T23:21:11.683087Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jul 6 23:21:11.689695 waagent[1854]: 2025-07-06T23:21:11.689646Z INFO Daemon Jul 6 23:21:11.692322 waagent[1854]: 2025-07-06T23:21:11.692276Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 6 23:21:11.702979 waagent[1854]: 2025-07-06T23:21:11.702938Z INFO Daemon Daemon Downloading artifacts profile blob Jul 6 23:21:11.789082 waagent[1854]: 2025-07-06T23:21:11.788992Z INFO Daemon Downloaded certificate {'thumbprint': 'C2DD1F5DD453C83AB9D2A095FB6C4F91A03D3695', 'hasPrivateKey': True} Jul 6 23:21:11.798499 waagent[1854]: 2025-07-06T23:21:11.798444Z INFO Daemon Downloaded certificate {'thumbprint': 'BB5225FCD9B7F6CBE35E2996B8E7059D19DF06CA', 'hasPrivateKey': False} Jul 6 23:21:11.807961 waagent[1854]: 2025-07-06T23:21:11.807906Z INFO Daemon Fetch goal state completed Jul 6 23:21:11.819715 waagent[1854]: 2025-07-06T23:21:11.819668Z INFO Daemon Daemon Starting provisioning Jul 6 23:21:11.824349 waagent[1854]: 2025-07-06T23:21:11.824296Z INFO Daemon Daemon Handle ovf-env.xml. 
Jul 6 23:21:11.828808 waagent[1854]: 2025-07-06T23:21:11.828762Z INFO Daemon Daemon Set hostname [ci-4230.2.1-a-cc9ddc1e95] Jul 6 23:21:11.849772 waagent[1854]: 2025-07-06T23:21:11.849230Z INFO Daemon Daemon Publish hostname [ci-4230.2.1-a-cc9ddc1e95] Jul 6 23:21:11.855432 waagent[1854]: 2025-07-06T23:21:11.855364Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 6 23:21:11.861528 waagent[1854]: 2025-07-06T23:21:11.861468Z INFO Daemon Daemon Primary interface is [eth0] Jul 6 23:21:11.873485 systemd-networkd[1564]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:21:11.873500 systemd-networkd[1564]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:21:11.873548 systemd-networkd[1564]: eth0: DHCP lease lost Jul 6 23:21:11.874639 waagent[1854]: 2025-07-06T23:21:11.874565Z INFO Daemon Daemon Create user account if not exists Jul 6 23:21:11.880251 waagent[1854]: 2025-07-06T23:21:11.880186Z INFO Daemon Daemon User core already exists, skip useradd Jul 6 23:21:11.885697 waagent[1854]: 2025-07-06T23:21:11.885616Z INFO Daemon Daemon Configure sudoer Jul 6 23:21:11.890159 waagent[1854]: 2025-07-06T23:21:11.890092Z INFO Daemon Daemon Configure sshd Jul 6 23:21:11.894935 waagent[1854]: 2025-07-06T23:21:11.894851Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 6 23:21:11.907709 waagent[1854]: 2025-07-06T23:21:11.907582Z INFO Daemon Daemon Deploy ssh public key. 
Jul 6 23:21:11.913817 systemd-networkd[1564]: eth0: DHCPv4 address 10.200.20.19/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 6 23:21:13.014316 waagent[1854]: 2025-07-06T23:21:13.014236Z INFO Daemon Daemon Provisioning complete Jul 6 23:21:13.033415 waagent[1854]: 2025-07-06T23:21:13.033364Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 6 23:21:13.039178 waagent[1854]: 2025-07-06T23:21:13.039119Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jul 6 23:21:13.048435 waagent[1854]: 2025-07-06T23:21:13.048370Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jul 6 23:21:13.182724 waagent[1919]: 2025-07-06T23:21:13.182644Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jul 6 23:21:13.183595 waagent[1919]: 2025-07-06T23:21:13.183191Z INFO ExtHandler ExtHandler OS: flatcar 4230.2.1 Jul 6 23:21:13.183595 waagent[1919]: 2025-07-06T23:21:13.183266Z INFO ExtHandler ExtHandler Python: 3.11.11 Jul 6 23:21:13.721926 waagent[1919]: 2025-07-06T23:21:13.721762Z INFO ExtHandler ExtHandler Distro: flatcar-4230.2.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jul 6 23:21:13.724327 waagent[1919]: 2025-07-06T23:21:13.723423Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 6 23:21:13.724327 waagent[1919]: 2025-07-06T23:21:13.723516Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 6 23:21:13.732103 waagent[1919]: 2025-07-06T23:21:13.732030Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 6 23:21:13.738143 waagent[1919]: 2025-07-06T23:21:13.738091Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 6 23:21:13.738693 waagent[1919]: 2025-07-06T23:21:13.738645Z INFO ExtHandler Jul 6 23:21:13.738797 waagent[1919]: 2025-07-06T23:21:13.738763Z 
INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: e789a386-b71e-427f-ba73-b682a2e7855e eTag: 17616615767561507132 source: Fabric] Jul 6 23:21:13.739119 waagent[1919]: 2025-07-06T23:21:13.739076Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jul 6 23:21:13.745605 waagent[1919]: 2025-07-06T23:21:13.744641Z INFO ExtHandler Jul 6 23:21:13.745605 waagent[1919]: 2025-07-06T23:21:13.744853Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 6 23:21:13.749772 waagent[1919]: 2025-07-06T23:21:13.749535Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 6 23:21:13.987257 waagent[1919]: 2025-07-06T23:21:13.987101Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C2DD1F5DD453C83AB9D2A095FB6C4F91A03D3695', 'hasPrivateKey': True} Jul 6 23:21:13.987666 waagent[1919]: 2025-07-06T23:21:13.987619Z INFO ExtHandler Downloaded certificate {'thumbprint': 'BB5225FCD9B7F6CBE35E2996B8E7059D19DF06CA', 'hasPrivateKey': False} Jul 6 23:21:13.988146 waagent[1919]: 2025-07-06T23:21:13.988100Z INFO ExtHandler Fetch goal state completed Jul 6 23:21:14.006595 waagent[1919]: 2025-07-06T23:21:14.006531Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1919 Jul 6 23:21:14.006795 waagent[1919]: 2025-07-06T23:21:14.006721Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 6 23:21:14.008483 waagent[1919]: 2025-07-06T23:21:14.008434Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.2.1', '', 'Flatcar Container Linux by Kinvolk'] Jul 6 23:21:14.008901 waagent[1919]: 2025-07-06T23:21:14.008858Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 6 23:21:14.114876 waagent[1919]: 2025-07-06T23:21:14.114827Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 6 23:21:14.115084 waagent[1919]: 
2025-07-06T23:21:14.115043Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 6 23:21:14.121158 waagent[1919]: 2025-07-06T23:21:14.121115Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 6 23:21:14.127076 systemd[1]: Reload requested from client PID 1934 ('systemctl') (unit waagent.service)... Jul 6 23:21:14.127088 systemd[1]: Reloading... Jul 6 23:21:14.215781 zram_generator::config[1973]: No configuration found. Jul 6 23:21:14.318730 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:21:14.416804 systemd[1]: Reloading finished in 289 ms. Jul 6 23:21:14.432390 waagent[1919]: 2025-07-06T23:21:14.432016Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jul 6 23:21:14.438299 systemd[1]: Reload requested from client PID 2027 ('systemctl') (unit waagent.service)... Jul 6 23:21:14.438313 systemd[1]: Reloading... Jul 6 23:21:14.526868 zram_generator::config[2069]: No configuration found. Jul 6 23:21:14.628072 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:21:14.727137 systemd[1]: Reloading finished in 288 ms. 
Jul 6 23:21:14.738871 waagent[1919]: 2025-07-06T23:21:14.738040Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 6 23:21:14.738871 waagent[1919]: 2025-07-06T23:21:14.738211Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 6 23:21:15.115782 waagent[1919]: 2025-07-06T23:21:15.115632Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 6 23:21:15.116411 waagent[1919]: 2025-07-06T23:21:15.116335Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jul 6 23:21:15.117335 waagent[1919]: 2025-07-06T23:21:15.117241Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 6 23:21:15.117772 waagent[1919]: 2025-07-06T23:21:15.117663Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 6 23:21:15.118334 waagent[1919]: 2025-07-06T23:21:15.118224Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 6 23:21:15.118424 waagent[1919]: 2025-07-06T23:21:15.118327Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Jul 6 23:21:15.119557 waagent[1919]: 2025-07-06T23:21:15.118681Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 6 23:21:15.119557 waagent[1919]: 2025-07-06T23:21:15.118785Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 6 23:21:15.119557 waagent[1919]: 2025-07-06T23:21:15.118932Z INFO EnvHandler ExtHandler Configure routes Jul 6 23:21:15.119557 waagent[1919]: 2025-07-06T23:21:15.118993Z INFO EnvHandler ExtHandler Gateway:None Jul 6 23:21:15.119557 waagent[1919]: 2025-07-06T23:21:15.119061Z INFO EnvHandler ExtHandler Routes:None Jul 6 23:21:15.119894 waagent[1919]: 2025-07-06T23:21:15.119840Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 6 23:21:15.120054 waagent[1919]: 2025-07-06T23:21:15.120021Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 6 23:21:15.121928 waagent[1919]: 2025-07-06T23:21:15.121858Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jul 6 23:21:15.122053 waagent[1919]: 2025-07-06T23:21:15.122011Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 6 23:21:15.123306 waagent[1919]: 2025-07-06T23:21:15.123188Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 6 23:21:15.124836 waagent[1919]: 2025-07-06T23:21:15.124355Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Jul 6 23:21:15.126187 waagent[1919]: 2025-07-06T23:21:15.126114Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 6 23:21:15.126187 waagent[1919]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 6 23:21:15.126187 waagent[1919]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jul 6 23:21:15.126187 waagent[1919]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 6 23:21:15.126187 waagent[1919]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 6 23:21:15.126187 waagent[1919]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 6 23:21:15.126187 waagent[1919]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 6 23:21:15.134223 waagent[1919]: 2025-07-06T23:21:15.134174Z INFO ExtHandler ExtHandler Jul 6 23:21:15.134454 waagent[1919]: 2025-07-06T23:21:15.134415Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 7e971729-d925-4d46-91ab-8ccda5d056b2 correlation 4df486db-83bc-4be8-b164-1a0fbec6e351 created: 2025-07-06T23:20:06.318581Z] Jul 6 23:21:15.134990 waagent[1919]: 2025-07-06T23:21:15.134938Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jul 6 23:21:15.135661 waagent[1919]: 2025-07-06T23:21:15.135621Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jul 6 23:21:15.165609 waagent[1919]: 2025-07-06T23:21:15.165522Z INFO MonitorHandler ExtHandler Network interfaces: Jul 6 23:21:15.165609 waagent[1919]: Executing ['ip', '-a', '-o', 'link']: Jul 6 23:21:15.165609 waagent[1919]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 6 23:21:15.165609 waagent[1919]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:79:98:04 brd ff:ff:ff:ff:ff:ff Jul 6 23:21:15.165609 waagent[1919]: 3: enP39224s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:79:98:04 brd ff:ff:ff:ff:ff:ff\ altname enP39224p0s2 Jul 6 23:21:15.165609 waagent[1919]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 6 23:21:15.165609 waagent[1919]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 6 23:21:15.165609 waagent[1919]: 2: eth0 inet 10.200.20.19/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 6 23:21:15.165609 waagent[1919]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 6 23:21:15.165609 waagent[1919]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jul 6 23:21:15.165609 waagent[1919]: 2: eth0 inet6 fe80::222:48ff:fe79:9804/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 6 23:21:15.165609 waagent[1919]: 3: enP39224s1 inet6 fe80::222:48ff:fe79:9804/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 6 23:21:15.180444 waagent[1919]: 2025-07-06T23:21:15.180313Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 
0B982F97-AF69-4907-B897-62F5B12AAB7E;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jul 6 23:21:15.189985 waagent[1919]: 2025-07-06T23:21:15.189909Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Jul 6 23:21:15.189985 waagent[1919]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:21:15.189985 waagent[1919]: pkts bytes target prot opt in out source destination Jul 6 23:21:15.189985 waagent[1919]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:21:15.189985 waagent[1919]: pkts bytes target prot opt in out source destination Jul 6 23:21:15.189985 waagent[1919]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:21:15.189985 waagent[1919]: pkts bytes target prot opt in out source destination Jul 6 23:21:15.189985 waagent[1919]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 6 23:21:15.189985 waagent[1919]: 8 998 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 6 23:21:15.189985 waagent[1919]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 6 23:21:15.193560 waagent[1919]: 2025-07-06T23:21:15.193486Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 6 23:21:15.193560 waagent[1919]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:21:15.193560 waagent[1919]: pkts bytes target prot opt in out source destination Jul 6 23:21:15.193560 waagent[1919]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:21:15.193560 waagent[1919]: pkts bytes target prot opt in out source destination Jul 6 23:21:15.193560 waagent[1919]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 6 23:21:15.193560 waagent[1919]: pkts bytes target prot opt in out source destination Jul 6 23:21:15.193560 waagent[1919]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 6 23:21:15.193560 waagent[1919]: 12 1413 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 6 23:21:15.193560 waagent[1919]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 
ctstate INVALID,NEW Jul 6 23:21:15.193903 waagent[1919]: 2025-07-06T23:21:15.193859Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 6 23:21:19.739178 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 6 23:21:19.749012 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:21:19.864153 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:21:19.876050 (kubelet)[2159]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:21:19.980622 kubelet[2159]: E0706 23:21:19.980564 2159 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:21:19.983975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:21:19.984245 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:21:19.984610 systemd[1]: kubelet.service: Consumed 132ms CPU time, 105.7M memory peak. Jul 6 23:21:23.165379 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 6 23:21:23.174257 systemd[1]: Started sshd@0-10.200.20.19:22-10.200.16.10:52972.service - OpenSSH per-connection server daemon (10.200.16.10:52972). Jul 6 23:21:23.728308 sshd[2167]: Accepted publickey for core from 10.200.16.10 port 52972 ssh2: RSA SHA256:03hkvBavc73fCnbrqThnCFPODKWBbuy7ZtqQR+/MThM Jul 6 23:21:23.729768 sshd-session[2167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:21:23.734298 systemd-logind[1707]: New session 3 of user core. Jul 6 23:21:23.741999 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jul 6 23:21:24.148373 systemd[1]: Started sshd@1-10.200.20.19:22-10.200.16.10:52980.service - OpenSSH per-connection server daemon (10.200.16.10:52980). Jul 6 23:21:24.636053 sshd[2172]: Accepted publickey for core from 10.200.16.10 port 52980 ssh2: RSA SHA256:03hkvBavc73fCnbrqThnCFPODKWBbuy7ZtqQR+/MThM Jul 6 23:21:24.637408 sshd-session[2172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:21:24.643673 systemd-logind[1707]: New session 4 of user core. Jul 6 23:21:24.649947 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 6 23:21:24.975550 sshd[2174]: Connection closed by 10.200.16.10 port 52980 Jul 6 23:21:24.975382 sshd-session[2172]: pam_unix(sshd:session): session closed for user core Jul 6 23:21:24.978547 systemd[1]: sshd@1-10.200.20.19:22-10.200.16.10:52980.service: Deactivated successfully. Jul 6 23:21:24.981397 systemd[1]: session-4.scope: Deactivated successfully. Jul 6 23:21:24.983136 systemd-logind[1707]: Session 4 logged out. Waiting for processes to exit. Jul 6 23:21:24.984441 systemd-logind[1707]: Removed session 4. Jul 6 23:21:25.066078 systemd[1]: Started sshd@2-10.200.20.19:22-10.200.16.10:52992.service - OpenSSH per-connection server daemon (10.200.16.10:52992). Jul 6 23:21:25.520123 sshd[2180]: Accepted publickey for core from 10.200.16.10 port 52992 ssh2: RSA SHA256:03hkvBavc73fCnbrqThnCFPODKWBbuy7ZtqQR+/MThM Jul 6 23:21:25.521448 sshd-session[2180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:21:25.526943 systemd-logind[1707]: New session 5 of user core. Jul 6 23:21:25.532997 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 6 23:21:25.845867 sshd[2182]: Connection closed by 10.200.16.10 port 52992 Jul 6 23:21:25.846397 sshd-session[2180]: pam_unix(sshd:session): session closed for user core Jul 6 23:21:25.850186 systemd[1]: sshd@2-10.200.20.19:22-10.200.16.10:52992.service: Deactivated successfully. 
Jul 6 23:21:25.851949 systemd[1]: session-5.scope: Deactivated successfully. Jul 6 23:21:25.852630 systemd-logind[1707]: Session 5 logged out. Waiting for processes to exit. Jul 6 23:21:25.853765 systemd-logind[1707]: Removed session 5. Jul 6 23:21:25.932727 systemd[1]: Started sshd@3-10.200.20.19:22-10.200.16.10:53004.service - OpenSSH per-connection server daemon (10.200.16.10:53004). Jul 6 23:21:26.415258 sshd[2188]: Accepted publickey for core from 10.200.16.10 port 53004 ssh2: RSA SHA256:03hkvBavc73fCnbrqThnCFPODKWBbuy7ZtqQR+/MThM Jul 6 23:21:26.416525 sshd-session[2188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:21:26.420518 systemd-logind[1707]: New session 6 of user core. Jul 6 23:21:26.427911 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 6 23:21:26.755608 sshd[2190]: Connection closed by 10.200.16.10 port 53004 Jul 6 23:21:26.755439 sshd-session[2188]: pam_unix(sshd:session): session closed for user core Jul 6 23:21:26.759847 systemd[1]: sshd@3-10.200.20.19:22-10.200.16.10:53004.service: Deactivated successfully. Jul 6 23:21:26.761659 systemd[1]: session-6.scope: Deactivated successfully. Jul 6 23:21:26.762381 systemd-logind[1707]: Session 6 logged out. Waiting for processes to exit. Jul 6 23:21:26.763446 systemd-logind[1707]: Removed session 6. Jul 6 23:21:26.843014 systemd[1]: Started sshd@4-10.200.20.19:22-10.200.16.10:53020.service - OpenSSH per-connection server daemon (10.200.16.10:53020). Jul 6 23:21:27.294003 sshd[2196]: Accepted publickey for core from 10.200.16.10 port 53020 ssh2: RSA SHA256:03hkvBavc73fCnbrqThnCFPODKWBbuy7ZtqQR+/MThM Jul 6 23:21:27.295393 sshd-session[2196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:21:27.300073 systemd-logind[1707]: New session 7 of user core. Jul 6 23:21:27.307989 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jul 6 23:21:27.637535 sudo[2199]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 6 23:21:27.637837 sudo[2199]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:21:27.663388 sudo[2199]: pam_unix(sudo:session): session closed for user root Jul 6 23:21:27.735834 sshd[2198]: Connection closed by 10.200.16.10 port 53020 Jul 6 23:21:27.735662 sshd-session[2196]: pam_unix(sshd:session): session closed for user core Jul 6 23:21:27.738647 systemd[1]: sshd@4-10.200.20.19:22-10.200.16.10:53020.service: Deactivated successfully. Jul 6 23:21:27.740398 systemd[1]: session-7.scope: Deactivated successfully. Jul 6 23:21:27.741700 systemd-logind[1707]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:21:27.742628 systemd-logind[1707]: Removed session 7. Jul 6 23:21:27.830060 systemd[1]: Started sshd@5-10.200.20.19:22-10.200.16.10:53036.service - OpenSSH per-connection server daemon (10.200.16.10:53036). Jul 6 23:21:28.321560 sshd[2205]: Accepted publickey for core from 10.200.16.10 port 53036 ssh2: RSA SHA256:03hkvBavc73fCnbrqThnCFPODKWBbuy7ZtqQR+/MThM Jul 6 23:21:28.322937 sshd-session[2205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:21:28.327011 systemd-logind[1707]: New session 8 of user core. Jul 6 23:21:28.337898 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 6 23:21:28.597942 sudo[2209]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 6 23:21:28.598244 sudo[2209]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:21:28.601818 sudo[2209]: pam_unix(sudo:session): session closed for user root Jul 6 23:21:28.606998 sudo[2208]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 6 23:21:28.607280 sudo[2208]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:21:28.629230 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 6 23:21:28.653766 augenrules[2231]: No rules Jul 6 23:21:28.655428 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:21:28.655633 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:21:28.658050 sudo[2208]: pam_unix(sudo:session): session closed for user root Jul 6 23:21:28.735725 sshd[2207]: Connection closed by 10.200.16.10 port 53036 Jul 6 23:21:28.736495 sshd-session[2205]: pam_unix(sshd:session): session closed for user core Jul 6 23:21:28.740220 systemd-logind[1707]: Session 8 logged out. Waiting for processes to exit. Jul 6 23:21:28.740827 systemd[1]: sshd@5-10.200.20.19:22-10.200.16.10:53036.service: Deactivated successfully. Jul 6 23:21:28.742919 systemd[1]: session-8.scope: Deactivated successfully. Jul 6 23:21:28.744209 systemd-logind[1707]: Removed session 8. Jul 6 23:21:28.827026 systemd[1]: Started sshd@6-10.200.20.19:22-10.200.16.10:53050.service - OpenSSH per-connection server daemon (10.200.16.10:53050). Jul 6 23:21:29.305771 sshd[2240]: Accepted publickey for core from 10.200.16.10 port 53050 ssh2: RSA SHA256:03hkvBavc73fCnbrqThnCFPODKWBbuy7ZtqQR+/MThM Jul 6 23:21:29.309249 sshd-session[2240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:21:29.315376 systemd-logind[1707]: New session 9 of user core. 
Jul 6 23:21:29.320961 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 6 23:21:29.574837 sudo[2243]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 6 23:21:29.575153 sudo[2243]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:21:29.989238 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 6 23:21:29.994002 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:21:30.106885 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:21:30.120150 (kubelet)[2263]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:21:30.223669 kubelet[2263]: E0706 23:21:30.223522 2263 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:21:30.226127 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:21:30.226286 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:21:30.226808 systemd[1]: kubelet.service: Consumed 206ms CPU time, 107.1M memory peak. Jul 6 23:21:31.245020 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jul 6 23:21:31.245205 (dockerd)[2275]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 6 23:21:31.655066 chronyd[1685]: Selected source PHC0
Jul 6 23:21:31.941692 dockerd[2275]: time="2025-07-06T23:21:31.941553839Z" level=info msg="Starting up"
Jul 6 23:21:32.179075 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport792676281-merged.mount: Deactivated successfully.
Jul 6 23:21:32.240373 dockerd[2275]: time="2025-07-06T23:21:32.240292394Z" level=info msg="Loading containers: start."
Jul 6 23:21:32.436807 kernel: Initializing XFRM netlink socket
Jul 6 23:21:32.579038 systemd-networkd[1564]: docker0: Link UP
Jul 6 23:21:32.617223 dockerd[2275]: time="2025-07-06T23:21:32.617106526Z" level=info msg="Loading containers: done."
Jul 6 23:21:32.636980 dockerd[2275]: time="2025-07-06T23:21:32.636923692Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 6 23:21:32.637151 dockerd[2275]: time="2025-07-06T23:21:32.637038365Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Jul 6 23:21:32.637200 dockerd[2275]: time="2025-07-06T23:21:32.637173116Z" level=info msg="Daemon has completed initialization"
Jul 6 23:21:32.691646 dockerd[2275]: time="2025-07-06T23:21:32.691578849Z" level=info msg="API listen on /run/docker.sock"
Jul 6 23:21:32.692054 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 6 23:21:33.266970 containerd[1744]: time="2025-07-06T23:21:33.266925396Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\""
Jul 6 23:21:34.121384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4039504073.mount: Deactivated successfully.
Jul 6 23:21:35.503790 containerd[1744]: time="2025-07-06T23:21:35.503431399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:21:35.508531 containerd[1744]: time="2025-07-06T23:21:35.508457032Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351716"
Jul 6 23:21:35.511626 containerd[1744]: time="2025-07-06T23:21:35.511548107Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:21:35.516001 containerd[1744]: time="2025-07-06T23:21:35.515961140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:21:35.517659 containerd[1744]: time="2025-07-06T23:21:35.517097899Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 2.250127463s"
Jul 6 23:21:35.517659 containerd[1744]: time="2025-07-06T23:21:35.517143419Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\""
Jul 6 23:21:35.518504 containerd[1744]: time="2025-07-06T23:21:35.518466057Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\""
Jul 6 23:21:36.886788 containerd[1744]: time="2025-07-06T23:21:36.886573839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:21:36.892398 containerd[1744]: time="2025-07-06T23:21:36.892358670Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537623"
Jul 6 23:21:36.896777 containerd[1744]: time="2025-07-06T23:21:36.896607544Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:21:36.901877 containerd[1744]: time="2025-07-06T23:21:36.901810896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:21:36.903245 containerd[1744]: time="2025-07-06T23:21:36.903110334Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.384596197s"
Jul 6 23:21:36.903245 containerd[1744]: time="2025-07-06T23:21:36.903146494Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\""
Jul 6 23:21:36.903626 containerd[1744]: time="2025-07-06T23:21:36.903595213Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\""
Jul 6 23:21:38.057792 containerd[1744]: time="2025-07-06T23:21:38.057015198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:21:38.059684 containerd[1744]: time="2025-07-06T23:21:38.059422634Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293515"
Jul 6 23:21:38.063388 containerd[1744]: time="2025-07-06T23:21:38.063342348Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:21:38.069201 containerd[1744]: time="2025-07-06T23:21:38.069144940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:21:38.070584 containerd[1744]: time="2025-07-06T23:21:38.070455298Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.166821285s"
Jul 6 23:21:38.070584 containerd[1744]: time="2025-07-06T23:21:38.070492338Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\""
Jul 6 23:21:38.071349 containerd[1744]: time="2025-07-06T23:21:38.071172857Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\""
Jul 6 23:21:39.194537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2793452847.mount: Deactivated successfully.
Jul 6 23:21:39.570116 containerd[1744]: time="2025-07-06T23:21:39.570057642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:21:39.574071 containerd[1744]: time="2025-07-06T23:21:39.574017516Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199472"
Jul 6 23:21:39.577294 containerd[1744]: time="2025-07-06T23:21:39.577251751Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:21:39.582589 containerd[1744]: time="2025-07-06T23:21:39.582520663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:21:39.583501 containerd[1744]: time="2025-07-06T23:21:39.583112862Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 1.511908605s"
Jul 6 23:21:39.583501 containerd[1744]: time="2025-07-06T23:21:39.583152542Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\""
Jul 6 23:21:39.584078 containerd[1744]: time="2025-07-06T23:21:39.583704941Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jul 6 23:21:40.239122 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 6 23:21:40.244978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:21:40.270113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1332689289.mount: Deactivated successfully.
Jul 6 23:21:40.384670 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:21:40.399076 (kubelet)[2542]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:21:40.480482 kubelet[2542]: E0706 23:21:40.480417 2542 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:21:40.483443 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:21:40.483595 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:21:40.484849 systemd[1]: kubelet.service: Consumed 136ms CPU time, 107M memory peak.
Jul 6 23:21:42.614990 containerd[1744]: time="2025-07-06T23:21:42.614926302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:21:42.617642 containerd[1744]: time="2025-07-06T23:21:42.617354380Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Jul 6 23:21:42.621476 containerd[1744]: time="2025-07-06T23:21:42.621409338Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:21:42.626629 containerd[1744]: time="2025-07-06T23:21:42.626557255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:21:42.627931 containerd[1744]: time="2025-07-06T23:21:42.627787534Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 3.044018313s"
Jul 6 23:21:42.627931 containerd[1744]: time="2025-07-06T23:21:42.627824894Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Jul 6 23:21:42.628538 containerd[1744]: time="2025-07-06T23:21:42.628416254Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 6 23:21:43.202965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount260229386.mount: Deactivated successfully.
Jul 6 23:21:43.235552 containerd[1744]: time="2025-07-06T23:21:43.235454220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:21:43.238480 containerd[1744]: time="2025-07-06T23:21:43.238255499Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Jul 6 23:21:43.243905 containerd[1744]: time="2025-07-06T23:21:43.243815856Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:21:43.249433 containerd[1744]: time="2025-07-06T23:21:43.249365012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:21:43.250333 containerd[1744]: time="2025-07-06T23:21:43.250191132Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 621.507718ms"
Jul 6 23:21:43.250333 containerd[1744]: time="2025-07-06T23:21:43.250228172Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 6 23:21:43.252787 containerd[1744]: time="2025-07-06T23:21:43.252516771Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jul 6 23:21:43.914247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3180447979.mount: Deactivated successfully.
Jul 6 23:21:47.082505 containerd[1744]: time="2025-07-06T23:21:47.082446163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:21:47.086698 containerd[1744]: time="2025-07-06T23:21:47.086643077Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334599"
Jul 6 23:21:47.093028 containerd[1744]: time="2025-07-06T23:21:47.092974469Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:21:47.103466 containerd[1744]: time="2025-07-06T23:21:47.102819535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:21:47.104243 containerd[1744]: time="2025-07-06T23:21:47.104212933Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.851654403s"
Jul 6 23:21:47.104282 containerd[1744]: time="2025-07-06T23:21:47.104247693Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Jul 6 23:21:50.490432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 6 23:21:50.499326 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:21:50.611925 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:21:50.617911 (kubelet)[2685]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:21:50.657246 kubelet[2685]: E0706 23:21:50.657199 2685 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:21:50.660492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:21:50.660881 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:21:50.661304 systemd[1]: kubelet.service: Consumed 134ms CPU time, 107M memory peak.
Jul 6 23:21:52.067773 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:21:52.067925 systemd[1]: kubelet.service: Consumed 134ms CPU time, 107M memory peak.
Jul 6 23:21:52.076011 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:21:52.107903 systemd[1]: Reload requested from client PID 2700 ('systemctl') (unit session-9.scope)...
Jul 6 23:21:52.107918 systemd[1]: Reloading...
Jul 6 23:21:52.252304 zram_generator::config[2756]: No configuration found.
Jul 6 23:21:52.355878 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:21:52.460994 systemd[1]: Reloading finished in 352 ms.
Jul 6 23:21:52.504438 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:21:52.515783 (kubelet)[2804]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 6 23:21:52.516667 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:21:52.518456 systemd[1]: kubelet.service: Deactivated successfully.
Jul 6 23:21:52.518719 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:21:52.518866 systemd[1]: kubelet.service: Consumed 94ms CPU time, 95M memory peak.
Jul 6 23:21:52.526190 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:21:52.636634 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:21:52.645393 (kubelet)[2818]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 6 23:21:52.683393 kubelet[2818]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:21:52.683393 kubelet[2818]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 6 23:21:52.683393 kubelet[2818]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:21:52.683879 kubelet[2818]: I0706 23:21:52.683453 2818 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 6 23:21:53.046002 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Jul 6 23:21:53.116832 update_engine[1715]: I20250706 23:21:53.116762 1715 update_attempter.cc:509] Updating boot flags...
Jul 6 23:21:53.277212 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2839)
Jul 6 23:21:53.551005 kubelet[2818]: I0706 23:21:53.550959 2818 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 6 23:21:53.551005 kubelet[2818]: I0706 23:21:53.550994 2818 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 6 23:21:53.551258 kubelet[2818]: I0706 23:21:53.551235 2818 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 6 23:21:53.573057 kubelet[2818]: E0706 23:21:53.572888 2818 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jul 6 23:21:53.575197 kubelet[2818]: I0706 23:21:53.575160 2818 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 6 23:21:53.585278 kubelet[2818]: E0706 23:21:53.585221 2818 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 6 23:21:53.585278 kubelet[2818]: I0706 23:21:53.585280 2818 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 6 23:21:53.589294 kubelet[2818]: I0706 23:21:53.589264 2818 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 6 23:21:53.589571 kubelet[2818]: I0706 23:21:53.589538 2818 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 6 23:21:53.589744 kubelet[2818]: I0706 23:21:53.589570 2818 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.1-a-cc9ddc1e95","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 6 23:21:53.589839 kubelet[2818]: I0706 23:21:53.589773 2818 topology_manager.go:138] "Creating topology manager with none policy"
Jul 6 23:21:53.589839 kubelet[2818]: I0706 23:21:53.589783 2818 container_manager_linux.go:303] "Creating device plugin manager"
Jul 6 23:21:53.589948 kubelet[2818]: I0706 23:21:53.589925 2818 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:21:53.592810 kubelet[2818]: I0706 23:21:53.592780 2818 kubelet.go:480] "Attempting to sync node with API server"
Jul 6 23:21:53.592939 kubelet[2818]: I0706 23:21:53.592919 2818 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 6 23:21:53.592977 kubelet[2818]: I0706 23:21:53.592964 2818 kubelet.go:386] "Adding apiserver pod source"
Jul 6 23:21:53.593002 kubelet[2818]: I0706 23:21:53.592992 2818 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 6 23:21:53.597999 kubelet[2818]: E0706 23:21:53.597961 2818 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.1-a-cc9ddc1e95&limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 6 23:21:53.600130 kubelet[2818]: E0706 23:21:53.600086 2818 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 6 23:21:53.600238 kubelet[2818]: I0706 23:21:53.600205 2818 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jul 6 23:21:53.600902 kubelet[2818]: I0706 23:21:53.600843 2818 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 6 23:21:53.600957 kubelet[2818]: W0706 23:21:53.600914 2818 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 6 23:21:53.603880 kubelet[2818]: I0706 23:21:53.603683 2818 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 6 23:21:53.603880 kubelet[2818]: I0706 23:21:53.603768 2818 server.go:1289] "Started kubelet"
Jul 6 23:21:53.604424 kubelet[2818]: I0706 23:21:53.604370 2818 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 6 23:21:53.605318 kubelet[2818]: I0706 23:21:53.605271 2818 server.go:317] "Adding debug handlers to kubelet server"
Jul 6 23:21:53.606644 kubelet[2818]: I0706 23:21:53.606074 2818 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 6 23:21:53.606644 kubelet[2818]: I0706 23:21:53.606417 2818 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 6 23:21:53.607626 kubelet[2818]: E0706 23:21:53.606551 2818 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.19:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.19:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.1-a-cc9ddc1e95.184fccf08d353776 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.1-a-cc9ddc1e95,UID:ci-4230.2.1-a-cc9ddc1e95,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.1-a-cc9ddc1e95,},FirstTimestamp:2025-07-06 23:21:53.603704694 +0000 UTC m=+0.954496836,LastTimestamp:2025-07-06 23:21:53.603704694 +0000 UTC m=+0.954496836,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.1-a-cc9ddc1e95,}"
Jul 6 23:21:53.611418 kubelet[2818]: I0706 23:21:53.611382 2818 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 6 23:21:53.614103 kubelet[2818]: I0706 23:21:53.614049 2818 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 6 23:21:53.616640 kubelet[2818]: I0706 23:21:53.615965 2818 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 6 23:21:53.616640 kubelet[2818]: E0706 23:21:53.616103 2818 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-cc9ddc1e95\" not found"
Jul 6 23:21:53.616998 kubelet[2818]: I0706 23:21:53.616972 2818 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 6 23:21:53.617040 kubelet[2818]: I0706 23:21:53.617036 2818 reconciler.go:26] "Reconciler: start to sync state"
Jul 6 23:21:53.618213 kubelet[2818]: E0706 23:21:53.618164 2818 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 6 23:21:53.618314 kubelet[2818]: E0706 23:21:53.618259 2818 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.1-a-cc9ddc1e95?timeout=10s\": dial tcp 10.200.20.19:6443: connect: connection refused" interval="200ms"
Jul 6 23:21:53.618443 kubelet[2818]: E0706 23:21:53.618410 2818 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 6 23:21:53.620592 kubelet[2818]: I0706 23:21:53.620551 2818 factory.go:223] Registration of the systemd container factory successfully
Jul 6 23:21:53.620716 kubelet[2818]: I0706 23:21:53.620685 2818 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 6 23:21:53.622553 kubelet[2818]: I0706 23:21:53.622526 2818 factory.go:223] Registration of the containerd container factory successfully
Jul 6 23:21:53.643471 kubelet[2818]: I0706 23:21:53.643180 2818 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 6 23:21:53.643471 kubelet[2818]: I0706 23:21:53.643209 2818 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 6 23:21:53.643471 kubelet[2818]: I0706 23:21:53.643231 2818 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:21:53.651996 kubelet[2818]: I0706 23:21:53.651968 2818 policy_none.go:49] "None policy: Start"
Jul 6 23:21:53.652410 kubelet[2818]: I0706 23:21:53.652137 2818 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 6 23:21:53.652410 kubelet[2818]: I0706 23:21:53.652161 2818 state_mem.go:35] "Initializing new in-memory state store"
Jul 6 23:21:53.652618 kubelet[2818]: I0706 23:21:53.652575 2818 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 6 23:21:53.653703 kubelet[2818]: I0706 23:21:53.653641 2818 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 6 23:21:53.653703 kubelet[2818]: I0706 23:21:53.653662 2818 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 6 23:21:53.653703 kubelet[2818]: I0706 23:21:53.653684 2818 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 6 23:21:53.653703 kubelet[2818]: I0706 23:21:53.653699 2818 kubelet.go:2436] "Starting kubelet main sync loop" Jul 6 23:21:53.653922 kubelet[2818]: E0706 23:21:53.653837 2818 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:21:53.657714 kubelet[2818]: E0706 23:21:53.657638 2818 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 6 23:21:53.664303 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 6 23:21:53.673326 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 6 23:21:53.677269 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 6 23:21:53.688089 kubelet[2818]: E0706 23:21:53.688055 2818 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 6 23:21:53.688436 kubelet[2818]: I0706 23:21:53.688355 2818 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:21:53.688436 kubelet[2818]: I0706 23:21:53.688366 2818 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:21:53.689829 kubelet[2818]: I0706 23:21:53.688653 2818 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:21:53.690150 kubelet[2818]: E0706 23:21:53.690115 2818 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 6 23:21:53.690624 kubelet[2818]: E0706 23:21:53.690175 2818 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.2.1-a-cc9ddc1e95\" not found" Jul 6 23:21:53.768095 systemd[1]: Created slice kubepods-burstable-podb9ff3cb0d23c787a878355bee6365032.slice - libcontainer container kubepods-burstable-podb9ff3cb0d23c787a878355bee6365032.slice. Jul 6 23:21:53.773768 kubelet[2818]: E0706 23:21:53.773680 2818 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-cc9ddc1e95\" not found" node="ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:21:53.782878 systemd[1]: Created slice kubepods-burstable-pod698a6827b5e79b7b6ef3770be330c87c.slice - libcontainer container kubepods-burstable-pod698a6827b5e79b7b6ef3770be330c87c.slice. Jul 6 23:21:53.792600 kubelet[2818]: I0706 23:21:53.792387 2818 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:21:53.793412 kubelet[2818]: E0706 23:21:53.793184 2818 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-cc9ddc1e95\" not found" node="ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:21:53.793492 kubelet[2818]: E0706 23:21:53.793454 2818 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.19:6443/api/v1/nodes\": dial tcp 10.200.20.19:6443: connect: connection refused" node="ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:21:53.797462 systemd[1]: Created slice kubepods-burstable-pod2fab4cd439cbc1ba4e9c54bc2d06c99d.slice - libcontainer container kubepods-burstable-pod2fab4cd439cbc1ba4e9c54bc2d06c99d.slice. 
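The `Created slice` entries above show the naming pattern kubelet uses for per-pod systemd cgroup slices: the QoS class and the pod UID are concatenated under the `kubepods.slice` hierarchy. A sketch of the observed pattern (this mirrors the names in the log, not the kubelet's actual implementation, which also escapes characters systemd disallows):

```python
def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    """Reproduce the slice-name pattern observed in this log:
    kubepods-<qos>-pod<uid>.slice (hypothetical helper, not kubelet code)."""
    return f"kubepods-{qos_class}-pod{pod_uid}.slice"

print(pod_slice_name("burstable", "b9ff3cb0d23c787a878355bee6365032"))
# kubepods-burstable-podb9ff3cb0d23c787a878355bee6365032.slice
```

Static control-plane pods are Burstable, which is why all three (scheduler, apiserver, controller-manager) land under `kubepods-burstable.slice`.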
Jul 6 23:21:53.799306 kubelet[2818]: E0706 23:21:53.799270 2818 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-cc9ddc1e95\" not found" node="ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:21:53.818936 kubelet[2818]: I0706 23:21:53.818834 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/698a6827b5e79b7b6ef3770be330c87c-ca-certs\") pod \"kube-apiserver-ci-4230.2.1-a-cc9ddc1e95\" (UID: \"698a6827b5e79b7b6ef3770be330c87c\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:21:53.819482 kubelet[2818]: I0706 23:21:53.819451 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/698a6827b5e79b7b6ef3770be330c87c-k8s-certs\") pod \"kube-apiserver-ci-4230.2.1-a-cc9ddc1e95\" (UID: \"698a6827b5e79b7b6ef3770be330c87c\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:21:53.819482 kubelet[2818]: I0706 23:21:53.819491 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/698a6827b5e79b7b6ef3770be330c87c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.1-a-cc9ddc1e95\" (UID: \"698a6827b5e79b7b6ef3770be330c87c\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:21:53.819482 kubelet[2818]: I0706 23:21:53.819515 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2fab4cd439cbc1ba4e9c54bc2d06c99d-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.1-a-cc9ddc1e95\" (UID: \"2fab4cd439cbc1ba4e9c54bc2d06c99d\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:21:53.819706 kubelet[2818]: I0706 23:21:53.819538 2818 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b9ff3cb0d23c787a878355bee6365032-kubeconfig\") pod \"kube-scheduler-ci-4230.2.1-a-cc9ddc1e95\" (UID: \"b9ff3cb0d23c787a878355bee6365032\") " pod="kube-system/kube-scheduler-ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:21:53.819706 kubelet[2818]: I0706 23:21:53.819552 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2fab4cd439cbc1ba4e9c54bc2d06c99d-ca-certs\") pod \"kube-controller-manager-ci-4230.2.1-a-cc9ddc1e95\" (UID: \"2fab4cd439cbc1ba4e9c54bc2d06c99d\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:21:53.819706 kubelet[2818]: I0706 23:21:53.819567 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2fab4cd439cbc1ba4e9c54bc2d06c99d-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.1-a-cc9ddc1e95\" (UID: \"2fab4cd439cbc1ba4e9c54bc2d06c99d\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:21:53.819706 kubelet[2818]: I0706 23:21:53.819581 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2fab4cd439cbc1ba4e9c54bc2d06c99d-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.1-a-cc9ddc1e95\" (UID: \"2fab4cd439cbc1ba4e9c54bc2d06c99d\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:21:53.819706 kubelet[2818]: E0706 23:21:53.819118 2818 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.1-a-cc9ddc1e95?timeout=10s\": dial tcp 10.200.20.19:6443: connect: connection refused" interval="400ms" Jul 6 
23:21:53.819843 kubelet[2818]: I0706 23:21:53.819597 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2fab4cd439cbc1ba4e9c54bc2d06c99d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.1-a-cc9ddc1e95\" (UID: \"2fab4cd439cbc1ba4e9c54bc2d06c99d\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:21:53.995572 kubelet[2818]: I0706 23:21:53.995524 2818 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:21:53.995935 kubelet[2818]: E0706 23:21:53.995903 2818 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.19:6443/api/v1/nodes\": dial tcp 10.200.20.19:6443: connect: connection refused" node="ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:21:54.077414 containerd[1744]: time="2025-07-06T23:21:54.077273557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.1-a-cc9ddc1e95,Uid:b9ff3cb0d23c787a878355bee6365032,Namespace:kube-system,Attempt:0,}" Jul 6 23:21:54.094646 containerd[1744]: time="2025-07-06T23:21:54.094542773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.1-a-cc9ddc1e95,Uid:698a6827b5e79b7b6ef3770be330c87c,Namespace:kube-system,Attempt:0,}" Jul 6 23:21:54.100496 containerd[1744]: time="2025-07-06T23:21:54.100217485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.1-a-cc9ddc1e95,Uid:2fab4cd439cbc1ba4e9c54bc2d06c99d,Namespace:kube-system,Attempt:0,}" Jul 6 23:21:54.220495 kubelet[2818]: E0706 23:21:54.220453 2818 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.1-a-cc9ddc1e95?timeout=10s\": dial tcp 10.200.20.19:6443: connect: connection refused" interval="800ms" Jul 6 
23:21:54.398069 kubelet[2818]: I0706 23:21:54.397948 2818 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:21:54.398796 kubelet[2818]: E0706 23:21:54.398722 2818 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.19:6443/api/v1/nodes\": dial tcp 10.200.20.19:6443: connect: connection refused" node="ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:21:54.547007 kubelet[2818]: E0706 23:21:54.546960 2818 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.1-a-cc9ddc1e95&limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 6 23:21:54.562729 kubelet[2818]: E0706 23:21:54.562690 2818 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 6 23:21:54.609124 kubelet[2818]: E0706 23:21:54.609088 2818 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 6 23:21:54.828678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2142370056.mount: Deactivated successfully. 
Jul 6 23:21:54.834380 kubelet[2818]: E0706 23:21:54.834329 2818 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 6 23:21:54.858358 containerd[1744]: time="2025-07-06T23:21:54.858296873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:21:54.872051 containerd[1744]: time="2025-07-06T23:21:54.871982294Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jul 6 23:21:54.875821 containerd[1744]: time="2025-07-06T23:21:54.875781009Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:21:54.879758 containerd[1744]: time="2025-07-06T23:21:54.878725205Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:21:54.885519 containerd[1744]: time="2025-07-06T23:21:54.885454395Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:21:54.892678 containerd[1744]: time="2025-07-06T23:21:54.892636466Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:21:54.897352 containerd[1744]: time="2025-07-06T23:21:54.897302619Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:21:54.899631 containerd[1744]: time="2025-07-06T23:21:54.899349496Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 821.990899ms" Jul 6 23:21:54.900511 containerd[1744]: time="2025-07-06T23:21:54.900156175Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:21:54.911650 containerd[1744]: time="2025-07-06T23:21:54.911604479Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 811.306274ms" Jul 6 23:21:54.918834 containerd[1744]: time="2025-07-06T23:21:54.918782949Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 824.154056ms" Jul 6 23:21:55.021971 kubelet[2818]: E0706 23:21:55.021911 2818 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.1-a-cc9ddc1e95?timeout=10s\": dial tcp 10.200.20.19:6443: connect: connection refused" interval="1.6s" Jul 6 23:21:55.201324 
kubelet[2818]: I0706 23:21:55.200886 2818 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:21:55.201324 kubelet[2818]: E0706 23:21:55.201215 2818 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.19:6443/api/v1/nodes\": dial tcp 10.200.20.19:6443: connect: connection refused" node="ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:21:55.569818 containerd[1744]: time="2025-07-06T23:21:55.569605686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:21:55.569818 containerd[1744]: time="2025-07-06T23:21:55.569686566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:21:55.569818 containerd[1744]: time="2025-07-06T23:21:55.569703046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:21:55.570451 containerd[1744]: time="2025-07-06T23:21:55.570019125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:21:55.575755 containerd[1744]: time="2025-07-06T23:21:55.575357358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:21:55.575755 containerd[1744]: time="2025-07-06T23:21:55.575420118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:21:55.575755 containerd[1744]: time="2025-07-06T23:21:55.575431918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:21:55.575755 containerd[1744]: time="2025-07-06T23:21:55.575507518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:21:55.580293 containerd[1744]: time="2025-07-06T23:21:55.579913512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:21:55.580293 containerd[1744]: time="2025-07-06T23:21:55.579975032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:21:55.580293 containerd[1744]: time="2025-07-06T23:21:55.579985712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:21:55.580293 containerd[1744]: time="2025-07-06T23:21:55.580060072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:21:55.615054 systemd[1]: Started cri-containerd-6de62b16ac118922d93726b1bde77e6b0ffc7b09eb8944a85c6dc0f95dd3e2cb.scope - libcontainer container 6de62b16ac118922d93726b1bde77e6b0ffc7b09eb8944a85c6dc0f95dd3e2cb. Jul 6 23:21:55.617512 systemd[1]: Started cri-containerd-d22f8c4815763138dbd3a67b6fd87674ffdb72a2d90f582b5ca1025c412fae09.scope - libcontainer container d22f8c4815763138dbd3a67b6fd87674ffdb72a2d90f582b5ca1025c412fae09. Jul 6 23:21:55.620390 systemd[1]: Started cri-containerd-f49557753f0ec1ffd72593b94b29cf3c34ee96e2acf966e99e76b2ffb144c3c9.scope - libcontainer container f49557753f0ec1ffd72593b94b29cf3c34ee96e2acf966e99e76b2ffb144c3c9. 
Jul 6 23:21:55.675547 kubelet[2818]: E0706 23:21:55.675468 2818 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 6 23:21:55.682094 containerd[1744]: time="2025-07-06T23:21:55.681963730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.1-a-cc9ddc1e95,Uid:b9ff3cb0d23c787a878355bee6365032,Namespace:kube-system,Attempt:0,} returns sandbox id \"d22f8c4815763138dbd3a67b6fd87674ffdb72a2d90f582b5ca1025c412fae09\"" Jul 6 23:21:55.682691 containerd[1744]: time="2025-07-06T23:21:55.682023090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.1-a-cc9ddc1e95,Uid:2fab4cd439cbc1ba4e9c54bc2d06c99d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f49557753f0ec1ffd72593b94b29cf3c34ee96e2acf966e99e76b2ffb144c3c9\"" Jul 6 23:21:55.686028 containerd[1744]: time="2025-07-06T23:21:55.685917165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.1-a-cc9ddc1e95,Uid:698a6827b5e79b7b6ef3770be330c87c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6de62b16ac118922d93726b1bde77e6b0ffc7b09eb8944a85c6dc0f95dd3e2cb\"" Jul 6 23:21:55.705073 containerd[1744]: time="2025-07-06T23:21:55.705028578Z" level=info msg="CreateContainer within sandbox \"d22f8c4815763138dbd3a67b6fd87674ffdb72a2d90f582b5ca1025c412fae09\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:21:55.762905 containerd[1744]: time="2025-07-06T23:21:55.762858299Z" level=info msg="CreateContainer within sandbox \"f49557753f0ec1ffd72593b94b29cf3c34ee96e2acf966e99e76b2ffb144c3c9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:21:55.770651 
containerd[1744]: time="2025-07-06T23:21:55.770579968Z" level=info msg="CreateContainer within sandbox \"6de62b16ac118922d93726b1bde77e6b0ffc7b09eb8944a85c6dc0f95dd3e2cb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:21:56.295883 kubelet[2818]: E0706 23:21:56.295824 2818 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 6 23:21:56.570846 containerd[1744]: time="2025-07-06T23:21:56.570184067Z" level=info msg="CreateContainer within sandbox \"d22f8c4815763138dbd3a67b6fd87674ffdb72a2d90f582b5ca1025c412fae09\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"12529c3e7af858fb23681ac7a9bf80c4f60522a8c5d6c4ced3b90d4010cde6b5\"" Jul 6 23:21:56.571152 containerd[1744]: time="2025-07-06T23:21:56.570940826Z" level=info msg="StartContainer for \"12529c3e7af858fb23681ac7a9bf80c4f60522a8c5d6c4ced3b90d4010cde6b5\"" Jul 6 23:21:56.586630 containerd[1744]: time="2025-07-06T23:21:56.586355085Z" level=info msg="CreateContainer within sandbox \"f49557753f0ec1ffd72593b94b29cf3c34ee96e2acf966e99e76b2ffb144c3c9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bdbd1185ec1f88817b44a73214b8bd34254e1c464637c00d74fbf368b61c3611\"" Jul 6 23:21:56.588683 containerd[1744]: time="2025-07-06T23:21:56.588407602Z" level=info msg="StartContainer for \"bdbd1185ec1f88817b44a73214b8bd34254e1c464637c00d74fbf368b61c3611\"" Jul 6 23:21:56.612938 systemd[1]: Started cri-containerd-12529c3e7af858fb23681ac7a9bf80c4f60522a8c5d6c4ced3b90d4010cde6b5.scope - libcontainer container 12529c3e7af858fb23681ac7a9bf80c4f60522a8c5d6c4ced3b90d4010cde6b5. 
Jul 6 23:21:56.624260 kubelet[2818]: E0706 23:21:56.624214 2818 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.1-a-cc9ddc1e95?timeout=10s\": dial tcp 10.200.20.19:6443: connect: connection refused" interval="3.2s" Jul 6 23:21:56.627401 containerd[1744]: time="2025-07-06T23:21:56.627352068Z" level=info msg="CreateContainer within sandbox \"6de62b16ac118922d93726b1bde77e6b0ffc7b09eb8944a85c6dc0f95dd3e2cb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"394a6b43c80c04b1e7d5d868eda450bf939250c49cdf28f7b766d2b9c6f0fb2b\"" Jul 6 23:21:56.628172 containerd[1744]: time="2025-07-06T23:21:56.628093067Z" level=info msg="StartContainer for \"394a6b43c80c04b1e7d5d868eda450bf939250c49cdf28f7b766d2b9c6f0fb2b\"" Jul 6 23:21:56.634050 systemd[1]: Started cri-containerd-bdbd1185ec1f88817b44a73214b8bd34254e1c464637c00d74fbf368b61c3611.scope - libcontainer container bdbd1185ec1f88817b44a73214b8bd34254e1c464637c00d74fbf368b61c3611. Jul 6 23:21:56.663949 systemd[1]: Started cri-containerd-394a6b43c80c04b1e7d5d868eda450bf939250c49cdf28f7b766d2b9c6f0fb2b.scope - libcontainer container 394a6b43c80c04b1e7d5d868eda450bf939250c49cdf28f7b766d2b9c6f0fb2b. 
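The repeated "Failed to ensure lease exists, will retry" entries show the retry interval doubling while the apiserver at 10.200.20.19:6443 stays unreachable: 400ms, 800ms, 1.6s, then 3.2s. A minimal sketch of that backoff, assuming a simple doubling policy as the logged intervals suggest:

```python
def lease_retry_intervals(initial: float = 0.4, factor: int = 2, attempts: int = 4):
    """Yield doubling retry intervals matching those observed in this log
    (0.4s, 0.8s, 1.6s, 3.2s); a sketch, not the actual controller code."""
    interval = initial
    for _ in range(attempts):
        yield interval
        interval *= factor

print(list(lease_retry_intervals()))  # [0.4, 0.8, 1.6, 3.2]
```

Once the apiserver container started below comes up and the node registers (at 23:22:00), the lease errors stop.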
Jul 6 23:21:56.682432 containerd[1744]: time="2025-07-06T23:21:56.682390873Z" level=info msg="StartContainer for \"12529c3e7af858fb23681ac7a9bf80c4f60522a8c5d6c4ced3b90d4010cde6b5\" returns successfully" Jul 6 23:21:56.688957 kubelet[2818]: E0706 23:21:56.688776 2818 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.19:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.19:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.1-a-cc9ddc1e95.184fccf08d353776 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.1-a-cc9ddc1e95,UID:ci-4230.2.1-a-cc9ddc1e95,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.1-a-cc9ddc1e95,},FirstTimestamp:2025-07-06 23:21:53.603704694 +0000 UTC m=+0.954496836,LastTimestamp:2025-07-06 23:21:53.603704694 +0000 UTC m=+0.954496836,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.1-a-cc9ddc1e95,}" Jul 6 23:21:56.713051 containerd[1744]: time="2025-07-06T23:21:56.713000590Z" level=info msg="StartContainer for \"bdbd1185ec1f88817b44a73214b8bd34254e1c464637c00d74fbf368b61c3611\" returns successfully" Jul 6 23:21:56.742987 containerd[1744]: time="2025-07-06T23:21:56.742944509Z" level=info msg="StartContainer for \"394a6b43c80c04b1e7d5d868eda450bf939250c49cdf28f7b766d2b9c6f0fb2b\" returns successfully" Jul 6 23:21:56.786115 kubelet[2818]: E0706 23:21:56.786016 2818 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.1-a-cc9ddc1e95&limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 6 
23:21:56.803779 kubelet[2818]: I0706 23:21:56.803749 2818 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:21:56.804916 kubelet[2818]: E0706 23:21:56.804875 2818 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.19:6443/api/v1/nodes\": dial tcp 10.200.20.19:6443: connect: connection refused" node="ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:21:57.690776 kubelet[2818]: E0706 23:21:57.689838 2818 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-cc9ddc1e95\" not found" node="ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:21:57.695610 kubelet[2818]: E0706 23:21:57.695074 2818 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-cc9ddc1e95\" not found" node="ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:21:57.695961 kubelet[2818]: E0706 23:21:57.695247 2818 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-cc9ddc1e95\" not found" node="ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:21:58.698433 kubelet[2818]: E0706 23:21:58.698090 2818 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-cc9ddc1e95\" not found" node="ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:21:58.700340 kubelet[2818]: E0706 23:21:58.700123 2818 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-cc9ddc1e95\" not found" node="ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:21:58.700340 kubelet[2818]: E0706 23:21:58.700211 2818 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-cc9ddc1e95\" not found" node="ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:21:59.368624 kubelet[2818]: E0706 23:21:59.368583 2818 csi_plugin.go:397] Failed to initialize 
CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4230.2.1-a-cc9ddc1e95" not found Jul 6 23:21:59.699416 kubelet[2818]: E0706 23:21:59.699302 2818 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-cc9ddc1e95\" not found" node="ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:21:59.700768 kubelet[2818]: E0706 23:21:59.699904 2818 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-cc9ddc1e95\" not found" node="ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:21:59.732092 kubelet[2818]: E0706 23:21:59.732030 2818 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4230.2.1-a-cc9ddc1e95" not found Jul 6 23:21:59.834226 kubelet[2818]: E0706 23:21:59.834063 2818 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.2.1-a-cc9ddc1e95\" not found" node="ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:22:00.008225 kubelet[2818]: I0706 23:22:00.007344 2818 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:22:00.022237 kubelet[2818]: I0706 23:22:00.022086 2818 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:22:00.022237 kubelet[2818]: E0706 23:22:00.022135 2818 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4230.2.1-a-cc9ddc1e95\": node \"ci-4230.2.1-a-cc9ddc1e95\" not found" Jul 6 23:22:00.040874 kubelet[2818]: E0706 23:22:00.040834 2818 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-cc9ddc1e95\" not found" Jul 6 23:22:00.141107 kubelet[2818]: E0706 23:22:00.141052 2818 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-cc9ddc1e95\" not 
found" Jul 6 23:22:00.242017 kubelet[2818]: E0706 23:22:00.241967 2818 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-cc9ddc1e95\" not found" Jul 6 23:22:00.342807 kubelet[2818]: E0706 23:22:00.342620 2818 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-cc9ddc1e95\" not found" Jul 6 23:22:00.443530 kubelet[2818]: E0706 23:22:00.443490 2818 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-cc9ddc1e95\" not found" Jul 6 23:22:00.543962 kubelet[2818]: E0706 23:22:00.543915 2818 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-cc9ddc1e95\" not found" Jul 6 23:22:00.644939 kubelet[2818]: E0706 23:22:00.644840 2818 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-cc9ddc1e95\" not found" Jul 6 23:22:00.700865 kubelet[2818]: E0706 23:22:00.700400 2818 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-cc9ddc1e95\" not found" node="ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:22:00.700865 kubelet[2818]: E0706 23:22:00.700711 2818 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-cc9ddc1e95\" not found" node="ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:22:00.745211 kubelet[2818]: E0706 23:22:00.745182 2818 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-cc9ddc1e95\" not found" Jul 6 23:22:00.845741 kubelet[2818]: E0706 23:22:00.845691 2818 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-cc9ddc1e95\" not found" Jul 6 23:22:00.946870 kubelet[2818]: E0706 23:22:00.946722 2818 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-cc9ddc1e95\" not found" Jul 6 
23:22:01.047588 kubelet[2818]: E0706 23:22:01.047542 2818 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-cc9ddc1e95\" not found" Jul 6 23:22:01.148478 kubelet[2818]: E0706 23:22:01.148434 2818 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-cc9ddc1e95\" not found" Jul 6 23:22:01.209885 systemd[1]: Reload requested from client PID 3167 ('systemctl') (unit session-9.scope)... Jul 6 23:22:01.209906 systemd[1]: Reloading... Jul 6 23:22:01.249135 kubelet[2818]: E0706 23:22:01.249088 2818 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-cc9ddc1e95\" not found" Jul 6 23:22:01.330826 zram_generator::config[3217]: No configuration found. Jul 6 23:22:01.350043 kubelet[2818]: E0706 23:22:01.349999 2818 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-cc9ddc1e95\" not found" Jul 6 23:22:01.442132 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:22:01.450638 kubelet[2818]: E0706 23:22:01.450592 2818 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-cc9ddc1e95\" not found" Jul 6 23:22:01.551848 kubelet[2818]: E0706 23:22:01.551801 2818 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-cc9ddc1e95\" not found" Jul 6 23:22:01.561606 systemd[1]: Reloading finished in 351 ms. Jul 6 23:22:01.588411 kubelet[2818]: I0706 23:22:01.588326 2818 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:22:01.591412 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:22:01.605087 systemd[1]: kubelet.service: Deactivated successfully. 
Jul 6 23:22:01.605358 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:22:01.605434 systemd[1]: kubelet.service: Consumed 1.252s CPU time, 125.3M memory peak. Jul 6 23:22:01.614336 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:22:01.729064 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:22:01.733471 (kubelet)[3278]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:22:02.150847 kubelet[3278]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:22:02.150847 kubelet[3278]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:22:02.150847 kubelet[3278]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 6 23:22:02.150847 kubelet[3278]: I0706 23:22:01.840571 3278 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:22:02.150847 kubelet[3278]: I0706 23:22:01.846898 3278 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 6 23:22:02.150847 kubelet[3278]: I0706 23:22:01.846923 3278 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:22:02.150847 kubelet[3278]: I0706 23:22:01.847155 3278 server.go:956] "Client rotation is on, will bootstrap in background" Jul 6 23:22:02.153642 kubelet[3278]: I0706 23:22:02.151981 3278 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 6 23:22:02.155216 kubelet[3278]: I0706 23:22:02.155172 3278 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:22:02.161510 kubelet[3278]: E0706 23:22:02.161460 3278 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:22:02.161510 kubelet[3278]: I0706 23:22:02.161506 3278 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:22:02.168301 kubelet[3278]: I0706 23:22:02.168232 3278 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:22:02.168713 kubelet[3278]: I0706 23:22:02.168502 3278 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:22:02.168713 kubelet[3278]: I0706 23:22:02.168536 3278 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.1-a-cc9ddc1e95","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:22:02.168713 kubelet[3278]: I0706 23:22:02.168705 3278 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 
23:22:02.168713 kubelet[3278]: I0706 23:22:02.168714 3278 container_manager_linux.go:303] "Creating device plugin manager" Jul 6 23:22:02.169110 kubelet[3278]: I0706 23:22:02.168791 3278 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:22:02.169110 kubelet[3278]: I0706 23:22:02.168933 3278 kubelet.go:480] "Attempting to sync node with API server" Jul 6 23:22:02.169110 kubelet[3278]: I0706 23:22:02.168948 3278 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:22:02.169110 kubelet[3278]: I0706 23:22:02.168972 3278 kubelet.go:386] "Adding apiserver pod source" Jul 6 23:22:02.169110 kubelet[3278]: I0706 23:22:02.168982 3278 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:22:02.172296 kubelet[3278]: I0706 23:22:02.172272 3278 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 6 23:22:02.173223 kubelet[3278]: I0706 23:22:02.173147 3278 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 6 23:22:02.178560 kubelet[3278]: I0706 23:22:02.178461 3278 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:22:02.178560 kubelet[3278]: I0706 23:22:02.178516 3278 server.go:1289] "Started kubelet" Jul 6 23:22:02.182575 kubelet[3278]: I0706 23:22:02.182419 3278 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:22:02.194248 kubelet[3278]: I0706 23:22:02.192968 3278 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:22:02.194248 kubelet[3278]: I0706 23:22:02.194070 3278 server.go:317] "Adding debug handlers to kubelet server" Jul 6 23:22:02.203211 kubelet[3278]: I0706 23:22:02.202445 3278 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:22:02.203211 kubelet[3278]: I0706 23:22:02.202805 3278 server.go:255] 
"Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:22:02.205775 kubelet[3278]: I0706 23:22:02.205282 3278 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:22:02.207853 kubelet[3278]: I0706 23:22:02.207816 3278 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:22:02.210500 kubelet[3278]: E0706 23:22:02.209094 3278 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-cc9ddc1e95\" not found" Jul 6 23:22:02.210500 kubelet[3278]: I0706 23:22:02.209799 3278 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:22:02.210500 kubelet[3278]: I0706 23:22:02.209950 3278 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:22:02.213437 kubelet[3278]: I0706 23:22:02.213403 3278 factory.go:223] Registration of the systemd container factory successfully Jul 6 23:22:02.213687 kubelet[3278]: I0706 23:22:02.213662 3278 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:22:02.216149 kubelet[3278]: I0706 23:22:02.216117 3278 factory.go:223] Registration of the containerd container factory successfully Jul 6 23:22:02.227093 kubelet[3278]: I0706 23:22:02.227039 3278 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 6 23:22:02.228107 kubelet[3278]: I0706 23:22:02.228080 3278 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jul 6 23:22:02.228107 kubelet[3278]: I0706 23:22:02.228108 3278 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 6 23:22:02.228176 kubelet[3278]: I0706 23:22:02.228129 3278 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 6 23:22:02.228176 kubelet[3278]: I0706 23:22:02.228135 3278 kubelet.go:2436] "Starting kubelet main sync loop" Jul 6 23:22:02.228222 kubelet[3278]: E0706 23:22:02.228180 3278 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:22:02.288926 kubelet[3278]: I0706 23:22:02.288064 3278 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:22:02.288926 kubelet[3278]: I0706 23:22:02.288088 3278 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:22:02.288926 kubelet[3278]: I0706 23:22:02.288108 3278 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:22:02.288926 kubelet[3278]: I0706 23:22:02.288243 3278 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:22:02.288926 kubelet[3278]: I0706 23:22:02.288254 3278 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:22:02.288926 kubelet[3278]: I0706 23:22:02.288273 3278 policy_none.go:49] "None policy: Start" Jul 6 23:22:02.288926 kubelet[3278]: I0706 23:22:02.288282 3278 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:22:02.288926 kubelet[3278]: I0706 23:22:02.288290 3278 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:22:02.288926 kubelet[3278]: I0706 23:22:02.288373 3278 state_mem.go:75] "Updated machine memory state" Jul 6 23:22:02.291280 sudo[3313]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 6 23:22:02.291670 sudo[3313]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 6 
23:22:02.294357 kubelet[3278]: E0706 23:22:02.293589 3278 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 6 23:22:02.294357 kubelet[3278]: I0706 23:22:02.293867 3278 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:22:02.294357 kubelet[3278]: I0706 23:22:02.293881 3278 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:22:02.294357 kubelet[3278]: I0706 23:22:02.294168 3278 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:22:02.297678 kubelet[3278]: E0706 23:22:02.297435 3278 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 6 23:22:02.329373 kubelet[3278]: I0706 23:22:02.329324 3278 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:22:02.330074 kubelet[3278]: I0706 23:22:02.329770 3278 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:22:02.330074 kubelet[3278]: I0706 23:22:02.330043 3278 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:22:02.343581 kubelet[3278]: I0706 23:22:02.343194 3278 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 6 23:22:02.345649 kubelet[3278]: I0706 23:22:02.345628 3278 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 6 23:22:02.346888 kubelet[3278]: I0706 23:22:02.346755 3278 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result 
in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 6 23:22:02.396553 kubelet[3278]: I0706 23:22:02.396520 3278 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:22:02.410636 kubelet[3278]: I0706 23:22:02.409407 3278 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:22:02.410636 kubelet[3278]: I0706 23:22:02.409492 3278 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:22:02.411244 kubelet[3278]: I0706 23:22:02.411029 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2fab4cd439cbc1ba4e9c54bc2d06c99d-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.1-a-cc9ddc1e95\" (UID: \"2fab4cd439cbc1ba4e9c54bc2d06c99d\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:22:02.411244 kubelet[3278]: I0706 23:22:02.411059 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2fab4cd439cbc1ba4e9c54bc2d06c99d-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.1-a-cc9ddc1e95\" (UID: \"2fab4cd439cbc1ba4e9c54bc2d06c99d\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:22:02.411244 kubelet[3278]: I0706 23:22:02.411078 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/698a6827b5e79b7b6ef3770be330c87c-ca-certs\") pod \"kube-apiserver-ci-4230.2.1-a-cc9ddc1e95\" (UID: \"698a6827b5e79b7b6ef3770be330c87c\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:22:02.411244 kubelet[3278]: I0706 23:22:02.411093 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/698a6827b5e79b7b6ef3770be330c87c-k8s-certs\") pod \"kube-apiserver-ci-4230.2.1-a-cc9ddc1e95\" (UID: \"698a6827b5e79b7b6ef3770be330c87c\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:22:02.411244 kubelet[3278]: I0706 23:22:02.411113 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/698a6827b5e79b7b6ef3770be330c87c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.1-a-cc9ddc1e95\" (UID: \"698a6827b5e79b7b6ef3770be330c87c\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:22:02.411397 kubelet[3278]: I0706 23:22:02.411128 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2fab4cd439cbc1ba4e9c54bc2d06c99d-ca-certs\") pod \"kube-controller-manager-ci-4230.2.1-a-cc9ddc1e95\" (UID: \"2fab4cd439cbc1ba4e9c54bc2d06c99d\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:22:02.411397 kubelet[3278]: I0706 23:22:02.411142 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2fab4cd439cbc1ba4e9c54bc2d06c99d-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.1-a-cc9ddc1e95\" (UID: \"2fab4cd439cbc1ba4e9c54bc2d06c99d\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:22:02.411397 kubelet[3278]: I0706 23:22:02.411157 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2fab4cd439cbc1ba4e9c54bc2d06c99d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.1-a-cc9ddc1e95\" (UID: \"2fab4cd439cbc1ba4e9c54bc2d06c99d\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-cc9ddc1e95" Jul 6 
23:22:02.411397 kubelet[3278]: I0706 23:22:02.411172 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b9ff3cb0d23c787a878355bee6365032-kubeconfig\") pod \"kube-scheduler-ci-4230.2.1-a-cc9ddc1e95\" (UID: \"b9ff3cb0d23c787a878355bee6365032\") " pod="kube-system/kube-scheduler-ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:22:02.767513 sudo[3313]: pam_unix(sudo:session): session closed for user root Jul 6 23:22:03.169692 kubelet[3278]: I0706 23:22:03.169566 3278 apiserver.go:52] "Watching apiserver" Jul 6 23:22:03.210875 kubelet[3278]: I0706 23:22:03.210817 3278 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:22:03.273603 kubelet[3278]: I0706 23:22:03.273554 3278 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:22:03.290948 kubelet[3278]: I0706 23:22:03.290895 3278 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 6 23:22:03.292848 kubelet[3278]: E0706 23:22:03.292820 3278 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.1-a-cc9ddc1e95\" already exists" pod="kube-system/kube-scheduler-ci-4230.2.1-a-cc9ddc1e95" Jul 6 23:22:03.305760 kubelet[3278]: I0706 23:22:03.305667 3278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.2.1-a-cc9ddc1e95" podStartSLOduration=1.305651313 podStartE2EDuration="1.305651313s" podCreationTimestamp="2025-07-06 23:22:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:22:03.305546953 +0000 UTC m=+1.568748881" watchObservedRunningTime="2025-07-06 23:22:03.305651313 +0000 UTC m=+1.568853201" Jul 6 
23:22:03.340900 kubelet[3278]: I0706 23:22:03.340836 3278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.2.1-a-cc9ddc1e95" podStartSLOduration=1.340817785 podStartE2EDuration="1.340817785s" podCreationTimestamp="2025-07-06 23:22:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:22:03.327541483 +0000 UTC m=+1.590743371" watchObservedRunningTime="2025-07-06 23:22:03.340817785 +0000 UTC m=+1.604019713" Jul 6 23:22:03.354074 kubelet[3278]: I0706 23:22:03.354007 3278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.2.1-a-cc9ddc1e95" podStartSLOduration=1.353988647 podStartE2EDuration="1.353988647s" podCreationTimestamp="2025-07-06 23:22:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:22:03.341705504 +0000 UTC m=+1.604907392" watchObservedRunningTime="2025-07-06 23:22:03.353988647 +0000 UTC m=+1.617190575" Jul 6 23:22:04.929809 sudo[2243]: pam_unix(sudo:session): session closed for user root Jul 6 23:22:05.001039 sshd[2242]: Connection closed by 10.200.16.10 port 53050 Jul 6 23:22:05.001631 sshd-session[2240]: pam_unix(sshd:session): session closed for user core Jul 6 23:22:05.005448 systemd[1]: sshd@6-10.200.20.19:22-10.200.16.10:53050.service: Deactivated successfully. Jul 6 23:22:05.007531 systemd[1]: session-9.scope: Deactivated successfully. Jul 6 23:22:05.007714 systemd[1]: session-9.scope: Consumed 7.400s CPU time, 264.1M memory peak. Jul 6 23:22:05.009128 systemd-logind[1707]: Session 9 logged out. Waiting for processes to exit. Jul 6 23:22:05.010272 systemd-logind[1707]: Removed session 9. 
Jul 6 23:22:07.586411 kubelet[3278]: I0706 23:22:07.586370 3278 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:22:07.587376 containerd[1744]: time="2025-07-06T23:22:07.587100048Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:22:07.587777 kubelet[3278]: I0706 23:22:07.587573 3278 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:22:08.665770 systemd[1]: Created slice kubepods-besteffort-pod29ed82a9_84da_4a2d_8cd8_4783ed5df787.slice - libcontainer container kubepods-besteffort-pod29ed82a9_84da_4a2d_8cd8_4783ed5df787.slice. Jul 6 23:22:08.682335 systemd[1]: Created slice kubepods-burstable-podafa2f1e9_0bbc_4679_9540_5ff5a5f68490.slice - libcontainer container kubepods-burstable-podafa2f1e9_0bbc_4679_9540_5ff5a5f68490.slice. Jul 6 23:22:08.751941 kubelet[3278]: I0706 23:22:08.751838 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29ed82a9-84da-4a2d-8cd8-4783ed5df787-lib-modules\") pod \"kube-proxy-bb2sx\" (UID: \"29ed82a9-84da-4a2d-8cd8-4783ed5df787\") " pod="kube-system/kube-proxy-bb2sx" Jul 6 23:22:08.751941 kubelet[3278]: I0706 23:22:08.751884 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-bpf-maps\") pod \"cilium-bj9pm\" (UID: \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\") " pod="kube-system/cilium-bj9pm" Jul 6 23:22:08.751941 kubelet[3278]: I0706 23:22:08.751903 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-clustermesh-secrets\") pod \"cilium-bj9pm\" (UID: 
\"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\") " pod="kube-system/cilium-bj9pm" Jul 6 23:22:08.751941 kubelet[3278]: I0706 23:22:08.751944 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-host-proc-sys-net\") pod \"cilium-bj9pm\" (UID: \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\") " pod="kube-system/cilium-bj9pm" Jul 6 23:22:08.752373 kubelet[3278]: I0706 23:22:08.751964 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/29ed82a9-84da-4a2d-8cd8-4783ed5df787-xtables-lock\") pod \"kube-proxy-bb2sx\" (UID: \"29ed82a9-84da-4a2d-8cd8-4783ed5df787\") " pod="kube-system/kube-proxy-bb2sx" Jul 6 23:22:08.752373 kubelet[3278]: I0706 23:22:08.751978 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frdbq\" (UniqueName: \"kubernetes.io/projected/29ed82a9-84da-4a2d-8cd8-4783ed5df787-kube-api-access-frdbq\") pod \"kube-proxy-bb2sx\" (UID: \"29ed82a9-84da-4a2d-8cd8-4783ed5df787\") " pod="kube-system/kube-proxy-bb2sx" Jul 6 23:22:08.752373 kubelet[3278]: I0706 23:22:08.751997 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-hostproc\") pod \"cilium-bj9pm\" (UID: \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\") " pod="kube-system/cilium-bj9pm" Jul 6 23:22:08.752373 kubelet[3278]: I0706 23:22:08.752013 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-hubble-tls\") pod \"cilium-bj9pm\" (UID: \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\") " pod="kube-system/cilium-bj9pm" Jul 6 23:22:08.752373 kubelet[3278]: I0706 
23:22:08.752031 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsf4s\" (UniqueName: \"kubernetes.io/projected/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-kube-api-access-xsf4s\") pod \"cilium-bj9pm\" (UID: \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\") " pod="kube-system/cilium-bj9pm" Jul 6 23:22:08.752373 kubelet[3278]: I0706 23:22:08.752046 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-cilium-run\") pod \"cilium-bj9pm\" (UID: \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\") " pod="kube-system/cilium-bj9pm" Jul 6 23:22:08.752511 kubelet[3278]: I0706 23:22:08.752061 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-cilium-cgroup\") pod \"cilium-bj9pm\" (UID: \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\") " pod="kube-system/cilium-bj9pm" Jul 6 23:22:08.752511 kubelet[3278]: I0706 23:22:08.752085 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-etc-cni-netd\") pod \"cilium-bj9pm\" (UID: \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\") " pod="kube-system/cilium-bj9pm" Jul 6 23:22:08.752511 kubelet[3278]: I0706 23:22:08.752099 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-xtables-lock\") pod \"cilium-bj9pm\" (UID: \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\") " pod="kube-system/cilium-bj9pm" Jul 6 23:22:08.752511 kubelet[3278]: I0706 23:22:08.752114 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-cilium-config-path\") pod \"cilium-bj9pm\" (UID: \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\") " pod="kube-system/cilium-bj9pm" Jul 6 23:22:08.752511 kubelet[3278]: I0706 23:22:08.752130 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-host-proc-sys-kernel\") pod \"cilium-bj9pm\" (UID: \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\") " pod="kube-system/cilium-bj9pm" Jul 6 23:22:08.752511 kubelet[3278]: I0706 23:22:08.752150 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/29ed82a9-84da-4a2d-8cd8-4783ed5df787-kube-proxy\") pod \"kube-proxy-bb2sx\" (UID: \"29ed82a9-84da-4a2d-8cd8-4783ed5df787\") " pod="kube-system/kube-proxy-bb2sx" Jul 6 23:22:08.752628 kubelet[3278]: I0706 23:22:08.752165 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-cni-path\") pod \"cilium-bj9pm\" (UID: \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\") " pod="kube-system/cilium-bj9pm" Jul 6 23:22:08.752628 kubelet[3278]: I0706 23:22:08.752178 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-lib-modules\") pod \"cilium-bj9pm\" (UID: \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\") " pod="kube-system/cilium-bj9pm" Jul 6 23:22:08.815830 systemd[1]: Created slice kubepods-besteffort-pod79787c53_f9ea_41cb_a146_26572ac15ac1.slice - libcontainer container kubepods-besteffort-pod79787c53_f9ea_41cb_a146_26572ac15ac1.slice. 
Jul 6 23:22:08.854789 kubelet[3278]: I0706 23:22:08.853456 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd6b9\" (UniqueName: \"kubernetes.io/projected/79787c53-f9ea-41cb-a146-26572ac15ac1-kube-api-access-bd6b9\") pod \"cilium-operator-6c4d7847fc-7cjvf\" (UID: \"79787c53-f9ea-41cb-a146-26572ac15ac1\") " pod="kube-system/cilium-operator-6c4d7847fc-7cjvf" Jul 6 23:22:08.854789 kubelet[3278]: I0706 23:22:08.853702 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/79787c53-f9ea-41cb-a146-26572ac15ac1-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-7cjvf\" (UID: \"79787c53-f9ea-41cb-a146-26572ac15ac1\") " pod="kube-system/cilium-operator-6c4d7847fc-7cjvf" Jul 6 23:22:08.978935 containerd[1744]: time="2025-07-06T23:22:08.978814849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bb2sx,Uid:29ed82a9-84da-4a2d-8cd8-4783ed5df787,Namespace:kube-system,Attempt:0,}" Jul 6 23:22:08.988692 containerd[1744]: time="2025-07-06T23:22:08.988643038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bj9pm,Uid:afa2f1e9-0bbc-4679-9540-5ff5a5f68490,Namespace:kube-system,Attempt:0,}" Jul 6 23:22:09.026229 containerd[1744]: time="2025-07-06T23:22:09.026110438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:22:09.026487 containerd[1744]: time="2025-07-06T23:22:09.026238837Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:22:09.026830 containerd[1744]: time="2025-07-06T23:22:09.026271237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:22:09.026830 containerd[1744]: time="2025-07-06T23:22:09.026763837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:22:09.046054 systemd[1]: Started cri-containerd-393069dafe9683285310c520e0eea2066ea5bef682b3da25529a3b17b7938414.scope - libcontainer container 393069dafe9683285310c520e0eea2066ea5bef682b3da25529a3b17b7938414. Jul 6 23:22:09.048478 containerd[1744]: time="2025-07-06T23:22:09.048041974Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:22:09.048478 containerd[1744]: time="2025-07-06T23:22:09.048106454Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:22:09.048478 containerd[1744]: time="2025-07-06T23:22:09.048121973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:22:09.048478 containerd[1744]: time="2025-07-06T23:22:09.048205413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:22:09.071013 systemd[1]: Started cri-containerd-e27a40f159914c1a19dc26ab463a0a9d24b2bc8b0dd9744476e9a66377438b55.scope - libcontainer container e27a40f159914c1a19dc26ab463a0a9d24b2bc8b0dd9744476e9a66377438b55. 
Jul 6 23:22:09.085008 containerd[1744]: time="2025-07-06T23:22:09.084961653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bb2sx,Uid:29ed82a9-84da-4a2d-8cd8-4783ed5df787,Namespace:kube-system,Attempt:0,} returns sandbox id \"393069dafe9683285310c520e0eea2066ea5bef682b3da25529a3b17b7938414\"" Jul 6 23:22:09.098216 containerd[1744]: time="2025-07-06T23:22:09.097166160Z" level=info msg="CreateContainer within sandbox \"393069dafe9683285310c520e0eea2066ea5bef682b3da25529a3b17b7938414\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:22:09.104658 containerd[1744]: time="2025-07-06T23:22:09.104578712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bj9pm,Uid:afa2f1e9-0bbc-4679-9540-5ff5a5f68490,Namespace:kube-system,Attempt:0,} returns sandbox id \"e27a40f159914c1a19dc26ab463a0a9d24b2bc8b0dd9744476e9a66377438b55\"" Jul 6 23:22:09.107383 containerd[1744]: time="2025-07-06T23:22:09.106994069Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 6 23:22:09.120622 containerd[1744]: time="2025-07-06T23:22:09.120330215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-7cjvf,Uid:79787c53-f9ea-41cb-a146-26572ac15ac1,Namespace:kube-system,Attempt:0,}" Jul 6 23:22:09.191874 containerd[1744]: time="2025-07-06T23:22:09.190909138Z" level=info msg="CreateContainer within sandbox \"393069dafe9683285310c520e0eea2066ea5bef682b3da25529a3b17b7938414\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1bc5cfe248c457c2d21dce462f4d9ef5963da4d32d9235b42405e8f4a360b063\"" Jul 6 23:22:09.191874 containerd[1744]: time="2025-07-06T23:22:09.191817697Z" level=info msg="StartContainer for \"1bc5cfe248c457c2d21dce462f4d9ef5963da4d32d9235b42405e8f4a360b063\"" Jul 6 23:22:09.207089 containerd[1744]: time="2025-07-06T23:22:09.206955240Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:22:09.207089 containerd[1744]: time="2025-07-06T23:22:09.207030680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:22:09.207089 containerd[1744]: time="2025-07-06T23:22:09.207041360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:22:09.207375 containerd[1744]: time="2025-07-06T23:22:09.207132880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:22:09.220683 systemd[1]: Started cri-containerd-1bc5cfe248c457c2d21dce462f4d9ef5963da4d32d9235b42405e8f4a360b063.scope - libcontainer container 1bc5cfe248c457c2d21dce462f4d9ef5963da4d32d9235b42405e8f4a360b063. Jul 6 23:22:09.229966 systemd[1]: Started cri-containerd-409967e382bb9d519c4f75cb8891c4e9b5995d0150957906a4b89eac438a405c.scope - libcontainer container 409967e382bb9d519c4f75cb8891c4e9b5995d0150957906a4b89eac438a405c. 
Jul 6 23:22:09.271096 containerd[1744]: time="2025-07-06T23:22:09.271040930Z" level=info msg="StartContainer for \"1bc5cfe248c457c2d21dce462f4d9ef5963da4d32d9235b42405e8f4a360b063\" returns successfully" Jul 6 23:22:09.279970 containerd[1744]: time="2025-07-06T23:22:09.279909281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-7cjvf,Uid:79787c53-f9ea-41cb-a146-26572ac15ac1,Namespace:kube-system,Attempt:0,} returns sandbox id \"409967e382bb9d519c4f75cb8891c4e9b5995d0150957906a4b89eac438a405c\"" Jul 6 23:22:09.327400 kubelet[3278]: I0706 23:22:09.327155 3278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bb2sx" podStartSLOduration=1.327133509 podStartE2EDuration="1.327133509s" podCreationTimestamp="2025-07-06 23:22:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:22:09.311437406 +0000 UTC m=+7.574639334" watchObservedRunningTime="2025-07-06 23:22:09.327133509 +0000 UTC m=+7.590335437" Jul 6 23:22:13.322307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1473904382.mount: Deactivated successfully. 
Jul 6 23:22:14.901912 containerd[1744]: time="2025-07-06T23:22:14.901000775Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:22:14.904816 containerd[1744]: time="2025-07-06T23:22:14.904767490Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 6 23:22:14.909790 containerd[1744]: time="2025-07-06T23:22:14.909720363Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:22:14.911622 containerd[1744]: time="2025-07-06T23:22:14.911586240Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.804205411s" Jul 6 23:22:14.912108 containerd[1744]: time="2025-07-06T23:22:14.912084360Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 6 23:22:14.914427 containerd[1744]: time="2025-07-06T23:22:14.914038997Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 6 23:22:14.924165 containerd[1744]: time="2025-07-06T23:22:14.924104704Z" level=info msg="CreateContainer within sandbox \"e27a40f159914c1a19dc26ab463a0a9d24b2bc8b0dd9744476e9a66377438b55\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 6 23:22:14.970607 containerd[1744]: time="2025-07-06T23:22:14.970545802Z" level=info msg="CreateContainer within sandbox \"e27a40f159914c1a19dc26ab463a0a9d24b2bc8b0dd9744476e9a66377438b55\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9ec5716bd91de2cd17417f832a0fa06536ceb9e3e76d044cf7f0dbefedfb7bb6\"" Jul 6 23:22:14.971780 containerd[1744]: time="2025-07-06T23:22:14.971647521Z" level=info msg="StartContainer for \"9ec5716bd91de2cd17417f832a0fa06536ceb9e3e76d044cf7f0dbefedfb7bb6\"" Jul 6 23:22:15.000931 systemd[1]: Started cri-containerd-9ec5716bd91de2cd17417f832a0fa06536ceb9e3e76d044cf7f0dbefedfb7bb6.scope - libcontainer container 9ec5716bd91de2cd17417f832a0fa06536ceb9e3e76d044cf7f0dbefedfb7bb6. Jul 6 23:22:15.028594 containerd[1744]: time="2025-07-06T23:22:15.028541726Z" level=info msg="StartContainer for \"9ec5716bd91de2cd17417f832a0fa06536ceb9e3e76d044cf7f0dbefedfb7bb6\" returns successfully" Jul 6 23:22:15.036992 systemd[1]: cri-containerd-9ec5716bd91de2cd17417f832a0fa06536ceb9e3e76d044cf7f0dbefedfb7bb6.scope: Deactivated successfully. Jul 6 23:22:15.955183 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ec5716bd91de2cd17417f832a0fa06536ceb9e3e76d044cf7f0dbefedfb7bb6-rootfs.mount: Deactivated successfully. 
Jul 6 23:22:16.838447 containerd[1744]: time="2025-07-06T23:22:16.838336769Z" level=info msg="shim disconnected" id=9ec5716bd91de2cd17417f832a0fa06536ceb9e3e76d044cf7f0dbefedfb7bb6 namespace=k8s.io Jul 6 23:22:16.838447 containerd[1744]: time="2025-07-06T23:22:16.838417329Z" level=warning msg="cleaning up after shim disconnected" id=9ec5716bd91de2cd17417f832a0fa06536ceb9e3e76d044cf7f0dbefedfb7bb6 namespace=k8s.io Jul 6 23:22:16.838447 containerd[1744]: time="2025-07-06T23:22:16.838426529Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:22:17.321321 containerd[1744]: time="2025-07-06T23:22:17.321194330Z" level=info msg="CreateContainer within sandbox \"e27a40f159914c1a19dc26ab463a0a9d24b2bc8b0dd9744476e9a66377438b55\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 6 23:22:17.394889 containerd[1744]: time="2025-07-06T23:22:17.394836272Z" level=info msg="CreateContainer within sandbox \"e27a40f159914c1a19dc26ab463a0a9d24b2bc8b0dd9744476e9a66377438b55\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3162478a7a2a51b47900579a49602e2a1836cedd0fa65def8f8de5a46286462d\"" Jul 6 23:22:17.397051 containerd[1744]: time="2025-07-06T23:22:17.395794151Z" level=info msg="StartContainer for \"3162478a7a2a51b47900579a49602e2a1836cedd0fa65def8f8de5a46286462d\"" Jul 6 23:22:17.432947 systemd[1]: Started cri-containerd-3162478a7a2a51b47900579a49602e2a1836cedd0fa65def8f8de5a46286462d.scope - libcontainer container 3162478a7a2a51b47900579a49602e2a1836cedd0fa65def8f8de5a46286462d. Jul 6 23:22:17.461181 containerd[1744]: time="2025-07-06T23:22:17.461045024Z" level=info msg="StartContainer for \"3162478a7a2a51b47900579a49602e2a1836cedd0fa65def8f8de5a46286462d\" returns successfully" Jul 6 23:22:17.468471 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:22:17.468689 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jul 6 23:22:17.469521 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:22:17.474146 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:22:17.474349 systemd[1]: cri-containerd-3162478a7a2a51b47900579a49602e2a1836cedd0fa65def8f8de5a46286462d.scope: Deactivated successfully. Jul 6 23:22:17.492096 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:22:17.514618 containerd[1744]: time="2025-07-06T23:22:17.514545314Z" level=info msg="shim disconnected" id=3162478a7a2a51b47900579a49602e2a1836cedd0fa65def8f8de5a46286462d namespace=k8s.io Jul 6 23:22:17.514618 containerd[1744]: time="2025-07-06T23:22:17.514602113Z" level=warning msg="cleaning up after shim disconnected" id=3162478a7a2a51b47900579a49602e2a1836cedd0fa65def8f8de5a46286462d namespace=k8s.io Jul 6 23:22:17.514618 containerd[1744]: time="2025-07-06T23:22:17.514610873Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:22:18.329047 containerd[1744]: time="2025-07-06T23:22:18.328611796Z" level=info msg="CreateContainer within sandbox \"e27a40f159914c1a19dc26ab463a0a9d24b2bc8b0dd9744476e9a66377438b55\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 6 23:22:18.379902 systemd[1]: run-containerd-runc-k8s.io-3162478a7a2a51b47900579a49602e2a1836cedd0fa65def8f8de5a46286462d-runc.GkTNON.mount: Deactivated successfully. Jul 6 23:22:18.380475 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3162478a7a2a51b47900579a49602e2a1836cedd0fa65def8f8de5a46286462d-rootfs.mount: Deactivated successfully. 
Jul 6 23:22:18.389067 containerd[1744]: time="2025-07-06T23:22:18.388944356Z" level=info msg="CreateContainer within sandbox \"e27a40f159914c1a19dc26ab463a0a9d24b2bc8b0dd9744476e9a66377438b55\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bd909ff70f733398ecdf2b44d104a9c9e642abdc7a1b5a4ef31c6ed20fe5447b\"" Jul 6 23:22:18.390846 containerd[1744]: time="2025-07-06T23:22:18.390774353Z" level=info msg="StartContainer for \"bd909ff70f733398ecdf2b44d104a9c9e642abdc7a1b5a4ef31c6ed20fe5447b\"" Jul 6 23:22:18.431950 systemd[1]: Started cri-containerd-bd909ff70f733398ecdf2b44d104a9c9e642abdc7a1b5a4ef31c6ed20fe5447b.scope - libcontainer container bd909ff70f733398ecdf2b44d104a9c9e642abdc7a1b5a4ef31c6ed20fe5447b. Jul 6 23:22:18.470172 systemd[1]: cri-containerd-bd909ff70f733398ecdf2b44d104a9c9e642abdc7a1b5a4ef31c6ed20fe5447b.scope: Deactivated successfully. Jul 6 23:22:18.474353 containerd[1744]: time="2025-07-06T23:22:18.474107843Z" level=info msg="StartContainer for \"bd909ff70f733398ecdf2b44d104a9c9e642abdc7a1b5a4ef31c6ed20fe5447b\" returns successfully" Jul 6 23:22:18.507070 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd909ff70f733398ecdf2b44d104a9c9e642abdc7a1b5a4ef31c6ed20fe5447b-rootfs.mount: Deactivated successfully. 
Jul 6 23:22:18.784811 containerd[1744]: time="2025-07-06T23:22:18.784698792Z" level=info msg="shim disconnected" id=bd909ff70f733398ecdf2b44d104a9c9e642abdc7a1b5a4ef31c6ed20fe5447b namespace=k8s.io Jul 6 23:22:18.784811 containerd[1744]: time="2025-07-06T23:22:18.784769711Z" level=warning msg="cleaning up after shim disconnected" id=bd909ff70f733398ecdf2b44d104a9c9e642abdc7a1b5a4ef31c6ed20fe5447b namespace=k8s.io Jul 6 23:22:18.784811 containerd[1744]: time="2025-07-06T23:22:18.784777671Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:22:18.860023 containerd[1744]: time="2025-07-06T23:22:18.859963132Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:22:18.863037 containerd[1744]: time="2025-07-06T23:22:18.862860568Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 6 23:22:18.869508 containerd[1744]: time="2025-07-06T23:22:18.869460439Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:22:18.870797 containerd[1744]: time="2025-07-06T23:22:18.870647558Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.956578281s" Jul 6 23:22:18.870797 containerd[1744]: time="2025-07-06T23:22:18.870684958Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 6 23:22:18.879506 containerd[1744]: time="2025-07-06T23:22:18.879465626Z" level=info msg="CreateContainer within sandbox \"409967e382bb9d519c4f75cb8891c4e9b5995d0150957906a4b89eac438a405c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 6 23:22:18.925711 containerd[1744]: time="2025-07-06T23:22:18.925661685Z" level=info msg="CreateContainer within sandbox \"409967e382bb9d519c4f75cb8891c4e9b5995d0150957906a4b89eac438a405c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8c76f8f5aa09b720e0f60042eee21cd09325b382c0f5a88a5942c01920c8b48a\"" Jul 6 23:22:18.926387 containerd[1744]: time="2025-07-06T23:22:18.926298764Z" level=info msg="StartContainer for \"8c76f8f5aa09b720e0f60042eee21cd09325b382c0f5a88a5942c01920c8b48a\"" Jul 6 23:22:18.948938 systemd[1]: Started cri-containerd-8c76f8f5aa09b720e0f60042eee21cd09325b382c0f5a88a5942c01920c8b48a.scope - libcontainer container 8c76f8f5aa09b720e0f60042eee21cd09325b382c0f5a88a5942c01920c8b48a. 
Jul 6 23:22:18.977847 containerd[1744]: time="2025-07-06T23:22:18.977789536Z" level=info msg="StartContainer for \"8c76f8f5aa09b720e0f60042eee21cd09325b382c0f5a88a5942c01920c8b48a\" returns successfully" Jul 6 23:22:19.335188 containerd[1744]: time="2025-07-06T23:22:19.335027783Z" level=info msg="CreateContainer within sandbox \"e27a40f159914c1a19dc26ab463a0a9d24b2bc8b0dd9744476e9a66377438b55\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 6 23:22:19.378087 containerd[1744]: time="2025-07-06T23:22:19.378025806Z" level=info msg="CreateContainer within sandbox \"e27a40f159914c1a19dc26ab463a0a9d24b2bc8b0dd9744476e9a66377438b55\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"331698af999b88bbccca664f5f2ebd3d579dd58d7b82d60f831c63d89d0270a4\"" Jul 6 23:22:19.384456 containerd[1744]: time="2025-07-06T23:22:19.381496121Z" level=info msg="StartContainer for \"331698af999b88bbccca664f5f2ebd3d579dd58d7b82d60f831c63d89d0270a4\"" Jul 6 23:22:19.423447 systemd[1]: run-containerd-runc-k8s.io-331698af999b88bbccca664f5f2ebd3d579dd58d7b82d60f831c63d89d0270a4-runc.7iYJ7O.mount: Deactivated successfully. 
Jul 6 23:22:19.436866 kubelet[3278]: I0706 23:22:19.435975 3278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-7cjvf" podStartSLOduration=1.84708329 podStartE2EDuration="11.435954609s" podCreationTimestamp="2025-07-06 23:22:08 +0000 UTC" firstStartedPulling="2025-07-06 23:22:09.282691078 +0000 UTC m=+7.545893006" lastFinishedPulling="2025-07-06 23:22:18.871562397 +0000 UTC m=+17.134764325" observedRunningTime="2025-07-06 23:22:19.355267476 +0000 UTC m=+17.618469404" watchObservedRunningTime="2025-07-06 23:22:19.435954609 +0000 UTC m=+17.699156537" Jul 6 23:22:19.442245 systemd[1]: Started cri-containerd-331698af999b88bbccca664f5f2ebd3d579dd58d7b82d60f831c63d89d0270a4.scope - libcontainer container 331698af999b88bbccca664f5f2ebd3d579dd58d7b82d60f831c63d89d0270a4. Jul 6 23:22:19.510047 systemd[1]: cri-containerd-331698af999b88bbccca664f5f2ebd3d579dd58d7b82d60f831c63d89d0270a4.scope: Deactivated successfully. Jul 6 23:22:19.523267 containerd[1744]: time="2025-07-06T23:22:19.523164894Z" level=info msg="StartContainer for \"331698af999b88bbccca664f5f2ebd3d579dd58d7b82d60f831c63d89d0270a4\" returns successfully" Jul 6 23:22:19.553697 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-331698af999b88bbccca664f5f2ebd3d579dd58d7b82d60f831c63d89d0270a4-rootfs.mount: Deactivated successfully. 
Jul 6 23:22:19.572808 containerd[1744]: time="2025-07-06T23:22:19.572649548Z" level=info msg="shim disconnected" id=331698af999b88bbccca664f5f2ebd3d579dd58d7b82d60f831c63d89d0270a4 namespace=k8s.io Jul 6 23:22:19.572808 containerd[1744]: time="2025-07-06T23:22:19.572844708Z" level=warning msg="cleaning up after shim disconnected" id=331698af999b88bbccca664f5f2ebd3d579dd58d7b82d60f831c63d89d0270a4 namespace=k8s.io Jul 6 23:22:19.572808 containerd[1744]: time="2025-07-06T23:22:19.572855628Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:22:20.341025 containerd[1744]: time="2025-07-06T23:22:20.340973499Z" level=info msg="CreateContainer within sandbox \"e27a40f159914c1a19dc26ab463a0a9d24b2bc8b0dd9744476e9a66377438b55\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 6 23:22:20.377408 containerd[1744]: time="2025-07-06T23:22:20.377353816Z" level=info msg="CreateContainer within sandbox \"e27a40f159914c1a19dc26ab463a0a9d24b2bc8b0dd9744476e9a66377438b55\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f62d19b4cb110ccfaf68a7502dcd83ede0aaabbf3c00752bb9b7f04a2f4f160c\"" Jul 6 23:22:20.380628 containerd[1744]: time="2025-07-06T23:22:20.379670573Z" level=info msg="StartContainer for \"f62d19b4cb110ccfaf68a7502dcd83ede0aaabbf3c00752bb9b7f04a2f4f160c\"" Jul 6 23:22:20.414019 systemd[1]: Started cri-containerd-f62d19b4cb110ccfaf68a7502dcd83ede0aaabbf3c00752bb9b7f04a2f4f160c.scope - libcontainer container f62d19b4cb110ccfaf68a7502dcd83ede0aaabbf3c00752bb9b7f04a2f4f160c. 
Jul 6 23:22:20.450218 containerd[1744]: time="2025-07-06T23:22:20.449974890Z" level=info msg="StartContainer for \"f62d19b4cb110ccfaf68a7502dcd83ede0aaabbf3c00752bb9b7f04a2f4f160c\" returns successfully" Jul 6 23:22:20.634455 kubelet[3278]: I0706 23:22:20.633294 3278 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 6 23:22:20.695708 systemd[1]: Created slice kubepods-burstable-pod224cb93d_c333_46f3_90c9_0c6efd00abbe.slice - libcontainer container kubepods-burstable-pod224cb93d_c333_46f3_90c9_0c6efd00abbe.slice. Jul 6 23:22:20.707626 systemd[1]: Created slice kubepods-burstable-pod36aed297_e219_43c5_9d5c_4df3ba639c98.slice - libcontainer container kubepods-burstable-pod36aed297_e219_43c5_9d5c_4df3ba639c98.slice. Jul 6 23:22:20.732607 kubelet[3278]: I0706 23:22:20.732444 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zvx8\" (UniqueName: \"kubernetes.io/projected/36aed297-e219-43c5-9d5c-4df3ba639c98-kube-api-access-4zvx8\") pod \"coredns-674b8bbfcf-h8cdx\" (UID: \"36aed297-e219-43c5-9d5c-4df3ba639c98\") " pod="kube-system/coredns-674b8bbfcf-h8cdx" Jul 6 23:22:20.732607 kubelet[3278]: I0706 23:22:20.732487 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/224cb93d-c333-46f3-90c9-0c6efd00abbe-config-volume\") pod \"coredns-674b8bbfcf-48bzs\" (UID: \"224cb93d-c333-46f3-90c9-0c6efd00abbe\") " pod="kube-system/coredns-674b8bbfcf-48bzs" Jul 6 23:22:20.732607 kubelet[3278]: I0706 23:22:20.732506 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slkzd\" (UniqueName: \"kubernetes.io/projected/224cb93d-c333-46f3-90c9-0c6efd00abbe-kube-api-access-slkzd\") pod \"coredns-674b8bbfcf-48bzs\" (UID: \"224cb93d-c333-46f3-90c9-0c6efd00abbe\") " pod="kube-system/coredns-674b8bbfcf-48bzs" Jul 6 23:22:20.732607 
kubelet[3278]: I0706 23:22:20.732523 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36aed297-e219-43c5-9d5c-4df3ba639c98-config-volume\") pod \"coredns-674b8bbfcf-h8cdx\" (UID: \"36aed297-e219-43c5-9d5c-4df3ba639c98\") " pod="kube-system/coredns-674b8bbfcf-h8cdx" Jul 6 23:22:21.002668 containerd[1744]: time="2025-07-06T23:22:21.002090958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-48bzs,Uid:224cb93d-c333-46f3-90c9-0c6efd00abbe,Namespace:kube-system,Attempt:0,}" Jul 6 23:22:21.014101 containerd[1744]: time="2025-07-06T23:22:21.013249025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h8cdx,Uid:36aed297-e219-43c5-9d5c-4df3ba639c98,Namespace:kube-system,Attempt:0,}" Jul 6 23:22:22.699228 systemd-networkd[1564]: cilium_host: Link UP Jul 6 23:22:22.699344 systemd-networkd[1564]: cilium_net: Link UP Jul 6 23:22:22.699347 systemd-networkd[1564]: cilium_net: Gained carrier Jul 6 23:22:22.699460 systemd-networkd[1564]: cilium_host: Gained carrier Jul 6 23:22:22.699584 systemd-networkd[1564]: cilium_host: Gained IPv6LL Jul 6 23:22:22.846987 systemd-networkd[1564]: cilium_vxlan: Link UP Jul 6 23:22:22.846996 systemd-networkd[1564]: cilium_vxlan: Gained carrier Jul 6 23:22:22.855923 systemd-networkd[1564]: cilium_net: Gained IPv6LL Jul 6 23:22:23.166850 kernel: NET: Registered PF_ALG protocol family Jul 6 23:22:23.883002 systemd-networkd[1564]: lxc_health: Link UP Jul 6 23:22:23.898383 systemd-networkd[1564]: lxc_health: Gained carrier Jul 6 23:22:24.101757 systemd-networkd[1564]: lxcbf9054a8dc8e: Link UP Jul 6 23:22:24.108842 kernel: eth0: renamed from tmpa3438 Jul 6 23:22:24.114943 systemd-networkd[1564]: lxcbf9054a8dc8e: Gained carrier Jul 6 23:22:24.127334 systemd-networkd[1564]: lxc5ec173b58e05: Link UP Jul 6 23:22:24.135841 kernel: eth0: renamed from tmp5d9c7 Jul 6 23:22:24.143026 
systemd-networkd[1564]: lxc5ec173b58e05: Gained carrier Jul 6 23:22:24.557873 systemd-networkd[1564]: cilium_vxlan: Gained IPv6LL Jul 6 23:22:25.016721 kubelet[3278]: I0706 23:22:25.016629 3278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bj9pm" podStartSLOduration=11.210066647 podStartE2EDuration="17.016613056s" podCreationTimestamp="2025-07-06 23:22:08 +0000 UTC" firstStartedPulling="2025-07-06 23:22:09.10651739 +0000 UTC m=+7.369719318" lastFinishedPulling="2025-07-06 23:22:14.913063799 +0000 UTC m=+13.176265727" observedRunningTime="2025-07-06 23:22:21.382898948 +0000 UTC m=+19.646100876" watchObservedRunningTime="2025-07-06 23:22:25.016613056 +0000 UTC m=+23.279814984" Jul 6 23:22:25.709896 systemd-networkd[1564]: lxcbf9054a8dc8e: Gained IPv6LL Jul 6 23:22:25.774882 systemd-networkd[1564]: lxc_health: Gained IPv6LL Jul 6 23:22:25.903448 systemd-networkd[1564]: lxc5ec173b58e05: Gained IPv6LL Jul 6 23:22:27.941248 containerd[1744]: time="2025-07-06T23:22:27.941072181Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:22:27.941248 containerd[1744]: time="2025-07-06T23:22:27.941134981Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:22:27.943213 containerd[1744]: time="2025-07-06T23:22:27.941150221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:22:27.943213 containerd[1744]: time="2025-07-06T23:22:27.941227861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:22:27.973757 containerd[1744]: time="2025-07-06T23:22:27.972994377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:22:27.973757 containerd[1744]: time="2025-07-06T23:22:27.973382337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:22:27.973757 containerd[1744]: time="2025-07-06T23:22:27.973396297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:22:27.973757 containerd[1744]: time="2025-07-06T23:22:27.973495857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:22:27.973955 systemd[1]: Started cri-containerd-a3438ccc4cc180d557412998941c5ab34b58ed3f5237d2ce4a3de2e215e33eff.scope - libcontainer container a3438ccc4cc180d557412998941c5ab34b58ed3f5237d2ce4a3de2e215e33eff. Jul 6 23:22:28.008983 systemd[1]: Started cri-containerd-5d9c7d7093df4f20bdb161360458260de59f51ae78f860dace9fb89ae60b0666.scope - libcontainer container 5d9c7d7093df4f20bdb161360458260de59f51ae78f860dace9fb89ae60b0666. 
Jul 6 23:22:28.051470 containerd[1744]: time="2025-07-06T23:22:28.051353109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-48bzs,Uid:224cb93d-c333-46f3-90c9-0c6efd00abbe,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3438ccc4cc180d557412998941c5ab34b58ed3f5237d2ce4a3de2e215e33eff\"" Jul 6 23:22:28.066229 containerd[1744]: time="2025-07-06T23:22:28.066035528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h8cdx,Uid:36aed297-e219-43c5-9d5c-4df3ba639c98,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d9c7d7093df4f20bdb161360458260de59f51ae78f860dace9fb89ae60b0666\"" Jul 6 23:22:28.069168 containerd[1744]: time="2025-07-06T23:22:28.069015844Z" level=info msg="CreateContainer within sandbox \"a3438ccc4cc180d557412998941c5ab34b58ed3f5237d2ce4a3de2e215e33eff\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:22:28.086592 containerd[1744]: time="2025-07-06T23:22:28.086548740Z" level=info msg="CreateContainer within sandbox \"5d9c7d7093df4f20bdb161360458260de59f51ae78f860dace9fb89ae60b0666\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:22:28.121979 containerd[1744]: time="2025-07-06T23:22:28.121761531Z" level=info msg="CreateContainer within sandbox \"a3438ccc4cc180d557412998941c5ab34b58ed3f5237d2ce4a3de2e215e33eff\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"91fa308e0a8eefea8fc144ba59fd281c22e1966111fd5a8809229a1c92041e44\"" Jul 6 23:22:28.123930 containerd[1744]: time="2025-07-06T23:22:28.122988690Z" level=info msg="StartContainer for \"91fa308e0a8eefea8fc144ba59fd281c22e1966111fd5a8809229a1c92041e44\"" Jul 6 23:22:28.148997 containerd[1744]: time="2025-07-06T23:22:28.148941694Z" level=info msg="CreateContainer within sandbox \"5d9c7d7093df4f20bdb161360458260de59f51ae78f860dace9fb89ae60b0666\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"e9a67ade3a5ea27e194d13d357ce67bc49ce1db308b74610fde2e40e5d52abb0\"" Jul 6 23:22:28.150074 containerd[1744]: time="2025-07-06T23:22:28.150024812Z" level=info msg="StartContainer for \"e9a67ade3a5ea27e194d13d357ce67bc49ce1db308b74610fde2e40e5d52abb0\"" Jul 6 23:22:28.160991 systemd[1]: Started cri-containerd-91fa308e0a8eefea8fc144ba59fd281c22e1966111fd5a8809229a1c92041e44.scope - libcontainer container 91fa308e0a8eefea8fc144ba59fd281c22e1966111fd5a8809229a1c92041e44. Jul 6 23:22:28.190049 systemd[1]: Started cri-containerd-e9a67ade3a5ea27e194d13d357ce67bc49ce1db308b74610fde2e40e5d52abb0.scope - libcontainer container e9a67ade3a5ea27e194d13d357ce67bc49ce1db308b74610fde2e40e5d52abb0. Jul 6 23:22:28.207217 containerd[1744]: time="2025-07-06T23:22:28.207106173Z" level=info msg="StartContainer for \"91fa308e0a8eefea8fc144ba59fd281c22e1966111fd5a8809229a1c92041e44\" returns successfully" Jul 6 23:22:28.239303 containerd[1744]: time="2025-07-06T23:22:28.239253969Z" level=info msg="StartContainer for \"e9a67ade3a5ea27e194d13d357ce67bc49ce1db308b74610fde2e40e5d52abb0\" returns successfully" Jul 6 23:22:28.374364 kubelet[3278]: I0706 23:22:28.374304 3278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-h8cdx" podStartSLOduration=20.374289502 podStartE2EDuration="20.374289502s" podCreationTimestamp="2025-07-06 23:22:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:22:28.373268703 +0000 UTC m=+26.636470631" watchObservedRunningTime="2025-07-06 23:22:28.374289502 +0000 UTC m=+26.637491430" Jul 6 23:22:28.392775 kubelet[3278]: I0706 23:22:28.392340 3278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-48bzs" podStartSLOduration=20.392324277 podStartE2EDuration="20.392324277s" podCreationTimestamp="2025-07-06 23:22:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:22:28.391461478 +0000 UTC m=+26.654663406" watchObservedRunningTime="2025-07-06 23:22:28.392324277 +0000 UTC m=+26.655526205" Jul 6 23:23:39.347140 systemd[1]: Started sshd@7-10.200.20.19:22-10.200.16.10:54114.service - OpenSSH per-connection server daemon (10.200.16.10:54114). Jul 6 23:23:39.826372 sshd[4671]: Accepted publickey for core from 10.200.16.10 port 54114 ssh2: RSA SHA256:03hkvBavc73fCnbrqThnCFPODKWBbuy7ZtqQR+/MThM Jul 6 23:23:39.827985 sshd-session[4671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:23:39.833983 systemd-logind[1707]: New session 10 of user core. Jul 6 23:23:39.838967 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 6 23:23:40.271376 sshd[4675]: Connection closed by 10.200.16.10 port 54114 Jul 6 23:23:40.272020 sshd-session[4671]: pam_unix(sshd:session): session closed for user core Jul 6 23:23:40.276000 systemd[1]: sshd@7-10.200.20.19:22-10.200.16.10:54114.service: Deactivated successfully. Jul 6 23:23:40.279099 systemd[1]: session-10.scope: Deactivated successfully. Jul 6 23:23:40.280047 systemd-logind[1707]: Session 10 logged out. Waiting for processes to exit. Jul 6 23:23:40.281602 systemd-logind[1707]: Removed session 10. Jul 6 23:23:45.369456 systemd[1]: Started sshd@8-10.200.20.19:22-10.200.16.10:45312.service - OpenSSH per-connection server daemon (10.200.16.10:45312). Jul 6 23:23:45.849481 sshd[4689]: Accepted publickey for core from 10.200.16.10 port 45312 ssh2: RSA SHA256:03hkvBavc73fCnbrqThnCFPODKWBbuy7ZtqQR+/MThM Jul 6 23:23:45.850998 sshd-session[4689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:23:45.856209 systemd-logind[1707]: New session 11 of user core. Jul 6 23:23:45.863953 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jul 6 23:23:46.262208 sshd[4691]: Connection closed by 10.200.16.10 port 45312 Jul 6 23:23:46.262609 sshd-session[4689]: pam_unix(sshd:session): session closed for user core Jul 6 23:23:46.266464 systemd[1]: sshd@8-10.200.20.19:22-10.200.16.10:45312.service: Deactivated successfully. Jul 6 23:23:46.268501 systemd[1]: session-11.scope: Deactivated successfully. Jul 6 23:23:46.269948 systemd-logind[1707]: Session 11 logged out. Waiting for processes to exit. Jul 6 23:23:46.270901 systemd-logind[1707]: Removed session 11. Jul 6 23:23:51.357457 systemd[1]: Started sshd@9-10.200.20.19:22-10.200.16.10:47356.service - OpenSSH per-connection server daemon (10.200.16.10:47356). Jul 6 23:23:51.833948 sshd[4704]: Accepted publickey for core from 10.200.16.10 port 47356 ssh2: RSA SHA256:03hkvBavc73fCnbrqThnCFPODKWBbuy7ZtqQR+/MThM Jul 6 23:23:51.835253 sshd-session[4704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:23:51.839981 systemd-logind[1707]: New session 12 of user core. Jul 6 23:23:51.844951 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 6 23:23:52.240482 sshd[4706]: Connection closed by 10.200.16.10 port 47356 Jul 6 23:23:52.241243 sshd-session[4704]: pam_unix(sshd:session): session closed for user core Jul 6 23:23:52.244650 systemd[1]: sshd@9-10.200.20.19:22-10.200.16.10:47356.service: Deactivated successfully. Jul 6 23:23:52.247061 systemd[1]: session-12.scope: Deactivated successfully. Jul 6 23:23:52.248104 systemd-logind[1707]: Session 12 logged out. Waiting for processes to exit. Jul 6 23:23:52.249473 systemd-logind[1707]: Removed session 12. Jul 6 23:23:57.331048 systemd[1]: Started sshd@10-10.200.20.19:22-10.200.16.10:47364.service - OpenSSH per-connection server daemon (10.200.16.10:47364). 
Jul 6 23:23:57.809811 sshd[4719]: Accepted publickey for core from 10.200.16.10 port 47364 ssh2: RSA SHA256:03hkvBavc73fCnbrqThnCFPODKWBbuy7ZtqQR+/MThM Jul 6 23:23:57.811134 sshd-session[4719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:23:57.816470 systemd-logind[1707]: New session 13 of user core. Jul 6 23:23:57.819905 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 6 23:23:58.221331 sshd[4721]: Connection closed by 10.200.16.10 port 47364 Jul 6 23:23:58.222081 sshd-session[4719]: pam_unix(sshd:session): session closed for user core Jul 6 23:23:58.226482 systemd[1]: sshd@10-10.200.20.19:22-10.200.16.10:47364.service: Deactivated successfully. Jul 6 23:23:58.229077 systemd[1]: session-13.scope: Deactivated successfully. Jul 6 23:23:58.231989 systemd-logind[1707]: Session 13 logged out. Waiting for processes to exit. Jul 6 23:23:58.233408 systemd-logind[1707]: Removed session 13. Jul 6 23:23:58.315081 systemd[1]: Started sshd@11-10.200.20.19:22-10.200.16.10:47372.service - OpenSSH per-connection server daemon (10.200.16.10:47372). Jul 6 23:23:58.792748 sshd[4734]: Accepted publickey for core from 10.200.16.10 port 47372 ssh2: RSA SHA256:03hkvBavc73fCnbrqThnCFPODKWBbuy7ZtqQR+/MThM Jul 6 23:23:58.793993 sshd-session[4734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:23:58.800376 systemd-logind[1707]: New session 14 of user core. Jul 6 23:23:58.807002 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 6 23:23:59.244572 sshd[4736]: Connection closed by 10.200.16.10 port 47372 Jul 6 23:23:59.245172 sshd-session[4734]: pam_unix(sshd:session): session closed for user core Jul 6 23:23:59.248900 systemd-logind[1707]: Session 14 logged out. Waiting for processes to exit. Jul 6 23:23:59.249900 systemd[1]: sshd@11-10.200.20.19:22-10.200.16.10:47372.service: Deactivated successfully. 
Jul 6 23:23:59.252621 systemd[1]: session-14.scope: Deactivated successfully. Jul 6 23:23:59.253799 systemd-logind[1707]: Removed session 14. Jul 6 23:23:59.339018 systemd[1]: Started sshd@12-10.200.20.19:22-10.200.16.10:47382.service - OpenSSH per-connection server daemon (10.200.16.10:47382). Jul 6 23:23:59.819043 sshd[4745]: Accepted publickey for core from 10.200.16.10 port 47382 ssh2: RSA SHA256:03hkvBavc73fCnbrqThnCFPODKWBbuy7ZtqQR+/MThM Jul 6 23:23:59.820434 sshd-session[4745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:23:59.824677 systemd-logind[1707]: New session 15 of user core. Jul 6 23:23:59.832899 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 6 23:24:00.222679 sshd[4747]: Connection closed by 10.200.16.10 port 47382 Jul 6 23:24:00.224236 sshd-session[4745]: pam_unix(sshd:session): session closed for user core Jul 6 23:24:00.228317 systemd[1]: sshd@12-10.200.20.19:22-10.200.16.10:47382.service: Deactivated successfully. Jul 6 23:24:00.228462 systemd-logind[1707]: Session 15 logged out. Waiting for processes to exit. Jul 6 23:24:00.232960 systemd[1]: session-15.scope: Deactivated successfully. Jul 6 23:24:00.234645 systemd-logind[1707]: Removed session 15. Jul 6 23:24:05.315052 systemd[1]: Started sshd@13-10.200.20.19:22-10.200.16.10:36002.service - OpenSSH per-connection server daemon (10.200.16.10:36002). Jul 6 23:24:05.797391 sshd[4761]: Accepted publickey for core from 10.200.16.10 port 36002 ssh2: RSA SHA256:03hkvBavc73fCnbrqThnCFPODKWBbuy7ZtqQR+/MThM Jul 6 23:24:05.798761 sshd-session[4761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:24:05.803455 systemd-logind[1707]: New session 16 of user core. Jul 6 23:24:05.806907 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jul 6 23:24:06.213432 sshd[4763]: Connection closed by 10.200.16.10 port 36002 Jul 6 23:24:06.213945 sshd-session[4761]: pam_unix(sshd:session): session closed for user core Jul 6 23:24:06.218379 systemd[1]: sshd@13-10.200.20.19:22-10.200.16.10:36002.service: Deactivated successfully. Jul 6 23:24:06.220353 systemd[1]: session-16.scope: Deactivated successfully. Jul 6 23:24:06.221208 systemd-logind[1707]: Session 16 logged out. Waiting for processes to exit. Jul 6 23:24:06.222582 systemd-logind[1707]: Removed session 16. Jul 6 23:24:06.312029 systemd[1]: Started sshd@14-10.200.20.19:22-10.200.16.10:36018.service - OpenSSH per-connection server daemon (10.200.16.10:36018). Jul 6 23:24:06.794925 sshd[4775]: Accepted publickey for core from 10.200.16.10 port 36018 ssh2: RSA SHA256:03hkvBavc73fCnbrqThnCFPODKWBbuy7ZtqQR+/MThM Jul 6 23:24:06.797331 sshd-session[4775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:24:06.802444 systemd-logind[1707]: New session 17 of user core. Jul 6 23:24:06.809908 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 6 23:24:07.255773 sshd[4777]: Connection closed by 10.200.16.10 port 36018 Jul 6 23:24:07.254863 sshd-session[4775]: pam_unix(sshd:session): session closed for user core Jul 6 23:24:07.258916 systemd[1]: sshd@14-10.200.20.19:22-10.200.16.10:36018.service: Deactivated successfully. Jul 6 23:24:07.261853 systemd[1]: session-17.scope: Deactivated successfully. Jul 6 23:24:07.263284 systemd-logind[1707]: Session 17 logged out. Waiting for processes to exit. Jul 6 23:24:07.265050 systemd-logind[1707]: Removed session 17. Jul 6 23:24:07.361084 systemd[1]: Started sshd@15-10.200.20.19:22-10.200.16.10:36032.service - OpenSSH per-connection server daemon (10.200.16.10:36032). 
Jul 6 23:24:07.841720 sshd[4788]: Accepted publickey for core from 10.200.16.10 port 36032 ssh2: RSA SHA256:03hkvBavc73fCnbrqThnCFPODKWBbuy7ZtqQR+/MThM Jul 6 23:24:07.843396 sshd-session[4788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:24:07.848958 systemd-logind[1707]: New session 18 of user core. Jul 6 23:24:07.853048 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 6 23:24:08.938779 sshd[4790]: Connection closed by 10.200.16.10 port 36032 Jul 6 23:24:08.939362 sshd-session[4788]: pam_unix(sshd:session): session closed for user core Jul 6 23:24:08.943726 systemd[1]: sshd@15-10.200.20.19:22-10.200.16.10:36032.service: Deactivated successfully. Jul 6 23:24:08.946569 systemd[1]: session-18.scope: Deactivated successfully. Jul 6 23:24:08.949572 systemd-logind[1707]: Session 18 logged out. Waiting for processes to exit. Jul 6 23:24:08.950603 systemd-logind[1707]: Removed session 18. Jul 6 23:24:09.025544 systemd[1]: Started sshd@16-10.200.20.19:22-10.200.16.10:36042.service - OpenSSH per-connection server daemon (10.200.16.10:36042). Jul 6 23:24:09.507848 sshd[4807]: Accepted publickey for core from 10.200.16.10 port 36042 ssh2: RSA SHA256:03hkvBavc73fCnbrqThnCFPODKWBbuy7ZtqQR+/MThM Jul 6 23:24:09.509238 sshd-session[4807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:24:09.513369 systemd-logind[1707]: New session 19 of user core. Jul 6 23:24:09.519917 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 6 23:24:10.035589 sshd[4812]: Connection closed by 10.200.16.10 port 36042 Jul 6 23:24:10.036275 sshd-session[4807]: pam_unix(sshd:session): session closed for user core Jul 6 23:24:10.039613 systemd[1]: sshd@16-10.200.20.19:22-10.200.16.10:36042.service: Deactivated successfully. Jul 6 23:24:10.042491 systemd[1]: session-19.scope: Deactivated successfully. Jul 6 23:24:10.043348 systemd-logind[1707]: Session 19 logged out. 
Waiting for processes to exit. Jul 6 23:24:10.044386 systemd-logind[1707]: Removed session 19. Jul 6 23:24:10.129019 systemd[1]: Started sshd@17-10.200.20.19:22-10.200.16.10:60222.service - OpenSSH per-connection server daemon (10.200.16.10:60222). Jul 6 23:24:10.610359 sshd[4822]: Accepted publickey for core from 10.200.16.10 port 60222 ssh2: RSA SHA256:03hkvBavc73fCnbrqThnCFPODKWBbuy7ZtqQR+/MThM Jul 6 23:24:10.611664 sshd-session[4822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:24:10.616717 systemd-logind[1707]: New session 20 of user core. Jul 6 23:24:10.624951 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 6 23:24:11.009228 sshd[4824]: Connection closed by 10.200.16.10 port 60222 Jul 6 23:24:11.008785 sshd-session[4822]: pam_unix(sshd:session): session closed for user core Jul 6 23:24:11.011980 systemd[1]: sshd@17-10.200.20.19:22-10.200.16.10:60222.service: Deactivated successfully. Jul 6 23:24:11.014722 systemd[1]: session-20.scope: Deactivated successfully. Jul 6 23:24:11.015722 systemd-logind[1707]: Session 20 logged out. Waiting for processes to exit. Jul 6 23:24:11.016667 systemd-logind[1707]: Removed session 20. Jul 6 23:24:16.099921 systemd[1]: Started sshd@18-10.200.20.19:22-10.200.16.10:60234.service - OpenSSH per-connection server daemon (10.200.16.10:60234). Jul 6 23:24:16.581536 sshd[4838]: Accepted publickey for core from 10.200.16.10 port 60234 ssh2: RSA SHA256:03hkvBavc73fCnbrqThnCFPODKWBbuy7ZtqQR+/MThM Jul 6 23:24:16.582899 sshd-session[4838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:24:16.587025 systemd-logind[1707]: New session 21 of user core. Jul 6 23:24:16.599982 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jul 6 23:24:17.003130 sshd[4840]: Connection closed by 10.200.16.10 port 60234 Jul 6 23:24:17.002632 sshd-session[4838]: pam_unix(sshd:session): session closed for user core Jul 6 23:24:17.006283 systemd-logind[1707]: Session 21 logged out. Waiting for processes to exit. Jul 6 23:24:17.006991 systemd[1]: sshd@18-10.200.20.19:22-10.200.16.10:60234.service: Deactivated successfully. Jul 6 23:24:17.009465 systemd[1]: session-21.scope: Deactivated successfully. Jul 6 23:24:17.010808 systemd-logind[1707]: Removed session 21. Jul 6 23:24:22.097043 systemd[1]: Started sshd@19-10.200.20.19:22-10.200.16.10:43694.service - OpenSSH per-connection server daemon (10.200.16.10:43694). Jul 6 23:24:22.577857 sshd[4852]: Accepted publickey for core from 10.200.16.10 port 43694 ssh2: RSA SHA256:03hkvBavc73fCnbrqThnCFPODKWBbuy7ZtqQR+/MThM Jul 6 23:24:22.579113 sshd-session[4852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:24:22.584656 systemd-logind[1707]: New session 22 of user core. Jul 6 23:24:22.592913 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 6 23:24:22.986419 sshd[4854]: Connection closed by 10.200.16.10 port 43694 Jul 6 23:24:22.987319 sshd-session[4852]: pam_unix(sshd:session): session closed for user core Jul 6 23:24:22.991054 systemd-logind[1707]: Session 22 logged out. Waiting for processes to exit. Jul 6 23:24:22.991512 systemd[1]: sshd@19-10.200.20.19:22-10.200.16.10:43694.service: Deactivated successfully. Jul 6 23:24:22.993627 systemd[1]: session-22.scope: Deactivated successfully. Jul 6 23:24:22.995112 systemd-logind[1707]: Removed session 22. Jul 6 23:24:23.078019 systemd[1]: Started sshd@20-10.200.20.19:22-10.200.16.10:43708.service - OpenSSH per-connection server daemon (10.200.16.10:43708). 
Jul 6 23:24:23.559156 sshd[4865]: Accepted publickey for core from 10.200.16.10 port 43708 ssh2: RSA SHA256:03hkvBavc73fCnbrqThnCFPODKWBbuy7ZtqQR+/MThM Jul 6 23:24:23.560504 sshd-session[4865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:24:23.566001 systemd-logind[1707]: New session 23 of user core. Jul 6 23:24:23.573962 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 6 23:24:26.058866 containerd[1744]: time="2025-07-06T23:24:26.058667481Z" level=info msg="StopContainer for \"8c76f8f5aa09b720e0f60042eee21cd09325b382c0f5a88a5942c01920c8b48a\" with timeout 30 (s)" Jul 6 23:24:26.064651 containerd[1744]: time="2025-07-06T23:24:26.063722475Z" level=info msg="Stop container \"8c76f8f5aa09b720e0f60042eee21cd09325b382c0f5a88a5942c01920c8b48a\" with signal terminated" Jul 6 23:24:26.078439 systemd[1]: cri-containerd-8c76f8f5aa09b720e0f60042eee21cd09325b382c0f5a88a5942c01920c8b48a.scope: Deactivated successfully. Jul 6 23:24:26.082120 containerd[1744]: time="2025-07-06T23:24:26.082061292Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:24:26.095495 containerd[1744]: time="2025-07-06T23:24:26.095458955Z" level=info msg="StopContainer for \"f62d19b4cb110ccfaf68a7502dcd83ede0aaabbf3c00752bb9b7f04a2f4f160c\" with timeout 2 (s)" Jul 6 23:24:26.096114 containerd[1744]: time="2025-07-06T23:24:26.096064834Z" level=info msg="Stop container \"f62d19b4cb110ccfaf68a7502dcd83ede0aaabbf3c00752bb9b7f04a2f4f160c\" with signal terminated" Jul 6 23:24:26.103120 systemd-networkd[1564]: lxc_health: Link DOWN Jul 6 23:24:26.103565 systemd-networkd[1564]: lxc_health: Lost carrier Jul 6 23:24:26.114230 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-8c76f8f5aa09b720e0f60042eee21cd09325b382c0f5a88a5942c01920c8b48a-rootfs.mount: Deactivated successfully. Jul 6 23:24:26.119617 systemd[1]: cri-containerd-f62d19b4cb110ccfaf68a7502dcd83ede0aaabbf3c00752bb9b7f04a2f4f160c.scope: Deactivated successfully. Jul 6 23:24:26.120304 systemd[1]: cri-containerd-f62d19b4cb110ccfaf68a7502dcd83ede0aaabbf3c00752bb9b7f04a2f4f160c.scope: Consumed 6.608s CPU time, 124M memory peak, 136K read from disk, 12.9M written to disk. Jul 6 23:24:26.143966 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f62d19b4cb110ccfaf68a7502dcd83ede0aaabbf3c00752bb9b7f04a2f4f160c-rootfs.mount: Deactivated successfully. Jul 6 23:24:26.189070 containerd[1744]: time="2025-07-06T23:24:26.188931996Z" level=info msg="shim disconnected" id=f62d19b4cb110ccfaf68a7502dcd83ede0aaabbf3c00752bb9b7f04a2f4f160c namespace=k8s.io Jul 6 23:24:26.189070 containerd[1744]: time="2025-07-06T23:24:26.189004036Z" level=warning msg="cleaning up after shim disconnected" id=f62d19b4cb110ccfaf68a7502dcd83ede0aaabbf3c00752bb9b7f04a2f4f160c namespace=k8s.io Jul 6 23:24:26.189070 containerd[1744]: time="2025-07-06T23:24:26.189015236Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:24:26.189316 containerd[1744]: time="2025-07-06T23:24:26.189116876Z" level=info msg="shim disconnected" id=8c76f8f5aa09b720e0f60042eee21cd09325b382c0f5a88a5942c01920c8b48a namespace=k8s.io Jul 6 23:24:26.189316 containerd[1744]: time="2025-07-06T23:24:26.189265755Z" level=warning msg="cleaning up after shim disconnected" id=8c76f8f5aa09b720e0f60042eee21cd09325b382c0f5a88a5942c01920c8b48a namespace=k8s.io Jul 6 23:24:26.189316 containerd[1744]: time="2025-07-06T23:24:26.189277875Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:24:26.215973 containerd[1744]: time="2025-07-06T23:24:26.215869162Z" level=info msg="StopContainer for \"8c76f8f5aa09b720e0f60042eee21cd09325b382c0f5a88a5942c01920c8b48a\" returns 
successfully" Jul 6 23:24:26.216779 containerd[1744]: time="2025-07-06T23:24:26.216614321Z" level=info msg="StopPodSandbox for \"409967e382bb9d519c4f75cb8891c4e9b5995d0150957906a4b89eac438a405c\"" Jul 6 23:24:26.216779 containerd[1744]: time="2025-07-06T23:24:26.216653641Z" level=info msg="Container to stop \"8c76f8f5aa09b720e0f60042eee21cd09325b382c0f5a88a5942c01920c8b48a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:24:26.220341 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-409967e382bb9d519c4f75cb8891c4e9b5995d0150957906a4b89eac438a405c-shm.mount: Deactivated successfully. Jul 6 23:24:26.221538 containerd[1744]: time="2025-07-06T23:24:26.221483074Z" level=info msg="StopContainer for \"f62d19b4cb110ccfaf68a7502dcd83ede0aaabbf3c00752bb9b7f04a2f4f160c\" returns successfully" Jul 6 23:24:26.222355 containerd[1744]: time="2025-07-06T23:24:26.222131874Z" level=info msg="StopPodSandbox for \"e27a40f159914c1a19dc26ab463a0a9d24b2bc8b0dd9744476e9a66377438b55\"" Jul 6 23:24:26.222355 containerd[1744]: time="2025-07-06T23:24:26.222179433Z" level=info msg="Container to stop \"331698af999b88bbccca664f5f2ebd3d579dd58d7b82d60f831c63d89d0270a4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:24:26.222355 containerd[1744]: time="2025-07-06T23:24:26.222191593Z" level=info msg="Container to stop \"f62d19b4cb110ccfaf68a7502dcd83ede0aaabbf3c00752bb9b7f04a2f4f160c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:24:26.222355 containerd[1744]: time="2025-07-06T23:24:26.222201313Z" level=info msg="Container to stop \"9ec5716bd91de2cd17417f832a0fa06536ceb9e3e76d044cf7f0dbefedfb7bb6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:24:26.222355 containerd[1744]: time="2025-07-06T23:24:26.222210193Z" level=info msg="Container to stop \"bd909ff70f733398ecdf2b44d104a9c9e642abdc7a1b5a4ef31c6ed20fe5447b\" must be in running or unknown 
state, current state \"CONTAINER_EXITED\"" Jul 6 23:24:26.222355 containerd[1744]: time="2025-07-06T23:24:26.222218753Z" level=info msg="Container to stop \"3162478a7a2a51b47900579a49602e2a1836cedd0fa65def8f8de5a46286462d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:24:26.229132 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e27a40f159914c1a19dc26ab463a0a9d24b2bc8b0dd9744476e9a66377438b55-shm.mount: Deactivated successfully. Jul 6 23:24:26.230015 systemd[1]: cri-containerd-409967e382bb9d519c4f75cb8891c4e9b5995d0150957906a4b89eac438a405c.scope: Deactivated successfully. Jul 6 23:24:26.238182 systemd[1]: cri-containerd-e27a40f159914c1a19dc26ab463a0a9d24b2bc8b0dd9744476e9a66377438b55.scope: Deactivated successfully. Jul 6 23:24:26.280612 containerd[1744]: time="2025-07-06T23:24:26.280376919Z" level=info msg="shim disconnected" id=409967e382bb9d519c4f75cb8891c4e9b5995d0150957906a4b89eac438a405c namespace=k8s.io Jul 6 23:24:26.280612 containerd[1744]: time="2025-07-06T23:24:26.280455679Z" level=warning msg="cleaning up after shim disconnected" id=409967e382bb9d519c4f75cb8891c4e9b5995d0150957906a4b89eac438a405c namespace=k8s.io Jul 6 23:24:26.280612 containerd[1744]: time="2025-07-06T23:24:26.280465799Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:24:26.284015 containerd[1744]: time="2025-07-06T23:24:26.283604035Z" level=info msg="shim disconnected" id=e27a40f159914c1a19dc26ab463a0a9d24b2bc8b0dd9744476e9a66377438b55 namespace=k8s.io Jul 6 23:24:26.284015 containerd[1744]: time="2025-07-06T23:24:26.283665035Z" level=warning msg="cleaning up after shim disconnected" id=e27a40f159914c1a19dc26ab463a0a9d24b2bc8b0dd9744476e9a66377438b55 namespace=k8s.io Jul 6 23:24:26.284015 containerd[1744]: time="2025-07-06T23:24:26.283674875Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:24:26.301043 containerd[1744]: time="2025-07-06T23:24:26.300991013Z" level=warning msg="cleanup warnings 
time=\"2025-07-06T23:24:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 6 23:24:26.304184 containerd[1744]: time="2025-07-06T23:24:26.304141769Z" level=info msg="TearDown network for sandbox \"409967e382bb9d519c4f75cb8891c4e9b5995d0150957906a4b89eac438a405c\" successfully" Jul 6 23:24:26.304362 containerd[1744]: time="2025-07-06T23:24:26.304348209Z" level=info msg="StopPodSandbox for \"409967e382bb9d519c4f75cb8891c4e9b5995d0150957906a4b89eac438a405c\" returns successfully" Jul 6 23:24:26.308854 containerd[1744]: time="2025-07-06T23:24:26.308729083Z" level=info msg="TearDown network for sandbox \"e27a40f159914c1a19dc26ab463a0a9d24b2bc8b0dd9744476e9a66377438b55\" successfully" Jul 6 23:24:26.308854 containerd[1744]: time="2025-07-06T23:24:26.308842083Z" level=info msg="StopPodSandbox for \"e27a40f159914c1a19dc26ab463a0a9d24b2bc8b0dd9744476e9a66377438b55\" returns successfully" Jul 6 23:24:26.380102 kubelet[3278]: I0706 23:24:26.379494 3278 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-cni-path\") pod \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\" (UID: \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\") " Jul 6 23:24:26.380102 kubelet[3278]: I0706 23:24:26.379550 3278 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-clustermesh-secrets\") pod \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\" (UID: \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\") " Jul 6 23:24:26.380102 kubelet[3278]: I0706 23:24:26.379566 3278 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-etc-cni-netd\") pod 
\"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\" (UID: \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\") " Jul 6 23:24:26.380102 kubelet[3278]: I0706 23:24:26.379581 3278 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-bpf-maps\") pod \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\" (UID: \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\") " Jul 6 23:24:26.380102 kubelet[3278]: I0706 23:24:26.379577 3278 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-cni-path" (OuterVolumeSpecName: "cni-path") pod "afa2f1e9-0bbc-4679-9540-5ff5a5f68490" (UID: "afa2f1e9-0bbc-4679-9540-5ff5a5f68490"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:24:26.380102 kubelet[3278]: I0706 23:24:26.379620 3278 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "afa2f1e9-0bbc-4679-9540-5ff5a5f68490" (UID: "afa2f1e9-0bbc-4679-9540-5ff5a5f68490"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:24:26.380592 kubelet[3278]: I0706 23:24:26.379595 3278 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-lib-modules\") pod \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\" (UID: \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\") " Jul 6 23:24:26.380592 kubelet[3278]: I0706 23:24:26.379650 3278 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "afa2f1e9-0bbc-4679-9540-5ff5a5f68490" (UID: "afa2f1e9-0bbc-4679-9540-5ff5a5f68490"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:24:26.380592 kubelet[3278]: I0706 23:24:26.379661 3278 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsf4s\" (UniqueName: \"kubernetes.io/projected/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-kube-api-access-xsf4s\") pod \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\" (UID: \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\") " Jul 6 23:24:26.380592 kubelet[3278]: I0706 23:24:26.379663 3278 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "afa2f1e9-0bbc-4679-9540-5ff5a5f68490" (UID: "afa2f1e9-0bbc-4679-9540-5ff5a5f68490"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:24:26.383018 kubelet[3278]: I0706 23:24:26.381997 3278 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-cilium-run\") pod \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\" (UID: \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\") " Jul 6 23:24:26.383018 kubelet[3278]: I0706 23:24:26.382056 3278 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/79787c53-f9ea-41cb-a146-26572ac15ac1-cilium-config-path\") pod \"79787c53-f9ea-41cb-a146-26572ac15ac1\" (UID: \"79787c53-f9ea-41cb-a146-26572ac15ac1\") " Jul 6 23:24:26.383018 kubelet[3278]: I0706 23:24:26.382073 3278 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-host-proc-sys-net\") pod \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\" (UID: \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\") " Jul 6 23:24:26.383018 kubelet[3278]: I0706 23:24:26.382089 3278 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-cilium-cgroup\") pod \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\" (UID: \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\") " Jul 6 23:24:26.383018 kubelet[3278]: I0706 23:24:26.382106 3278 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-host-proc-sys-kernel\") pod \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\" (UID: \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\") " Jul 6 23:24:26.383018 kubelet[3278]: I0706 23:24:26.382124 3278 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-xtables-lock\") pod \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\" (UID: \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\") " Jul 6 23:24:26.383231 kubelet[3278]: I0706 23:24:26.382140 3278 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-hostproc\") pod \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\" (UID: \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\") " Jul 6 23:24:26.383231 kubelet[3278]: I0706 23:24:26.382159 3278 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bd6b9\" (UniqueName: \"kubernetes.io/projected/79787c53-f9ea-41cb-a146-26572ac15ac1-kube-api-access-bd6b9\") pod \"79787c53-f9ea-41cb-a146-26572ac15ac1\" (UID: \"79787c53-f9ea-41cb-a146-26572ac15ac1\") " Jul 6 23:24:26.383231 kubelet[3278]: I0706 23:24:26.382177 3278 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-hubble-tls\") pod \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\" (UID: 
\"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\") " Jul 6 23:24:26.383231 kubelet[3278]: I0706 23:24:26.382194 3278 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-cilium-config-path\") pod \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\" (UID: \"afa2f1e9-0bbc-4679-9540-5ff5a5f68490\") " Jul 6 23:24:26.383231 kubelet[3278]: I0706 23:24:26.382244 3278 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-cni-path\") on node \"ci-4230.2.1-a-cc9ddc1e95\" DevicePath \"\"" Jul 6 23:24:26.383231 kubelet[3278]: I0706 23:24:26.382254 3278 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-etc-cni-netd\") on node \"ci-4230.2.1-a-cc9ddc1e95\" DevicePath \"\"" Jul 6 23:24:26.383231 kubelet[3278]: I0706 23:24:26.382267 3278 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-bpf-maps\") on node \"ci-4230.2.1-a-cc9ddc1e95\" DevicePath \"\"" Jul 6 23:24:26.383371 kubelet[3278]: I0706 23:24:26.382275 3278 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-lib-modules\") on node \"ci-4230.2.1-a-cc9ddc1e95\" DevicePath \"\"" Jul 6 23:24:26.383755 kubelet[3278]: I0706 23:24:26.383708 3278 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "afa2f1e9-0bbc-4679-9540-5ff5a5f68490" (UID: "afa2f1e9-0bbc-4679-9540-5ff5a5f68490"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:24:26.384250 kubelet[3278]: I0706 23:24:26.384205 3278 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "afa2f1e9-0bbc-4679-9540-5ff5a5f68490" (UID: "afa2f1e9-0bbc-4679-9540-5ff5a5f68490"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:24:26.384355 kubelet[3278]: I0706 23:24:26.384341 3278 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "afa2f1e9-0bbc-4679-9540-5ff5a5f68490" (UID: "afa2f1e9-0bbc-4679-9540-5ff5a5f68490"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:24:26.384425 kubelet[3278]: I0706 23:24:26.384411 3278 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "afa2f1e9-0bbc-4679-9540-5ff5a5f68490" (UID: "afa2f1e9-0bbc-4679-9540-5ff5a5f68490"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:24:26.384779 kubelet[3278]: I0706 23:24:26.384483 3278 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "afa2f1e9-0bbc-4679-9540-5ff5a5f68490" (UID: "afa2f1e9-0bbc-4679-9540-5ff5a5f68490"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:24:26.384779 kubelet[3278]: I0706 23:24:26.384505 3278 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-hostproc" (OuterVolumeSpecName: "hostproc") pod "afa2f1e9-0bbc-4679-9540-5ff5a5f68490" (UID: "afa2f1e9-0bbc-4679-9540-5ff5a5f68490"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:24:26.384970 kubelet[3278]: I0706 23:24:26.384948 3278 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "afa2f1e9-0bbc-4679-9540-5ff5a5f68490" (UID: "afa2f1e9-0bbc-4679-9540-5ff5a5f68490"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 6 23:24:26.386177 kubelet[3278]: I0706 23:24:26.386149 3278 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79787c53-f9ea-41cb-a146-26572ac15ac1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "79787c53-f9ea-41cb-a146-26572ac15ac1" (UID: "79787c53-f9ea-41cb-a146-26572ac15ac1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:24:26.387199 kubelet[3278]: I0706 23:24:26.387151 3278 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-kube-api-access-xsf4s" (OuterVolumeSpecName: "kube-api-access-xsf4s") pod "afa2f1e9-0bbc-4679-9540-5ff5a5f68490" (UID: "afa2f1e9-0bbc-4679-9540-5ff5a5f68490"). InnerVolumeSpecName "kube-api-access-xsf4s". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:24:26.387625 kubelet[3278]: I0706 23:24:26.387599 3278 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "afa2f1e9-0bbc-4679-9540-5ff5a5f68490" (UID: "afa2f1e9-0bbc-4679-9540-5ff5a5f68490"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:24:26.388965 kubelet[3278]: I0706 23:24:26.388929 3278 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79787c53-f9ea-41cb-a146-26572ac15ac1-kube-api-access-bd6b9" (OuterVolumeSpecName: "kube-api-access-bd6b9") pod "79787c53-f9ea-41cb-a146-26572ac15ac1" (UID: "79787c53-f9ea-41cb-a146-26572ac15ac1"). InnerVolumeSpecName "kube-api-access-bd6b9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:24:26.389637 kubelet[3278]: I0706 23:24:26.389606 3278 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "afa2f1e9-0bbc-4679-9540-5ff5a5f68490" (UID: "afa2f1e9-0bbc-4679-9540-5ff5a5f68490"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:24:26.482622 kubelet[3278]: I0706 23:24:26.482579 3278 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-hubble-tls\") on node \"ci-4230.2.1-a-cc9ddc1e95\" DevicePath \"\"" Jul 6 23:24:26.482797 kubelet[3278]: I0706 23:24:26.482641 3278 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-cilium-config-path\") on node \"ci-4230.2.1-a-cc9ddc1e95\" DevicePath \"\"" Jul 6 23:24:26.482797 kubelet[3278]: I0706 23:24:26.482652 3278 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-clustermesh-secrets\") on node \"ci-4230.2.1-a-cc9ddc1e95\" DevicePath \"\"" Jul 6 23:24:26.482797 kubelet[3278]: I0706 23:24:26.482661 3278 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xsf4s\" (UniqueName: \"kubernetes.io/projected/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-kube-api-access-xsf4s\") on node \"ci-4230.2.1-a-cc9ddc1e95\" DevicePath \"\"" Jul 6 23:24:26.482797 kubelet[3278]: I0706 23:24:26.482670 3278 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-cilium-run\") on node \"ci-4230.2.1-a-cc9ddc1e95\" DevicePath \"\"" Jul 6 23:24:26.482797 kubelet[3278]: I0706 23:24:26.482679 3278 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/79787c53-f9ea-41cb-a146-26572ac15ac1-cilium-config-path\") on node \"ci-4230.2.1-a-cc9ddc1e95\" DevicePath \"\"" Jul 6 23:24:26.482797 kubelet[3278]: I0706 23:24:26.482687 3278 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-host-proc-sys-net\") on node \"ci-4230.2.1-a-cc9ddc1e95\" DevicePath \"\"" Jul 6 23:24:26.482797 kubelet[3278]: I0706 23:24:26.482698 3278 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-cilium-cgroup\") on node \"ci-4230.2.1-a-cc9ddc1e95\" DevicePath \"\"" Jul 6 23:24:26.482797 kubelet[3278]: I0706 23:24:26.482706 3278 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-host-proc-sys-kernel\") on node \"ci-4230.2.1-a-cc9ddc1e95\" DevicePath \"\"" Jul 6 23:24:26.482977 kubelet[3278]: I0706 23:24:26.482714 3278 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-xtables-lock\") on node \"ci-4230.2.1-a-cc9ddc1e95\" DevicePath \"\"" Jul 6 23:24:26.482977 kubelet[3278]: I0706 23:24:26.482722 3278 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/afa2f1e9-0bbc-4679-9540-5ff5a5f68490-hostproc\") on node \"ci-4230.2.1-a-cc9ddc1e95\" DevicePath \"\"" Jul 6 23:24:26.482977 kubelet[3278]: I0706 23:24:26.482730 3278 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bd6b9\" (UniqueName: \"kubernetes.io/projected/79787c53-f9ea-41cb-a146-26572ac15ac1-kube-api-access-bd6b9\") on node \"ci-4230.2.1-a-cc9ddc1e95\" DevicePath \"\"" Jul 6 23:24:26.581639 kubelet[3278]: I0706 23:24:26.581099 3278 scope.go:117] "RemoveContainer" containerID="8c76f8f5aa09b720e0f60042eee21cd09325b382c0f5a88a5942c01920c8b48a" Jul 6 23:24:26.585308 systemd[1]: Removed slice kubepods-besteffort-pod79787c53_f9ea_41cb_a146_26572ac15ac1.slice - libcontainer container kubepods-besteffort-pod79787c53_f9ea_41cb_a146_26572ac15ac1.slice. 
Jul 6 23:24:26.586112 containerd[1744]: time="2025-07-06T23:24:26.585234372Z" level=info msg="RemoveContainer for \"8c76f8f5aa09b720e0f60042eee21cd09325b382c0f5a88a5942c01920c8b48a\"" Jul 6 23:24:26.596679 systemd[1]: Removed slice kubepods-burstable-podafa2f1e9_0bbc_4679_9540_5ff5a5f68490.slice - libcontainer container kubepods-burstable-podafa2f1e9_0bbc_4679_9540_5ff5a5f68490.slice. Jul 6 23:24:26.597106 systemd[1]: kubepods-burstable-podafa2f1e9_0bbc_4679_9540_5ff5a5f68490.slice: Consumed 6.687s CPU time, 124.4M memory peak, 136K read from disk, 12.9M written to disk. Jul 6 23:24:26.608099 containerd[1744]: time="2025-07-06T23:24:26.608041663Z" level=info msg="RemoveContainer for \"8c76f8f5aa09b720e0f60042eee21cd09325b382c0f5a88a5942c01920c8b48a\" returns successfully" Jul 6 23:24:26.608640 kubelet[3278]: I0706 23:24:26.608513 3278 scope.go:117] "RemoveContainer" containerID="8c76f8f5aa09b720e0f60042eee21cd09325b382c0f5a88a5942c01920c8b48a" Jul 6 23:24:26.609644 containerd[1744]: time="2025-07-06T23:24:26.609539701Z" level=error msg="ContainerStatus for \"8c76f8f5aa09b720e0f60042eee21cd09325b382c0f5a88a5942c01920c8b48a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8c76f8f5aa09b720e0f60042eee21cd09325b382c0f5a88a5942c01920c8b48a\": not found" Jul 6 23:24:26.610188 kubelet[3278]: E0706 23:24:26.609782 3278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8c76f8f5aa09b720e0f60042eee21cd09325b382c0f5a88a5942c01920c8b48a\": not found" containerID="8c76f8f5aa09b720e0f60042eee21cd09325b382c0f5a88a5942c01920c8b48a" Jul 6 23:24:26.610188 kubelet[3278]: I0706 23:24:26.609823 3278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8c76f8f5aa09b720e0f60042eee21cd09325b382c0f5a88a5942c01920c8b48a"} err="failed to get container status 
\"8c76f8f5aa09b720e0f60042eee21cd09325b382c0f5a88a5942c01920c8b48a\": rpc error: code = NotFound desc = an error occurred when try to find container \"8c76f8f5aa09b720e0f60042eee21cd09325b382c0f5a88a5942c01920c8b48a\": not found" Jul 6 23:24:26.610188 kubelet[3278]: I0706 23:24:26.609927 3278 scope.go:117] "RemoveContainer" containerID="f62d19b4cb110ccfaf68a7502dcd83ede0aaabbf3c00752bb9b7f04a2f4f160c" Jul 6 23:24:26.611436 containerd[1744]: time="2025-07-06T23:24:26.611390058Z" level=info msg="RemoveContainer for \"f62d19b4cb110ccfaf68a7502dcd83ede0aaabbf3c00752bb9b7f04a2f4f160c\"" Jul 6 23:24:26.622204 containerd[1744]: time="2025-07-06T23:24:26.622146925Z" level=info msg="RemoveContainer for \"f62d19b4cb110ccfaf68a7502dcd83ede0aaabbf3c00752bb9b7f04a2f4f160c\" returns successfully" Jul 6 23:24:26.622500 kubelet[3278]: I0706 23:24:26.622405 3278 scope.go:117] "RemoveContainer" containerID="331698af999b88bbccca664f5f2ebd3d579dd58d7b82d60f831c63d89d0270a4" Jul 6 23:24:26.624194 containerd[1744]: time="2025-07-06T23:24:26.623993962Z" level=info msg="RemoveContainer for \"331698af999b88bbccca664f5f2ebd3d579dd58d7b82d60f831c63d89d0270a4\"" Jul 6 23:24:26.639916 containerd[1744]: time="2025-07-06T23:24:26.638623464Z" level=info msg="RemoveContainer for \"331698af999b88bbccca664f5f2ebd3d579dd58d7b82d60f831c63d89d0270a4\" returns successfully" Jul 6 23:24:26.640071 kubelet[3278]: I0706 23:24:26.638943 3278 scope.go:117] "RemoveContainer" containerID="bd909ff70f733398ecdf2b44d104a9c9e642abdc7a1b5a4ef31c6ed20fe5447b" Jul 6 23:24:26.640558 containerd[1744]: time="2025-07-06T23:24:26.640419982Z" level=info msg="RemoveContainer for \"bd909ff70f733398ecdf2b44d104a9c9e642abdc7a1b5a4ef31c6ed20fe5447b\"" Jul 6 23:24:26.651659 containerd[1744]: time="2025-07-06T23:24:26.651602807Z" level=info msg="RemoveContainer for \"bd909ff70f733398ecdf2b44d104a9c9e642abdc7a1b5a4ef31c6ed20fe5447b\" returns successfully" Jul 6 23:24:26.651926 kubelet[3278]: I0706 23:24:26.651891 3278 scope.go:117] 
"RemoveContainer" containerID="3162478a7a2a51b47900579a49602e2a1836cedd0fa65def8f8de5a46286462d" Jul 6 23:24:26.653327 containerd[1744]: time="2025-07-06T23:24:26.653290485Z" level=info msg="RemoveContainer for \"3162478a7a2a51b47900579a49602e2a1836cedd0fa65def8f8de5a46286462d\"" Jul 6 23:24:26.662794 containerd[1744]: time="2025-07-06T23:24:26.662721233Z" level=info msg="RemoveContainer for \"3162478a7a2a51b47900579a49602e2a1836cedd0fa65def8f8de5a46286462d\" returns successfully" Jul 6 23:24:26.663127 kubelet[3278]: I0706 23:24:26.663092 3278 scope.go:117] "RemoveContainer" containerID="9ec5716bd91de2cd17417f832a0fa06536ceb9e3e76d044cf7f0dbefedfb7bb6" Jul 6 23:24:26.664353 containerd[1744]: time="2025-07-06T23:24:26.664318311Z" level=info msg="RemoveContainer for \"9ec5716bd91de2cd17417f832a0fa06536ceb9e3e76d044cf7f0dbefedfb7bb6\"" Jul 6 23:24:26.673228 containerd[1744]: time="2025-07-06T23:24:26.673182220Z" level=info msg="RemoveContainer for \"9ec5716bd91de2cd17417f832a0fa06536ceb9e3e76d044cf7f0dbefedfb7bb6\" returns successfully" Jul 6 23:24:26.673574 kubelet[3278]: I0706 23:24:26.673432 3278 scope.go:117] "RemoveContainer" containerID="f62d19b4cb110ccfaf68a7502dcd83ede0aaabbf3c00752bb9b7f04a2f4f160c" Jul 6 23:24:26.673783 containerd[1744]: time="2025-07-06T23:24:26.673711579Z" level=error msg="ContainerStatus for \"f62d19b4cb110ccfaf68a7502dcd83ede0aaabbf3c00752bb9b7f04a2f4f160c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f62d19b4cb110ccfaf68a7502dcd83ede0aaabbf3c00752bb9b7f04a2f4f160c\": not found" Jul 6 23:24:26.673924 kubelet[3278]: E0706 23:24:26.673897 3278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f62d19b4cb110ccfaf68a7502dcd83ede0aaabbf3c00752bb9b7f04a2f4f160c\": not found" containerID="f62d19b4cb110ccfaf68a7502dcd83ede0aaabbf3c00752bb9b7f04a2f4f160c" Jul 6 23:24:26.673977 kubelet[3278]: I0706 
23:24:26.673928 3278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f62d19b4cb110ccfaf68a7502dcd83ede0aaabbf3c00752bb9b7f04a2f4f160c"} err="failed to get container status \"f62d19b4cb110ccfaf68a7502dcd83ede0aaabbf3c00752bb9b7f04a2f4f160c\": rpc error: code = NotFound desc = an error occurred when try to find container \"f62d19b4cb110ccfaf68a7502dcd83ede0aaabbf3c00752bb9b7f04a2f4f160c\": not found" Jul 6 23:24:26.673977 kubelet[3278]: I0706 23:24:26.673949 3278 scope.go:117] "RemoveContainer" containerID="331698af999b88bbccca664f5f2ebd3d579dd58d7b82d60f831c63d89d0270a4" Jul 6 23:24:26.674236 containerd[1744]: time="2025-07-06T23:24:26.674133459Z" level=error msg="ContainerStatus for \"331698af999b88bbccca664f5f2ebd3d579dd58d7b82d60f831c63d89d0270a4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"331698af999b88bbccca664f5f2ebd3d579dd58d7b82d60f831c63d89d0270a4\": not found" Jul 6 23:24:26.674484 kubelet[3278]: E0706 23:24:26.674362 3278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"331698af999b88bbccca664f5f2ebd3d579dd58d7b82d60f831c63d89d0270a4\": not found" containerID="331698af999b88bbccca664f5f2ebd3d579dd58d7b82d60f831c63d89d0270a4" Jul 6 23:24:26.674484 kubelet[3278]: I0706 23:24:26.674392 3278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"331698af999b88bbccca664f5f2ebd3d579dd58d7b82d60f831c63d89d0270a4"} err="failed to get container status \"331698af999b88bbccca664f5f2ebd3d579dd58d7b82d60f831c63d89d0270a4\": rpc error: code = NotFound desc = an error occurred when try to find container \"331698af999b88bbccca664f5f2ebd3d579dd58d7b82d60f831c63d89d0270a4\": not found" Jul 6 23:24:26.674484 kubelet[3278]: I0706 23:24:26.674408 3278 scope.go:117] "RemoveContainer" 
containerID="bd909ff70f733398ecdf2b44d104a9c9e642abdc7a1b5a4ef31c6ed20fe5447b" Jul 6 23:24:26.674731 containerd[1744]: time="2025-07-06T23:24:26.674689778Z" level=error msg="ContainerStatus for \"bd909ff70f733398ecdf2b44d104a9c9e642abdc7a1b5a4ef31c6ed20fe5447b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bd909ff70f733398ecdf2b44d104a9c9e642abdc7a1b5a4ef31c6ed20fe5447b\": not found" Jul 6 23:24:26.674922 kubelet[3278]: E0706 23:24:26.674889 3278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bd909ff70f733398ecdf2b44d104a9c9e642abdc7a1b5a4ef31c6ed20fe5447b\": not found" containerID="bd909ff70f733398ecdf2b44d104a9c9e642abdc7a1b5a4ef31c6ed20fe5447b" Jul 6 23:24:26.674960 kubelet[3278]: I0706 23:24:26.674925 3278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bd909ff70f733398ecdf2b44d104a9c9e642abdc7a1b5a4ef31c6ed20fe5447b"} err="failed to get container status \"bd909ff70f733398ecdf2b44d104a9c9e642abdc7a1b5a4ef31c6ed20fe5447b\": rpc error: code = NotFound desc = an error occurred when try to find container \"bd909ff70f733398ecdf2b44d104a9c9e642abdc7a1b5a4ef31c6ed20fe5447b\": not found" Jul 6 23:24:26.674985 kubelet[3278]: I0706 23:24:26.674943 3278 scope.go:117] "RemoveContainer" containerID="3162478a7a2a51b47900579a49602e2a1836cedd0fa65def8f8de5a46286462d" Jul 6 23:24:26.675333 containerd[1744]: time="2025-07-06T23:24:26.675291137Z" level=error msg="ContainerStatus for \"3162478a7a2a51b47900579a49602e2a1836cedd0fa65def8f8de5a46286462d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3162478a7a2a51b47900579a49602e2a1836cedd0fa65def8f8de5a46286462d\": not found" Jul 6 23:24:26.675480 kubelet[3278]: E0706 23:24:26.675437 3278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"3162478a7a2a51b47900579a49602e2a1836cedd0fa65def8f8de5a46286462d\": not found" containerID="3162478a7a2a51b47900579a49602e2a1836cedd0fa65def8f8de5a46286462d" Jul 6 23:24:26.675523 kubelet[3278]: I0706 23:24:26.675475 3278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3162478a7a2a51b47900579a49602e2a1836cedd0fa65def8f8de5a46286462d"} err="failed to get container status \"3162478a7a2a51b47900579a49602e2a1836cedd0fa65def8f8de5a46286462d\": rpc error: code = NotFound desc = an error occurred when try to find container \"3162478a7a2a51b47900579a49602e2a1836cedd0fa65def8f8de5a46286462d\": not found" Jul 6 23:24:26.675523 kubelet[3278]: I0706 23:24:26.675495 3278 scope.go:117] "RemoveContainer" containerID="9ec5716bd91de2cd17417f832a0fa06536ceb9e3e76d044cf7f0dbefedfb7bb6" Jul 6 23:24:26.675724 containerd[1744]: time="2025-07-06T23:24:26.675686857Z" level=error msg="ContainerStatus for \"9ec5716bd91de2cd17417f832a0fa06536ceb9e3e76d044cf7f0dbefedfb7bb6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9ec5716bd91de2cd17417f832a0fa06536ceb9e3e76d044cf7f0dbefedfb7bb6\": not found" Jul 6 23:24:26.675954 kubelet[3278]: E0706 23:24:26.675928 3278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9ec5716bd91de2cd17417f832a0fa06536ceb9e3e76d044cf7f0dbefedfb7bb6\": not found" containerID="9ec5716bd91de2cd17417f832a0fa06536ceb9e3e76d044cf7f0dbefedfb7bb6" Jul 6 23:24:26.675997 kubelet[3278]: I0706 23:24:26.675957 3278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9ec5716bd91de2cd17417f832a0fa06536ceb9e3e76d044cf7f0dbefedfb7bb6"} err="failed to get container status \"9ec5716bd91de2cd17417f832a0fa06536ceb9e3e76d044cf7f0dbefedfb7bb6\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"9ec5716bd91de2cd17417f832a0fa06536ceb9e3e76d044cf7f0dbefedfb7bb6\": not found" Jul 6 23:24:27.061163 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-409967e382bb9d519c4f75cb8891c4e9b5995d0150957906a4b89eac438a405c-rootfs.mount: Deactivated successfully. Jul 6 23:24:27.061287 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e27a40f159914c1a19dc26ab463a0a9d24b2bc8b0dd9744476e9a66377438b55-rootfs.mount: Deactivated successfully. Jul 6 23:24:27.061348 systemd[1]: var-lib-kubelet-pods-79787c53\x2df9ea\x2d41cb\x2da146\x2d26572ac15ac1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbd6b9.mount: Deactivated successfully. Jul 6 23:24:27.061402 systemd[1]: var-lib-kubelet-pods-afa2f1e9\x2d0bbc\x2d4679\x2d9540\x2d5ff5a5f68490-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxsf4s.mount: Deactivated successfully. Jul 6 23:24:27.061452 systemd[1]: var-lib-kubelet-pods-afa2f1e9\x2d0bbc\x2d4679\x2d9540\x2d5ff5a5f68490-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 6 23:24:27.061505 systemd[1]: var-lib-kubelet-pods-afa2f1e9\x2d0bbc\x2d4679\x2d9540\x2d5ff5a5f68490-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 6 23:24:27.331701 kubelet[3278]: E0706 23:24:27.331329 3278 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 6 23:24:28.071360 sshd[4867]: Connection closed by 10.200.16.10 port 43708 Jul 6 23:24:28.071246 sshd-session[4865]: pam_unix(sshd:session): session closed for user core Jul 6 23:24:28.075211 systemd-logind[1707]: Session 23 logged out. Waiting for processes to exit. Jul 6 23:24:28.076634 systemd[1]: sshd@20-10.200.20.19:22-10.200.16.10:43708.service: Deactivated successfully. Jul 6 23:24:28.079027 systemd[1]: session-23.scope: Deactivated successfully. 
Jul 6 23:24:28.079243 systemd[1]: session-23.scope: Consumed 1.601s CPU time, 23.5M memory peak. Jul 6 23:24:28.080437 systemd-logind[1707]: Removed session 23. Jul 6 23:24:28.168060 systemd[1]: Started sshd@21-10.200.20.19:22-10.200.16.10:43710.service - OpenSSH per-connection server daemon (10.200.16.10:43710). Jul 6 23:24:28.231450 kubelet[3278]: I0706 23:24:28.231403 3278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79787c53-f9ea-41cb-a146-26572ac15ac1" path="/var/lib/kubelet/pods/79787c53-f9ea-41cb-a146-26572ac15ac1/volumes" Jul 6 23:24:28.231916 kubelet[3278]: I0706 23:24:28.231844 3278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afa2f1e9-0bbc-4679-9540-5ff5a5f68490" path="/var/lib/kubelet/pods/afa2f1e9-0bbc-4679-9540-5ff5a5f68490/volumes" Jul 6 23:24:28.646827 sshd[5028]: Accepted publickey for core from 10.200.16.10 port 43710 ssh2: RSA SHA256:03hkvBavc73fCnbrqThnCFPODKWBbuy7ZtqQR+/MThM Jul 6 23:24:28.648338 sshd-session[5028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:24:28.653004 systemd-logind[1707]: New session 24 of user core. Jul 6 23:24:28.659921 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 6 23:24:30.096709 systemd[1]: Created slice kubepods-burstable-pod18c3e0c0_f3e4_432e_bee9_b2dab461725d.slice - libcontainer container kubepods-burstable-pod18c3e0c0_f3e4_432e_bee9_b2dab461725d.slice. Jul 6 23:24:30.157143 sshd[5030]: Connection closed by 10.200.16.10 port 43710 Jul 6 23:24:30.157541 sshd-session[5028]: pam_unix(sshd:session): session closed for user core Jul 6 23:24:30.163094 systemd[1]: sshd@21-10.200.20.19:22-10.200.16.10:43710.service: Deactivated successfully. Jul 6 23:24:30.165991 systemd[1]: session-24.scope: Deactivated successfully. Jul 6 23:24:30.166374 systemd[1]: session-24.scope: Consumed 1.074s CPU time, 23.7M memory peak. Jul 6 23:24:30.167256 systemd-logind[1707]: Session 24 logged out. Waiting for processes to exit. 
Jul 6 23:24:30.169130 systemd-logind[1707]: Removed session 24. Jul 6 23:24:30.201678 kubelet[3278]: I0706 23:24:30.201290 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/18c3e0c0-f3e4-432e-bee9-b2dab461725d-cilium-config-path\") pod \"cilium-b8g59\" (UID: \"18c3e0c0-f3e4-432e-bee9-b2dab461725d\") " pod="kube-system/cilium-b8g59" Jul 6 23:24:30.201678 kubelet[3278]: I0706 23:24:30.201337 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/18c3e0c0-f3e4-432e-bee9-b2dab461725d-cilium-ipsec-secrets\") pod \"cilium-b8g59\" (UID: \"18c3e0c0-f3e4-432e-bee9-b2dab461725d\") " pod="kube-system/cilium-b8g59" Jul 6 23:24:30.201678 kubelet[3278]: I0706 23:24:30.201355 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g46vb\" (UniqueName: \"kubernetes.io/projected/18c3e0c0-f3e4-432e-bee9-b2dab461725d-kube-api-access-g46vb\") pod \"cilium-b8g59\" (UID: \"18c3e0c0-f3e4-432e-bee9-b2dab461725d\") " pod="kube-system/cilium-b8g59" Jul 6 23:24:30.201678 kubelet[3278]: I0706 23:24:30.201379 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/18c3e0c0-f3e4-432e-bee9-b2dab461725d-bpf-maps\") pod \"cilium-b8g59\" (UID: \"18c3e0c0-f3e4-432e-bee9-b2dab461725d\") " pod="kube-system/cilium-b8g59" Jul 6 23:24:30.201678 kubelet[3278]: I0706 23:24:30.201394 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/18c3e0c0-f3e4-432e-bee9-b2dab461725d-host-proc-sys-kernel\") pod \"cilium-b8g59\" (UID: \"18c3e0c0-f3e4-432e-bee9-b2dab461725d\") " pod="kube-system/cilium-b8g59" Jul 6 23:24:30.202172 
kubelet[3278]: I0706 23:24:30.201410 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/18c3e0c0-f3e4-432e-bee9-b2dab461725d-host-proc-sys-net\") pod \"cilium-b8g59\" (UID: \"18c3e0c0-f3e4-432e-bee9-b2dab461725d\") " pod="kube-system/cilium-b8g59" Jul 6 23:24:30.202172 kubelet[3278]: I0706 23:24:30.201427 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/18c3e0c0-f3e4-432e-bee9-b2dab461725d-cilium-run\") pod \"cilium-b8g59\" (UID: \"18c3e0c0-f3e4-432e-bee9-b2dab461725d\") " pod="kube-system/cilium-b8g59" Jul 6 23:24:30.202172 kubelet[3278]: I0706 23:24:30.201441 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/18c3e0c0-f3e4-432e-bee9-b2dab461725d-cilium-cgroup\") pod \"cilium-b8g59\" (UID: \"18c3e0c0-f3e4-432e-bee9-b2dab461725d\") " pod="kube-system/cilium-b8g59" Jul 6 23:24:30.202172 kubelet[3278]: I0706 23:24:30.201454 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18c3e0c0-f3e4-432e-bee9-b2dab461725d-lib-modules\") pod \"cilium-b8g59\" (UID: \"18c3e0c0-f3e4-432e-bee9-b2dab461725d\") " pod="kube-system/cilium-b8g59" Jul 6 23:24:30.202172 kubelet[3278]: I0706 23:24:30.201472 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/18c3e0c0-f3e4-432e-bee9-b2dab461725d-hubble-tls\") pod \"cilium-b8g59\" (UID: \"18c3e0c0-f3e4-432e-bee9-b2dab461725d\") " pod="kube-system/cilium-b8g59" Jul 6 23:24:30.202172 kubelet[3278]: I0706 23:24:30.201521 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18c3e0c0-f3e4-432e-bee9-b2dab461725d-xtables-lock\") pod \"cilium-b8g59\" (UID: \"18c3e0c0-f3e4-432e-bee9-b2dab461725d\") " pod="kube-system/cilium-b8g59" Jul 6 23:24:30.202298 kubelet[3278]: I0706 23:24:30.201537 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/18c3e0c0-f3e4-432e-bee9-b2dab461725d-clustermesh-secrets\") pod \"cilium-b8g59\" (UID: \"18c3e0c0-f3e4-432e-bee9-b2dab461725d\") " pod="kube-system/cilium-b8g59" Jul 6 23:24:30.202298 kubelet[3278]: I0706 23:24:30.201552 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/18c3e0c0-f3e4-432e-bee9-b2dab461725d-hostproc\") pod \"cilium-b8g59\" (UID: \"18c3e0c0-f3e4-432e-bee9-b2dab461725d\") " pod="kube-system/cilium-b8g59" Jul 6 23:24:30.202298 kubelet[3278]: I0706 23:24:30.201566 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/18c3e0c0-f3e4-432e-bee9-b2dab461725d-cni-path\") pod \"cilium-b8g59\" (UID: \"18c3e0c0-f3e4-432e-bee9-b2dab461725d\") " pod="kube-system/cilium-b8g59" Jul 6 23:24:30.202298 kubelet[3278]: I0706 23:24:30.201579 3278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/18c3e0c0-f3e4-432e-bee9-b2dab461725d-etc-cni-netd\") pod \"cilium-b8g59\" (UID: \"18c3e0c0-f3e4-432e-bee9-b2dab461725d\") " pod="kube-system/cilium-b8g59" Jul 6 23:24:30.248028 systemd[1]: Started sshd@22-10.200.20.19:22-10.200.16.10:36442.service - OpenSSH per-connection server daemon (10.200.16.10:36442). 
Jul 6 23:24:30.403968 containerd[1744]: time="2025-07-06T23:24:30.403508316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b8g59,Uid:18c3e0c0-f3e4-432e-bee9-b2dab461725d,Namespace:kube-system,Attempt:0,}" Jul 6 23:24:30.448764 containerd[1744]: time="2025-07-06T23:24:30.448498899Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:24:30.448764 containerd[1744]: time="2025-07-06T23:24:30.448559419Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:24:30.448764 containerd[1744]: time="2025-07-06T23:24:30.448570499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:24:30.448764 containerd[1744]: time="2025-07-06T23:24:30.448657579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:24:30.469939 systemd[1]: Started cri-containerd-d4b97f492017493c8f93ff51bd74b870b3138a4b1bc63e86daa6cf845a96e597.scope - libcontainer container d4b97f492017493c8f93ff51bd74b870b3138a4b1bc63e86daa6cf845a96e597. 
Jul 6 23:24:30.491145 containerd[1744]: time="2025-07-06T23:24:30.491087285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b8g59,Uid:18c3e0c0-f3e4-432e-bee9-b2dab461725d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4b97f492017493c8f93ff51bd74b870b3138a4b1bc63e86daa6cf845a96e597\"" Jul 6 23:24:30.502108 containerd[1744]: time="2025-07-06T23:24:30.502051471Z" level=info msg="CreateContainer within sandbox \"d4b97f492017493c8f93ff51bd74b870b3138a4b1bc63e86daa6cf845a96e597\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 6 23:24:30.543412 containerd[1744]: time="2025-07-06T23:24:30.543309378Z" level=info msg="CreateContainer within sandbox \"d4b97f492017493c8f93ff51bd74b870b3138a4b1bc63e86daa6cf845a96e597\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"18afeb205a488f5f551b72099f2cd8dc48f5fd541d1f1411c30b8e8f11c81108\"" Jul 6 23:24:30.544768 containerd[1744]: time="2025-07-06T23:24:30.543935417Z" level=info msg="StartContainer for \"18afeb205a488f5f551b72099f2cd8dc48f5fd541d1f1411c30b8e8f11c81108\"" Jul 6 23:24:30.571976 systemd[1]: Started cri-containerd-18afeb205a488f5f551b72099f2cd8dc48f5fd541d1f1411c30b8e8f11c81108.scope - libcontainer container 18afeb205a488f5f551b72099f2cd8dc48f5fd541d1f1411c30b8e8f11c81108. Jul 6 23:24:30.603747 containerd[1744]: time="2025-07-06T23:24:30.602760743Z" level=info msg="StartContainer for \"18afeb205a488f5f551b72099f2cd8dc48f5fd541d1f1411c30b8e8f11c81108\" returns successfully" Jul 6 23:24:30.611671 systemd[1]: cri-containerd-18afeb205a488f5f551b72099f2cd8dc48f5fd541d1f1411c30b8e8f11c81108.scope: Deactivated successfully. 
Jul 6 23:24:30.681213 containerd[1744]: time="2025-07-06T23:24:30.681075923Z" level=info msg="shim disconnected" id=18afeb205a488f5f551b72099f2cd8dc48f5fd541d1f1411c30b8e8f11c81108 namespace=k8s.io
Jul 6 23:24:30.681474 containerd[1744]: time="2025-07-06T23:24:30.681427203Z" level=warning msg="cleaning up after shim disconnected" id=18afeb205a488f5f551b72099f2cd8dc48f5fd541d1f1411c30b8e8f11c81108 namespace=k8s.io
Jul 6 23:24:30.681474 containerd[1744]: time="2025-07-06T23:24:30.681444123Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:24:30.726110 sshd[5042]: Accepted publickey for core from 10.200.16.10 port 36442 ssh2: RSA SHA256:03hkvBavc73fCnbrqThnCFPODKWBbuy7ZtqQR+/MThM
Jul 6 23:24:30.727086 sshd-session[5042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:24:30.731553 systemd-logind[1707]: New session 25 of user core.
Jul 6 23:24:30.740914 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 6 23:24:31.062810 sshd[5150]: Connection closed by 10.200.16.10 port 36442
Jul 6 23:24:31.062697 sshd-session[5042]: pam_unix(sshd:session): session closed for user core
Jul 6 23:24:31.066161 systemd[1]: sshd@22-10.200.20.19:22-10.200.16.10:36442.service: Deactivated successfully.
Jul 6 23:24:31.067998 systemd[1]: session-25.scope: Deactivated successfully.
Jul 6 23:24:31.068871 systemd-logind[1707]: Session 25 logged out. Waiting for processes to exit.
Jul 6 23:24:31.070135 systemd-logind[1707]: Removed session 25.
Jul 6 23:24:31.161102 systemd[1]: Started sshd@23-10.200.20.19:22-10.200.16.10:36456.service - OpenSSH per-connection server daemon (10.200.16.10:36456).
Jul 6 23:24:31.619094 containerd[1744]: time="2025-07-06T23:24:31.618942610Z" level=info msg="CreateContainer within sandbox \"d4b97f492017493c8f93ff51bd74b870b3138a4b1bc63e86daa6cf845a96e597\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 6 23:24:31.640612 sshd[5157]: Accepted publickey for core from 10.200.16.10 port 36456 ssh2: RSA SHA256:03hkvBavc73fCnbrqThnCFPODKWBbuy7ZtqQR+/MThM
Jul 6 23:24:31.642002 sshd-session[5157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:24:31.647681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1479108950.mount: Deactivated successfully.
Jul 6 23:24:31.653451 systemd-logind[1707]: New session 26 of user core.
Jul 6 23:24:31.658979 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 6 23:24:31.661677 containerd[1744]: time="2025-07-06T23:24:31.661492556Z" level=info msg="CreateContainer within sandbox \"d4b97f492017493c8f93ff51bd74b870b3138a4b1bc63e86daa6cf845a96e597\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0de7665c512b9328c97484eb8261cc8134f0980074e84bc087cc3b650febd81d\""
Jul 6 23:24:31.663411 containerd[1744]: time="2025-07-06T23:24:31.662520995Z" level=info msg="StartContainer for \"0de7665c512b9328c97484eb8261cc8134f0980074e84bc087cc3b650febd81d\""
Jul 6 23:24:31.696961 systemd[1]: Started cri-containerd-0de7665c512b9328c97484eb8261cc8134f0980074e84bc087cc3b650febd81d.scope - libcontainer container 0de7665c512b9328c97484eb8261cc8134f0980074e84bc087cc3b650febd81d.
Jul 6 23:24:31.733050 systemd[1]: cri-containerd-0de7665c512b9328c97484eb8261cc8134f0980074e84bc087cc3b650febd81d.scope: Deactivated successfully.
Jul 6 23:24:31.734076 containerd[1744]: time="2025-07-06T23:24:31.734037704Z" level=info msg="StartContainer for \"0de7665c512b9328c97484eb8261cc8134f0980074e84bc087cc3b650febd81d\" returns successfully"
Jul 6 23:24:31.773557 containerd[1744]: time="2025-07-06T23:24:31.773461734Z" level=info msg="shim disconnected" id=0de7665c512b9328c97484eb8261cc8134f0980074e84bc087cc3b650febd81d namespace=k8s.io
Jul 6 23:24:31.773557 containerd[1744]: time="2025-07-06T23:24:31.773550894Z" level=warning msg="cleaning up after shim disconnected" id=0de7665c512b9328c97484eb8261cc8134f0980074e84bc087cc3b650febd81d namespace=k8s.io
Jul 6 23:24:31.773557 containerd[1744]: time="2025-07-06T23:24:31.773562534Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:24:32.306285 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0de7665c512b9328c97484eb8261cc8134f0980074e84bc087cc3b650febd81d-rootfs.mount: Deactivated successfully.
Jul 6 23:24:32.332542 kubelet[3278]: E0706 23:24:32.332460 3278 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 6 23:24:32.625212 containerd[1744]: time="2025-07-06T23:24:32.625089531Z" level=info msg="CreateContainer within sandbox \"d4b97f492017493c8f93ff51bd74b870b3138a4b1bc63e86daa6cf845a96e597\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 6 23:24:32.662664 containerd[1744]: time="2025-07-06T23:24:32.662611643Z" level=info msg="CreateContainer within sandbox \"d4b97f492017493c8f93ff51bd74b870b3138a4b1bc63e86daa6cf845a96e597\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4356356642159b9edc696a836805e746f3999d5bbf7dfbdf972b73076573874e\""
Jul 6 23:24:32.663498 containerd[1744]: time="2025-07-06T23:24:32.663465162Z" level=info msg="StartContainer for \"4356356642159b9edc696a836805e746f3999d5bbf7dfbdf972b73076573874e\""
Jul 6 23:24:32.696960 systemd[1]: Started cri-containerd-4356356642159b9edc696a836805e746f3999d5bbf7dfbdf972b73076573874e.scope - libcontainer container 4356356642159b9edc696a836805e746f3999d5bbf7dfbdf972b73076573874e.
Jul 6 23:24:32.730900 systemd[1]: cri-containerd-4356356642159b9edc696a836805e746f3999d5bbf7dfbdf972b73076573874e.scope: Deactivated successfully.
Jul 6 23:24:32.734326 containerd[1744]: time="2025-07-06T23:24:32.734245712Z" level=info msg="StartContainer for \"4356356642159b9edc696a836805e746f3999d5bbf7dfbdf972b73076573874e\" returns successfully"
Jul 6 23:24:32.770756 containerd[1744]: time="2025-07-06T23:24:32.770676866Z" level=info msg="shim disconnected" id=4356356642159b9edc696a836805e746f3999d5bbf7dfbdf972b73076573874e namespace=k8s.io
Jul 6 23:24:32.770756 containerd[1744]: time="2025-07-06T23:24:32.770747546Z" level=warning msg="cleaning up after shim disconnected" id=4356356642159b9edc696a836805e746f3999d5bbf7dfbdf972b73076573874e namespace=k8s.io
Jul 6 23:24:32.771050 containerd[1744]: time="2025-07-06T23:24:32.770800706Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:24:33.308568 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4356356642159b9edc696a836805e746f3999d5bbf7dfbdf972b73076573874e-rootfs.mount: Deactivated successfully.
Jul 6 23:24:33.631129 containerd[1744]: time="2025-07-06T23:24:33.631003052Z" level=info msg="CreateContainer within sandbox \"d4b97f492017493c8f93ff51bd74b870b3138a4b1bc63e86daa6cf845a96e597\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 6 23:24:33.662573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2434575516.mount: Deactivated successfully.
Jul 6 23:24:33.680516 containerd[1744]: time="2025-07-06T23:24:33.680451629Z" level=info msg="CreateContainer within sandbox \"d4b97f492017493c8f93ff51bd74b870b3138a4b1bc63e86daa6cf845a96e597\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"da521868f61c00572ebf7f28bab85fffbead7b61199370ca7d2fbf60a59edf2f\""
Jul 6 23:24:33.681394 containerd[1744]: time="2025-07-06T23:24:33.681139988Z" level=info msg="StartContainer for \"da521868f61c00572ebf7f28bab85fffbead7b61199370ca7d2fbf60a59edf2f\""
Jul 6 23:24:33.716978 systemd[1]: Started cri-containerd-da521868f61c00572ebf7f28bab85fffbead7b61199370ca7d2fbf60a59edf2f.scope - libcontainer container da521868f61c00572ebf7f28bab85fffbead7b61199370ca7d2fbf60a59edf2f.
Jul 6 23:24:33.748481 systemd[1]: cri-containerd-da521868f61c00572ebf7f28bab85fffbead7b61199370ca7d2fbf60a59edf2f.scope: Deactivated successfully.
Jul 6 23:24:33.756563 containerd[1744]: time="2025-07-06T23:24:33.756246733Z" level=info msg="StartContainer for \"da521868f61c00572ebf7f28bab85fffbead7b61199370ca7d2fbf60a59edf2f\" returns successfully"
Jul 6 23:24:33.793152 containerd[1744]: time="2025-07-06T23:24:33.793053926Z" level=info msg="shim disconnected" id=da521868f61c00572ebf7f28bab85fffbead7b61199370ca7d2fbf60a59edf2f namespace=k8s.io
Jul 6 23:24:33.793152 containerd[1744]: time="2025-07-06T23:24:33.793139286Z" level=warning msg="cleaning up after shim disconnected" id=da521868f61c00572ebf7f28bab85fffbead7b61199370ca7d2fbf60a59edf2f namespace=k8s.io
Jul 6 23:24:33.793152 containerd[1744]: time="2025-07-06T23:24:33.793149286Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:24:34.309414 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da521868f61c00572ebf7f28bab85fffbead7b61199370ca7d2fbf60a59edf2f-rootfs.mount: Deactivated successfully.
Jul 6 23:24:34.639714 containerd[1744]: time="2025-07-06T23:24:34.639596129Z" level=info msg="CreateContainer within sandbox \"d4b97f492017493c8f93ff51bd74b870b3138a4b1bc63e86daa6cf845a96e597\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 6 23:24:34.677472 containerd[1744]: time="2025-07-06T23:24:34.677416121Z" level=info msg="CreateContainer within sandbox \"d4b97f492017493c8f93ff51bd74b870b3138a4b1bc63e86daa6cf845a96e597\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a81cf2026218e675e2fb7e3bbfdf17d0cc388959f5011d3623ee825879b672e1\""
Jul 6 23:24:34.679502 containerd[1744]: time="2025-07-06T23:24:34.678094280Z" level=info msg="StartContainer for \"a81cf2026218e675e2fb7e3bbfdf17d0cc388959f5011d3623ee825879b672e1\""
Jul 6 23:24:34.712457 systemd[1]: Started cri-containerd-a81cf2026218e675e2fb7e3bbfdf17d0cc388959f5011d3623ee825879b672e1.scope - libcontainer container a81cf2026218e675e2fb7e3bbfdf17d0cc388959f5011d3623ee825879b672e1.
Jul 6 23:24:34.744554 containerd[1744]: time="2025-07-06T23:24:34.744478276Z" level=info msg="StartContainer for \"a81cf2026218e675e2fb7e3bbfdf17d0cc388959f5011d3623ee825879b672e1\" returns successfully"
Jul 6 23:24:35.138834 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 6 23:24:35.385659 kubelet[3278]: I0706 23:24:35.385603 3278 setters.go:618] "Node became not ready" node="ci-4230.2.1-a-cc9ddc1e95" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-06T23:24:35Z","lastTransitionTime":"2025-07-06T23:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 6 23:24:37.870817 systemd-networkd[1564]: lxc_health: Link UP
Jul 6 23:24:37.874589 systemd-networkd[1564]: lxc_health: Gained carrier
Jul 6 23:24:38.431947 kubelet[3278]: I0706 23:24:38.431835 3278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-b8g59" podStartSLOduration=8.431815547 podStartE2EDuration="8.431815547s" podCreationTimestamp="2025-07-06 23:24:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:24:35.647402448 +0000 UTC m=+153.910604376" watchObservedRunningTime="2025-07-06 23:24:38.431815547 +0000 UTC m=+156.695017475"
Jul 6 23:24:39.213904 systemd-networkd[1564]: lxc_health: Gained IPv6LL
Jul 6 23:24:44.856776 sshd[5159]: Connection closed by 10.200.16.10 port 36456
Jul 6 23:24:44.857519 sshd-session[5157]: pam_unix(sshd:session): session closed for user core
Jul 6 23:24:44.861544 systemd[1]: sshd@23-10.200.20.19:22-10.200.16.10:36456.service: Deactivated successfully.
Jul 6 23:24:44.864951 systemd[1]: session-26.scope: Deactivated successfully.
Jul 6 23:24:44.866459 systemd-logind[1707]: Session 26 logged out. Waiting for processes to exit.
Jul 6 23:24:44.867597 systemd-logind[1707]: Removed session 26.