Jul 10 23:34:20.304125 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 10 23:34:20.304148 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Jul 10 22:12:17 -00 2025
Jul 10 23:34:20.304156 kernel: KASLR enabled
Jul 10 23:34:20.304162 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jul 10 23:34:20.304169 kernel: printk: bootconsole [pl11] enabled
Jul 10 23:34:20.304174 kernel: efi: EFI v2.7 by EDK II
Jul 10 23:34:20.304181 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Jul 10 23:34:20.304187 kernel: random: crng init done
Jul 10 23:34:20.304193 kernel: secureboot: Secure boot disabled
Jul 10 23:34:20.304199 kernel: ACPI: Early table checksum verification disabled
Jul 10 23:34:20.304205 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jul 10 23:34:20.304210 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 23:34:20.304216 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 23:34:20.304224 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jul 10 23:34:20.304231 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 23:34:20.304237 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 23:34:20.304243 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 23:34:20.304251 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 23:34:20.304257 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 23:34:20.304263 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 23:34:20.304270 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jul 10 23:34:20.304276 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 23:34:20.304282 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jul 10 23:34:20.304288 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jul 10 23:34:20.304294 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jul 10 23:34:20.304300 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jul 10 23:34:20.304306 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jul 10 23:34:20.304313 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jul 10 23:34:20.304320 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jul 10 23:34:20.304326 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jul 10 23:34:20.304333 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jul 10 23:34:20.304339 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jul 10 23:34:20.304345 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jul 10 23:34:20.304351 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jul 10 23:34:20.304357 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jul 10 23:34:20.304363 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff]
Jul 10 23:34:20.304369 kernel: Zone ranges:
Jul 10 23:34:20.304376 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jul 10 23:34:20.304382 kernel: DMA32 empty
Jul 10 23:34:20.304388 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jul 10 23:34:20.304398 kernel: Movable zone start for each node
Jul 10 23:34:20.304405 kernel: Early memory node ranges
Jul 10 23:34:20.304411 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jul 10 23:34:20.304418 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Jul 10 23:34:20.304424 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Jul 10 23:34:20.304432 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Jul 10 23:34:20.304438 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jul 10 23:34:20.304445 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jul 10 23:34:20.304451 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jul 10 23:34:20.304458 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jul 10 23:34:20.304464 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jul 10 23:34:20.304471 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jul 10 23:34:20.304477 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jul 10 23:34:20.304484 kernel: psci: probing for conduit method from ACPI.
Jul 10 23:34:20.304490 kernel: psci: PSCIv1.1 detected in firmware.
Jul 10 23:34:20.304497 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 10 23:34:20.304503 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jul 10 23:34:20.304511 kernel: psci: SMC Calling Convention v1.4
Jul 10 23:34:20.304517 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jul 10 23:34:20.304524 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jul 10 23:34:20.304530 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 10 23:34:20.304537 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 10 23:34:20.304543 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 10 23:34:20.304550 kernel: Detected PIPT I-cache on CPU0
Jul 10 23:34:20.304577 kernel: CPU features: detected: GIC system register CPU interface
Jul 10 23:34:20.304585 kernel: CPU features: detected: Hardware dirty bit management
Jul 10 23:34:20.304591 kernel: CPU features: detected: Spectre-BHB
Jul 10 23:34:20.304598 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 10 23:34:20.304607 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 10 23:34:20.304613 kernel: CPU features: detected: ARM erratum 1418040
Jul 10 23:34:20.304620 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jul 10 23:34:20.304627 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 10 23:34:20.304633 kernel: alternatives: applying boot alternatives
Jul 10 23:34:20.304642 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=7d7ae41c578f00376368863b7a3cf53d899e76a854273f3187550259460980dc
Jul 10 23:34:20.304648 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 10 23:34:20.304655 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 10 23:34:20.304662 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 10 23:34:20.304668 kernel: Fallback order for Node 0: 0
Jul 10 23:34:20.304675 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jul 10 23:34:20.304683 kernel: Policy zone: Normal
Jul 10 23:34:20.304689 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 10 23:34:20.304695 kernel: software IO TLB: area num 2.
Jul 10 23:34:20.304702 kernel: software IO TLB: mapped [mem 0x0000000036540000-0x000000003a540000] (64MB)
Jul 10 23:34:20.304709 kernel: Memory: 3983588K/4194160K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38336K init, 897K bss, 210572K reserved, 0K cma-reserved)
Jul 10 23:34:20.304715 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 10 23:34:20.304722 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 10 23:34:20.304729 kernel: rcu: RCU event tracing is enabled.
Jul 10 23:34:20.304736 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 10 23:34:20.304742 kernel: Trampoline variant of Tasks RCU enabled.
Jul 10 23:34:20.304749 kernel: Tracing variant of Tasks RCU enabled.
Jul 10 23:34:20.304757 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 10 23:34:20.304763 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 10 23:34:20.304770 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 10 23:34:20.304776 kernel: GICv3: 960 SPIs implemented
Jul 10 23:34:20.304783 kernel: GICv3: 0 Extended SPIs implemented
Jul 10 23:34:20.304789 kernel: Root IRQ handler: gic_handle_irq
Jul 10 23:34:20.304795 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 10 23:34:20.304802 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jul 10 23:34:20.304808 kernel: ITS: No ITS available, not enabling LPIs
Jul 10 23:34:20.304815 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 10 23:34:20.304822 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 23:34:20.304828 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 10 23:34:20.304837 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 10 23:34:20.304843 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 10 23:34:20.304850 kernel: Console: colour dummy device 80x25
Jul 10 23:34:20.304857 kernel: printk: console [tty1] enabled
Jul 10 23:34:20.304863 kernel: ACPI: Core revision 20230628
Jul 10 23:34:20.304870 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 10 23:34:20.304877 kernel: pid_max: default: 32768 minimum: 301
Jul 10 23:34:20.304884 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 10 23:34:20.304890 kernel: landlock: Up and running.
Jul 10 23:34:20.304898 kernel: SELinux: Initializing.
Jul 10 23:34:20.304905 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 23:34:20.304912 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 23:34:20.304919 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 10 23:34:20.304925 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 10 23:34:20.304932 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Jul 10 23:34:20.304939 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Jul 10 23:34:20.304952 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 10 23:34:20.304959 kernel: rcu: Hierarchical SRCU implementation.
Jul 10 23:34:20.304975 kernel: rcu: Max phase no-delay instances is 400.
Jul 10 23:34:20.304984 kernel: Remapping and enabling EFI services.
Jul 10 23:34:20.304991 kernel: smp: Bringing up secondary CPUs ...
Jul 10 23:34:20.305000 kernel: Detected PIPT I-cache on CPU1
Jul 10 23:34:20.305007 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jul 10 23:34:20.305014 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 23:34:20.305021 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 10 23:34:20.305028 kernel: smp: Brought up 1 node, 2 CPUs
Jul 10 23:34:20.305036 kernel: SMP: Total of 2 processors activated.
Jul 10 23:34:20.305044 kernel: CPU features: detected: 32-bit EL0 Support
Jul 10 23:34:20.305051 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jul 10 23:34:20.305058 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 10 23:34:20.305065 kernel: CPU features: detected: CRC32 instructions
Jul 10 23:34:20.305072 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 10 23:34:20.305079 kernel: CPU features: detected: LSE atomic instructions
Jul 10 23:34:20.305086 kernel: CPU features: detected: Privileged Access Never
Jul 10 23:34:20.305093 kernel: CPU: All CPU(s) started at EL1
Jul 10 23:34:20.305102 kernel: alternatives: applying system-wide alternatives
Jul 10 23:34:20.305109 kernel: devtmpfs: initialized
Jul 10 23:34:20.305116 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 10 23:34:20.305124 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 10 23:34:20.305131 kernel: pinctrl core: initialized pinctrl subsystem
Jul 10 23:34:20.305138 kernel: SMBIOS 3.1.0 present.
Jul 10 23:34:20.305145 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jul 10 23:34:20.305152 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 10 23:34:20.305159 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 10 23:34:20.305168 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 10 23:34:20.305175 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 10 23:34:20.305182 kernel: audit: initializing netlink subsys (disabled)
Jul 10 23:34:20.305189 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jul 10 23:34:20.305196 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 10 23:34:20.305203 kernel: cpuidle: using governor menu
Jul 10 23:34:20.305211 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 10 23:34:20.305218 kernel: ASID allocator initialised with 32768 entries
Jul 10 23:34:20.305225 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 10 23:34:20.305233 kernel: Serial: AMBA PL011 UART driver
Jul 10 23:34:20.305240 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 10 23:34:20.305247 kernel: Modules: 0 pages in range for non-PLT usage
Jul 10 23:34:20.305254 kernel: Modules: 509264 pages in range for PLT usage
Jul 10 23:34:20.305261 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 10 23:34:20.305272 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 10 23:34:20.305280 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 10 23:34:20.305287 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 10 23:34:20.305294 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 10 23:34:20.305303 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 10 23:34:20.305310 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 10 23:34:20.305317 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 10 23:34:20.305324 kernel: ACPI: Added _OSI(Module Device)
Jul 10 23:34:20.305331 kernel: ACPI: Added _OSI(Processor Device)
Jul 10 23:34:20.305338 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 10 23:34:20.305345 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 10 23:34:20.305352 kernel: ACPI: Interpreter enabled
Jul 10 23:34:20.305359 kernel: ACPI: Using GIC for interrupt routing
Jul 10 23:34:20.305367 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jul 10 23:34:20.305392 kernel: printk: console [ttyAMA0] enabled
Jul 10 23:34:20.305400 kernel: printk: bootconsole [pl11] disabled
Jul 10 23:34:20.305407 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jul 10 23:34:20.305414 kernel: iommu: Default domain type: Translated
Jul 10 23:34:20.305421 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 10 23:34:20.305429 kernel: efivars: Registered efivars operations
Jul 10 23:34:20.305436 kernel: vgaarb: loaded
Jul 10 23:34:20.305443 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 10 23:34:20.305452 kernel: VFS: Disk quotas dquot_6.6.0
Jul 10 23:34:20.305459 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 10 23:34:20.305466 kernel: pnp: PnP ACPI init
Jul 10 23:34:20.305473 kernel: pnp: PnP ACPI: found 0 devices
Jul 10 23:34:20.305480 kernel: NET: Registered PF_INET protocol family
Jul 10 23:34:20.305487 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 10 23:34:20.305495 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 10 23:34:20.305502 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 10 23:34:20.305509 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 10 23:34:20.305518 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 10 23:34:20.305525 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 10 23:34:20.305532 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 23:34:20.305539 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 23:34:20.305546 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 10 23:34:20.305553 kernel: PCI: CLS 0 bytes, default 64
Jul 10 23:34:20.305560 kernel: kvm [1]: HYP mode not available
Jul 10 23:34:20.305566 kernel: Initialise system trusted keyrings
Jul 10 23:34:20.305573 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 10 23:34:20.305582 kernel: Key type asymmetric registered
Jul 10 23:34:20.305589 kernel: Asymmetric key parser 'x509' registered
Jul 10 23:34:20.305596 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 10 23:34:20.305603 kernel: io scheduler mq-deadline registered
Jul 10 23:34:20.305610 kernel: io scheduler kyber registered
Jul 10 23:34:20.305617 kernel: io scheduler bfq registered
Jul 10 23:34:20.305624 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 10 23:34:20.305631 kernel: thunder_xcv, ver 1.0
Jul 10 23:34:20.305638 kernel: thunder_bgx, ver 1.0
Jul 10 23:34:20.305646 kernel: nicpf, ver 1.0
Jul 10 23:34:20.305653 kernel: nicvf, ver 1.0
Jul 10 23:34:20.305800 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 10 23:34:20.305869 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-10T23:34:19 UTC (1752190459)
Jul 10 23:34:20.305879 kernel: efifb: probing for efifb
Jul 10 23:34:20.305886 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 10 23:34:20.305893 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 10 23:34:20.305900 kernel: efifb: scrolling: redraw
Jul 10 23:34:20.305910 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 10 23:34:20.305917 kernel: Console: switching to colour frame buffer device 128x48
Jul 10 23:34:20.305924 kernel: fb0: EFI VGA frame buffer device
Jul 10 23:34:20.305931 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jul 10 23:34:20.305939 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 10 23:34:20.305946 kernel: No ACPI PMU IRQ for CPU0
Jul 10 23:34:20.305953 kernel: No ACPI PMU IRQ for CPU1
Jul 10 23:34:20.305960 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Jul 10 23:34:20.305980 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 10 23:34:20.305991 kernel: watchdog: Hard watchdog permanently disabled
Jul 10 23:34:20.305998 kernel: NET: Registered PF_INET6 protocol family
Jul 10 23:34:20.306005 kernel: Segment Routing with IPv6
Jul 10 23:34:20.306012 kernel: In-situ OAM (IOAM) with IPv6
Jul 10 23:34:20.306019 kernel: NET: Registered PF_PACKET protocol family
Jul 10 23:34:20.306026 kernel: Key type dns_resolver registered
Jul 10 23:34:20.306033 kernel: registered taskstats version 1
Jul 10 23:34:20.306040 kernel: Loading compiled-in X.509 certificates
Jul 10 23:34:20.306047 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: 31389229b1c1b066a3aecee2ec344e038e2f2cc0'
Jul 10 23:34:20.306056 kernel: Key type .fscrypt registered
Jul 10 23:34:20.306063 kernel: Key type fscrypt-provisioning registered
Jul 10 23:34:20.306070 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 10 23:34:20.306077 kernel: ima: Allocated hash algorithm: sha1
Jul 10 23:34:20.306084 kernel: ima: No architecture policies found
Jul 10 23:34:20.306091 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 10 23:34:20.306098 kernel: clk: Disabling unused clocks
Jul 10 23:34:20.306105 kernel: Freeing unused kernel memory: 38336K
Jul 10 23:34:20.306112 kernel: Run /init as init process
Jul 10 23:34:20.306121 kernel: with arguments:
Jul 10 23:34:20.306128 kernel: /init
Jul 10 23:34:20.306135 kernel: with environment:
Jul 10 23:34:20.306142 kernel: HOME=/
Jul 10 23:34:20.306148 kernel: TERM=linux
Jul 10 23:34:20.306155 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 10 23:34:20.306163 systemd[1]: Successfully made /usr/ read-only.
Jul 10 23:34:20.306173 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 10 23:34:20.306183 systemd[1]: Detected virtualization microsoft.
Jul 10 23:34:20.306191 systemd[1]: Detected architecture arm64.
Jul 10 23:34:20.306198 systemd[1]: Running in initrd.
Jul 10 23:34:20.306205 systemd[1]: No hostname configured, using default hostname.
Jul 10 23:34:20.306213 systemd[1]: Hostname set to .
Jul 10 23:34:20.306221 systemd[1]: Initializing machine ID from random generator.
Jul 10 23:34:20.306228 systemd[1]: Queued start job for default target initrd.target.
Jul 10 23:34:20.306236 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 23:34:20.306245 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 23:34:20.306253 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 10 23:34:20.306261 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 23:34:20.306269 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 10 23:34:20.306277 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 10 23:34:20.306286 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 10 23:34:20.306295 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 10 23:34:20.306303 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 23:34:20.306311 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 10 23:34:20.306319 systemd[1]: Reached target paths.target - Path Units.
Jul 10 23:34:20.306326 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 23:34:20.306334 systemd[1]: Reached target swap.target - Swaps.
Jul 10 23:34:20.306342 systemd[1]: Reached target timers.target - Timer Units.
Jul 10 23:34:20.306349 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 10 23:34:20.306357 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 10 23:34:20.306366 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 10 23:34:20.306374 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 10 23:34:20.306381 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 23:34:20.306389 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 23:34:20.306397 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 23:34:20.306405 systemd[1]: Reached target sockets.target - Socket Units.
Jul 10 23:34:20.306412 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 10 23:34:20.306420 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 10 23:34:20.306428 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 10 23:34:20.306437 systemd[1]: Starting systemd-fsck-usr.service...
Jul 10 23:34:20.306445 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 10 23:34:20.306472 systemd-journald[218]: Collecting audit messages is disabled.
Jul 10 23:34:20.306492 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 10 23:34:20.306502 systemd-journald[218]: Journal started
Jul 10 23:34:20.306520 systemd-journald[218]: Runtime Journal (/run/log/journal/b49ac59a3a494264bd9528ae254e3729) is 8M, max 78.5M, 70.5M free.
Jul 10 23:34:20.303570 systemd-modules-load[220]: Inserted module 'overlay'
Jul 10 23:34:20.324983 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 10 23:34:20.328347 systemd-modules-load[220]: Inserted module 'br_netfilter'
Jul 10 23:34:20.341260 kernel: Bridge firewalling registered
Jul 10 23:34:20.341283 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 23:34:20.359225 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 10 23:34:20.359953 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 10 23:34:20.372125 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 23:34:20.379585 systemd[1]: Finished systemd-fsck-usr.service.
Jul 10 23:34:20.391993 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 10 23:34:20.400544 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 23:34:20.424089 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 10 23:34:20.432126 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 10 23:34:20.455149 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 10 23:34:20.474506 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 10 23:34:20.486613 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 23:34:20.499720 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 10 23:34:20.513006 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 10 23:34:20.525789 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 23:34:20.550281 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 10 23:34:20.562222 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 10 23:34:20.575171 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 10 23:34:20.589003 dracut-cmdline[252]: dracut-dracut-053
Jul 10 23:34:20.589003 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=7d7ae41c578f00376368863b7a3cf53d899e76a854273f3187550259460980dc
Jul 10 23:34:20.638408 systemd-resolved[256]: Positive Trust Anchors:
Jul 10 23:34:20.638427 systemd-resolved[256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 10 23:34:20.638457 systemd-resolved[256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 10 23:34:20.641765 systemd-resolved[256]: Defaulting to hostname 'linux'.
Jul 10 23:34:20.643311 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 10 23:34:20.671030 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 23:34:20.704254 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 10 23:34:20.772997 kernel: SCSI subsystem initialized
Jul 10 23:34:20.779985 kernel: Loading iSCSI transport class v2.0-870.
Jul 10 23:34:20.790994 kernel: iscsi: registered transport (tcp)
Jul 10 23:34:20.808131 kernel: iscsi: registered transport (qla4xxx)
Jul 10 23:34:20.808161 kernel: QLogic iSCSI HBA Driver
Jul 10 23:34:20.841560 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 10 23:34:20.859115 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 10 23:34:20.890145 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 10 23:34:20.890200 kernel: device-mapper: uevent: version 1.0.3
Jul 10 23:34:20.897091 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 10 23:34:20.946000 kernel: raid6: neonx8 gen() 15789 MB/s
Jul 10 23:34:20.965978 kernel: raid6: neonx4 gen() 15827 MB/s
Jul 10 23:34:20.985980 kernel: raid6: neonx2 gen() 13208 MB/s
Jul 10 23:34:21.006979 kernel: raid6: neonx1 gen() 10548 MB/s
Jul 10 23:34:21.026977 kernel: raid6: int64x8 gen() 6786 MB/s
Jul 10 23:34:21.046979 kernel: raid6: int64x4 gen() 7360 MB/s
Jul 10 23:34:21.067985 kernel: raid6: int64x2 gen() 6115 MB/s
Jul 10 23:34:21.091012 kernel: raid6: int64x1 gen() 5058 MB/s
Jul 10 23:34:21.091024 kernel: raid6: using algorithm neonx4 gen() 15827 MB/s
Jul 10 23:34:21.116114 kernel: raid6: .... xor() 12363 MB/s, rmw enabled
Jul 10 23:34:21.116131 kernel: raid6: using neon recovery algorithm
Jul 10 23:34:21.123981 kernel: xor: measuring software checksum speed
Jul 10 23:34:21.130611 kernel: 8regs : 20182 MB/sec
Jul 10 23:34:21.130624 kernel: 32regs : 21636 MB/sec
Jul 10 23:34:21.133940 kernel: arm64_neon : 27927 MB/sec
Jul 10 23:34:21.137914 kernel: xor: using function: arm64_neon (27927 MB/sec)
Jul 10 23:34:21.188037 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 10 23:34:21.199014 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 10 23:34:21.215134 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 23:34:21.240652 systemd-udevd[439]: Using default interface naming scheme 'v255'.
Jul 10 23:34:21.246122 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 23:34:21.266119 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 10 23:34:21.290633 dracut-pre-trigger[453]: rd.md=0: removing MD RAID activation
Jul 10 23:34:21.324446 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 10 23:34:21.341120 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 10 23:34:21.392788 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 23:34:21.413226 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 10 23:34:21.438999 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 10 23:34:21.451440 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 10 23:34:21.467679 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 23:34:21.482240 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 10 23:34:21.508292 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 10 23:34:21.523619 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 10 23:34:21.555627 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 10 23:34:21.568046 kernel: hv_vmbus: Vmbus version:5.3
Jul 10 23:34:21.568072 kernel: hv_vmbus: registering driver hyperv_keyboard
Jul 10 23:34:21.555745 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 23:34:21.593182 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 10 23:34:21.618665 kernel: hv_vmbus: registering driver hid_hyperv
Jul 10 23:34:21.618687 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 10 23:34:21.618697 kernel: hv_vmbus: registering driver hv_netvsc
Jul 10 23:34:21.618706 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 10 23:34:21.606060 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 10 23:34:21.653127 kernel: PTP clock support registered
Jul 10 23:34:21.653158 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jul 10 23:34:21.653169 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jul 10 23:34:21.606231 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 23:34:21.701230 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jul 10 23:34:21.705617 kernel: hv_vmbus: registering driver hv_storvsc
Jul 10 23:34:21.705632 kernel: hv_utils: Registering HyperV Utility Driver
Jul 10 23:34:21.705642 kernel: scsi host0: storvsc_host_t
Jul 10 23:34:21.715588 kernel: scsi host1: storvsc_host_t
Jul 10 23:34:21.715629 kernel: hv_vmbus: registering driver hv_utils
Jul 10 23:34:21.634142 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 23:34:21.735088 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jul 10 23:34:21.736093 kernel: hv_utils: Shutdown IC version 3.2
Jul 10 23:34:21.736109 kernel: hv_utils: Heartbeat IC version 3.0
Jul 10 23:34:21.736127 kernel: hv_utils: TimeSync IC version 4.0
Jul 10 23:34:21.691444 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 23:34:21.924850 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jul 10 23:34:21.910911 systemd-resolved[256]: Clock change detected. Flushing caches.
Jul 10 23:34:21.918246 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 10 23:34:21.918370 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 23:34:21.958342 kernel: hv_netvsc 0022487a-872c-0022-487a-872c0022487a eth0: VF slot 1 added
Jul 10 23:34:21.930951 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 10 23:34:21.988530 kernel: hv_vmbus: registering driver hv_pci
Jul 10 23:34:21.988554 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jul 10 23:34:21.988812 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 10 23:34:21.954849 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 23:34:22.002127 kernel: hv_pci fb91b842-fc09-4ecd-a93a-cee8c4a64a22: PCI VMBus probing: Using version 0x10004
Jul 10 23:34:21.990473 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 23:34:22.198407 kernel: hv_pci fb91b842-fc09-4ecd-a93a-cee8c4a64a22: PCI host bridge to bus fc09:00
Jul 10 23:34:22.200651 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jul 10 23:34:22.201481 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jul 10 23:34:22.212567 kernel: pci_bus fc09:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jul 10 23:34:22.212756 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jul 10 23:34:22.222147 kernel: pci_bus fc09:00: No busn resource found for root bus, will use [bus 00-ff]
Jul 10 23:34:22.222289 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 10 23:34:22.222594 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 10 23:34:22.256476 kernel: pci fc09:00:02.0: [15b3:1018] type 00 class 0x020000
Jul 10 23:34:22.256521 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jul 10 23:34:22.256708 kernel: pci fc09:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 10 23:34:22.256730 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jul 10 23:34:22.256821 kernel: pci fc09:00:02.0: enabling Extended Tags
Jul 10 23:34:22.267677 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 10 23:34:22.284836 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 10 23:34:22.285049 kernel: pci fc09:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at fc09:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jul 10 23:34:22.295648 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 23:34:22.315485 kernel: pci_bus fc09:00: busn_res: [bus 00-ff] end is updated to 00
Jul 10 23:34:22.315868 kernel: pci fc09:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 10 23:34:22.347326 kernel: mlx5_core fc09:00:02.0: enabling device (0000 -> 0002)
Jul 10 23:34:22.353693 kernel: mlx5_core fc09:00:02.0: firmware version: 16.31.2424
Jul 10 23:34:22.627511 kernel: hv_netvsc 0022487a-872c-0022-487a-872c0022487a eth0: VF registering: eth1
Jul 10 23:34:22.627729 kernel: mlx5_core fc09:00:02.0 eth1: joined to eth0
Jul 10 23:34:22.636765 kernel: mlx5_core fc09:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jul 10 23:34:22.646644 kernel: mlx5_core fc09:00:02.0 enP64521s1: renamed from eth1
Jul 10 23:34:22.908770 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (500)
Jul 10 23:34:22.920096 kernel: BTRFS: device fsid 28ea517e-145c-4223-93e8-6347aefbc032 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (491)
Jul 10 23:34:22.931252 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 10 23:34:22.954408 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jul 10 23:34:22.968450 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jul 10 23:34:22.975597 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jul 10 23:34:22.999557 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jul 10 23:34:23.021770 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 10 23:34:23.046640 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 10 23:34:23.054643 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 10 23:34:24.064682 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 10 23:34:24.065139 disk-uuid[610]: The operation has completed successfully.
Jul 10 23:34:24.122415 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 10 23:34:24.122530 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 10 23:34:24.178814 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 10 23:34:24.191761 sh[696]: Success
Jul 10 23:34:24.223675 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 10 23:34:24.499635 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 10 23:34:24.512924 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 10 23:34:24.518783 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 10 23:34:24.554452 kernel: BTRFS info (device dm-0): first mount of filesystem 28ea517e-145c-4223-93e8-6347aefbc032
Jul 10 23:34:24.554498 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 10 23:34:24.561302 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 10 23:34:24.566253 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 10 23:34:24.570427 kernel: BTRFS info (device dm-0): using free space tree
Jul 10 23:34:24.906362 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 10 23:34:24.911366 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 10 23:34:24.930886 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 10 23:34:24.943315 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 10 23:34:24.976899 kernel: BTRFS info (device sda6): first mount of filesystem e248a549-ad9c-46e4-9226-90e819becc10
Jul 10 23:34:24.976959 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 10 23:34:24.981320 kernel: BTRFS info (device sda6): using free space tree
Jul 10 23:34:25.019650 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 10 23:34:25.077697 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 10 23:34:25.094471 kernel: BTRFS info (device sda6): last unmount of filesystem e248a549-ad9c-46e4-9226-90e819becc10
Jul 10 23:34:25.099762 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 10 23:34:25.114514 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 10 23:34:25.123812 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 10 23:34:25.150508 systemd-networkd[875]: lo: Link UP
Jul 10 23:34:25.150516 systemd-networkd[875]: lo: Gained carrier
Jul 10 23:34:25.155714 systemd-networkd[875]: Enumeration completed
Jul 10 23:34:25.155963 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 10 23:34:25.156431 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 23:34:25.156434 systemd-networkd[875]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 10 23:34:25.162680 systemd[1]: Reached target network.target - Network.
Jul 10 23:34:25.250639 kernel: mlx5_core fc09:00:02.0 enP64521s1: Link up
Jul 10 23:34:25.329683 kernel: hv_netvsc 0022487a-872c-0022-487a-872c0022487a eth0: Data path switched to VF: enP64521s1
Jul 10 23:34:25.330549 systemd-networkd[875]: enP64521s1: Link UP
Jul 10 23:34:25.332771 systemd-networkd[875]: eth0: Link UP
Jul 10 23:34:25.332876 systemd-networkd[875]: eth0: Gained carrier
Jul 10 23:34:25.332885 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 23:34:25.354920 systemd-networkd[875]: enP64521s1: Gained carrier
Jul 10 23:34:25.368705 systemd-networkd[875]: eth0: DHCPv4 address 10.200.20.36/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 10 23:34:25.960726 ignition[878]: Ignition 2.20.0
Jul 10 23:34:25.960739 ignition[878]: Stage: fetch-offline
Jul 10 23:34:25.964465 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 10 23:34:25.960772 ignition[878]: no configs at "/usr/lib/ignition/base.d"
Jul 10 23:34:25.960780 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 10 23:34:25.960865 ignition[878]: parsed url from cmdline: ""
Jul 10 23:34:25.960868 ignition[878]: no config URL provided
Jul 10 23:34:25.960872 ignition[878]: reading system config file "/usr/lib/ignition/user.ign"
Jul 10 23:34:25.991876 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 10 23:34:25.960879 ignition[878]: no config at "/usr/lib/ignition/user.ign"
Jul 10 23:34:25.960884 ignition[878]: failed to fetch config: resource requires networking
Jul 10 23:34:25.961057 ignition[878]: Ignition finished successfully
Jul 10 23:34:26.013378 ignition[888]: Ignition 2.20.0
Jul 10 23:34:26.013387 ignition[888]: Stage: fetch
Jul 10 23:34:26.013554 ignition[888]: no configs at "/usr/lib/ignition/base.d"
Jul 10 23:34:26.013564 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 10 23:34:26.013725 ignition[888]: parsed url from cmdline: ""
Jul 10 23:34:26.013728 ignition[888]: no config URL provided
Jul 10 23:34:26.013733 ignition[888]: reading system config file "/usr/lib/ignition/user.ign"
Jul 10 23:34:26.013740 ignition[888]: no config at "/usr/lib/ignition/user.ign"
Jul 10 23:34:26.013765 ignition[888]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jul 10 23:34:26.107380 ignition[888]: GET result: OK
Jul 10 23:34:26.107477 ignition[888]: config has been read from IMDS userdata
Jul 10 23:34:26.107529 ignition[888]: parsing config with SHA512: efdc42b2d62d6547ebbc2115625a1307eda47b582b8ca5e6a139bb39ce60ec1f6d66146059e2ac814f9debd6edc45ee818f64628d1554baff6fc12cc1e08bd1f
Jul 10 23:34:26.111840 unknown[888]: fetched base config from "system"
Jul 10 23:34:26.112223 ignition[888]: fetch: fetch complete
Jul 10 23:34:26.111853 unknown[888]: fetched base config from "system"
Jul 10 23:34:26.112227 ignition[888]: fetch: fetch passed
Jul 10 23:34:26.111858 unknown[888]: fetched user config from "azure"
Jul 10 23:34:26.112271 ignition[888]: Ignition finished successfully
Jul 10 23:34:26.117577 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 10 23:34:26.136295 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 10 23:34:26.166557 ignition[895]: Ignition 2.20.0
Jul 10 23:34:26.166568 ignition[895]: Stage: kargs
Jul 10 23:34:26.166755 ignition[895]: no configs at "/usr/lib/ignition/base.d"
Jul 10 23:34:26.173504 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 10 23:34:26.166765 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 10 23:34:26.167772 ignition[895]: kargs: kargs passed
Jul 10 23:34:26.167823 ignition[895]: Ignition finished successfully
Jul 10 23:34:26.197776 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 10 23:34:26.219861 ignition[901]: Ignition 2.20.0
Jul 10 23:34:26.219876 ignition[901]: Stage: disks
Jul 10 23:34:26.224917 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 10 23:34:26.220039 ignition[901]: no configs at "/usr/lib/ignition/base.d"
Jul 10 23:34:26.230995 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 10 23:34:26.220048 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 10 23:34:26.241478 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 10 23:34:26.221033 ignition[901]: disks: disks passed
Jul 10 23:34:26.253695 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 10 23:34:26.221077 ignition[901]: Ignition finished successfully
Jul 10 23:34:26.264427 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 10 23:34:26.276634 systemd[1]: Reached target basic.target - Basic System.
Jul 10 23:34:26.302915 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 10 23:34:26.372855 systemd-fsck[910]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jul 10 23:34:26.379892 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 10 23:34:26.399827 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 10 23:34:26.459638 kernel: EXT4-fs (sda9): mounted filesystem ef1c88fa-d23e-4a16-bbbf-07c92f8585ec r/w with ordered data mode. Quota mode: none.
Jul 10 23:34:26.460852 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 10 23:34:26.465682 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 10 23:34:26.517692 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 10 23:34:26.524745 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 10 23:34:26.537805 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 10 23:34:26.570579 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (921)
Jul 10 23:34:26.544842 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 10 23:34:26.595584 kernel: BTRFS info (device sda6): first mount of filesystem e248a549-ad9c-46e4-9226-90e819becc10
Jul 10 23:34:26.595608 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 10 23:34:26.544879 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 10 23:34:26.613215 kernel: BTRFS info (device sda6): using free space tree
Jul 10 23:34:26.565246 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 10 23:34:26.591815 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 10 23:34:26.629289 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 10 23:34:26.630515 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 10 23:34:26.809769 systemd-networkd[875]: eth0: Gained IPv6LL
Jul 10 23:34:26.810042 systemd-networkd[875]: enP64521s1: Gained IPv6LL
Jul 10 23:34:27.111383 coreos-metadata[923]: Jul 10 23:34:27.111 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 10 23:34:27.119863 coreos-metadata[923]: Jul 10 23:34:27.119 INFO Fetch successful
Jul 10 23:34:27.119863 coreos-metadata[923]: Jul 10 23:34:27.119 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jul 10 23:34:27.136832 coreos-metadata[923]: Jul 10 23:34:27.136 INFO Fetch successful
Jul 10 23:34:27.150927 coreos-metadata[923]: Jul 10 23:34:27.150 INFO wrote hostname ci-4230.2.1-n-2b36f27b4a to /sysroot/etc/hostname
Jul 10 23:34:27.160324 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 10 23:34:27.379393 initrd-setup-root[951]: cut: /sysroot/etc/passwd: No such file or directory
Jul 10 23:34:27.453576 initrd-setup-root[958]: cut: /sysroot/etc/group: No such file or directory
Jul 10 23:34:27.463182 initrd-setup-root[965]: cut: /sysroot/etc/shadow: No such file or directory
Jul 10 23:34:27.469487 initrd-setup-root[972]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 10 23:34:28.526969 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 10 23:34:28.541797 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 10 23:34:28.560647 kernel: BTRFS info (device sda6): last unmount of filesystem e248a549-ad9c-46e4-9226-90e819becc10
Jul 10 23:34:28.560877 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 10 23:34:28.567985 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 10 23:34:28.591938 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 10 23:34:28.617676 ignition[1040]: INFO : Ignition 2.20.0
Jul 10 23:34:28.617676 ignition[1040]: INFO : Stage: mount
Jul 10 23:34:28.617676 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 23:34:28.617676 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 10 23:34:28.617676 ignition[1040]: INFO : mount: mount passed
Jul 10 23:34:28.617676 ignition[1040]: INFO : Ignition finished successfully
Jul 10 23:34:28.618866 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 10 23:34:28.643776 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 10 23:34:28.664812 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 10 23:34:28.686639 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (1051)
Jul 10 23:34:28.699227 kernel: BTRFS info (device sda6): first mount of filesystem e248a549-ad9c-46e4-9226-90e819becc10
Jul 10 23:34:28.699275 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 10 23:34:28.703711 kernel: BTRFS info (device sda6): using free space tree
Jul 10 23:34:28.717635 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 10 23:34:28.719296 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 10 23:34:28.742645 ignition[1068]: INFO : Ignition 2.20.0
Jul 10 23:34:28.742645 ignition[1068]: INFO : Stage: files
Jul 10 23:34:28.752246 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 23:34:28.752246 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 10 23:34:28.752246 ignition[1068]: DEBUG : files: compiled without relabeling support, skipping
Jul 10 23:34:28.779595 ignition[1068]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 10 23:34:28.779595 ignition[1068]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 10 23:34:28.863211 ignition[1068]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 10 23:34:28.871085 ignition[1068]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 10 23:34:28.871085 ignition[1068]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 10 23:34:28.863597 unknown[1068]: wrote ssh authorized keys file for user: core
Jul 10 23:34:28.918402 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 10 23:34:28.929137 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jul 10 23:34:28.958131 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 10 23:34:29.061603 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 10 23:34:29.061603 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 10 23:34:29.083612 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 10 23:34:29.121642 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 10 23:34:29.200252 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 10 23:34:29.210921 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 10 23:34:29.210921 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 10 23:34:29.210921 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 23:34:29.210921 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 23:34:29.210921 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 23:34:29.210921 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 23:34:29.210921 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 23:34:29.210921 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 23:34:29.210921 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 23:34:29.210921 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 23:34:29.210921 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 10 23:34:29.210921 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 10 23:34:29.210921 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 10 23:34:29.210921 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jul 10 23:34:29.881007 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 10 23:34:30.102985 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 10 23:34:30.102985 ignition[1068]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 10 23:34:30.137678 ignition[1068]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 23:34:30.152712 ignition[1068]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 23:34:30.152712 ignition[1068]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 10 23:34:30.152712 ignition[1068]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jul 10 23:34:30.152712 ignition[1068]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jul 10 23:34:30.152712 ignition[1068]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 23:34:30.152712 ignition[1068]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 23:34:30.152712 ignition[1068]: INFO : files: files passed
Jul 10 23:34:30.152712 ignition[1068]: INFO : Ignition finished successfully
Jul 10 23:34:30.150671 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 10 23:34:30.174888 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 10 23:34:30.190804 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 10 23:34:30.213839 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 10 23:34:30.285956 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 23:34:30.285956 initrd-setup-root-after-ignition[1095]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 23:34:30.213946 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 10 23:34:30.315709 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 23:34:30.221003 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 23:34:30.237521 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 10 23:34:30.254862 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 10 23:34:30.296189 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 10 23:34:30.296306 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 10 23:34:30.310526 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 10 23:34:30.321317 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 10 23:34:30.335030 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 10 23:34:30.359927 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 10 23:34:30.372066 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 23:34:30.381665 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 10 23:34:30.404400 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 10 23:34:30.412611 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 23:34:30.424951 systemd[1]: Stopped target timers.target - Timer Units.
Jul 10 23:34:30.435799 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 10 23:34:30.436009 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 23:34:30.454687 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 10 23:34:30.466349 systemd[1]: Stopped target basic.target - Basic System.
Jul 10 23:34:30.476176 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 10 23:34:30.486923 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 10 23:34:30.500056 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 10 23:34:30.511897 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 10 23:34:30.522021 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 10 23:34:30.535047 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 10 23:34:30.546494 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 10 23:34:30.557947 systemd[1]: Stopped target swap.target - Swaps.
Jul 10 23:34:30.567716 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 10 23:34:30.567908 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 10 23:34:30.585988 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 10 23:34:30.597253 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 23:34:30.608857 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 10 23:34:30.608973 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 23:34:30.621266 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 10 23:34:30.621440 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 10 23:34:30.638712 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 10 23:34:30.638901 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 23:34:30.653983 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 10 23:34:30.654162 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 10 23:34:30.727300 ignition[1120]: INFO : Ignition 2.20.0
Jul 10 23:34:30.727300 ignition[1120]: INFO : Stage: umount
Jul 10 23:34:30.727300 ignition[1120]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 23:34:30.727300 ignition[1120]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 10 23:34:30.727300 ignition[1120]: INFO : umount: umount passed
Jul 10 23:34:30.727300 ignition[1120]: INFO : Ignition finished successfully
Jul 10 23:34:30.664044 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 10 23:34:30.664239 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 10 23:34:30.698201 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 10 23:34:30.714608 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 10 23:34:30.714790 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 23:34:30.726805 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 10 23:34:30.733152 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 10 23:34:30.733286 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 23:34:30.743422 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 10 23:34:30.743546 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 10 23:34:30.763671 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 10 23:34:30.764567 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 10 23:34:30.764703 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 10 23:34:30.778868 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 10 23:34:30.778989 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 10 23:34:30.788612 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 10 23:34:30.788799 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 10 23:34:30.800603 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 10 23:34:30.800665 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 10 23:34:30.813304 systemd[1]: Stopped target network.target - Network.
Jul 10 23:34:30.818156 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 10 23:34:30.818219 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 10 23:34:30.829524 systemd[1]: Stopped target paths.target - Path Units.
Jul 10 23:34:30.839641 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 10 23:34:30.845597 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 23:34:30.852563 systemd[1]: Stopped target slices.target - Slice Units.
Jul 10 23:34:30.862758 systemd[1]: Stopped target sockets.target - Socket Units. Jul 10 23:34:30.873140 systemd[1]: iscsid.socket: Deactivated successfully. Jul 10 23:34:30.873199 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 10 23:34:30.883998 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 10 23:34:30.884036 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 10 23:34:30.895104 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 10 23:34:30.895162 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 10 23:34:30.906205 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 10 23:34:30.906247 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 10 23:34:30.917423 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 10 23:34:30.927358 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 10 23:34:30.948832 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 10 23:34:30.948931 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 10 23:34:30.967246 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 10 23:34:30.967566 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 10 23:34:30.967677 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 10 23:34:30.977897 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 10 23:34:30.978143 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 10 23:34:30.978222 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 10 23:34:31.231769 kernel: hv_netvsc 0022487a-872c-0022-487a-872c0022487a eth0: Data path switched from VF: enP64521s1 Jul 10 23:34:30.996729 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Jul 10 23:34:30.996801 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 10 23:34:31.022794 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 10 23:34:31.041695 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 10 23:34:31.041792 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 23:34:31.053714 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 23:34:31.053770 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 10 23:34:31.069805 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 10 23:34:31.069877 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 10 23:34:31.076544 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 10 23:34:31.076590 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 23:34:31.092979 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 23:34:31.103984 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 10 23:34:31.104058 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 10 23:34:31.135556 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 10 23:34:31.135742 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 23:34:31.148214 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 10 23:34:31.148254 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 10 23:34:31.158830 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 10 23:34:31.158867 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 23:34:31.170405 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Jul 10 23:34:31.170469 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 10 23:34:31.188033 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 10 23:34:31.188092 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 10 23:34:31.205194 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 10 23:34:31.205255 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 23:34:31.256874 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 10 23:34:31.263641 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 10 23:34:31.263708 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 23:34:31.465981 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). Jul 10 23:34:31.271071 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 23:34:31.271116 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 23:34:31.292513 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 10 23:34:31.292579 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 10 23:34:31.292912 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 10 23:34:31.293026 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 10 23:34:31.303004 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 10 23:34:31.303103 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 10 23:34:31.315715 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 10 23:34:31.315837 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 10 23:34:31.327926 systemd[1]: network-cleanup.service: Deactivated successfully. 
Jul 10 23:34:31.328112 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 10 23:34:31.337717 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 10 23:34:31.369856 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 10 23:34:31.389083 systemd[1]: Switching root. Jul 10 23:34:31.547744 systemd-journald[218]: Journal stopped Jul 10 23:34:36.884705 kernel: SELinux: policy capability network_peer_controls=1 Jul 10 23:34:36.884727 kernel: SELinux: policy capability open_perms=1 Jul 10 23:34:36.889667 kernel: SELinux: policy capability extended_socket_class=1 Jul 10 23:34:36.889684 kernel: SELinux: policy capability always_check_network=0 Jul 10 23:34:36.889697 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 10 23:34:36.889705 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 10 23:34:36.889714 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 10 23:34:36.889722 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 10 23:34:36.889730 kernel: audit: type=1403 audit(1752190472.733:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 10 23:34:36.889740 systemd[1]: Successfully loaded SELinux policy in 173.310ms. Jul 10 23:34:36.889905 kernel: mlx5_core fc09:00:02.0: poll_health:835:(pid 0): device's health compromised - reached miss count Jul 10 23:34:36.889920 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.318ms. Jul 10 23:34:36.889930 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 10 23:34:36.889939 systemd[1]: Detected virtualization microsoft. Jul 10 23:34:36.889951 systemd[1]: Detected architecture arm64. 
Jul 10 23:34:36.889959 systemd[1]: Detected first boot. Jul 10 23:34:36.889968 systemd[1]: Hostname set to . Jul 10 23:34:36.889977 systemd[1]: Initializing machine ID from random generator. Jul 10 23:34:36.889985 zram_generator::config[1165]: No configuration found. Jul 10 23:34:36.889995 kernel: NET: Registered PF_VSOCK protocol family Jul 10 23:34:36.890004 systemd[1]: Populated /etc with preset unit settings. Jul 10 23:34:36.890015 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 10 23:34:36.890024 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 10 23:34:36.890033 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 10 23:34:36.890041 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 10 23:34:36.890050 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 10 23:34:36.890059 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 10 23:34:36.890068 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 10 23:34:36.890079 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 10 23:34:36.890088 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 10 23:34:36.890096 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 10 23:34:36.890105 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 10 23:34:36.890114 systemd[1]: Created slice user.slice - User and Session Slice. Jul 10 23:34:36.890123 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 23:34:36.890132 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jul 10 23:34:36.890140 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 10 23:34:36.890151 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 10 23:34:36.890160 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 10 23:34:36.890169 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 10 23:34:36.890178 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 10 23:34:36.890189 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 23:34:36.890199 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 10 23:34:36.890208 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 10 23:34:36.890217 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 10 23:34:36.890227 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 10 23:34:36.890236 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 23:34:36.890245 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 23:34:36.890254 systemd[1]: Reached target slices.target - Slice Units. Jul 10 23:34:36.890263 systemd[1]: Reached target swap.target - Swaps. Jul 10 23:34:36.890272 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 10 23:34:36.890281 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 10 23:34:36.890290 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 10 23:34:36.890301 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 10 23:34:36.890310 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Jul 10 23:34:36.890319 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 23:34:36.890328 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 10 23:34:36.890337 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 10 23:34:36.890348 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 10 23:34:36.890357 systemd[1]: Mounting media.mount - External Media Directory... Jul 10 23:34:36.890366 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 10 23:34:36.890375 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 10 23:34:36.890385 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 10 23:34:36.890395 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 10 23:34:36.890404 systemd[1]: Reached target machines.target - Containers. Jul 10 23:34:36.890413 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 10 23:34:36.890424 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 23:34:36.890433 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 10 23:34:36.890442 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 10 23:34:36.890451 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 23:34:36.890460 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 23:34:36.890469 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 23:34:36.890478 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 10 23:34:36.890487 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jul 10 23:34:36.890498 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 10 23:34:36.890508 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 10 23:34:36.890517 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 10 23:34:36.890525 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 10 23:34:36.890534 systemd[1]: Stopped systemd-fsck-usr.service. Jul 10 23:34:36.890544 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 23:34:36.890553 kernel: loop: module loaded Jul 10 23:34:36.890561 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 10 23:34:36.890572 kernel: fuse: init (API version 7.39) Jul 10 23:34:36.890580 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 10 23:34:36.890590 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 10 23:34:36.890599 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 10 23:34:36.890608 kernel: ACPI: bus type drm_connector registered Jul 10 23:34:36.890656 systemd-journald[1269]: Collecting audit messages is disabled. Jul 10 23:34:36.890681 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 10 23:34:36.890692 systemd-journald[1269]: Journal started Jul 10 23:34:36.890712 systemd-journald[1269]: Runtime Journal (/run/log/journal/7131685ab5334d5f8e787f5527a68387) is 8M, max 78.5M, 70.5M free. Jul 10 23:34:35.927661 systemd[1]: Queued start job for default target multi-user.target. Jul 10 23:34:35.935362 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. 
Jul 10 23:34:35.935734 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 10 23:34:35.936030 systemd[1]: systemd-journald.service: Consumed 3.201s CPU time. Jul 10 23:34:36.924169 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 23:34:36.933366 systemd[1]: verity-setup.service: Deactivated successfully. Jul 10 23:34:36.933441 systemd[1]: Stopped verity-setup.service. Jul 10 23:34:36.952597 systemd[1]: Started systemd-journald.service - Journal Service. Jul 10 23:34:36.953402 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 10 23:34:36.959253 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 10 23:34:36.965422 systemd[1]: Mounted media.mount - External Media Directory. Jul 10 23:34:36.971093 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 10 23:34:36.977544 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 10 23:34:36.983791 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 10 23:34:36.990646 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 10 23:34:36.997506 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 23:34:37.004709 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 10 23:34:37.006651 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 10 23:34:37.013562 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 23:34:37.013739 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 23:34:37.019986 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 23:34:37.020133 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 23:34:37.026218 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 23:34:37.026367 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jul 10 23:34:37.033345 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 10 23:34:37.033490 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 10 23:34:37.039815 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 23:34:37.039965 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 23:34:37.047590 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 10 23:34:37.054131 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 23:34:37.061890 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 10 23:34:37.069329 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 10 23:34:37.076789 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 23:34:37.091790 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 10 23:34:37.108763 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 10 23:34:37.115853 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 10 23:34:37.121982 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 10 23:34:37.122021 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 23:34:37.128769 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 10 23:34:37.136592 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 10 23:34:37.143989 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 10 23:34:37.149504 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 10 23:34:37.150600 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 10 23:34:37.157942 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 10 23:34:37.164314 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 23:34:37.166423 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 10 23:34:37.172899 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 23:34:37.175784 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 23:34:37.188809 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 10 23:34:37.199564 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 10 23:34:37.207877 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 10 23:34:37.222284 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 10 23:34:37.229321 systemd-journald[1269]: Time spent on flushing to /var/log/journal/7131685ab5334d5f8e787f5527a68387 is 18.171ms for 916 entries. Jul 10 23:34:37.229321 systemd-journald[1269]: System Journal (/var/log/journal/7131685ab5334d5f8e787f5527a68387) is 8M, max 2.6G, 2.6G free. Jul 10 23:34:37.285328 systemd-journald[1269]: Received client request to flush runtime journal. Jul 10 23:34:37.285365 kernel: loop0: detected capacity change from 0 to 28720 Jul 10 23:34:37.236455 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 10 23:34:37.248328 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 10 23:34:37.256033 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Jul 10 23:34:37.267155 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 23:34:37.277045 udevadm[1308]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 10 23:34:37.278095 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 10 23:34:37.296317 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 10 23:34:37.303650 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 10 23:34:37.379918 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 10 23:34:37.380598 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 10 23:34:37.440786 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 10 23:34:37.452526 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 23:34:37.505606 systemd-tmpfiles[1321]: ACLs are not supported, ignoring. Jul 10 23:34:37.505635 systemd-tmpfiles[1321]: ACLs are not supported, ignoring. Jul 10 23:34:37.510190 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 23:34:37.766650 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 10 23:34:37.906722 kernel: loop1: detected capacity change from 0 to 123192 Jul 10 23:34:38.284645 kernel: loop2: detected capacity change from 0 to 113512 Jul 10 23:34:38.544388 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 10 23:34:38.565858 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 23:34:38.588271 systemd-udevd[1329]: Using default interface naming scheme 'v255'. 
Jul 10 23:34:38.666645 kernel: loop3: detected capacity change from 0 to 211168 Jul 10 23:34:38.701650 kernel: loop4: detected capacity change from 0 to 28720 Jul 10 23:34:38.710633 kernel: loop5: detected capacity change from 0 to 123192 Jul 10 23:34:38.721632 kernel: loop6: detected capacity change from 0 to 113512 Jul 10 23:34:38.731631 kernel: loop7: detected capacity change from 0 to 211168 Jul 10 23:34:38.740556 (sd-merge)[1332]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jul 10 23:34:38.740999 (sd-merge)[1332]: Merged extensions into '/usr'. Jul 10 23:34:38.744464 systemd[1]: Reload requested from client PID 1306 ('systemd-sysext') (unit systemd-sysext.service)... Jul 10 23:34:38.744582 systemd[1]: Reloading... Jul 10 23:34:38.805658 zram_generator::config[1359]: No configuration found. Jul 10 23:34:38.952377 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 23:34:39.083815 systemd[1]: Reloading finished in 338 ms. Jul 10 23:34:39.096739 kernel: mousedev: PS/2 mouse device common for all mice Jul 10 23:34:39.099128 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 23:34:39.108423 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 10 23:34:39.124850 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 10 23:34:39.130954 systemd[1]: Starting ensure-sysext.service... Jul 10 23:34:39.138916 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 23:34:39.148008 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jul 10 23:34:39.174748 kernel: hv_vmbus: registering driver hv_balloon Jul 10 23:34:39.174841 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jul 10 23:34:39.193102 kernel: hv_balloon: Memory hot add disabled on ARM64 Jul 10 23:34:39.193185 kernel: hv_vmbus: registering driver hyperv_fb Jul 10 23:34:39.194870 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 23:34:39.204839 systemd[1]: Reload requested from client PID 1456 ('systemctl') (unit ensure-sysext.service)... Jul 10 23:34:39.204853 systemd[1]: Reloading... Jul 10 23:34:39.216378 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jul 10 23:34:39.216452 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jul 10 23:34:39.222229 kernel: Console: switching to colour dummy device 80x25 Jul 10 23:34:39.230885 kernel: Console: switching to colour frame buffer device 128x48 Jul 10 23:34:39.274916 systemd-tmpfiles[1458]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 10 23:34:39.275129 systemd-tmpfiles[1458]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 10 23:34:39.277289 systemd-tmpfiles[1458]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 10 23:34:39.277572 systemd-tmpfiles[1458]: ACLs are not supported, ignoring. Jul 10 23:34:39.278708 systemd-tmpfiles[1458]: ACLs are not supported, ignoring. Jul 10 23:34:39.298228 systemd-tmpfiles[1458]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 23:34:39.298242 systemd-tmpfiles[1458]: Skipping /boot Jul 10 23:34:39.310681 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1416) Jul 10 23:34:39.323566 systemd-tmpfiles[1458]: Detected autofs mount point /boot during canonicalization of boot. 
Jul 10 23:34:39.323581 systemd-tmpfiles[1458]: Skipping /boot Jul 10 23:34:39.328631 zram_generator::config[1517]: No configuration found. Jul 10 23:34:39.475820 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 23:34:39.584281 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 10 23:34:39.591222 systemd[1]: Reloading finished in 386 ms. Jul 10 23:34:39.613824 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 23:34:39.660170 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 23:34:39.718260 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 10 23:34:39.725374 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 23:34:39.726919 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 23:34:39.734903 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 23:34:39.742858 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 23:34:39.750908 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 23:34:39.757865 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 23:34:39.759380 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 10 23:34:39.766059 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jul 10 23:34:39.767655 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 10 23:34:39.776929 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 23:34:39.783119 systemd[1]: Reached target time-set.target - System Time Set. Jul 10 23:34:39.793029 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 10 23:34:39.810078 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 10 23:34:39.825082 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 23:34:39.826648 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 23:34:39.835098 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 10 23:34:39.835801 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 23:34:39.835957 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 23:34:39.844330 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 23:34:39.844480 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 23:34:39.852208 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 23:34:39.852359 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 23:34:39.859593 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 23:34:39.859801 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 23:34:39.866230 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 10 23:34:39.880859 systemd[1]: Finished ensure-sysext.service. Jul 10 23:34:39.891544 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 10 23:34:39.898846 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jul 10 23:34:39.914912 augenrules[1648]: No rules Jul 10 23:34:39.918041 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 23:34:39.918352 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 23:34:39.929517 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 10 23:34:39.946716 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 10 23:34:39.953303 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 23:34:39.953379 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 23:34:39.957114 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 23:34:39.968645 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 10 23:34:40.024841 lvm[1659]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 23:34:40.057720 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 10 23:34:40.066202 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 10 23:34:40.081860 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 10 23:34:40.085489 lvm[1667]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 23:34:40.089503 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 23:34:40.108533 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 10 23:34:40.111113 systemd-resolved[1621]: Positive Trust Anchors: Jul 10 23:34:40.111383 systemd-resolved[1621]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 23:34:40.111473 systemd-resolved[1621]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 23:34:40.128080 systemd-resolved[1621]: Using system hostname 'ci-4230.2.1-n-2b36f27b4a'. Jul 10 23:34:40.129533 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 23:34:40.136399 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 23:34:40.161134 systemd-networkd[1457]: lo: Link UP Jul 10 23:34:40.161142 systemd-networkd[1457]: lo: Gained carrier Jul 10 23:34:40.163456 systemd-networkd[1457]: Enumeration completed Jul 10 23:34:40.163539 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 23:34:40.163758 systemd-networkd[1457]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 23:34:40.163761 systemd-networkd[1457]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 23:34:40.170570 systemd[1]: Reached target network.target - Network. Jul 10 23:34:40.182793 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 10 23:34:40.191146 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jul 10 23:34:40.239652 kernel: mlx5_core fc09:00:02.0 enP64521s1: Link up Jul 10 23:34:40.283790 kernel: hv_netvsc 0022487a-872c-0022-487a-872c0022487a eth0: Data path switched to VF: enP64521s1 Jul 10 23:34:40.286148 systemd-networkd[1457]: enP64521s1: Link UP Jul 10 23:34:40.286878 systemd-networkd[1457]: eth0: Link UP Jul 10 23:34:40.286887 systemd-networkd[1457]: eth0: Gained carrier Jul 10 23:34:40.286905 systemd-networkd[1457]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 23:34:40.288673 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 10 23:34:40.297298 systemd-networkd[1457]: enP64521s1: Gained carrier Jul 10 23:34:40.304752 systemd-networkd[1457]: eth0: DHCPv4 address 10.200.20.36/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 10 23:34:40.785595 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 10 23:34:40.792243 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 23:34:41.529732 systemd-networkd[1457]: enP64521s1: Gained IPv6LL Jul 10 23:34:41.913739 systemd-networkd[1457]: eth0: Gained IPv6LL Jul 10 23:34:41.915840 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 10 23:34:41.924394 systemd[1]: Reached target network-online.target - Network is Online. Jul 10 23:34:43.241614 ldconfig[1300]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 10 23:34:43.290120 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 10 23:34:43.300846 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jul 10 23:34:43.314669 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 10 23:34:43.320870 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 23:34:43.326439 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 10 23:34:43.332896 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 10 23:34:43.339520 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 10 23:34:43.345121 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 10 23:34:43.351944 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 10 23:34:43.358424 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 10 23:34:43.358460 systemd[1]: Reached target paths.target - Path Units. Jul 10 23:34:43.363273 systemd[1]: Reached target timers.target - Timer Units. Jul 10 23:34:43.385660 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 10 23:34:43.393104 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 10 23:34:43.400197 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 10 23:34:43.406886 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 10 23:34:43.414543 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 10 23:34:43.421982 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 10 23:34:43.427950 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 10 23:34:43.435168 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 10 23:34:43.441243 systemd[1]: Reached target sockets.target - Socket Units. 
Jul 10 23:34:43.446504 systemd[1]: Reached target basic.target - Basic System. Jul 10 23:34:43.451431 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 10 23:34:43.451457 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 10 23:34:43.462733 systemd[1]: Starting chronyd.service - NTP client/server... Jul 10 23:34:43.472600 systemd[1]: Starting containerd.service - containerd container runtime... Jul 10 23:34:43.489059 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 10 23:34:43.502067 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 10 23:34:43.509806 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 10 23:34:43.517339 (chronyd)[1681]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jul 10 23:34:43.523073 jq[1688]: false Jul 10 23:34:43.523447 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 10 23:34:43.531693 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 10 23:34:43.531828 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jul 10 23:34:43.532941 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jul 10 23:34:43.539099 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jul 10 23:34:43.540343 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:34:43.548044 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Jul 10 23:34:43.553786 KVP[1690]: KVP starting; pid is:1690 Jul 10 23:34:43.556887 chronyd[1694]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jul 10 23:34:43.560961 KVP[1690]: KVP LIC Version: 3.1 Jul 10 23:34:43.561656 kernel: hv_utils: KVP IC version 4.0 Jul 10 23:34:43.578831 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 10 23:34:43.586702 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 10 23:34:43.596681 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 10 23:34:43.608757 chronyd[1694]: Timezone right/UTC failed leap second check, ignoring Jul 10 23:34:43.608913 chronyd[1694]: Loaded seccomp filter (level 2) Jul 10 23:34:43.609357 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 10 23:34:43.623815 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 10 23:34:43.632346 extend-filesystems[1689]: Found loop4 Jul 10 23:34:43.632346 extend-filesystems[1689]: Found loop5 Jul 10 23:34:43.632346 extend-filesystems[1689]: Found loop6 Jul 10 23:34:43.632346 extend-filesystems[1689]: Found loop7 Jul 10 23:34:43.632346 extend-filesystems[1689]: Found sda Jul 10 23:34:43.632346 extend-filesystems[1689]: Found sda1 Jul 10 23:34:43.632346 extend-filesystems[1689]: Found sda2 Jul 10 23:34:43.632346 extend-filesystems[1689]: Found sda3 Jul 10 23:34:43.632346 extend-filesystems[1689]: Found usr Jul 10 23:34:43.632346 extend-filesystems[1689]: Found sda4 Jul 10 23:34:43.632346 extend-filesystems[1689]: Found sda6 Jul 10 23:34:43.632346 extend-filesystems[1689]: Found sda7 Jul 10 23:34:43.632346 extend-filesystems[1689]: Found sda9 Jul 10 23:34:43.632346 extend-filesystems[1689]: Checking size of /dev/sda9 Jul 10 23:34:43.631889 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jul 10 23:34:43.872486 coreos-metadata[1683]: Jul 10 23:34:43.778 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 10 23:34:43.872486 coreos-metadata[1683]: Jul 10 23:34:43.786 INFO Fetch successful Jul 10 23:34:43.872486 coreos-metadata[1683]: Jul 10 23:34:43.786 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jul 10 23:34:43.872486 coreos-metadata[1683]: Jul 10 23:34:43.791 INFO Fetch successful Jul 10 23:34:43.872486 coreos-metadata[1683]: Jul 10 23:34:43.791 INFO Fetching http://168.63.129.16/machine/995377b9-375a-44f0-8d4d-31db9d215165/42b78caf%2D3bad%2D4da1%2D894f%2Da69e79721a3e.%5Fci%2D4230.2.1%2Dn%2D2b36f27b4a?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jul 10 23:34:43.872486 coreos-metadata[1683]: Jul 10 23:34:43.795 INFO Fetch successful Jul 10 23:34:43.872486 coreos-metadata[1683]: Jul 10 23:34:43.795 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jul 10 23:34:43.872486 coreos-metadata[1683]: Jul 10 23:34:43.816 INFO Fetch successful Jul 10 23:34:43.682568 dbus-daemon[1687]: [system] SELinux support is enabled Jul 10 23:34:43.873034 extend-filesystems[1689]: Old size kept for /dev/sda9 Jul 10 23:34:43.873034 extend-filesystems[1689]: Found sr0 Jul 10 23:34:43.636048 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 10 23:34:43.637818 systemd[1]: Starting update-engine.service - Update Engine... 
Jul 10 23:34:43.923805 update_engine[1711]: I20250710 23:34:43.710129 1711 main.cc:92] Flatcar Update Engine starting Jul 10 23:34:43.923805 update_engine[1711]: I20250710 23:34:43.712217 1711 update_check_scheduler.cc:74] Next update check in 3m2s Jul 10 23:34:43.981861 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1749) Jul 10 23:34:43.654314 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 10 23:34:43.981978 jq[1713]: true Jul 10 23:34:43.665818 systemd[1]: Started chronyd.service - NTP client/server. Jul 10 23:34:43.685876 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 10 23:34:43.709059 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 10 23:34:43.709276 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 10 23:34:43.982446 jq[1730]: true Jul 10 23:34:43.712353 systemd[1]: motdgen.service: Deactivated successfully. Jul 10 23:34:43.712534 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 10 23:34:43.730375 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 10 23:34:43.730559 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 10 23:34:43.754024 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 10 23:34:43.754518 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 10 23:34:43.775193 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jul 10 23:34:43.804063 (ntainerd)[1734]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 10 23:34:43.818768 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 10 23:34:43.818811 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 10 23:34:43.844020 systemd-logind[1707]: New seat seat0. Jul 10 23:34:43.846940 systemd-logind[1707]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jul 10 23:34:43.857406 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 10 23:34:43.857434 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 10 23:34:43.889118 systemd[1]: Started systemd-logind.service - User Login Management. Jul 10 23:34:44.010657 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 10 23:34:44.025316 bash[1770]: Updated "/home/core/.ssh/authorized_keys" Jul 10 23:34:44.025221 systemd[1]: Started update-engine.service - Update Engine. Jul 10 23:34:44.030819 tar[1729]: linux-arm64/LICENSE Jul 10 23:34:44.031052 tar[1729]: linux-arm64/helm Jul 10 23:34:44.035692 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 10 23:34:44.081995 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 10 23:34:44.082341 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 10 23:34:44.090863 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jul 10 23:34:44.409338 containerd[1734]: time="2025-07-10T23:34:44.409246220Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jul 10 23:34:44.463940 containerd[1734]: time="2025-07-10T23:34:44.463714180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 10 23:34:44.469583 containerd[1734]: time="2025-07-10T23:34:44.467912140Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 10 23:34:44.469583 containerd[1734]: time="2025-07-10T23:34:44.467944700Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 10 23:34:44.469583 containerd[1734]: time="2025-07-10T23:34:44.467960580Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 10 23:34:44.469583 containerd[1734]: time="2025-07-10T23:34:44.468099540Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 10 23:34:44.469583 containerd[1734]: time="2025-07-10T23:34:44.468114660Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 10 23:34:44.469583 containerd[1734]: time="2025-07-10T23:34:44.468173780Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 23:34:44.469583 containerd[1734]: time="2025-07-10T23:34:44.468185700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jul 10 23:34:44.469583 containerd[1734]: time="2025-07-10T23:34:44.468364900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 23:34:44.469583 containerd[1734]: time="2025-07-10T23:34:44.468378420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 10 23:34:44.469583 containerd[1734]: time="2025-07-10T23:34:44.468390980Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 23:34:44.469583 containerd[1734]: time="2025-07-10T23:34:44.468399620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 10 23:34:44.469869 containerd[1734]: time="2025-07-10T23:34:44.468464660Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 10 23:34:44.469869 containerd[1734]: time="2025-07-10T23:34:44.468682140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 10 23:34:44.469869 containerd[1734]: time="2025-07-10T23:34:44.468800820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 23:34:44.469869 containerd[1734]: time="2025-07-10T23:34:44.468825700Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jul 10 23:34:44.469869 containerd[1734]: time="2025-07-10T23:34:44.468898260Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 10 23:34:44.469869 containerd[1734]: time="2025-07-10T23:34:44.468939380Z" level=info msg="metadata content store policy set" policy=shared Jul 10 23:34:44.475715 locksmithd[1830]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 10 23:34:44.488343 containerd[1734]: time="2025-07-10T23:34:44.486666020Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 10 23:34:44.488343 containerd[1734]: time="2025-07-10T23:34:44.486721020Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 10 23:34:44.488343 containerd[1734]: time="2025-07-10T23:34:44.486736980Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 10 23:34:44.488343 containerd[1734]: time="2025-07-10T23:34:44.486754220Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 10 23:34:44.488343 containerd[1734]: time="2025-07-10T23:34:44.486770340Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 10 23:34:44.488343 containerd[1734]: time="2025-07-10T23:34:44.486907860Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 10 23:34:44.488343 containerd[1734]: time="2025-07-10T23:34:44.487146060Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 10 23:34:44.488343 containerd[1734]: time="2025-07-10T23:34:44.487228220Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jul 10 23:34:44.488343 containerd[1734]: time="2025-07-10T23:34:44.487245140Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 10 23:34:44.488343 containerd[1734]: time="2025-07-10T23:34:44.487259860Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 10 23:34:44.488343 containerd[1734]: time="2025-07-10T23:34:44.487273260Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 10 23:34:44.488343 containerd[1734]: time="2025-07-10T23:34:44.487285100Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 10 23:34:44.488343 containerd[1734]: time="2025-07-10T23:34:44.487298220Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 10 23:34:44.488343 containerd[1734]: time="2025-07-10T23:34:44.487311500Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 10 23:34:44.488635 containerd[1734]: time="2025-07-10T23:34:44.487326340Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 10 23:34:44.488635 containerd[1734]: time="2025-07-10T23:34:44.487341220Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 10 23:34:44.488635 containerd[1734]: time="2025-07-10T23:34:44.487353220Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 10 23:34:44.488635 containerd[1734]: time="2025-07-10T23:34:44.487365660Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jul 10 23:34:44.488635 containerd[1734]: time="2025-07-10T23:34:44.487385700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 10 23:34:44.488635 containerd[1734]: time="2025-07-10T23:34:44.487400180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 10 23:34:44.488635 containerd[1734]: time="2025-07-10T23:34:44.487415540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 10 23:34:44.488635 containerd[1734]: time="2025-07-10T23:34:44.487428740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 10 23:34:44.488635 containerd[1734]: time="2025-07-10T23:34:44.487440820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 10 23:34:44.488635 containerd[1734]: time="2025-07-10T23:34:44.487453740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 10 23:34:44.488635 containerd[1734]: time="2025-07-10T23:34:44.487464860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 10 23:34:44.488635 containerd[1734]: time="2025-07-10T23:34:44.487477620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 10 23:34:44.488635 containerd[1734]: time="2025-07-10T23:34:44.487490060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 10 23:34:44.488635 containerd[1734]: time="2025-07-10T23:34:44.487503260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 10 23:34:44.488912 containerd[1734]: time="2025-07-10T23:34:44.487514060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jul 10 23:34:44.488912 containerd[1734]: time="2025-07-10T23:34:44.487524740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 10 23:34:44.488912 containerd[1734]: time="2025-07-10T23:34:44.487536060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 10 23:34:44.488912 containerd[1734]: time="2025-07-10T23:34:44.487550980Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 10 23:34:44.488912 containerd[1734]: time="2025-07-10T23:34:44.487571940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 10 23:34:44.488912 containerd[1734]: time="2025-07-10T23:34:44.487586700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 10 23:34:44.488912 containerd[1734]: time="2025-07-10T23:34:44.487597820Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 10 23:34:44.490199 containerd[1734]: time="2025-07-10T23:34:44.489076700Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 10 23:34:44.490199 containerd[1734]: time="2025-07-10T23:34:44.489107980Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 10 23:34:44.490199 containerd[1734]: time="2025-07-10T23:34:44.489189620Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 10 23:34:44.490199 containerd[1734]: time="2025-07-10T23:34:44.489203740Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 10 23:34:44.490199 containerd[1734]: time="2025-07-10T23:34:44.489213420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 10 23:34:44.490199 containerd[1734]: time="2025-07-10T23:34:44.489225260Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 10 23:34:44.490199 containerd[1734]: time="2025-07-10T23:34:44.489234700Z" level=info msg="NRI interface is disabled by configuration." Jul 10 23:34:44.490199 containerd[1734]: time="2025-07-10T23:34:44.489244860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 10 23:34:44.490375 containerd[1734]: time="2025-07-10T23:34:44.489524780Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} 
CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 10 23:34:44.490375 containerd[1734]: time="2025-07-10T23:34:44.489569020Z" level=info msg="Connect containerd service" Jul 10 23:34:44.490375 containerd[1734]: time="2025-07-10T23:34:44.489597500Z" level=info msg="using legacy CRI server" Jul 10 23:34:44.490375 containerd[1734]: time="2025-07-10T23:34:44.489604740Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 10 23:34:44.496532 containerd[1734]: time="2025-07-10T23:34:44.493349060Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 10 23:34:44.496532 containerd[1734]: 
time="2025-07-10T23:34:44.494041340Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 23:34:44.496532 containerd[1734]: time="2025-07-10T23:34:44.494162660Z" level=info msg="Start subscribing containerd event" Jul 10 23:34:44.496532 containerd[1734]: time="2025-07-10T23:34:44.494198740Z" level=info msg="Start recovering state" Jul 10 23:34:44.496532 containerd[1734]: time="2025-07-10T23:34:44.494252060Z" level=info msg="Start event monitor" Jul 10 23:34:44.496532 containerd[1734]: time="2025-07-10T23:34:44.494263020Z" level=info msg="Start snapshots syncer" Jul 10 23:34:44.496532 containerd[1734]: time="2025-07-10T23:34:44.494270700Z" level=info msg="Start cni network conf syncer for default" Jul 10 23:34:44.496532 containerd[1734]: time="2025-07-10T23:34:44.494277420Z" level=info msg="Start streaming server" Jul 10 23:34:44.496532 containerd[1734]: time="2025-07-10T23:34:44.495294060Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 10 23:34:44.496532 containerd[1734]: time="2025-07-10T23:34:44.495337700Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 23:34:44.502650 containerd[1734]: time="2025-07-10T23:34:44.496841460Z" level=info msg="containerd successfully booted in 0.090640s" Jul 10 23:34:44.496933 systemd[1]: Started containerd.service - containerd container runtime. Jul 10 23:34:44.705746 tar[1729]: linux-arm64/README.md Jul 10 23:34:44.727269 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 10 23:34:44.833785 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 10 23:34:44.841786 (kubelet)[1850]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 23:34:45.232477 kubelet[1850]: E0710 23:34:45.232395 1850 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 23:34:45.234037 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 23:34:45.234157 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 23:34:45.234650 systemd[1]: kubelet.service: Consumed 714ms CPU time, 258.7M memory peak. Jul 10 23:34:45.497472 sshd_keygen[1712]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 10 23:34:45.515670 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 10 23:34:45.526923 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 10 23:34:45.533567 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jul 10 23:34:45.539656 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 23:34:45.539852 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 10 23:34:45.550890 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 10 23:34:45.557669 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jul 10 23:34:45.566645 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 10 23:34:45.581774 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 10 23:34:45.588896 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 10 23:34:45.596057 systemd[1]: Reached target getty.target - Login Prompts. 
Jul 10 23:34:45.603136 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 10 23:34:45.610640 systemd[1]: Startup finished in 680ms (kernel) + 12.654s (initrd) + 13.049s (userspace) = 26.384s. Jul 10 23:34:45.937700 login[1879]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:34:45.939678 login[1880]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:34:45.951527 systemd-logind[1707]: New session 2 of user core. Jul 10 23:34:45.951937 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 10 23:34:45.962957 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 10 23:34:45.966280 systemd-logind[1707]: New session 1 of user core. Jul 10 23:34:45.973781 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 10 23:34:45.981888 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 10 23:34:45.983996 (systemd)[1887]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 23:34:45.986089 systemd-logind[1707]: New session c1 of user core. Jul 10 23:34:46.247580 systemd[1887]: Queued start job for default target default.target. Jul 10 23:34:46.253488 systemd[1887]: Created slice app.slice - User Application Slice. Jul 10 23:34:46.253520 systemd[1887]: Reached target paths.target - Paths. Jul 10 23:34:46.253558 systemd[1887]: Reached target timers.target - Timers. Jul 10 23:34:46.254718 systemd[1887]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 10 23:34:46.264943 systemd[1887]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 10 23:34:46.264995 systemd[1887]: Reached target sockets.target - Sockets. Jul 10 23:34:46.265034 systemd[1887]: Reached target basic.target - Basic System. Jul 10 23:34:46.265063 systemd[1887]: Reached target default.target - Main User Target. 
Jul 10 23:34:46.265092 systemd[1887]: Startup finished in 273ms. Jul 10 23:34:46.265276 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 10 23:34:46.273801 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 10 23:34:46.274467 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 10 23:34:47.525653 waagent[1876]: 2025-07-10T23:34:47.523497Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jul 10 23:34:47.529498 waagent[1876]: 2025-07-10T23:34:47.529427Z INFO Daemon Daemon OS: flatcar 4230.2.1 Jul 10 23:34:47.533969 waagent[1876]: 2025-07-10T23:34:47.533912Z INFO Daemon Daemon Python: 3.11.11 Jul 10 23:34:47.538462 waagent[1876]: 2025-07-10T23:34:47.538404Z INFO Daemon Daemon Run daemon Jul 10 23:34:47.542434 waagent[1876]: 2025-07-10T23:34:47.542389Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.2.1' Jul 10 23:34:47.551117 waagent[1876]: 2025-07-10T23:34:47.551063Z INFO Daemon Daemon Using waagent for provisioning Jul 10 23:34:47.556486 waagent[1876]: 2025-07-10T23:34:47.556442Z INFO Daemon Daemon Activate resource disk Jul 10 23:34:47.561076 waagent[1876]: 2025-07-10T23:34:47.561024Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 10 23:34:47.573797 waagent[1876]: 2025-07-10T23:34:47.573742Z INFO Daemon Daemon Found device: None Jul 10 23:34:47.578334 waagent[1876]: 2025-07-10T23:34:47.578285Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 10 23:34:47.586786 waagent[1876]: 2025-07-10T23:34:47.586739Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 10 23:34:47.598599 waagent[1876]: 2025-07-10T23:34:47.598549Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 10 23:34:47.605378 waagent[1876]: 2025-07-10T23:34:47.605322Z INFO Daemon Daemon 
Running default provisioning handler Jul 10 23:34:47.617168 waagent[1876]: 2025-07-10T23:34:47.617107Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jul 10 23:34:47.631436 waagent[1876]: 2025-07-10T23:34:47.631364Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 10 23:34:47.641728 waagent[1876]: 2025-07-10T23:34:47.641667Z INFO Daemon Daemon cloud-init is enabled: False Jul 10 23:34:47.647662 waagent[1876]: 2025-07-10T23:34:47.647586Z INFO Daemon Daemon Copying ovf-env.xml Jul 10 23:34:47.758260 waagent[1876]: 2025-07-10T23:34:47.758161Z INFO Daemon Daemon Successfully mounted dvd Jul 10 23:34:47.788169 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 10 23:34:47.791641 waagent[1876]: 2025-07-10T23:34:47.790664Z INFO Daemon Daemon Detect protocol endpoint Jul 10 23:34:47.795849 waagent[1876]: 2025-07-10T23:34:47.795791Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 10 23:34:47.801848 waagent[1876]: 2025-07-10T23:34:47.801796Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jul 10 23:34:47.808816 waagent[1876]: 2025-07-10T23:34:47.808770Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 10 23:34:47.814508 waagent[1876]: 2025-07-10T23:34:47.814462Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 10 23:34:47.819884 waagent[1876]: 2025-07-10T23:34:47.819839Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 10 23:34:47.854209 waagent[1876]: 2025-07-10T23:34:47.854161Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 10 23:34:47.861164 waagent[1876]: 2025-07-10T23:34:47.861135Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 10 23:34:47.866802 waagent[1876]: 2025-07-10T23:34:47.866753Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 10 23:34:48.097711 waagent[1876]: 2025-07-10T23:34:48.097012Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 10 23:34:48.103425 waagent[1876]: 2025-07-10T23:34:48.103359Z INFO Daemon Daemon Forcing an update of the goal state. Jul 10 23:34:48.112447 waagent[1876]: 2025-07-10T23:34:48.112396Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 10 23:34:48.133827 waagent[1876]: 2025-07-10T23:34:48.133778Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jul 10 23:34:48.140267 waagent[1876]: 2025-07-10T23:34:48.140217Z INFO Daemon Jul 10 23:34:48.143444 waagent[1876]: 2025-07-10T23:34:48.143400Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 74308677-66c2-4aac-8bc1-fede637e9e03 eTag: 1295038094750384523 source: Fabric] Jul 10 23:34:48.155037 waagent[1876]: 2025-07-10T23:34:48.154983Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Jul 10 23:34:48.161648 waagent[1876]: 2025-07-10T23:34:48.161579Z INFO Daemon Jul 10 23:34:48.164394 waagent[1876]: 2025-07-10T23:34:48.164344Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 10 23:34:48.175486 waagent[1876]: 2025-07-10T23:34:48.175439Z INFO Daemon Daemon Downloading artifacts profile blob Jul 10 23:34:48.336851 waagent[1876]: 2025-07-10T23:34:48.336764Z INFO Daemon Downloaded certificate {'thumbprint': '89F7D233668D51ECFC3CA187FD7BC7F05164B399', 'hasPrivateKey': True} Jul 10 23:34:48.346676 waagent[1876]: 2025-07-10T23:34:48.346601Z INFO Daemon Downloaded certificate {'thumbprint': '4BC19CE2BF0F270AA16BDF68045FB084C0AAE8A3', 'hasPrivateKey': False} Jul 10 23:34:48.356238 waagent[1876]: 2025-07-10T23:34:48.356162Z INFO Daemon Fetch goal state completed Jul 10 23:34:48.406407 waagent[1876]: 2025-07-10T23:34:48.406361Z INFO Daemon Daemon Starting provisioning Jul 10 23:34:48.411322 waagent[1876]: 2025-07-10T23:34:48.411268Z INFO Daemon Daemon Handle ovf-env.xml. Jul 10 23:34:48.415989 waagent[1876]: 2025-07-10T23:34:48.415948Z INFO Daemon Daemon Set hostname [ci-4230.2.1-n-2b36f27b4a] Jul 10 23:34:48.437817 waagent[1876]: 2025-07-10T23:34:48.437743Z INFO Daemon Daemon Publish hostname [ci-4230.2.1-n-2b36f27b4a] Jul 10 23:34:48.444105 waagent[1876]: 2025-07-10T23:34:48.444049Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 10 23:34:48.450441 waagent[1876]: 2025-07-10T23:34:48.450396Z INFO Daemon Daemon Primary interface is [eth0] Jul 10 23:34:48.462094 systemd-networkd[1457]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 23:34:48.462108 systemd-networkd[1457]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 10 23:34:48.462134 systemd-networkd[1457]: eth0: DHCP lease lost Jul 10 23:34:48.463097 waagent[1876]: 2025-07-10T23:34:48.462943Z INFO Daemon Daemon Create user account if not exists Jul 10 23:34:48.468461 waagent[1876]: 2025-07-10T23:34:48.468415Z INFO Daemon Daemon User core already exists, skip useradd Jul 10 23:34:48.475071 waagent[1876]: 2025-07-10T23:34:48.475015Z INFO Daemon Daemon Configure sudoer Jul 10 23:34:48.480046 waagent[1876]: 2025-07-10T23:34:48.479980Z INFO Daemon Daemon Configure sshd Jul 10 23:34:48.484348 waagent[1876]: 2025-07-10T23:34:48.484295Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 10 23:34:48.497405 waagent[1876]: 2025-07-10T23:34:48.497133Z INFO Daemon Daemon Deploy ssh public key. Jul 10 23:34:48.506682 systemd-networkd[1457]: eth0: DHCPv4 address 10.200.20.36/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 10 23:34:49.814823 waagent[1876]: 2025-07-10T23:34:49.814757Z INFO Daemon Daemon Provisioning complete Jul 10 23:34:49.832524 waagent[1876]: 2025-07-10T23:34:49.832477Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 10 23:34:49.838595 waagent[1876]: 2025-07-10T23:34:49.838550Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jul 10 23:34:49.847750 waagent[1876]: 2025-07-10T23:34:49.847710Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jul 10 23:34:49.973583 waagent[1944]: 2025-07-10T23:34:49.973500Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jul 10 23:34:49.974527 waagent[1944]: 2025-07-10T23:34:49.974095Z INFO ExtHandler ExtHandler OS: flatcar 4230.2.1 Jul 10 23:34:49.974527 waagent[1944]: 2025-07-10T23:34:49.974174Z INFO ExtHandler ExtHandler Python: 3.11.11 Jul 10 23:34:50.013646 waagent[1944]: 2025-07-10T23:34:50.012269Z INFO ExtHandler ExtHandler Distro: flatcar-4230.2.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jul 10 23:34:50.013646 waagent[1944]: 2025-07-10T23:34:50.012508Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 10 23:34:50.013646 waagent[1944]: 2025-07-10T23:34:50.012567Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 10 23:34:50.020792 waagent[1944]: 2025-07-10T23:34:50.020738Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 10 23:34:50.026655 waagent[1944]: 2025-07-10T23:34:50.026605Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 10 23:34:50.027180 waagent[1944]: 2025-07-10T23:34:50.027142Z INFO ExtHandler Jul 10 23:34:50.027319 waagent[1944]: 2025-07-10T23:34:50.027287Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 6787e072-766f-4700-b5d5-07a5f6817f83 eTag: 1295038094750384523 source: Fabric] Jul 10 23:34:50.027699 waagent[1944]: 2025-07-10T23:34:50.027657Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jul 10 23:34:50.028335 waagent[1944]: 2025-07-10T23:34:50.028294Z INFO ExtHandler Jul 10 23:34:50.028465 waagent[1944]: 2025-07-10T23:34:50.028434Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 10 23:34:50.032489 waagent[1944]: 2025-07-10T23:34:50.032460Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 10 23:34:50.106394 waagent[1944]: 2025-07-10T23:34:50.106279Z INFO ExtHandler Downloaded certificate {'thumbprint': '89F7D233668D51ECFC3CA187FD7BC7F05164B399', 'hasPrivateKey': True} Jul 10 23:34:50.106934 waagent[1944]: 2025-07-10T23:34:50.106895Z INFO ExtHandler Downloaded certificate {'thumbprint': '4BC19CE2BF0F270AA16BDF68045FB084C0AAE8A3', 'hasPrivateKey': False} Jul 10 23:34:50.107413 waagent[1944]: 2025-07-10T23:34:50.107375Z INFO ExtHandler Fetch goal state completed Jul 10 23:34:50.123400 waagent[1944]: 2025-07-10T23:34:50.123344Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1944 Jul 10 23:34:50.123699 waagent[1944]: 2025-07-10T23:34:50.123662Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 10 23:34:50.125392 waagent[1944]: 2025-07-10T23:34:50.125352Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.2.1', '', 'Flatcar Container Linux by Kinvolk'] Jul 10 23:34:50.125883 waagent[1944]: 2025-07-10T23:34:50.125845Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 10 23:34:50.176952 waagent[1944]: 2025-07-10T23:34:50.176913Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 10 23:34:50.177255 waagent[1944]: 2025-07-10T23:34:50.177219Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 10 23:34:50.183274 waagent[1944]: 2025-07-10T23:34:50.183246Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not 
enabled. Adding it now Jul 10 23:34:50.189142 systemd[1]: Reload requested from client PID 1959 ('systemctl') (unit waagent.service)... Jul 10 23:34:50.189158 systemd[1]: Reloading... Jul 10 23:34:50.272093 zram_generator::config[2008]: No configuration found. Jul 10 23:34:50.373195 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 23:34:50.487032 systemd[1]: Reloading finished in 297 ms. Jul 10 23:34:50.502397 waagent[1944]: 2025-07-10T23:34:50.502036Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jul 10 23:34:50.508191 systemd[1]: Reload requested from client PID 2053 ('systemctl') (unit waagent.service)... Jul 10 23:34:50.508205 systemd[1]: Reloading... Jul 10 23:34:50.589653 zram_generator::config[2095]: No configuration found. Jul 10 23:34:50.689926 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 23:34:50.800166 systemd[1]: Reloading finished in 291 ms. Jul 10 23:34:50.820893 waagent[1944]: 2025-07-10T23:34:50.820724Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 10 23:34:50.820968 waagent[1944]: 2025-07-10T23:34:50.820898Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 10 23:34:51.006819 waagent[1944]: 2025-07-10T23:34:51.006732Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 10 23:34:51.007417 waagent[1944]: 2025-07-10T23:34:51.007343Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jul 10 23:34:51.008235 waagent[1944]: 2025-07-10T23:34:51.008177Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 10 23:34:51.008889 waagent[1944]: 2025-07-10T23:34:51.008596Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 10 23:34:51.008889 waagent[1944]: 2025-07-10T23:34:51.008815Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 10 23:34:51.009136 waagent[1944]: 2025-07-10T23:34:51.009098Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 10 23:34:51.009264 waagent[1944]: 2025-07-10T23:34:51.009234Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 10 23:34:51.009823 waagent[1944]: 2025-07-10T23:34:51.009346Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 10 23:34:51.009823 waagent[1944]: 2025-07-10T23:34:51.009481Z INFO EnvHandler ExtHandler Configure routes Jul 10 23:34:51.009823 waagent[1944]: 2025-07-10T23:34:51.009540Z INFO EnvHandler ExtHandler Gateway:None Jul 10 23:34:51.009823 waagent[1944]: 2025-07-10T23:34:51.009581Z INFO EnvHandler ExtHandler Routes:None Jul 10 23:34:51.009940 waagent[1944]: 2025-07-10T23:34:51.009857Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 10 23:34:51.010241 waagent[1944]: 2025-07-10T23:34:51.010199Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jul 10 23:34:51.010668 waagent[1944]: 2025-07-10T23:34:51.010612Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Jul 10 23:34:51.011082 waagent[1944]: 2025-07-10T23:34:51.011041Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 10 23:34:51.011082 waagent[1944]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 10 23:34:51.011082 waagent[1944]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jul 10 23:34:51.011082 waagent[1944]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 10 23:34:51.011082 waagent[1944]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 10 23:34:51.011082 waagent[1944]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 10 23:34:51.011082 waagent[1944]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 10 23:34:51.011347 waagent[1944]: 2025-07-10T23:34:51.011309Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 10 23:34:51.011428 waagent[1944]: 2025-07-10T23:34:51.011399Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jul 10 23:34:51.011970 waagent[1944]: 2025-07-10T23:34:51.011933Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 10 23:34:51.021266 waagent[1944]: 2025-07-10T23:34:51.020544Z INFO ExtHandler ExtHandler Jul 10 23:34:51.021266 waagent[1944]: 2025-07-10T23:34:51.020671Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 43fc1fce-0af2-4278-9769-02e177436879 correlation e1c3b703-0fb6-4118-9a92-1dfd607b4710 created: 2025-07-10T23:33:33.930267Z] Jul 10 23:34:51.021266 waagent[1944]: 2025-07-10T23:34:51.021060Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
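The /proc/net/route dump above encodes each address field as the hex of a native little-endian 32-bit integer, so `0114C80A` is the gateway 10.200.20.1 (matching the DHCP lease logged earlier) and `10813FA8` is 168.63.129.16, the Azure wireserver route the agent tested for. A small Python sketch of the decoding:

```python
import socket
import struct

def decode_route_addr(hex_field: str) -> str:
    """Decode a /proc/net/route address field (hex digits of a
    native little-endian u32) into dotted-quad notation."""
    return socket.inet_ntoa(struct.pack("<I", int(hex_field, 16)))

# Fields taken from the routing table logged above.
print(decode_route_addr("0114C80A"))  # gateway: 10.200.20.1
print(decode_route_addr("0014C80A"))  # subnet base: 10.200.20.0
print(decode_route_addr("00FFFFFF"))  # netmask: 255.255.255.0
print(decode_route_addr("10813FA8"))  # wireserver: 168.63.129.16
```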
Jul 10 23:34:51.022558 waagent[1944]: 2025-07-10T23:34:51.022403Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jul 10 23:34:51.066945 waagent[1944]: 2025-07-10T23:34:51.066824Z INFO MonitorHandler ExtHandler Network interfaces: Jul 10 23:34:51.066945 waagent[1944]: Executing ['ip', '-a', '-o', 'link']: Jul 10 23:34:51.066945 waagent[1944]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 10 23:34:51.066945 waagent[1944]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7a:87:2c brd ff:ff:ff:ff:ff:ff Jul 10 23:34:51.066945 waagent[1944]: 3: enP64521s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7a:87:2c brd ff:ff:ff:ff:ff:ff\ altname enP64521p0s2 Jul 10 23:34:51.066945 waagent[1944]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 10 23:34:51.066945 waagent[1944]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 10 23:34:51.066945 waagent[1944]: 2: eth0 inet 10.200.20.36/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 10 23:34:51.066945 waagent[1944]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 10 23:34:51.066945 waagent[1944]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jul 10 23:34:51.066945 waagent[1944]: 2: eth0 inet6 fe80::222:48ff:fe7a:872c/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 10 23:34:51.066945 waagent[1944]: 3: enP64521s1 inet6 fe80::222:48ff:fe7a:872c/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 10 23:34:51.067232 waagent[1944]: 2025-07-10T23:34:51.067077Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 
54A3FC5F-E602-4762-9515-223BAAE2F188;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jul 10 23:34:51.142789 waagent[1944]: 2025-07-10T23:34:51.142718Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Jul 10 23:34:51.142789 waagent[1944]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 10 23:34:51.142789 waagent[1944]: pkts bytes target prot opt in out source destination Jul 10 23:34:51.142789 waagent[1944]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 10 23:34:51.142789 waagent[1944]: pkts bytes target prot opt in out source destination Jul 10 23:34:51.142789 waagent[1944]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 10 23:34:51.142789 waagent[1944]: pkts bytes target prot opt in out source destination Jul 10 23:34:51.142789 waagent[1944]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 10 23:34:51.142789 waagent[1944]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 10 23:34:51.142789 waagent[1944]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 10 23:34:51.145501 waagent[1944]: 2025-07-10T23:34:51.145445Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 10 23:34:51.145501 waagent[1944]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 10 23:34:51.145501 waagent[1944]: pkts bytes target prot opt in out source destination Jul 10 23:34:51.145501 waagent[1944]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 10 23:34:51.145501 waagent[1944]: pkts bytes target prot opt in out source destination Jul 10 23:34:51.145501 waagent[1944]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 10 23:34:51.145501 waagent[1944]: pkts bytes target prot opt in out source destination Jul 10 23:34:51.145501 waagent[1944]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 10 23:34:51.145501 waagent[1944]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 10 23:34:51.145501 waagent[1944]: 0 0 DROP tcp -- * * 0.0.0.0/0 
168.63.129.16 ctstate INVALID,NEW Jul 10 23:34:51.145721 waagent[1944]: 2025-07-10T23:34:51.145693Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 10 23:34:55.471530 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 10 23:34:55.479783 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:34:55.583270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:34:55.586566 (kubelet)[2185]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 23:34:55.707522 kubelet[2185]: E0710 23:34:55.707451 2185 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 23:34:55.710396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 23:34:55.710541 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 23:34:55.710986 systemd[1]: kubelet.service: Consumed 205ms CPU time, 104.4M memory peak. Jul 10 23:35:05.721664 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 10 23:35:05.729769 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:35:06.068426 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 10 23:35:06.071467 (kubelet)[2199]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 23:35:06.104945 kubelet[2199]: E0710 23:35:06.104893 2199 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 23:35:06.107607 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 23:35:06.107868 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 23:35:06.108720 systemd[1]: kubelet.service: Consumed 118ms CPU time, 105.5M memory peak. Jul 10 23:35:07.401193 chronyd[1694]: Selected source PHC0 Jul 10 23:35:16.221596 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 10 23:35:16.229766 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:35:16.464766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:35:16.467774 (kubelet)[2215]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 23:35:16.499682 kubelet[2215]: E0710 23:35:16.499547 2215 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 23:35:16.501472 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 23:35:16.501595 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
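The kubelet restart attempts in this log arrive roughly 10 seconds apart (23:34:55, 23:35:05, 23:35:16, 23:35:26), consistent with a unit configured to restart unconditionally. A hedged sketch of the sort of drop-in that produces this behavior (hypothetical path and values, not the actual unit file shipped in this image):

```ini
# Hypothetical drop-in, e.g. /etc/systemd/system/kubelet.service.d/10-restart.conf
[Service]
Restart=always
RestartSec=10
```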
Jul 10 23:35:16.502116 systemd[1]: kubelet.service: Consumed 118ms CPU time, 105.1M memory peak. Jul 10 23:35:20.794428 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 10 23:35:20.795573 systemd[1]: Started sshd@0-10.200.20.36:22-10.200.16.10:57916.service - OpenSSH per-connection server daemon (10.200.16.10:57916). Jul 10 23:35:21.365145 sshd[2223]: Accepted publickey for core from 10.200.16.10 port 57916 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo Jul 10 23:35:21.366380 sshd-session[2223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:35:21.370355 systemd-logind[1707]: New session 3 of user core. Jul 10 23:35:21.377750 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 10 23:35:21.796415 systemd[1]: Started sshd@1-10.200.20.36:22-10.200.16.10:57932.service - OpenSSH per-connection server daemon (10.200.16.10:57932). Jul 10 23:35:22.290272 sshd[2228]: Accepted publickey for core from 10.200.16.10 port 57932 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo Jul 10 23:35:22.291541 sshd-session[2228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:35:22.296659 systemd-logind[1707]: New session 4 of user core. Jul 10 23:35:22.303781 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 10 23:35:22.654350 sshd[2230]: Connection closed by 10.200.16.10 port 57932 Jul 10 23:35:22.654873 sshd-session[2228]: pam_unix(sshd:session): session closed for user core Jul 10 23:35:22.657657 systemd-logind[1707]: Session 4 logged out. Waiting for processes to exit. Jul 10 23:35:22.657869 systemd[1]: sshd@1-10.200.20.36:22-10.200.16.10:57932.service: Deactivated successfully. Jul 10 23:35:22.659372 systemd[1]: session-4.scope: Deactivated successfully. Jul 10 23:35:22.661890 systemd-logind[1707]: Removed session 4. 
Jul 10 23:35:22.739709 systemd[1]: Started sshd@2-10.200.20.36:22-10.200.16.10:57944.service - OpenSSH per-connection server daemon (10.200.16.10:57944). Jul 10 23:35:23.219444 sshd[2236]: Accepted publickey for core from 10.200.16.10 port 57944 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo Jul 10 23:35:23.220751 sshd-session[2236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:35:23.224730 systemd-logind[1707]: New session 5 of user core. Jul 10 23:35:23.232765 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 10 23:35:23.572651 sshd[2238]: Connection closed by 10.200.16.10 port 57944 Jul 10 23:35:23.573158 sshd-session[2236]: pam_unix(sshd:session): session closed for user core Jul 10 23:35:23.576323 systemd[1]: sshd@2-10.200.20.36:22-10.200.16.10:57944.service: Deactivated successfully. Jul 10 23:35:23.577984 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 23:35:23.578779 systemd-logind[1707]: Session 5 logged out. Waiting for processes to exit. Jul 10 23:35:23.579551 systemd-logind[1707]: Removed session 5. Jul 10 23:35:23.658728 systemd[1]: Started sshd@3-10.200.20.36:22-10.200.16.10:57946.service - OpenSSH per-connection server daemon (10.200.16.10:57946). Jul 10 23:35:24.141614 sshd[2244]: Accepted publickey for core from 10.200.16.10 port 57946 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo Jul 10 23:35:24.142929 sshd-session[2244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:35:24.148448 systemd-logind[1707]: New session 6 of user core. Jul 10 23:35:24.152800 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 10 23:35:24.485666 sshd[2246]: Connection closed by 10.200.16.10 port 57946 Jul 10 23:35:24.486170 sshd-session[2244]: pam_unix(sshd:session): session closed for user core Jul 10 23:35:24.489751 systemd-logind[1707]: Session 6 logged out. Waiting for processes to exit. 
Jul 10 23:35:24.490348 systemd[1]: sshd@3-10.200.20.36:22-10.200.16.10:57946.service: Deactivated successfully. Jul 10 23:35:24.492400 systemd[1]: session-6.scope: Deactivated successfully. Jul 10 23:35:24.493425 systemd-logind[1707]: Removed session 6. Jul 10 23:35:24.570828 systemd[1]: Started sshd@4-10.200.20.36:22-10.200.16.10:57950.service - OpenSSH per-connection server daemon (10.200.16.10:57950). Jul 10 23:35:25.033374 sshd[2252]: Accepted publickey for core from 10.200.16.10 port 57950 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo Jul 10 23:35:25.034598 sshd-session[2252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:35:25.038340 systemd-logind[1707]: New session 7 of user core. Jul 10 23:35:25.048852 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 10 23:35:25.361114 sudo[2255]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 10 23:35:25.361374 sudo[2255]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 23:35:25.388970 sudo[2255]: pam_unix(sudo:session): session closed for user root Jul 10 23:35:25.470713 sshd[2254]: Connection closed by 10.200.16.10 port 57950 Jul 10 23:35:25.471537 sshd-session[2252]: pam_unix(sshd:session): session closed for user core Jul 10 23:35:25.474848 systemd-logind[1707]: Session 7 logged out. Waiting for processes to exit. Jul 10 23:35:25.475072 systemd[1]: sshd@4-10.200.20.36:22-10.200.16.10:57950.service: Deactivated successfully. Jul 10 23:35:25.476507 systemd[1]: session-7.scope: Deactivated successfully. Jul 10 23:35:25.478584 systemd-logind[1707]: Removed session 7. Jul 10 23:35:25.571018 systemd[1]: Started sshd@5-10.200.20.36:22-10.200.16.10:57958.service - OpenSSH per-connection server daemon (10.200.16.10:57958). 
Jul 10 23:35:26.048110 sshd[2261]: Accepted publickey for core from 10.200.16.10 port 57958 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo Jul 10 23:35:26.050584 sshd-session[2261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:35:26.055662 systemd-logind[1707]: New session 8 of user core. Jul 10 23:35:26.062754 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 10 23:35:26.316207 sudo[2265]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 10 23:35:26.316842 sudo[2265]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 23:35:26.319960 sudo[2265]: pam_unix(sudo:session): session closed for user root Jul 10 23:35:26.323973 sudo[2264]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 10 23:35:26.324211 sudo[2264]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 23:35:26.335887 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 23:35:26.355909 augenrules[2287]: No rules Jul 10 23:35:26.356902 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 23:35:26.357081 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 23:35:26.358782 sudo[2264]: pam_unix(sudo:session): session closed for user root Jul 10 23:35:26.440698 sshd[2263]: Connection closed by 10.200.16.10 port 57958 Jul 10 23:35:26.441215 sshd-session[2261]: pam_unix(sshd:session): session closed for user core Jul 10 23:35:26.444666 systemd[1]: sshd@5-10.200.20.36:22-10.200.16.10:57958.service: Deactivated successfully. Jul 10 23:35:26.446195 systemd[1]: session-8.scope: Deactivated successfully. Jul 10 23:35:26.446881 systemd-logind[1707]: Session 8 logged out. Waiting for processes to exit. Jul 10 23:35:26.447804 systemd-logind[1707]: Removed session 8. 
Jul 10 23:35:26.524060 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 10 23:35:26.531826 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:35:26.533397 systemd[1]: Started sshd@6-10.200.20.36:22-10.200.16.10:57964.service - OpenSSH per-connection server daemon (10.200.16.10:57964). Jul 10 23:35:26.770335 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:35:26.773753 (kubelet)[2305]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 23:35:26.809082 kubelet[2305]: E0710 23:35:26.808963 2305 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 23:35:26.811242 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 23:35:26.811387 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 23:35:26.811689 systemd[1]: kubelet.service: Consumed 120ms CPU time, 106.8M memory peak. Jul 10 23:35:27.001203 sshd[2297]: Accepted publickey for core from 10.200.16.10 port 57964 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo Jul 10 23:35:27.002768 sshd-session[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:35:27.008226 systemd-logind[1707]: New session 9 of user core. Jul 10 23:35:27.009760 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 10 23:35:27.263201 sudo[2314]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 23:35:27.263455 sudo[2314]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 23:35:27.332688 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB Jul 10 23:35:28.372028 (dockerd)[2331]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 10 23:35:28.372060 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 10 23:35:29.230293 update_engine[1711]: I20250710 23:35:29.229471 1711 update_attempter.cc:509] Updating boot flags... Jul 10 23:35:29.285715 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2351) Jul 10 23:35:29.392423 dockerd[2331]: time="2025-07-10T23:35:29.392379714Z" level=info msg="Starting up" Jul 10 23:35:29.415757 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2351) Jul 10 23:35:29.738953 dockerd[2331]: time="2025-07-10T23:35:29.738743244Z" level=info msg="Loading containers: start." Jul 10 23:35:29.916735 kernel: Initializing XFRM netlink socket Jul 10 23:35:29.990527 systemd-networkd[1457]: docker0: Link UP Jul 10 23:35:30.030707 dockerd[2331]: time="2025-07-10T23:35:30.030661806Z" level=info msg="Loading containers: done." 
Jul 10 23:35:30.051999 dockerd[2331]: time="2025-07-10T23:35:30.051946729Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 10 23:35:30.052168 dockerd[2331]: time="2025-07-10T23:35:30.052054049Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jul 10 23:35:30.052196 dockerd[2331]: time="2025-07-10T23:35:30.052176289Z" level=info msg="Daemon has completed initialization" Jul 10 23:35:30.111637 dockerd[2331]: time="2025-07-10T23:35:30.111545098Z" level=info msg="API listen on /run/docker.sock" Jul 10 23:35:30.111722 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 10 23:35:30.758282 containerd[1734]: time="2025-07-10T23:35:30.758241111Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 10 23:35:31.663125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3011560005.mount: Deactivated successfully. 
Jul 10 23:35:33.075824 containerd[1734]: time="2025-07-10T23:35:33.075773176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:33.082862 containerd[1734]: time="2025-07-10T23:35:33.082828582Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351716" Jul 10 23:35:33.098035 containerd[1734]: time="2025-07-10T23:35:33.097972274Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:33.103184 containerd[1734]: time="2025-07-10T23:35:33.103140958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:33.104573 containerd[1734]: time="2025-07-10T23:35:33.104167999Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 2.345886207s" Jul 10 23:35:33.104573 containerd[1734]: time="2025-07-10T23:35:33.104205439Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\"" Jul 10 23:35:33.105498 containerd[1734]: time="2025-07-10T23:35:33.105469640Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 10 23:35:34.734572 containerd[1734]: time="2025-07-10T23:35:34.734518198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:34.738685 containerd[1734]: time="2025-07-10T23:35:34.738632521Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537623" Jul 10 23:35:34.743114 containerd[1734]: time="2025-07-10T23:35:34.743083405Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:34.749705 containerd[1734]: time="2025-07-10T23:35:34.749654850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:34.750994 containerd[1734]: time="2025-07-10T23:35:34.750640251Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.645136851s" Jul 10 23:35:34.750994 containerd[1734]: time="2025-07-10T23:35:34.750672731Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\"" Jul 10 23:35:34.751249 containerd[1734]: time="2025-07-10T23:35:34.751213651Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 10 23:35:36.045255 containerd[1734]: time="2025-07-10T23:35:36.045197018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:36.048122 containerd[1734]: time="2025-07-10T23:35:36.047916140Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293515" Jul 10 23:35:36.053770 containerd[1734]: time="2025-07-10T23:35:36.053740785Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:36.062378 containerd[1734]: time="2025-07-10T23:35:36.062321352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:36.063748 containerd[1734]: time="2025-07-10T23:35:36.063599833Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.312352782s" Jul 10 23:35:36.063748 containerd[1734]: time="2025-07-10T23:35:36.063653993Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\"" Jul 10 23:35:36.064331 containerd[1734]: time="2025-07-10T23:35:36.064301754Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 10 23:35:36.971412 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jul 10 23:35:36.980788 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:35:37.076647 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 10 23:35:37.079825 (kubelet)[2698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 23:35:37.211269 kubelet[2698]: E0710 23:35:37.211269 2698 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 23:35:37.212891 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 23:35:37.213010 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 23:35:37.213265 systemd[1]: kubelet.service: Consumed 217ms CPU time, 107.3M memory peak. Jul 10 23:35:37.929178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount5436181.mount: Deactivated successfully. Jul 10 23:35:38.378281 containerd[1734]: time="2025-07-10T23:35:38.378237661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:38.381015 containerd[1734]: time="2025-07-10T23:35:38.380981022Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199472" Jul 10 23:35:38.391241 containerd[1734]: time="2025-07-10T23:35:38.391181584Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:38.397086 containerd[1734]: time="2025-07-10T23:35:38.397016305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:38.397834 containerd[1734]: time="2025-07-10T23:35:38.397601225Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 2.333264951s" Jul 10 23:35:38.397834 containerd[1734]: time="2025-07-10T23:35:38.397647025Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\"" Jul 10 23:35:38.398397 containerd[1734]: time="2025-07-10T23:35:38.398243225Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 10 23:35:39.170451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1262982925.mount: Deactivated successfully. Jul 10 23:35:41.101435 containerd[1734]: time="2025-07-10T23:35:41.101377011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:41.104785 containerd[1734]: time="2025-07-10T23:35:41.104554095Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Jul 10 23:35:41.109047 containerd[1734]: time="2025-07-10T23:35:41.109002942Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:41.119143 containerd[1734]: time="2025-07-10T23:35:41.119102316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:41.119975 containerd[1734]: time="2025-07-10T23:35:41.119859437Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id 
\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.721586492s" Jul 10 23:35:41.119975 containerd[1734]: time="2025-07-10T23:35:41.119888477Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jul 10 23:35:41.120415 containerd[1734]: time="2025-07-10T23:35:41.120277717Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 10 23:35:41.792043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3816421520.mount: Deactivated successfully. Jul 10 23:35:41.819777 containerd[1734]: time="2025-07-10T23:35:41.819718211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:41.824262 containerd[1734]: time="2025-07-10T23:35:41.823995617Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jul 10 23:35:41.828733 containerd[1734]: time="2025-07-10T23:35:41.828655303Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:41.836945 containerd[1734]: time="2025-07-10T23:35:41.836898395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:41.837765 containerd[1734]: time="2025-07-10T23:35:41.837659476Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo 
digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 717.353079ms" Jul 10 23:35:41.837765 containerd[1734]: time="2025-07-10T23:35:41.837689436Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 10 23:35:41.838501 containerd[1734]: time="2025-07-10T23:35:41.838430837Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 10 23:35:42.637698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount277869287.mount: Deactivated successfully. Jul 10 23:35:45.947704 containerd[1734]: time="2025-07-10T23:35:45.947658075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:45.951761 containerd[1734]: time="2025-07-10T23:35:45.951710516Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334599" Jul 10 23:35:45.958320 containerd[1734]: time="2025-07-10T23:35:45.958273318Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:45.966382 containerd[1734]: time="2025-07-10T23:35:45.966352960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:35:45.967235 containerd[1734]: time="2025-07-10T23:35:45.967109600Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 4.128651643s" Jul 10 
23:35:45.967235 containerd[1734]: time="2025-07-10T23:35:45.967141680Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jul 10 23:35:47.222116 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jul 10 23:35:47.233861 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:35:47.381783 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:35:47.385072 (kubelet)[2853]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 23:35:47.419144 kubelet[2853]: E0710 23:35:47.419102 2853 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 23:35:47.422008 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 23:35:47.422246 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 23:35:47.424701 systemd[1]: kubelet.service: Consumed 113ms CPU time, 107.1M memory peak. Jul 10 23:35:51.002528 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:35:51.002833 systemd[1]: kubelet.service: Consumed 113ms CPU time, 107.1M memory peak. Jul 10 23:35:51.010827 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:35:51.034016 systemd[1]: Reload requested from client PID 2867 ('systemctl') (unit session-9.scope)... Jul 10 23:35:51.034156 systemd[1]: Reloading... Jul 10 23:35:51.151714 zram_generator::config[2912]: No configuration found. 
Jul 10 23:35:51.255594 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 23:35:51.370869 systemd[1]: Reloading finished in 336 ms. Jul 10 23:35:51.420339 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:35:51.424267 (kubelet)[2971]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 23:35:51.427149 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:35:51.428137 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 23:35:51.429657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:35:51.429711 systemd[1]: kubelet.service: Consumed 86ms CPU time, 96M memory peak. Jul 10 23:35:51.435872 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:35:51.527499 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:35:51.531749 (kubelet)[2985]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 23:35:51.567143 kubelet[2985]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 23:35:51.567475 kubelet[2985]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 10 23:35:51.567518 kubelet[2985]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 10 23:35:51.567671 kubelet[2985]: I0710 23:35:51.567639 2985 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 23:35:52.080599 kubelet[2985]: I0710 23:35:52.080563 2985 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 10 23:35:52.081416 kubelet[2985]: I0710 23:35:52.080852 2985 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 23:35:52.081416 kubelet[2985]: I0710 23:35:52.081332 2985 server.go:956] "Client rotation is on, will bootstrap in background" Jul 10 23:35:52.095851 kubelet[2985]: E0710 23:35:52.095825 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 10 23:35:52.097004 kubelet[2985]: I0710 23:35:52.096976 2985 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 23:35:52.105335 kubelet[2985]: E0710 23:35:52.105303 2985 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 10 23:35:52.105443 kubelet[2985]: I0710 23:35:52.105430 2985 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 10 23:35:52.108337 kubelet[2985]: I0710 23:35:52.108315 2985 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 23:35:52.109613 kubelet[2985]: I0710 23:35:52.109577 2985 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 23:35:52.109885 kubelet[2985]: I0710 23:35:52.109731 2985 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.1-n-2b36f27b4a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 23:35:52.110025 kubelet[2985]: I0710 23:35:52.110013 2985 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 
23:35:52.110084 kubelet[2985]: I0710 23:35:52.110076 2985 container_manager_linux.go:303] "Creating device plugin manager" Jul 10 23:35:52.110281 kubelet[2985]: I0710 23:35:52.110267 2985 state_mem.go:36] "Initialized new in-memory state store" Jul 10 23:35:52.113036 kubelet[2985]: I0710 23:35:52.113015 2985 kubelet.go:480] "Attempting to sync node with API server" Jul 10 23:35:52.113134 kubelet[2985]: I0710 23:35:52.113123 2985 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 23:35:52.113206 kubelet[2985]: I0710 23:35:52.113197 2985 kubelet.go:386] "Adding apiserver pod source" Jul 10 23:35:52.114692 kubelet[2985]: I0710 23:35:52.114672 2985 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 23:35:52.118093 kubelet[2985]: E0710 23:35:52.117820 2985 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.1-n-2b36f27b4a&limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 10 23:35:52.119661 kubelet[2985]: E0710 23:35:52.119613 2985 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 10 23:35:52.119722 kubelet[2985]: I0710 23:35:52.119708 2985 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 10 23:35:52.120281 kubelet[2985]: I0710 23:35:52.120251 2985 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 10 
23:35:52.120335 kubelet[2985]: W0710 23:35:52.120311 2985 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 10 23:35:52.125035 kubelet[2985]: I0710 23:35:52.124942 2985 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 23:35:52.125035 kubelet[2985]: I0710 23:35:52.124996 2985 server.go:1289] "Started kubelet" Jul 10 23:35:52.126118 kubelet[2985]: I0710 23:35:52.126079 2985 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 23:35:52.130094 kubelet[2985]: I0710 23:35:52.130038 2985 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 23:35:52.130415 kubelet[2985]: I0710 23:35:52.130389 2985 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 23:35:52.131157 kubelet[2985]: I0710 23:35:52.131136 2985 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 23:35:52.132868 kubelet[2985]: I0710 23:35:52.132835 2985 server.go:317] "Adding debug handlers to kubelet server" Jul 10 23:35:52.135861 kubelet[2985]: E0710 23:35:52.134885 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.36:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.36:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.1-n-2b36f27b4a.185108060e342e3f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.1-n-2b36f27b4a,UID:ci-4230.2.1-n-2b36f27b4a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.1-n-2b36f27b4a,},FirstTimestamp:2025-07-10 23:35:52.124960319 +0000 UTC m=+0.590076980,LastTimestamp:2025-07-10 23:35:52.124960319 +0000 UTC m=+0.590076980,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.1-n-2b36f27b4a,}" Jul 10 23:35:52.136402 kubelet[2985]: I0710 23:35:52.136368 2985 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 23:35:52.138879 kubelet[2985]: E0710 23:35:52.138853 2985 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 23:35:52.139193 kubelet[2985]: I0710 23:35:52.139167 2985 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 23:35:52.139280 kubelet[2985]: E0710 23:35:52.139258 2985 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-2b36f27b4a\" not found" Jul 10 23:35:52.139331 kubelet[2985]: I0710 23:35:52.139315 2985 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 23:35:52.139373 kubelet[2985]: I0710 23:35:52.139358 2985 reconciler.go:26] "Reconciler: start to sync state" Jul 10 23:35:52.139926 kubelet[2985]: E0710 23:35:52.139890 2985 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 10 23:35:52.140013 kubelet[2985]: E0710 23:35:52.139957 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.1-n-2b36f27b4a?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="200ms" Jul 10 23:35:52.140118 kubelet[2985]: I0710 23:35:52.140095 2985 factory.go:223] Registration of the systemd container factory successfully Jul 10 
23:35:52.140184 kubelet[2985]: I0710 23:35:52.140162 2985 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 23:35:52.141154 kubelet[2985]: I0710 23:35:52.141127 2985 factory.go:223] Registration of the containerd container factory successfully Jul 10 23:35:52.150982 kubelet[2985]: I0710 23:35:52.150948 2985 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 10 23:35:52.151909 kubelet[2985]: I0710 23:35:52.151891 2985 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 10 23:35:52.151993 kubelet[2985]: I0710 23:35:52.151984 2985 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 10 23:35:52.152347 kubelet[2985]: I0710 23:35:52.152060 2985 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 10 23:35:52.152347 kubelet[2985]: I0710 23:35:52.152078 2985 kubelet.go:2436] "Starting kubelet main sync loop" Jul 10 23:35:52.152347 kubelet[2985]: E0710 23:35:52.152120 2985 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 23:35:52.158004 kubelet[2985]: E0710 23:35:52.157976 2985 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 10 23:35:52.169445 kubelet[2985]: I0710 23:35:52.169413 2985 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 23:35:52.169445 kubelet[2985]: I0710 23:35:52.169436 2985 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 
23:35:52.169445 kubelet[2985]: I0710 23:35:52.169451 2985 state_mem.go:36] "Initialized new in-memory state store" Jul 10 23:35:52.175695 kubelet[2985]: I0710 23:35:52.175672 2985 policy_none.go:49] "None policy: Start" Jul 10 23:35:52.175695 kubelet[2985]: I0710 23:35:52.175695 2985 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 23:35:52.175774 kubelet[2985]: I0710 23:35:52.175705 2985 state_mem.go:35] "Initializing new in-memory state store" Jul 10 23:35:52.184319 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 10 23:35:52.196273 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 10 23:35:52.199358 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 10 23:35:52.210569 kubelet[2985]: E0710 23:35:52.210541 2985 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 10 23:35:52.211020 kubelet[2985]: I0710 23:35:52.210765 2985 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 23:35:52.211020 kubelet[2985]: I0710 23:35:52.210777 2985 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 23:35:52.211020 kubelet[2985]: I0710 23:35:52.211010 2985 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 23:35:52.212058 kubelet[2985]: E0710 23:35:52.211948 2985 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 10 23:35:52.212058 kubelet[2985]: E0710 23:35:52.211990 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.2.1-n-2b36f27b4a\" not found" Jul 10 23:35:52.266871 systemd[1]: Created slice kubepods-burstable-podeedc21e0a7016df519a61afb2300f379.slice - libcontainer container kubepods-burstable-podeedc21e0a7016df519a61afb2300f379.slice. Jul 10 23:35:52.278104 kubelet[2985]: E0710 23:35:52.277769 2985 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-n-2b36f27b4a\" not found" node="ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:52.283559 systemd[1]: Created slice kubepods-burstable-podac3e1896de1bdcde66b58b835ec98991.slice - libcontainer container kubepods-burstable-podac3e1896de1bdcde66b58b835ec98991.slice. Jul 10 23:35:52.285120 kubelet[2985]: E0710 23:35:52.285089 2985 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-n-2b36f27b4a\" not found" node="ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:52.295851 systemd[1]: Created slice kubepods-burstable-pod770e66acfa82e7c261bd9c96ffe452b5.slice - libcontainer container kubepods-burstable-pod770e66acfa82e7c261bd9c96ffe452b5.slice. 
Jul 10 23:35:52.297491 kubelet[2985]: E0710 23:35:52.297468 2985 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-n-2b36f27b4a\" not found" node="ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:52.313772 kubelet[2985]: I0710 23:35:52.313747 2985 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:52.316647 kubelet[2985]: E0710 23:35:52.314766 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:52.340985 kubelet[2985]: I0710 23:35:52.340891 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eedc21e0a7016df519a61afb2300f379-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.1-n-2b36f27b4a\" (UID: \"eedc21e0a7016df519a61afb2300f379\") " pod="kube-system/kube-apiserver-ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:52.340985 kubelet[2985]: I0710 23:35:52.340930 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ac3e1896de1bdcde66b58b835ec98991-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.1-n-2b36f27b4a\" (UID: \"ac3e1896de1bdcde66b58b835ec98991\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:52.340985 kubelet[2985]: I0710 23:35:52.340951 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ac3e1896de1bdcde66b58b835ec98991-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.1-n-2b36f27b4a\" (UID: \"ac3e1896de1bdcde66b58b835ec98991\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-n-2b36f27b4a" Jul 10 
23:35:52.340985 kubelet[2985]: I0710 23:35:52.340967 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ac3e1896de1bdcde66b58b835ec98991-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.1-n-2b36f27b4a\" (UID: \"ac3e1896de1bdcde66b58b835ec98991\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:52.341122 kubelet[2985]: I0710 23:35:52.340994 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/770e66acfa82e7c261bd9c96ffe452b5-kubeconfig\") pod \"kube-scheduler-ci-4230.2.1-n-2b36f27b4a\" (UID: \"770e66acfa82e7c261bd9c96ffe452b5\") " pod="kube-system/kube-scheduler-ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:52.341122 kubelet[2985]: I0710 23:35:52.341010 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ac3e1896de1bdcde66b58b835ec98991-ca-certs\") pod \"kube-controller-manager-ci-4230.2.1-n-2b36f27b4a\" (UID: \"ac3e1896de1bdcde66b58b835ec98991\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:52.341122 kubelet[2985]: I0710 23:35:52.341023 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ac3e1896de1bdcde66b58b835ec98991-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.1-n-2b36f27b4a\" (UID: \"ac3e1896de1bdcde66b58b835ec98991\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:52.341122 kubelet[2985]: I0710 23:35:52.341036 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eedc21e0a7016df519a61afb2300f379-ca-certs\") pod 
\"kube-apiserver-ci-4230.2.1-n-2b36f27b4a\" (UID: \"eedc21e0a7016df519a61afb2300f379\") " pod="kube-system/kube-apiserver-ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:52.341122 kubelet[2985]: I0710 23:35:52.341050 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eedc21e0a7016df519a61afb2300f379-k8s-certs\") pod \"kube-apiserver-ci-4230.2.1-n-2b36f27b4a\" (UID: \"eedc21e0a7016df519a61afb2300f379\") " pod="kube-system/kube-apiserver-ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:52.341806 kubelet[2985]: E0710 23:35:52.341765 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.1-n-2b36f27b4a?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="400ms" Jul 10 23:35:52.517245 kubelet[2985]: I0710 23:35:52.517220 2985 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:52.517725 kubelet[2985]: E0710 23:35:52.517705 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:52.578845 containerd[1734]: time="2025-07-10T23:35:52.578803257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.1-n-2b36f27b4a,Uid:eedc21e0a7016df519a61afb2300f379,Namespace:kube-system,Attempt:0,}" Jul 10 23:35:52.586676 containerd[1734]: time="2025-07-10T23:35:52.586354419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.1-n-2b36f27b4a,Uid:ac3e1896de1bdcde66b58b835ec98991,Namespace:kube-system,Attempt:0,}" Jul 10 23:35:52.598472 containerd[1734]: time="2025-07-10T23:35:52.598353343Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.1-n-2b36f27b4a,Uid:770e66acfa82e7c261bd9c96ffe452b5,Namespace:kube-system,Attempt:0,}" Jul 10 23:35:52.742667 kubelet[2985]: E0710 23:35:52.742600 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.1-n-2b36f27b4a?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="800ms" Jul 10 23:35:52.920014 kubelet[2985]: I0710 23:35:52.919919 2985 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:52.920252 kubelet[2985]: E0710 23:35:52.920214 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:53.080472 kubelet[2985]: E0710 23:35:53.080427 2985 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 10 23:35:53.216720 kubelet[2985]: E0710 23:35:53.216584 2985 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.1-n-2b36f27b4a&limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 10 23:35:53.357994 kubelet[2985]: E0710 23:35:53.357949 2985 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection 
refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 10 23:35:53.372540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3255074907.mount: Deactivated successfully. Jul 10 23:35:53.417821 kubelet[2985]: E0710 23:35:53.417784 2985 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 10 23:35:53.424732 containerd[1734]: time="2025-07-10T23:35:53.424693514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 23:35:53.444370 containerd[1734]: time="2025-07-10T23:35:53.444324760Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jul 10 23:35:53.455594 containerd[1734]: time="2025-07-10T23:35:53.455559363Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 23:35:53.467228 containerd[1734]: time="2025-07-10T23:35:53.467148047Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 23:35:53.470744 containerd[1734]: time="2025-07-10T23:35:53.470707328Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 23:35:53.476655 containerd[1734]: 
time="2025-07-10T23:35:53.476611769Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 10 23:35:53.481749 containerd[1734]: time="2025-07-10T23:35:53.481703011Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 10 23:35:53.487532 containerd[1734]: time="2025-07-10T23:35:53.487484173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 23:35:53.488490 containerd[1734]: time="2025-07-10T23:35:53.488262373Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 909.375556ms" Jul 10 23:35:53.497119 containerd[1734]: time="2025-07-10T23:35:53.497057056Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 910.642717ms" Jul 10 23:35:53.508001 containerd[1734]: time="2025-07-10T23:35:53.507964499Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 909.527316ms" Jul 10 23:35:53.543210 kubelet[2985]: E0710 23:35:53.543169 2985 controller.go:145] "Failed to ensure lease exists, 
will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.1-n-2b36f27b4a?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="1.6s" Jul 10 23:35:53.722541 kubelet[2985]: I0710 23:35:53.722513 2985 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:53.722859 kubelet[2985]: E0710 23:35:53.722832 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:54.208776 kubelet[2985]: E0710 23:35:54.208664 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.36:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 10 23:35:54.238429 containerd[1734]: time="2025-07-10T23:35:54.238309641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 23:35:54.238429 containerd[1734]: time="2025-07-10T23:35:54.238373761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 23:35:54.238429 containerd[1734]: time="2025-07-10T23:35:54.238389081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:35:54.239206 containerd[1734]: time="2025-07-10T23:35:54.238455241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:35:54.242074 containerd[1734]: time="2025-07-10T23:35:54.241712522Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 23:35:54.242074 containerd[1734]: time="2025-07-10T23:35:54.241776482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 23:35:54.242074 containerd[1734]: time="2025-07-10T23:35:54.241792162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:35:54.242271 containerd[1734]: time="2025-07-10T23:35:54.241862762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 23:35:54.242271 containerd[1734]: time="2025-07-10T23:35:54.242012122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 23:35:54.242496 containerd[1734]: time="2025-07-10T23:35:54.242183442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:35:54.242767 containerd[1734]: time="2025-07-10T23:35:54.242702082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:35:54.246657 containerd[1734]: time="2025-07-10T23:35:54.245008563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:35:54.265875 systemd[1]: Started cri-containerd-a559d5e15bb1d1e56febe1dd6484658c843732793dfe780ddf84ae243247b372.scope - libcontainer container a559d5e15bb1d1e56febe1dd6484658c843732793dfe780ddf84ae243247b372. 
Jul 10 23:35:54.267926 systemd[1]: Started cri-containerd-cf45c072311f576ae4ded0ff1dcb92c5602ecf062dfbfda10214e568f5d3861d.scope - libcontainer container cf45c072311f576ae4ded0ff1dcb92c5602ecf062dfbfda10214e568f5d3861d. Jul 10 23:35:54.272416 systemd[1]: Started cri-containerd-113b828b73d5e16cd0e6152f706c0ad53d246fa2e1746b39170283cee5ad4373.scope - libcontainer container 113b828b73d5e16cd0e6152f706c0ad53d246fa2e1746b39170283cee5ad4373. Jul 10 23:35:54.324410 containerd[1734]: time="2025-07-10T23:35:54.324014987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.1-n-2b36f27b4a,Uid:770e66acfa82e7c261bd9c96ffe452b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"113b828b73d5e16cd0e6152f706c0ad53d246fa2e1746b39170283cee5ad4373\"" Jul 10 23:35:54.324410 containerd[1734]: time="2025-07-10T23:35:54.324164627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.1-n-2b36f27b4a,Uid:eedc21e0a7016df519a61afb2300f379,Namespace:kube-system,Attempt:0,} returns sandbox id \"a559d5e15bb1d1e56febe1dd6484658c843732793dfe780ddf84ae243247b372\"" Jul 10 23:35:54.326197 containerd[1734]: time="2025-07-10T23:35:54.326072107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.1-n-2b36f27b4a,Uid:ac3e1896de1bdcde66b58b835ec98991,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf45c072311f576ae4ded0ff1dcb92c5602ecf062dfbfda10214e568f5d3861d\"" Jul 10 23:35:54.334564 containerd[1734]: time="2025-07-10T23:35:54.334512150Z" level=info msg="CreateContainer within sandbox \"a559d5e15bb1d1e56febe1dd6484658c843732793dfe780ddf84ae243247b372\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 10 23:35:54.343535 containerd[1734]: time="2025-07-10T23:35:54.343502953Z" level=info msg="CreateContainer within sandbox \"113b828b73d5e16cd0e6152f706c0ad53d246fa2e1746b39170283cee5ad4373\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 10 
23:35:54.348575 containerd[1734]: time="2025-07-10T23:35:54.348550434Z" level=info msg="CreateContainer within sandbox \"cf45c072311f576ae4ded0ff1dcb92c5602ecf062dfbfda10214e568f5d3861d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 10 23:35:54.418710 containerd[1734]: time="2025-07-10T23:35:54.418661735Z" level=info msg="CreateContainer within sandbox \"a559d5e15bb1d1e56febe1dd6484658c843732793dfe780ddf84ae243247b372\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7205e5854974cab846b7815383ef35b5d477636b7f29b7fff329ca52dd08bf6b\"" Jul 10 23:35:54.419646 containerd[1734]: time="2025-07-10T23:35:54.419373216Z" level=info msg="StartContainer for \"7205e5854974cab846b7815383ef35b5d477636b7f29b7fff329ca52dd08bf6b\"" Jul 10 23:35:54.440911 containerd[1734]: time="2025-07-10T23:35:54.440660142Z" level=info msg="CreateContainer within sandbox \"cf45c072311f576ae4ded0ff1dcb92c5602ecf062dfbfda10214e568f5d3861d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4216e5ba3777fefc916ae377a2577a5088e2febfe0415e8ef5ff06b1508d78c5\"" Jul 10 23:35:54.441157 containerd[1734]: time="2025-07-10T23:35:54.441108222Z" level=info msg="StartContainer for \"4216e5ba3777fefc916ae377a2577a5088e2febfe0415e8ef5ff06b1508d78c5\"" Jul 10 23:35:54.442863 systemd[1]: Started cri-containerd-7205e5854974cab846b7815383ef35b5d477636b7f29b7fff329ca52dd08bf6b.scope - libcontainer container 7205e5854974cab846b7815383ef35b5d477636b7f29b7fff329ca52dd08bf6b. 
Jul 10 23:35:54.448561 containerd[1734]: time="2025-07-10T23:35:54.448497944Z" level=info msg="CreateContainer within sandbox \"113b828b73d5e16cd0e6152f706c0ad53d246fa2e1746b39170283cee5ad4373\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a1527db6c919e9a1d303ac2e75d1acd415d0dafcec42353b266889f00ec875c7\"" Jul 10 23:35:54.449156 containerd[1734]: time="2025-07-10T23:35:54.449134145Z" level=info msg="StartContainer for \"a1527db6c919e9a1d303ac2e75d1acd415d0dafcec42353b266889f00ec875c7\"" Jul 10 23:35:54.480883 systemd[1]: Started cri-containerd-4216e5ba3777fefc916ae377a2577a5088e2febfe0415e8ef5ff06b1508d78c5.scope - libcontainer container 4216e5ba3777fefc916ae377a2577a5088e2febfe0415e8ef5ff06b1508d78c5. Jul 10 23:35:54.482484 systemd[1]: Started cri-containerd-a1527db6c919e9a1d303ac2e75d1acd415d0dafcec42353b266889f00ec875c7.scope - libcontainer container a1527db6c919e9a1d303ac2e75d1acd415d0dafcec42353b266889f00ec875c7. Jul 10 23:35:54.496869 containerd[1734]: time="2025-07-10T23:35:54.496735919Z" level=info msg="StartContainer for \"7205e5854974cab846b7815383ef35b5d477636b7f29b7fff329ca52dd08bf6b\" returns successfully" Jul 10 23:35:54.540094 containerd[1734]: time="2025-07-10T23:35:54.540058132Z" level=info msg="StartContainer for \"4216e5ba3777fefc916ae377a2577a5088e2febfe0415e8ef5ff06b1508d78c5\" returns successfully" Jul 10 23:35:54.551716 containerd[1734]: time="2025-07-10T23:35:54.551601656Z" level=info msg="StartContainer for \"a1527db6c919e9a1d303ac2e75d1acd415d0dafcec42353b266889f00ec875c7\" returns successfully" Jul 10 23:35:55.168896 kubelet[2985]: E0710 23:35:55.168369 2985 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-n-2b36f27b4a\" not found" node="ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:55.170017 kubelet[2985]: E0710 23:35:55.169998 2985 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4230.2.1-n-2b36f27b4a\" not found" node="ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:55.173008 kubelet[2985]: E0710 23:35:55.172859 2985 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-n-2b36f27b4a\" not found" node="ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:55.325215 kubelet[2985]: I0710 23:35:55.325191 2985 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:56.175005 kubelet[2985]: E0710 23:35:56.174831 2985 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-n-2b36f27b4a\" not found" node="ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:56.175379 kubelet[2985]: E0710 23:35:56.175364 2985 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-n-2b36f27b4a\" not found" node="ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:56.307765 kubelet[2985]: E0710 23:35:56.307722 2985 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.2.1-n-2b36f27b4a\" not found" node="ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:56.423302 kubelet[2985]: E0710 23:35:56.423166 2985 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230.2.1-n-2b36f27b4a.185108060e342e3f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.1-n-2b36f27b4a,UID:ci-4230.2.1-n-2b36f27b4a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.1-n-2b36f27b4a,},FirstTimestamp:2025-07-10 23:35:52.124960319 +0000 UTC m=+0.590076980,LastTimestamp:2025-07-10 23:35:52.124960319 +0000 UTC m=+0.590076980,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.1-n-2b36f27b4a,}" Jul 10 23:35:56.477538 kubelet[2985]: I0710 23:35:56.477493 2985 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:56.492283 kubelet[2985]: E0710 23:35:56.491647 2985 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230.2.1-n-2b36f27b4a.185108060f07eb43 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.1-n-2b36f27b4a,UID:ci-4230.2.1-n-2b36f27b4a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4230.2.1-n-2b36f27b4a,},FirstTimestamp:2025-07-10 23:35:52.138836803 +0000 UTC m=+0.603953464,LastTimestamp:2025-07-10 23:35:52.138836803 +0000 UTC m=+0.603953464,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.1-n-2b36f27b4a,}" Jul 10 23:35:56.539785 kubelet[2985]: I0710 23:35:56.539744 2985 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:56.553854 kubelet[2985]: E0710 23:35:56.553814 2985 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.1-n-2b36f27b4a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:56.553854 kubelet[2985]: I0710 23:35:56.553843 2985 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:56.558287 kubelet[2985]: E0710 23:35:56.558134 2985 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.2.1-n-2b36f27b4a\" is 
forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:56.558287 kubelet[2985]: I0710 23:35:56.558158 2985 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:56.560020 kubelet[2985]: E0710 23:35:56.559984 2985 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.1-n-2b36f27b4a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:57.124433 kubelet[2985]: I0710 23:35:57.124394 2985 apiserver.go:52] "Watching apiserver" Jul 10 23:35:57.139731 kubelet[2985]: I0710 23:35:57.139677 2985 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 23:35:58.591140 systemd[1]: Reload requested from client PID 3272 ('systemctl') (unit session-9.scope)... Jul 10 23:35:58.591156 systemd[1]: Reloading... Jul 10 23:35:58.697656 zram_generator::config[3317]: No configuration found. Jul 10 23:35:58.802186 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 23:35:58.929885 systemd[1]: Reloading finished in 338 ms. Jul 10 23:35:58.953423 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:35:58.962918 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 23:35:58.963243 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:35:58.963356 systemd[1]: kubelet.service: Consumed 925ms CPU time, 129.5M memory peak. Jul 10 23:35:58.970829 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:35:59.073545 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 10 23:35:59.080054 (kubelet)[3384]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 23:35:59.115419 kubelet[3384]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 23:35:59.115779 kubelet[3384]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 10 23:35:59.115826 kubelet[3384]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 23:35:59.115979 kubelet[3384]: I0710 23:35:59.115948 3384 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 23:35:59.121201 kubelet[3384]: I0710 23:35:59.121173 3384 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 10 23:35:59.121324 kubelet[3384]: I0710 23:35:59.121314 3384 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 23:35:59.121557 kubelet[3384]: I0710 23:35:59.121545 3384 server.go:956] "Client rotation is on, will bootstrap in background" Jul 10 23:35:59.122795 kubelet[3384]: I0710 23:35:59.122776 3384 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 10 23:35:59.124919 kubelet[3384]: I0710 23:35:59.124888 3384 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 23:35:59.127791 kubelet[3384]: E0710 23:35:59.127760 3384 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = 
Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 10 23:35:59.127791 kubelet[3384]: I0710 23:35:59.127791 3384 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 10 23:35:59.134749 kubelet[3384]: I0710 23:35:59.134723 3384 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 10 23:35:59.134922 kubelet[3384]: I0710 23:35:59.134894 3384 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 23:35:59.135054 kubelet[3384]: I0710 23:35:59.134920 3384 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.1-n-2b36f27b4a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"
CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 23:35:59.135124 kubelet[3384]: I0710 23:35:59.135060 3384 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 23:35:59.135124 kubelet[3384]: I0710 23:35:59.135068 3384 container_manager_linux.go:303] "Creating device plugin manager" Jul 10 23:35:59.135178 kubelet[3384]: I0710 23:35:59.135126 3384 state_mem.go:36] "Initialized new in-memory state store" Jul 10 23:35:59.135266 kubelet[3384]: I0710 23:35:59.135253 3384 kubelet.go:480] "Attempting to sync node with API server" Jul 10 23:35:59.135299 kubelet[3384]: I0710 23:35:59.135269 3384 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 23:35:59.135299 kubelet[3384]: I0710 23:35:59.135293 3384 kubelet.go:386] "Adding apiserver pod source" Jul 10 23:35:59.136472 kubelet[3384]: I0710 23:35:59.135305 3384 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 23:35:59.137995 kubelet[3384]: I0710 23:35:59.137772 3384 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 10 23:35:59.138484 kubelet[3384]: I0710 23:35:59.138468 3384 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 10 23:35:59.141676 kubelet[3384]: I0710 23:35:59.140505 3384 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 23:35:59.141676 kubelet[3384]: I0710 23:35:59.140537 3384 server.go:1289] "Started kubelet" Jul 10 23:35:59.142367 kubelet[3384]: I0710 23:35:59.142349 3384 fs_resource_analyzer.go:67] "Starting FS 
ResourceAnalyzer" Jul 10 23:35:59.149318 kubelet[3384]: I0710 23:35:59.149271 3384 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 23:35:59.153661 kubelet[3384]: I0710 23:35:59.152450 3384 server.go:317] "Adding debug handlers to kubelet server" Jul 10 23:35:59.157661 kubelet[3384]: I0710 23:35:59.155478 3384 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 23:35:59.157661 kubelet[3384]: E0710 23:35:59.155743 3384 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-n-2b36f27b4a\" not found" Jul 10 23:35:59.157661 kubelet[3384]: I0710 23:35:59.156026 3384 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 23:35:59.157661 kubelet[3384]: I0710 23:35:59.156133 3384 reconciler.go:26] "Reconciler: start to sync state" Jul 10 23:35:59.159116 kubelet[3384]: I0710 23:35:59.159068 3384 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 23:35:59.159654 kubelet[3384]: I0710 23:35:59.159243 3384 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 23:35:59.159654 kubelet[3384]: I0710 23:35:59.159414 3384 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 23:35:59.168605 kubelet[3384]: I0710 23:35:59.167180 3384 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 10 23:35:59.168605 kubelet[3384]: I0710 23:35:59.168282 3384 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jul 10 23:35:59.168605 kubelet[3384]: I0710 23:35:59.168320 3384 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 10 23:35:59.168605 kubelet[3384]: I0710 23:35:59.168336 3384 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 10 23:35:59.168605 kubelet[3384]: I0710 23:35:59.168342 3384 kubelet.go:2436] "Starting kubelet main sync loop" Jul 10 23:35:59.168605 kubelet[3384]: E0710 23:35:59.168397 3384 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 23:35:59.184832 kubelet[3384]: I0710 23:35:59.184745 3384 factory.go:223] Registration of the systemd container factory successfully Jul 10 23:35:59.184893 kubelet[3384]: I0710 23:35:59.184856 3384 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 23:35:59.187221 kubelet[3384]: E0710 23:35:59.187201 3384 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 23:35:59.190389 kubelet[3384]: I0710 23:35:59.190372 3384 factory.go:223] Registration of the containerd container factory successfully Jul 10 23:35:59.255104 kubelet[3384]: I0710 23:35:59.255066 3384 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 23:35:59.255264 kubelet[3384]: I0710 23:35:59.255249 3384 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 23:35:59.255328 kubelet[3384]: I0710 23:35:59.255320 3384 state_mem.go:36] "Initialized new in-memory state store" Jul 10 23:35:59.255605 kubelet[3384]: I0710 23:35:59.255591 3384 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 10 23:35:59.255717 kubelet[3384]: I0710 23:35:59.255694 3384 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 10 23:35:59.255766 kubelet[3384]: I0710 23:35:59.255759 3384 policy_none.go:49] "None policy: Start" Jul 10 23:35:59.255810 kubelet[3384]: I0710 23:35:59.255802 3384 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 23:35:59.255890 kubelet[3384]: I0710 23:35:59.255881 3384 state_mem.go:35] "Initializing new in-memory state store" Jul 10 23:35:59.256062 kubelet[3384]: I0710 23:35:59.256041 3384 state_mem.go:75] "Updated machine memory state" Jul 10 23:35:59.261540 kubelet[3384]: E0710 23:35:59.261515 3384 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 10 23:35:59.261697 kubelet[3384]: I0710 23:35:59.261677 3384 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 23:35:59.261758 kubelet[3384]: I0710 23:35:59.261693 3384 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 23:35:59.263343 kubelet[3384]: I0710 23:35:59.262695 3384 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 23:35:59.266275 kubelet[3384]: E0710 23:35:59.266258 3384 
eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 10 23:35:59.271638 kubelet[3384]: I0710 23:35:59.271250 3384 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:59.272053 kubelet[3384]: I0710 23:35:59.272017 3384 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:59.272148 kubelet[3384]: I0710 23:35:59.271853 3384 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:59.282064 kubelet[3384]: I0710 23:35:59.282033 3384 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 10 23:35:59.288640 kubelet[3384]: I0710 23:35:59.288583 3384 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 10 23:35:59.288841 kubelet[3384]: I0710 23:35:59.288728 3384 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 10 23:35:59.378440 kubelet[3384]: I0710 23:35:59.377613 3384 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:59.394157 kubelet[3384]: I0710 23:35:59.394032 3384 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:59.394157 kubelet[3384]: I0710 23:35:59.394106 3384 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:59.458040 kubelet[3384]: I0710 23:35:59.457801 3384 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ac3e1896de1bdcde66b58b835ec98991-ca-certs\") pod \"kube-controller-manager-ci-4230.2.1-n-2b36f27b4a\" (UID: \"ac3e1896de1bdcde66b58b835ec98991\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:59.458040 kubelet[3384]: I0710 23:35:59.457841 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ac3e1896de1bdcde66b58b835ec98991-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.1-n-2b36f27b4a\" (UID: \"ac3e1896de1bdcde66b58b835ec98991\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:59.458040 kubelet[3384]: I0710 23:35:59.457859 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ac3e1896de1bdcde66b58b835ec98991-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.1-n-2b36f27b4a\" (UID: \"ac3e1896de1bdcde66b58b835ec98991\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:59.458040 kubelet[3384]: I0710 23:35:59.457878 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ac3e1896de1bdcde66b58b835ec98991-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.1-n-2b36f27b4a\" (UID: \"ac3e1896de1bdcde66b58b835ec98991\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:59.458040 kubelet[3384]: I0710 23:35:59.457899 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/770e66acfa82e7c261bd9c96ffe452b5-kubeconfig\") pod \"kube-scheduler-ci-4230.2.1-n-2b36f27b4a\" (UID: 
\"770e66acfa82e7c261bd9c96ffe452b5\") " pod="kube-system/kube-scheduler-ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:59.458263 kubelet[3384]: I0710 23:35:59.457914 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ac3e1896de1bdcde66b58b835ec98991-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.1-n-2b36f27b4a\" (UID: \"ac3e1896de1bdcde66b58b835ec98991\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:59.458263 kubelet[3384]: I0710 23:35:59.457971 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eedc21e0a7016df519a61afb2300f379-ca-certs\") pod \"kube-apiserver-ci-4230.2.1-n-2b36f27b4a\" (UID: \"eedc21e0a7016df519a61afb2300f379\") " pod="kube-system/kube-apiserver-ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:59.458263 kubelet[3384]: I0710 23:35:59.458004 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eedc21e0a7016df519a61afb2300f379-k8s-certs\") pod \"kube-apiserver-ci-4230.2.1-n-2b36f27b4a\" (UID: \"eedc21e0a7016df519a61afb2300f379\") " pod="kube-system/kube-apiserver-ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:59.458263 kubelet[3384]: I0710 23:35:59.458022 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eedc21e0a7016df519a61afb2300f379-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.1-n-2b36f27b4a\" (UID: \"eedc21e0a7016df519a61afb2300f379\") " pod="kube-system/kube-apiserver-ci-4230.2.1-n-2b36f27b4a" Jul 10 23:35:59.605530 sudo[3422]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 10 23:35:59.605806 sudo[3422]: pam_unix(sudo:session): session 
opened for user root(uid=0) by core(uid=0) Jul 10 23:36:00.059104 sudo[3422]: pam_unix(sudo:session): session closed for user root Jul 10 23:36:00.137643 kubelet[3384]: I0710 23:36:00.135916 3384 apiserver.go:52] "Watching apiserver" Jul 10 23:36:00.156596 kubelet[3384]: I0710 23:36:00.156527 3384 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 23:36:00.230503 kubelet[3384]: I0710 23:36:00.229606 3384 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.1-n-2b36f27b4a" Jul 10 23:36:00.245989 kubelet[3384]: I0710 23:36:00.245796 3384 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 10 23:36:00.245989 kubelet[3384]: E0710 23:36:00.245849 3384 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.1-n-2b36f27b4a\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.1-n-2b36f27b4a" Jul 10 23:36:00.312311 kubelet[3384]: I0710 23:36:00.312051 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.2.1-n-2b36f27b4a" podStartSLOduration=1.312033674 podStartE2EDuration="1.312033674s" podCreationTimestamp="2025-07-10 23:35:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:36:00.283918905 +0000 UTC m=+1.200333082" watchObservedRunningTime="2025-07-10 23:36:00.312033674 +0000 UTC m=+1.228447891" Jul 10 23:36:00.336383 kubelet[3384]: I0710 23:36:00.336329 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.2.1-n-2b36f27b4a" podStartSLOduration=1.336312721 podStartE2EDuration="1.336312721s" podCreationTimestamp="2025-07-10 23:35:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:36:00.313890835 +0000 UTC m=+1.230305052" watchObservedRunningTime="2025-07-10 23:36:00.336312721 +0000 UTC m=+1.252726938" Jul 10 23:36:00.336556 kubelet[3384]: I0710 23:36:00.336410 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.2.1-n-2b36f27b4a" podStartSLOduration=1.336406521 podStartE2EDuration="1.336406521s" podCreationTimestamp="2025-07-10 23:35:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:36:00.3336804 +0000 UTC m=+1.250094617" watchObservedRunningTime="2025-07-10 23:36:00.336406521 +0000 UTC m=+1.252820778" Jul 10 23:36:01.603831 sudo[2314]: pam_unix(sudo:session): session closed for user root Jul 10 23:36:01.674160 sshd[2313]: Connection closed by 10.200.16.10 port 57964 Jul 10 23:36:01.674661 sshd-session[2297]: pam_unix(sshd:session): session closed for user core Jul 10 23:36:01.677972 systemd[1]: sshd@6-10.200.20.36:22-10.200.16.10:57964.service: Deactivated successfully. Jul 10 23:36:01.680243 systemd[1]: session-9.scope: Deactivated successfully. Jul 10 23:36:01.680528 systemd[1]: session-9.scope: Consumed 6.752s CPU time, 262.7M memory peak. Jul 10 23:36:01.681956 systemd-logind[1707]: Session 9 logged out. Waiting for processes to exit. Jul 10 23:36:01.683595 systemd-logind[1707]: Removed session 9. Jul 10 23:36:04.047929 kubelet[3384]: I0710 23:36:04.047787 3384 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 23:36:04.048328 containerd[1734]: time="2025-07-10T23:36:04.048043390Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 10 23:36:04.048801 kubelet[3384]: I0710 23:36:04.048597 3384 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 10 23:36:04.862725 systemd[1]: Created slice kubepods-besteffort-pod0a0994e1_a921_4204_9b4a_6016f7731eb4.slice - libcontainer container kubepods-besteffort-pod0a0994e1_a921_4204_9b4a_6016f7731eb4.slice. Jul 10 23:36:04.880921 systemd[1]: Created slice kubepods-burstable-pod198ef552_ea7f_4a87_9962_8b677436d88a.slice - libcontainer container kubepods-burstable-pod198ef552_ea7f_4a87_9962_8b677436d88a.slice. Jul 10 23:36:04.888151 kubelet[3384]: I0710 23:36:04.888115 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0a0994e1-a921-4204-9b4a-6016f7731eb4-kube-proxy\") pod \"kube-proxy-chcxv\" (UID: \"0a0994e1-a921-4204-9b4a-6016f7731eb4\") " pod="kube-system/kube-proxy-chcxv" Jul 10 23:36:04.888151 kubelet[3384]: I0710 23:36:04.888154 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-hostproc\") pod \"cilium-w4wh9\" (UID: \"198ef552-ea7f-4a87-9962-8b677436d88a\") " pod="kube-system/cilium-w4wh9" Jul 10 23:36:04.888711 kubelet[3384]: I0710 23:36:04.888172 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-cni-path\") pod \"cilium-w4wh9\" (UID: \"198ef552-ea7f-4a87-9962-8b677436d88a\") " pod="kube-system/cilium-w4wh9" Jul 10 23:36:04.888711 kubelet[3384]: I0710 23:36:04.888189 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-xtables-lock\") pod \"cilium-w4wh9\" (UID: 
\"198ef552-ea7f-4a87-9962-8b677436d88a\") " pod="kube-system/cilium-w4wh9" Jul 10 23:36:04.888711 kubelet[3384]: I0710 23:36:04.888206 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhmhs\" (UniqueName: \"kubernetes.io/projected/198ef552-ea7f-4a87-9962-8b677436d88a-kube-api-access-bhmhs\") pod \"cilium-w4wh9\" (UID: \"198ef552-ea7f-4a87-9962-8b677436d88a\") " pod="kube-system/cilium-w4wh9" Jul 10 23:36:04.888711 kubelet[3384]: I0710 23:36:04.888245 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a0994e1-a921-4204-9b4a-6016f7731eb4-xtables-lock\") pod \"kube-proxy-chcxv\" (UID: \"0a0994e1-a921-4204-9b4a-6016f7731eb4\") " pod="kube-system/kube-proxy-chcxv" Jul 10 23:36:04.888711 kubelet[3384]: I0710 23:36:04.888278 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-cilium-cgroup\") pod \"cilium-w4wh9\" (UID: \"198ef552-ea7f-4a87-9962-8b677436d88a\") " pod="kube-system/cilium-w4wh9" Jul 10 23:36:04.888711 kubelet[3384]: I0710 23:36:04.888297 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-etc-cni-netd\") pod \"cilium-w4wh9\" (UID: \"198ef552-ea7f-4a87-9962-8b677436d88a\") " pod="kube-system/cilium-w4wh9" Jul 10 23:36:04.888839 kubelet[3384]: I0710 23:36:04.888320 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/198ef552-ea7f-4a87-9962-8b677436d88a-hubble-tls\") pod \"cilium-w4wh9\" (UID: \"198ef552-ea7f-4a87-9962-8b677436d88a\") " pod="kube-system/cilium-w4wh9" Jul 10 23:36:04.888839 kubelet[3384]: I0710 
23:36:04.888359 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-lib-modules\") pod \"cilium-w4wh9\" (UID: \"198ef552-ea7f-4a87-9962-8b677436d88a\") " pod="kube-system/cilium-w4wh9" Jul 10 23:36:04.888839 kubelet[3384]: I0710 23:36:04.888378 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/198ef552-ea7f-4a87-9962-8b677436d88a-cilium-config-path\") pod \"cilium-w4wh9\" (UID: \"198ef552-ea7f-4a87-9962-8b677436d88a\") " pod="kube-system/cilium-w4wh9" Jul 10 23:36:04.888839 kubelet[3384]: I0710 23:36:04.888393 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-host-proc-sys-net\") pod \"cilium-w4wh9\" (UID: \"198ef552-ea7f-4a87-9962-8b677436d88a\") " pod="kube-system/cilium-w4wh9" Jul 10 23:36:04.888839 kubelet[3384]: I0710 23:36:04.888411 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-host-proc-sys-kernel\") pod \"cilium-w4wh9\" (UID: \"198ef552-ea7f-4a87-9962-8b677436d88a\") " pod="kube-system/cilium-w4wh9" Jul 10 23:36:04.888839 kubelet[3384]: I0710 23:36:04.888435 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a0994e1-a921-4204-9b4a-6016f7731eb4-lib-modules\") pod \"kube-proxy-chcxv\" (UID: \"0a0994e1-a921-4204-9b4a-6016f7731eb4\") " pod="kube-system/kube-proxy-chcxv" Jul 10 23:36:04.888956 kubelet[3384]: I0710 23:36:04.888465 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-ghsdw\" (UniqueName: \"kubernetes.io/projected/0a0994e1-a921-4204-9b4a-6016f7731eb4-kube-api-access-ghsdw\") pod \"kube-proxy-chcxv\" (UID: \"0a0994e1-a921-4204-9b4a-6016f7731eb4\") " pod="kube-system/kube-proxy-chcxv" Jul 10 23:36:04.888956 kubelet[3384]: I0710 23:36:04.888485 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-cilium-run\") pod \"cilium-w4wh9\" (UID: \"198ef552-ea7f-4a87-9962-8b677436d88a\") " pod="kube-system/cilium-w4wh9" Jul 10 23:36:04.888956 kubelet[3384]: I0710 23:36:04.888522 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-bpf-maps\") pod \"cilium-w4wh9\" (UID: \"198ef552-ea7f-4a87-9962-8b677436d88a\") " pod="kube-system/cilium-w4wh9" Jul 10 23:36:04.888956 kubelet[3384]: I0710 23:36:04.888541 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/198ef552-ea7f-4a87-9962-8b677436d88a-clustermesh-secrets\") pod \"cilium-w4wh9\" (UID: \"198ef552-ea7f-4a87-9962-8b677436d88a\") " pod="kube-system/cilium-w4wh9" Jul 10 23:36:05.176452 containerd[1734]: time="2025-07-10T23:36:05.175938268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-chcxv,Uid:0a0994e1-a921-4204-9b4a-6016f7731eb4,Namespace:kube-system,Attempt:0,}" Jul 10 23:36:05.185305 containerd[1734]: time="2025-07-10T23:36:05.185071072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w4wh9,Uid:198ef552-ea7f-4a87-9962-8b677436d88a,Namespace:kube-system,Attempt:0,}" Jul 10 23:36:05.275163 containerd[1734]: time="2025-07-10T23:36:05.275058950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 23:36:05.275527 containerd[1734]: time="2025-07-10T23:36:05.275463030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 23:36:05.275945 containerd[1734]: time="2025-07-10T23:36:05.275513111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:05.276188 containerd[1734]: time="2025-07-10T23:36:05.276140311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:05.286146 containerd[1734]: time="2025-07-10T23:36:05.285937275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 23:36:05.286146 containerd[1734]: time="2025-07-10T23:36:05.285990435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 23:36:05.286146 containerd[1734]: time="2025-07-10T23:36:05.286005395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:05.286146 containerd[1734]: time="2025-07-10T23:36:05.286078915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:05.310785 systemd[1]: Started cri-containerd-2f8408b1d90e30cf30dc3a45c350247bec1abdb79051b755bae50ce327468bc9.scope - libcontainer container 2f8408b1d90e30cf30dc3a45c350247bec1abdb79051b755bae50ce327468bc9. Jul 10 23:36:05.318702 systemd[1]: Started cri-containerd-84d2aaa01230727e912d2d8873db6959f6fa26c891766eb44acd018296349329.scope - libcontainer container 84d2aaa01230727e912d2d8873db6959f6fa26c891766eb44acd018296349329. 
Jul 10 23:36:05.323610 systemd[1]: Created slice kubepods-besteffort-podc2d64835_7430_4efc_a1c4_2e7a5427bd19.slice - libcontainer container kubepods-besteffort-podc2d64835_7430_4efc_a1c4_2e7a5427bd19.slice. Jul 10 23:36:05.392364 kubelet[3384]: I0710 23:36:05.392272 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2d64835-7430-4efc-a1c4-2e7a5427bd19-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-69l6v\" (UID: \"c2d64835-7430-4efc-a1c4-2e7a5427bd19\") " pod="kube-system/cilium-operator-6c4d7847fc-69l6v" Jul 10 23:36:05.392364 kubelet[3384]: I0710 23:36:05.392309 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vqf8\" (UniqueName: \"kubernetes.io/projected/c2d64835-7430-4efc-a1c4-2e7a5427bd19-kube-api-access-6vqf8\") pod \"cilium-operator-6c4d7847fc-69l6v\" (UID: \"c2d64835-7430-4efc-a1c4-2e7a5427bd19\") " pod="kube-system/cilium-operator-6c4d7847fc-69l6v" Jul 10 23:36:05.401822 containerd[1734]: time="2025-07-10T23:36:05.401360924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w4wh9,Uid:198ef552-ea7f-4a87-9962-8b677436d88a,Namespace:kube-system,Attempt:0,} returns sandbox id \"84d2aaa01230727e912d2d8873db6959f6fa26c891766eb44acd018296349329\"" Jul 10 23:36:05.409410 containerd[1734]: time="2025-07-10T23:36:05.408778607Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 10 23:36:05.418172 containerd[1734]: time="2025-07-10T23:36:05.418006291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-chcxv,Uid:0a0994e1-a921-4204-9b4a-6016f7731eb4,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f8408b1d90e30cf30dc3a45c350247bec1abdb79051b755bae50ce327468bc9\"" Jul 10 23:36:05.426135 containerd[1734]: time="2025-07-10T23:36:05.426008894Z" level=info 
msg="CreateContainer within sandbox \"2f8408b1d90e30cf30dc3a45c350247bec1abdb79051b755bae50ce327468bc9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 23:36:05.490482 containerd[1734]: time="2025-07-10T23:36:05.490438882Z" level=info msg="CreateContainer within sandbox \"2f8408b1d90e30cf30dc3a45c350247bec1abdb79051b755bae50ce327468bc9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6d5fe32f2f716e6fff1940c47e9226a28e8b39f2125753b27973c522af167cc9\"" Jul 10 23:36:05.491506 containerd[1734]: time="2025-07-10T23:36:05.491227922Z" level=info msg="StartContainer for \"6d5fe32f2f716e6fff1940c47e9226a28e8b39f2125753b27973c522af167cc9\"" Jul 10 23:36:05.518858 systemd[1]: Started cri-containerd-6d5fe32f2f716e6fff1940c47e9226a28e8b39f2125753b27973c522af167cc9.scope - libcontainer container 6d5fe32f2f716e6fff1940c47e9226a28e8b39f2125753b27973c522af167cc9. Jul 10 23:36:05.548160 containerd[1734]: time="2025-07-10T23:36:05.548090626Z" level=info msg="StartContainer for \"6d5fe32f2f716e6fff1940c47e9226a28e8b39f2125753b27973c522af167cc9\" returns successfully" Jul 10 23:36:05.627990 containerd[1734]: time="2025-07-10T23:36:05.627704060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-69l6v,Uid:c2d64835-7430-4efc-a1c4-2e7a5427bd19,Namespace:kube-system,Attempt:0,}" Jul 10 23:36:05.691420 containerd[1734]: time="2025-07-10T23:36:05.691216207Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 23:36:05.691565 containerd[1734]: time="2025-07-10T23:36:05.691448207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 23:36:05.692176 containerd[1734]: time="2025-07-10T23:36:05.692036087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:05.692414 containerd[1734]: time="2025-07-10T23:36:05.692293327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:05.706795 systemd[1]: Started cri-containerd-01424d04aa3575f0af73c62cfe749f09a42e0ba91da8010cc75a9810944668d4.scope - libcontainer container 01424d04aa3575f0af73c62cfe749f09a42e0ba91da8010cc75a9810944668d4. Jul 10 23:36:05.733436 containerd[1734]: time="2025-07-10T23:36:05.733396505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-69l6v,Uid:c2d64835-7430-4efc-a1c4-2e7a5427bd19,Namespace:kube-system,Attempt:0,} returns sandbox id \"01424d04aa3575f0af73c62cfe749f09a42e0ba91da8010cc75a9810944668d4\"" Jul 10 23:36:06.282637 kubelet[3384]: I0710 23:36:06.282556 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-chcxv" podStartSLOduration=2.282541138 podStartE2EDuration="2.282541138s" podCreationTimestamp="2025-07-10 23:36:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:36:06.257983007 +0000 UTC m=+7.174397224" watchObservedRunningTime="2025-07-10 23:36:06.282541138 +0000 UTC m=+7.198955355" Jul 10 23:36:09.878975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3985640246.mount: Deactivated successfully. 
Jul 10 23:36:11.575203 containerd[1734]: time="2025-07-10T23:36:11.575143975Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:36:11.579924 containerd[1734]: time="2025-07-10T23:36:11.579872096Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 10 23:36:11.584328 containerd[1734]: time="2025-07-10T23:36:11.584279258Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:36:11.587123 containerd[1734]: time="2025-07-10T23:36:11.586807139Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.177802012s" Jul 10 23:36:11.587123 containerd[1734]: time="2025-07-10T23:36:11.586839739Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 10 23:36:11.588574 containerd[1734]: time="2025-07-10T23:36:11.588108979Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 10 23:36:11.599758 containerd[1734]: time="2025-07-10T23:36:11.599655103Z" level=info msg="CreateContainer within sandbox \"84d2aaa01230727e912d2d8873db6959f6fa26c891766eb44acd018296349329\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 23:36:12.305050 containerd[1734]: time="2025-07-10T23:36:12.304970776Z" level=info msg="CreateContainer within sandbox \"84d2aaa01230727e912d2d8873db6959f6fa26c891766eb44acd018296349329\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"230a8395203da75139c999cc12cc6e825bfc0cf62acef17414c5d923288fb4e3\"" Jul 10 23:36:12.305906 containerd[1734]: time="2025-07-10T23:36:12.305356336Z" level=info msg="StartContainer for \"230a8395203da75139c999cc12cc6e825bfc0cf62acef17414c5d923288fb4e3\"" Jul 10 23:36:12.335780 systemd[1]: Started cri-containerd-230a8395203da75139c999cc12cc6e825bfc0cf62acef17414c5d923288fb4e3.scope - libcontainer container 230a8395203da75139c999cc12cc6e825bfc0cf62acef17414c5d923288fb4e3. Jul 10 23:36:12.360365 containerd[1734]: time="2025-07-10T23:36:12.360319394Z" level=info msg="StartContainer for \"230a8395203da75139c999cc12cc6e825bfc0cf62acef17414c5d923288fb4e3\" returns successfully" Jul 10 23:36:12.370707 systemd[1]: cri-containerd-230a8395203da75139c999cc12cc6e825bfc0cf62acef17414c5d923288fb4e3.scope: Deactivated successfully. Jul 10 23:36:12.385717 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-230a8395203da75139c999cc12cc6e825bfc0cf62acef17414c5d923288fb4e3-rootfs.mount: Deactivated successfully. 
Jul 10 23:36:13.361737 containerd[1734]: time="2025-07-10T23:36:13.361666644Z" level=info msg="shim disconnected" id=230a8395203da75139c999cc12cc6e825bfc0cf62acef17414c5d923288fb4e3 namespace=k8s.io Jul 10 23:36:13.361737 containerd[1734]: time="2025-07-10T23:36:13.361732084Z" level=warning msg="cleaning up after shim disconnected" id=230a8395203da75139c999cc12cc6e825bfc0cf62acef17414c5d923288fb4e3 namespace=k8s.io Jul 10 23:36:13.361737 containerd[1734]: time="2025-07-10T23:36:13.361741404Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:36:14.219995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2780021099.mount: Deactivated successfully. Jul 10 23:36:14.271195 containerd[1734]: time="2025-07-10T23:36:14.270381864Z" level=info msg="CreateContainer within sandbox \"84d2aaa01230727e912d2d8873db6959f6fa26c891766eb44acd018296349329\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 23:36:14.308089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1621305109.mount: Deactivated successfully. Jul 10 23:36:14.324847 containerd[1734]: time="2025-07-10T23:36:14.324805962Z" level=info msg="CreateContainer within sandbox \"84d2aaa01230727e912d2d8873db6959f6fa26c891766eb44acd018296349329\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8b5cff66b2fbea5fa60f84db78588a051317c80f05d98f5d2c136a60488eeeeb\"" Jul 10 23:36:14.325593 containerd[1734]: time="2025-07-10T23:36:14.325538003Z" level=info msg="StartContainer for \"8b5cff66b2fbea5fa60f84db78588a051317c80f05d98f5d2c136a60488eeeeb\"" Jul 10 23:36:14.352787 systemd[1]: Started cri-containerd-8b5cff66b2fbea5fa60f84db78588a051317c80f05d98f5d2c136a60488eeeeb.scope - libcontainer container 8b5cff66b2fbea5fa60f84db78588a051317c80f05d98f5d2c136a60488eeeeb. Jul 10 23:36:14.384303 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jul 10 23:36:14.384513 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 10 23:36:14.384704 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 10 23:36:14.388990 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 23:36:14.391740 systemd[1]: cri-containerd-8b5cff66b2fbea5fa60f84db78588a051317c80f05d98f5d2c136a60488eeeeb.scope: Deactivated successfully. Jul 10 23:36:14.397070 containerd[1734]: time="2025-07-10T23:36:14.396973426Z" level=info msg="StartContainer for \"8b5cff66b2fbea5fa60f84db78588a051317c80f05d98f5d2c136a60488eeeeb\" returns successfully" Jul 10 23:36:14.409859 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 23:36:14.462600 containerd[1734]: time="2025-07-10T23:36:14.462550648Z" level=info msg="shim disconnected" id=8b5cff66b2fbea5fa60f84db78588a051317c80f05d98f5d2c136a60488eeeeb namespace=k8s.io Jul 10 23:36:14.463117 containerd[1734]: time="2025-07-10T23:36:14.462962168Z" level=warning msg="cleaning up after shim disconnected" id=8b5cff66b2fbea5fa60f84db78588a051317c80f05d98f5d2c136a60488eeeeb namespace=k8s.io Jul 10 23:36:14.463117 containerd[1734]: time="2025-07-10T23:36:14.462992928Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:36:14.816282 containerd[1734]: time="2025-07-10T23:36:14.816237285Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:36:14.822401 containerd[1734]: time="2025-07-10T23:36:14.822366647Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 10 23:36:14.831438 containerd[1734]: time="2025-07-10T23:36:14.831408610Z" level=info msg="ImageCreate event 
name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:36:14.832703 containerd[1734]: time="2025-07-10T23:36:14.832572690Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.244432791s" Jul 10 23:36:14.832703 containerd[1734]: time="2025-07-10T23:36:14.832605130Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 10 23:36:14.840676 containerd[1734]: time="2025-07-10T23:36:14.840602493Z" level=info msg="CreateContainer within sandbox \"01424d04aa3575f0af73c62cfe749f09a42e0ba91da8010cc75a9810944668d4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 10 23:36:14.887901 containerd[1734]: time="2025-07-10T23:36:14.887829668Z" level=info msg="CreateContainer within sandbox \"01424d04aa3575f0af73c62cfe749f09a42e0ba91da8010cc75a9810944668d4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ba692e3f9e8c8a2d005e654aa1ce5716063dbdc60378f742f34138856ea9cb97\"" Jul 10 23:36:14.888886 containerd[1734]: time="2025-07-10T23:36:14.888852229Z" level=info msg="StartContainer for \"ba692e3f9e8c8a2d005e654aa1ce5716063dbdc60378f742f34138856ea9cb97\"" Jul 10 23:36:14.909834 systemd[1]: Started cri-containerd-ba692e3f9e8c8a2d005e654aa1ce5716063dbdc60378f742f34138856ea9cb97.scope - libcontainer container ba692e3f9e8c8a2d005e654aa1ce5716063dbdc60378f742f34138856ea9cb97. 
Jul 10 23:36:14.935818 containerd[1734]: time="2025-07-10T23:36:14.935777764Z" level=info msg="StartContainer for \"ba692e3f9e8c8a2d005e654aa1ce5716063dbdc60378f742f34138856ea9cb97\" returns successfully" Jul 10 23:36:15.219180 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b5cff66b2fbea5fa60f84db78588a051317c80f05d98f5d2c136a60488eeeeb-rootfs.mount: Deactivated successfully. Jul 10 23:36:15.279049 containerd[1734]: time="2025-07-10T23:36:15.278841797Z" level=info msg="CreateContainer within sandbox \"84d2aaa01230727e912d2d8873db6959f6fa26c891766eb44acd018296349329\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 23:36:15.289651 kubelet[3384]: I0710 23:36:15.289490 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-69l6v" podStartSLOduration=1.190657856 podStartE2EDuration="10.289473601s" podCreationTimestamp="2025-07-10 23:36:05 +0000 UTC" firstStartedPulling="2025-07-10 23:36:05.734506385 +0000 UTC m=+6.650920562" lastFinishedPulling="2025-07-10 23:36:14.83332209 +0000 UTC m=+15.749736307" observedRunningTime="2025-07-10 23:36:15.289207041 +0000 UTC m=+16.205621258" watchObservedRunningTime="2025-07-10 23:36:15.289473601 +0000 UTC m=+16.205887818" Jul 10 23:36:15.328372 containerd[1734]: time="2025-07-10T23:36:15.328289854Z" level=info msg="CreateContainer within sandbox \"84d2aaa01230727e912d2d8873db6959f6fa26c891766eb44acd018296349329\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"59f9f6caf38ba4962fe978916e0e4fc37dbc1b63f034046a82dd21d85def0a63\"" Jul 10 23:36:15.329247 containerd[1734]: time="2025-07-10T23:36:15.329129894Z" level=info msg="StartContainer for \"59f9f6caf38ba4962fe978916e0e4fc37dbc1b63f034046a82dd21d85def0a63\"" Jul 10 23:36:15.363824 systemd[1]: Started cri-containerd-59f9f6caf38ba4962fe978916e0e4fc37dbc1b63f034046a82dd21d85def0a63.scope - libcontainer container 
59f9f6caf38ba4962fe978916e0e4fc37dbc1b63f034046a82dd21d85def0a63. Jul 10 23:36:15.448884 containerd[1734]: time="2025-07-10T23:36:15.448539373Z" level=info msg="StartContainer for \"59f9f6caf38ba4962fe978916e0e4fc37dbc1b63f034046a82dd21d85def0a63\" returns successfully" Jul 10 23:36:15.452706 systemd[1]: cri-containerd-59f9f6caf38ba4962fe978916e0e4fc37dbc1b63f034046a82dd21d85def0a63.scope: Deactivated successfully. Jul 10 23:36:15.478798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59f9f6caf38ba4962fe978916e0e4fc37dbc1b63f034046a82dd21d85def0a63-rootfs.mount: Deactivated successfully. Jul 10 23:36:15.757525 containerd[1734]: time="2025-07-10T23:36:15.757385595Z" level=info msg="shim disconnected" id=59f9f6caf38ba4962fe978916e0e4fc37dbc1b63f034046a82dd21d85def0a63 namespace=k8s.io Jul 10 23:36:15.758033 containerd[1734]: time="2025-07-10T23:36:15.757751595Z" level=warning msg="cleaning up after shim disconnected" id=59f9f6caf38ba4962fe978916e0e4fc37dbc1b63f034046a82dd21d85def0a63 namespace=k8s.io Jul 10 23:36:15.758033 containerd[1734]: time="2025-07-10T23:36:15.757770675Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:36:16.280387 containerd[1734]: time="2025-07-10T23:36:16.280341728Z" level=info msg="CreateContainer within sandbox \"84d2aaa01230727e912d2d8873db6959f6fa26c891766eb44acd018296349329\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 23:36:16.329122 containerd[1734]: time="2025-07-10T23:36:16.329077144Z" level=info msg="CreateContainer within sandbox \"84d2aaa01230727e912d2d8873db6959f6fa26c891766eb44acd018296349329\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"75e341bdf856d2735b117337454fba8acc6c200560c509cb899cf2c33bc40cab\"" Jul 10 23:36:16.329695 containerd[1734]: time="2025-07-10T23:36:16.329671664Z" level=info msg="StartContainer for \"75e341bdf856d2735b117337454fba8acc6c200560c509cb899cf2c33bc40cab\"" Jul 10 23:36:16.354869 systemd[1]: 
Started cri-containerd-75e341bdf856d2735b117337454fba8acc6c200560c509cb899cf2c33bc40cab.scope - libcontainer container 75e341bdf856d2735b117337454fba8acc6c200560c509cb899cf2c33bc40cab. Jul 10 23:36:16.377199 systemd[1]: cri-containerd-75e341bdf856d2735b117337454fba8acc6c200560c509cb899cf2c33bc40cab.scope: Deactivated successfully. Jul 10 23:36:16.382130 containerd[1734]: time="2025-07-10T23:36:16.381964001Z" level=info msg="StartContainer for \"75e341bdf856d2735b117337454fba8acc6c200560c509cb899cf2c33bc40cab\" returns successfully" Jul 10 23:36:16.398459 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75e341bdf856d2735b117337454fba8acc6c200560c509cb899cf2c33bc40cab-rootfs.mount: Deactivated successfully. Jul 10 23:36:16.420907 containerd[1734]: time="2025-07-10T23:36:16.420725494Z" level=info msg="shim disconnected" id=75e341bdf856d2735b117337454fba8acc6c200560c509cb899cf2c33bc40cab namespace=k8s.io Jul 10 23:36:16.420907 containerd[1734]: time="2025-07-10T23:36:16.420780774Z" level=warning msg="cleaning up after shim disconnected" id=75e341bdf856d2735b117337454fba8acc6c200560c509cb899cf2c33bc40cab namespace=k8s.io Jul 10 23:36:16.420907 containerd[1734]: time="2025-07-10T23:36:16.420789174Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:36:17.284418 containerd[1734]: time="2025-07-10T23:36:17.284368819Z" level=info msg="CreateContainer within sandbox \"84d2aaa01230727e912d2d8873db6959f6fa26c891766eb44acd018296349329\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 23:36:17.328984 containerd[1734]: time="2025-07-10T23:36:17.328935154Z" level=info msg="CreateContainer within sandbox \"84d2aaa01230727e912d2d8873db6959f6fa26c891766eb44acd018296349329\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a4e10cb75bd26a0c59fd113fa91c76895fb67b88c310bf231f4250f9a9cf14de\"" Jul 10 23:36:17.329565 containerd[1734]: time="2025-07-10T23:36:17.329429074Z" level=info msg="StartContainer for 
\"a4e10cb75bd26a0c59fd113fa91c76895fb67b88c310bf231f4250f9a9cf14de\"" Jul 10 23:36:17.357777 systemd[1]: Started cri-containerd-a4e10cb75bd26a0c59fd113fa91c76895fb67b88c310bf231f4250f9a9cf14de.scope - libcontainer container a4e10cb75bd26a0c59fd113fa91c76895fb67b88c310bf231f4250f9a9cf14de. Jul 10 23:36:17.386564 containerd[1734]: time="2025-07-10T23:36:17.386356293Z" level=info msg="StartContainer for \"a4e10cb75bd26a0c59fd113fa91c76895fb67b88c310bf231f4250f9a9cf14de\" returns successfully" Jul 10 23:36:17.551787 kubelet[3384]: I0710 23:36:17.551528 3384 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 10 23:36:17.606751 systemd[1]: Created slice kubepods-burstable-podd3e8f56f_8960_4ab8_813d_1fae3ba4cc24.slice - libcontainer container kubepods-burstable-podd3e8f56f_8960_4ab8_813d_1fae3ba4cc24.slice. Jul 10 23:36:17.615887 systemd[1]: Created slice kubepods-burstable-pod55345ea3_4672_44ab_a6e8_360080b54546.slice - libcontainer container kubepods-burstable-pod55345ea3_4672_44ab_a6e8_360080b54546.slice. 
Jul 10 23:36:17.670681 kubelet[3384]: I0710 23:36:17.670606 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55345ea3-4672-44ab-a6e8-360080b54546-config-volume\") pod \"coredns-674b8bbfcf-ht8vv\" (UID: \"55345ea3-4672-44ab-a6e8-360080b54546\") " pod="kube-system/coredns-674b8bbfcf-ht8vv" Jul 10 23:36:17.670813 kubelet[3384]: I0710 23:36:17.670696 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6br4m\" (UniqueName: \"kubernetes.io/projected/d3e8f56f-8960-4ab8-813d-1fae3ba4cc24-kube-api-access-6br4m\") pod \"coredns-674b8bbfcf-5kpbj\" (UID: \"d3e8f56f-8960-4ab8-813d-1fae3ba4cc24\") " pod="kube-system/coredns-674b8bbfcf-5kpbj" Jul 10 23:36:17.670813 kubelet[3384]: I0710 23:36:17.670759 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3e8f56f-8960-4ab8-813d-1fae3ba4cc24-config-volume\") pod \"coredns-674b8bbfcf-5kpbj\" (UID: \"d3e8f56f-8960-4ab8-813d-1fae3ba4cc24\") " pod="kube-system/coredns-674b8bbfcf-5kpbj" Jul 10 23:36:17.670813 kubelet[3384]: I0710 23:36:17.670775 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhp9x\" (UniqueName: \"kubernetes.io/projected/55345ea3-4672-44ab-a6e8-360080b54546-kube-api-access-dhp9x\") pod \"coredns-674b8bbfcf-ht8vv\" (UID: \"55345ea3-4672-44ab-a6e8-360080b54546\") " pod="kube-system/coredns-674b8bbfcf-ht8vv" Jul 10 23:36:17.913547 containerd[1734]: time="2025-07-10T23:36:17.913361787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5kpbj,Uid:d3e8f56f-8960-4ab8-813d-1fae3ba4cc24,Namespace:kube-system,Attempt:0,}" Jul 10 23:36:17.920277 containerd[1734]: time="2025-07-10T23:36:17.920023429Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-ht8vv,Uid:55345ea3-4672-44ab-a6e8-360080b54546,Namespace:kube-system,Attempt:0,}" Jul 10 23:36:19.469420 systemd-networkd[1457]: cilium_host: Link UP Jul 10 23:36:19.470859 systemd-networkd[1457]: cilium_net: Link UP Jul 10 23:36:19.471040 systemd-networkd[1457]: cilium_net: Gained carrier Jul 10 23:36:19.471160 systemd-networkd[1457]: cilium_host: Gained carrier Jul 10 23:36:19.471244 systemd-networkd[1457]: cilium_net: Gained IPv6LL Jul 10 23:36:19.471351 systemd-networkd[1457]: cilium_host: Gained IPv6LL Jul 10 23:36:19.589897 systemd-networkd[1457]: cilium_vxlan: Link UP Jul 10 23:36:19.589907 systemd-networkd[1457]: cilium_vxlan: Gained carrier Jul 10 23:36:19.862690 kernel: NET: Registered PF_ALG protocol family Jul 10 23:36:20.513860 systemd-networkd[1457]: lxc_health: Link UP Jul 10 23:36:20.522334 systemd-networkd[1457]: lxc_health: Gained carrier Jul 10 23:36:21.008751 systemd-networkd[1457]: lxcba347c562da4: Link UP Jul 10 23:36:21.018658 kernel: eth0: renamed from tmp57e38 Jul 10 23:36:21.027595 systemd-networkd[1457]: lxcba347c562da4: Gained carrier Jul 10 23:36:21.028037 systemd-networkd[1457]: lxcfea607855373: Link UP Jul 10 23:36:21.039641 kernel: eth0: renamed from tmp57af9 Jul 10 23:36:21.047329 systemd-networkd[1457]: lxcfea607855373: Gained carrier Jul 10 23:36:21.052890 systemd-networkd[1457]: cilium_vxlan: Gained IPv6LL Jul 10 23:36:21.246655 kubelet[3384]: I0710 23:36:21.246156 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-w4wh9" podStartSLOduration=11.063984075 podStartE2EDuration="17.246140608s" podCreationTimestamp="2025-07-10 23:36:04 +0000 UTC" firstStartedPulling="2025-07-10 23:36:05.405350846 +0000 UTC m=+6.321765063" lastFinishedPulling="2025-07-10 23:36:11.587507379 +0000 UTC m=+12.503921596" observedRunningTime="2025-07-10 23:36:18.299661314 +0000 UTC m=+19.216075531" watchObservedRunningTime="2025-07-10 23:36:21.246140608 +0000 UTC 
m=+22.162554825" Jul 10 23:36:21.946833 systemd-networkd[1457]: lxc_health: Gained IPv6LL Jul 10 23:36:22.394849 systemd-networkd[1457]: lxcba347c562da4: Gained IPv6LL Jul 10 23:36:22.841821 systemd-networkd[1457]: lxcfea607855373: Gained IPv6LL Jul 10 23:36:24.564582 containerd[1734]: time="2025-07-10T23:36:24.562047663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 23:36:24.564582 containerd[1734]: time="2025-07-10T23:36:24.562338064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 23:36:24.564582 containerd[1734]: time="2025-07-10T23:36:24.562357184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:24.564582 containerd[1734]: time="2025-07-10T23:36:24.562455464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:24.599701 containerd[1734]: time="2025-07-10T23:36:24.599118196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 23:36:24.599701 containerd[1734]: time="2025-07-10T23:36:24.599171036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 23:36:24.599701 containerd[1734]: time="2025-07-10T23:36:24.599185596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:24.599778 systemd[1]: Started cri-containerd-57af9a44ecd56f6399083529b68402d6ec079a226fe04e745b8c11ae34c49c60.scope - libcontainer container 57af9a44ecd56f6399083529b68402d6ec079a226fe04e745b8c11ae34c49c60. 
Jul 10 23:36:24.602556 containerd[1734]: time="2025-07-10T23:36:24.601294316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:36:24.635768 systemd[1]: Started cri-containerd-57e38aed3dea882903e2a6e15046019937ca4e8e22ced36c93042648012f3a16.scope - libcontainer container 57e38aed3dea882903e2a6e15046019937ca4e8e22ced36c93042648012f3a16. Jul 10 23:36:24.665796 containerd[1734]: time="2025-07-10T23:36:24.665458698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ht8vv,Uid:55345ea3-4672-44ab-a6e8-360080b54546,Namespace:kube-system,Attempt:0,} returns sandbox id \"57af9a44ecd56f6399083529b68402d6ec079a226fe04e745b8c11ae34c49c60\"" Jul 10 23:36:24.679190 containerd[1734]: time="2025-07-10T23:36:24.678389422Z" level=info msg="CreateContainer within sandbox \"57af9a44ecd56f6399083529b68402d6ec079a226fe04e745b8c11ae34c49c60\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 23:36:24.687698 containerd[1734]: time="2025-07-10T23:36:24.687599105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5kpbj,Uid:d3e8f56f-8960-4ab8-813d-1fae3ba4cc24,Namespace:kube-system,Attempt:0,} returns sandbox id \"57e38aed3dea882903e2a6e15046019937ca4e8e22ced36c93042648012f3a16\"" Jul 10 23:36:24.697692 containerd[1734]: time="2025-07-10T23:36:24.697566508Z" level=info msg="CreateContainer within sandbox \"57e38aed3dea882903e2a6e15046019937ca4e8e22ced36c93042648012f3a16\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 23:36:24.759587 containerd[1734]: time="2025-07-10T23:36:24.759540849Z" level=info msg="CreateContainer within sandbox \"57e38aed3dea882903e2a6e15046019937ca4e8e22ced36c93042648012f3a16\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"78fa1cda19a5905898eb53f9cd39c63aa8a7bd447569d2d39f4dee0d208c7861\"" Jul 10 23:36:24.761136 containerd[1734]: time="2025-07-10T23:36:24.760244369Z" 
level=info msg="StartContainer for \"78fa1cda19a5905898eb53f9cd39c63aa8a7bd447569d2d39f4dee0d208c7861\"" Jul 10 23:36:24.766673 containerd[1734]: time="2025-07-10T23:36:24.766642811Z" level=info msg="CreateContainer within sandbox \"57af9a44ecd56f6399083529b68402d6ec079a226fe04e745b8c11ae34c49c60\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f64abbba5569f1f36acaeef3b57e0307f5e9a3f453c25f98d6d1ca64ea098b34\"" Jul 10 23:36:24.768431 containerd[1734]: time="2025-07-10T23:36:24.768398692Z" level=info msg="StartContainer for \"f64abbba5569f1f36acaeef3b57e0307f5e9a3f453c25f98d6d1ca64ea098b34\"" Jul 10 23:36:24.784791 systemd[1]: Started cri-containerd-78fa1cda19a5905898eb53f9cd39c63aa8a7bd447569d2d39f4dee0d208c7861.scope - libcontainer container 78fa1cda19a5905898eb53f9cd39c63aa8a7bd447569d2d39f4dee0d208c7861. Jul 10 23:36:24.800748 systemd[1]: Started cri-containerd-f64abbba5569f1f36acaeef3b57e0307f5e9a3f453c25f98d6d1ca64ea098b34.scope - libcontainer container f64abbba5569f1f36acaeef3b57e0307f5e9a3f453c25f98d6d1ca64ea098b34. 
Jul 10 23:36:24.824725 containerd[1734]: time="2025-07-10T23:36:24.824031110Z" level=info msg="StartContainer for \"78fa1cda19a5905898eb53f9cd39c63aa8a7bd447569d2d39f4dee0d208c7861\" returns successfully" Jul 10 23:36:24.837930 containerd[1734]: time="2025-07-10T23:36:24.837878795Z" level=info msg="StartContainer for \"f64abbba5569f1f36acaeef3b57e0307f5e9a3f453c25f98d6d1ca64ea098b34\" returns successfully" Jul 10 23:36:25.328026 kubelet[3384]: I0710 23:36:25.327362 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5kpbj" podStartSLOduration=20.327346596 podStartE2EDuration="20.327346596s" podCreationTimestamp="2025-07-10 23:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:36:25.311705631 +0000 UTC m=+26.228119848" watchObservedRunningTime="2025-07-10 23:36:25.327346596 +0000 UTC m=+26.243760813" Jul 10 23:36:25.363734 kubelet[3384]: I0710 23:36:25.362656 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ht8vv" podStartSLOduration=20.362641728 podStartE2EDuration="20.362641728s" podCreationTimestamp="2025-07-10 23:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:36:25.329768677 +0000 UTC m=+26.246182894" watchObservedRunningTime="2025-07-10 23:36:25.362641728 +0000 UTC m=+26.279055985" Jul 10 23:37:46.304425 update_engine[1711]: I20250710 23:37:46.303961 1711 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 10 23:37:46.304425 update_engine[1711]: I20250710 23:37:46.304008 1711 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 10 23:37:46.304425 update_engine[1711]: I20250710 23:37:46.304162 1711 prefs.cc:52] aleph-version not present in 
/var/lib/update_engine/prefs Jul 10 23:37:46.305416 update_engine[1711]: I20250710 23:37:46.305035 1711 omaha_request_params.cc:62] Current group set to stable Jul 10 23:37:46.305416 update_engine[1711]: I20250710 23:37:46.305130 1711 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 10 23:37:46.305416 update_engine[1711]: I20250710 23:37:46.305141 1711 update_attempter.cc:643] Scheduling an action processor start. Jul 10 23:37:46.305416 update_engine[1711]: I20250710 23:37:46.305155 1711 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 10 23:37:46.305416 update_engine[1711]: I20250710 23:37:46.305184 1711 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 10 23:37:46.305416 update_engine[1711]: I20250710 23:37:46.305238 1711 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 10 23:37:46.305416 update_engine[1711]: I20250710 23:37:46.305245 1711 omaha_request_action.cc:272] Request: Jul 10 23:37:46.305416 update_engine[1711]: Jul 10 23:37:46.305416 update_engine[1711]: Jul 10 23:37:46.305416 update_engine[1711]: Jul 10 23:37:46.305416 update_engine[1711]: Jul 10 23:37:46.305416 update_engine[1711]: Jul 10 23:37:46.305416 update_engine[1711]: Jul 10 23:37:46.305416 update_engine[1711]: Jul 10 23:37:46.305416 update_engine[1711]: Jul 10 23:37:46.305416 update_engine[1711]: I20250710 23:37:46.305250 1711 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 10 23:37:46.305909 locksmithd[1830]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 10 23:37:46.306379 update_engine[1711]: I20250710 23:37:46.306340 1711 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 10 23:37:46.306718 update_engine[1711]: I20250710 23:37:46.306689 1711 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 10 23:37:46.382438 update_engine[1711]: E20250710 23:37:46.382381 1711 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 10 23:37:46.382556 update_engine[1711]: I20250710 23:37:46.382479 1711 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jul 10 23:37:56.231904 update_engine[1711]: I20250710 23:37:56.231839 1711 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 10 23:37:56.232226 update_engine[1711]: I20250710 23:37:56.232057 1711 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 10 23:37:56.232331 update_engine[1711]: I20250710 23:37:56.232298 1711 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 10 23:37:56.271862 update_engine[1711]: E20250710 23:37:56.271816 1711 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 10 23:37:56.271970 update_engine[1711]: I20250710 23:37:56.271889 1711 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jul 10 23:38:04.975430 systemd[1]: Started sshd@7-10.200.20.36:22-10.200.16.10:34752.service - OpenSSH per-connection server daemon (10.200.16.10:34752).
Jul 10 23:38:05.473471 sshd[4784]: Accepted publickey for core from 10.200.16.10 port 34752 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo
Jul 10 23:38:05.475040 sshd-session[4784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:38:05.479140 systemd-logind[1707]: New session 10 of user core.
Jul 10 23:38:05.490783 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 10 23:38:05.913725 sshd[4786]: Connection closed by 10.200.16.10 port 34752
Jul 10 23:38:05.914213 sshd-session[4784]: pam_unix(sshd:session): session closed for user core
Jul 10 23:38:05.917173 systemd[1]: sshd@7-10.200.20.36:22-10.200.16.10:34752.service: Deactivated successfully.
Jul 10 23:38:05.919917 systemd[1]: session-10.scope: Deactivated successfully.
Jul 10 23:38:05.921326 systemd-logind[1707]: Session 10 logged out. Waiting for processes to exit.
Jul 10 23:38:05.922443 systemd-logind[1707]: Removed session 10.
Jul 10 23:38:06.227173 update_engine[1711]: I20250710 23:38:06.227121 1711 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 10 23:38:06.227474 update_engine[1711]: I20250710 23:38:06.227324 1711 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 10 23:38:06.227571 update_engine[1711]: I20250710 23:38:06.227540 1711 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 10 23:38:06.513742 update_engine[1711]: E20250710 23:38:06.513582 1711 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 10 23:38:06.513742 update_engine[1711]: I20250710 23:38:06.513692 1711 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jul 10 23:38:11.009854 systemd[1]: Started sshd@8-10.200.20.36:22-10.200.16.10:49570.service - OpenSSH per-connection server daemon (10.200.16.10:49570).
Jul 10 23:38:11.503227 sshd[4801]: Accepted publickey for core from 10.200.16.10 port 49570 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo
Jul 10 23:38:11.504438 sshd-session[4801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:38:11.508420 systemd-logind[1707]: New session 11 of user core.
Jul 10 23:38:11.512771 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 10 23:38:11.926998 sshd[4803]: Connection closed by 10.200.16.10 port 49570
Jul 10 23:38:11.927805 sshd-session[4801]: pam_unix(sshd:session): session closed for user core
Jul 10 23:38:11.931187 systemd-logind[1707]: Session 11 logged out. Waiting for processes to exit.
Jul 10 23:38:11.931752 systemd[1]: sshd@8-10.200.20.36:22-10.200.16.10:49570.service: Deactivated successfully.
Jul 10 23:38:11.933838 systemd[1]: session-11.scope: Deactivated successfully.
Jul 10 23:38:11.934809 systemd-logind[1707]: Removed session 11.
Jul 10 23:38:17.016845 systemd[1]: Started sshd@9-10.200.20.36:22-10.200.16.10:49574.service - OpenSSH per-connection server daemon (10.200.16.10:49574).
Jul 10 23:38:17.228153 update_engine[1711]: I20250710 23:38:17.228068 1711 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 10 23:38:17.228559 update_engine[1711]: I20250710 23:38:17.228349 1711 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 10 23:38:17.228607 update_engine[1711]: I20250710 23:38:17.228575 1711 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 10 23:38:17.277414 update_engine[1711]: E20250710 23:38:17.277302 1711 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 10 23:38:17.277414 update_engine[1711]: I20250710 23:38:17.277384 1711 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 10 23:38:17.277414 update_engine[1711]: I20250710 23:38:17.277392 1711 omaha_request_action.cc:617] Omaha request response:
Jul 10 23:38:17.277549 update_engine[1711]: E20250710 23:38:17.277467 1711 omaha_request_action.cc:636] Omaha request network transfer failed.
Jul 10 23:38:17.277549 update_engine[1711]: I20250710 23:38:17.277483 1711 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jul 10 23:38:17.277549 update_engine[1711]: I20250710 23:38:17.277489 1711 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 10 23:38:17.277549 update_engine[1711]: I20250710 23:38:17.277493 1711 update_attempter.cc:306] Processing Done.
Jul 10 23:38:17.277549 update_engine[1711]: E20250710 23:38:17.277508 1711 update_attempter.cc:619] Update failed.
Jul 10 23:38:17.277549 update_engine[1711]: I20250710 23:38:17.277513 1711 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jul 10 23:38:17.277549 update_engine[1711]: I20250710 23:38:17.277518 1711 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jul 10 23:38:17.277549 update_engine[1711]: I20250710 23:38:17.277523 1711 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jul 10 23:38:17.277744 update_engine[1711]: I20250710 23:38:17.277590 1711 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 10 23:38:17.277744 update_engine[1711]: I20250710 23:38:17.277613 1711 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jul 10 23:38:17.277744 update_engine[1711]: I20250710 23:38:17.277647 1711 omaha_request_action.cc:272] Request:
Jul 10 23:38:17.277744 update_engine[1711]:
Jul 10 23:38:17.277744 update_engine[1711]:
Jul 10 23:38:17.277744 update_engine[1711]:
Jul 10 23:38:17.277744 update_engine[1711]:
Jul 10 23:38:17.277744 update_engine[1711]:
Jul 10 23:38:17.277744 update_engine[1711]:
Jul 10 23:38:17.277744 update_engine[1711]: I20250710 23:38:17.277653 1711 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 10 23:38:17.277915 update_engine[1711]: I20250710 23:38:17.277796 1711 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 10 23:38:17.278487 update_engine[1711]: I20250710 23:38:17.277989 1711 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 10 23:38:17.278563 locksmithd[1830]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jul 10 23:38:17.296850 update_engine[1711]: E20250710 23:38:17.296653 1711 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 10 23:38:17.296850 update_engine[1711]: I20250710 23:38:17.296716 1711 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 10 23:38:17.296850 update_engine[1711]: I20250710 23:38:17.296723 1711 omaha_request_action.cc:617] Omaha request response:
Jul 10 23:38:17.296850 update_engine[1711]: I20250710 23:38:17.296730 1711 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 10 23:38:17.296850 update_engine[1711]: I20250710 23:38:17.296736 1711 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 10 23:38:17.296850 update_engine[1711]: I20250710 23:38:17.296741 1711 update_attempter.cc:306] Processing Done.
Jul 10 23:38:17.296850 update_engine[1711]: I20250710 23:38:17.296747 1711 update_attempter.cc:310] Error event sent.
Jul 10 23:38:17.296850 update_engine[1711]: I20250710 23:38:17.296756 1711 update_check_scheduler.cc:74] Next update check in 40m29s
Jul 10 23:38:17.297273 locksmithd[1830]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jul 10 23:38:17.494059 sshd[4816]: Accepted publickey for core from 10.200.16.10 port 49574 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo
Jul 10 23:38:17.495396 sshd-session[4816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:38:17.499693 systemd-logind[1707]: New session 12 of user core.
Jul 10 23:38:17.505750 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 10 23:38:17.911207 sshd[4818]: Connection closed by 10.200.16.10 port 49574
Jul 10 23:38:17.910742 sshd-session[4816]: pam_unix(sshd:session): session closed for user core
Jul 10 23:38:17.913434 systemd-logind[1707]: Session 12 logged out. Waiting for processes to exit.
Jul 10 23:38:17.914160 systemd[1]: sshd@9-10.200.20.36:22-10.200.16.10:49574.service: Deactivated successfully.
Jul 10 23:38:17.916244 systemd[1]: session-12.scope: Deactivated successfully.
Jul 10 23:38:17.918132 systemd-logind[1707]: Removed session 12.
Jul 10 23:38:23.001844 systemd[1]: Started sshd@10-10.200.20.36:22-10.200.16.10:41558.service - OpenSSH per-connection server daemon (10.200.16.10:41558).
Jul 10 23:38:23.457071 sshd[4831]: Accepted publickey for core from 10.200.16.10 port 41558 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo
Jul 10 23:38:23.458269 sshd-session[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:38:23.462695 systemd-logind[1707]: New session 13 of user core.
Jul 10 23:38:23.464765 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 10 23:38:23.843639 sshd[4833]: Connection closed by 10.200.16.10 port 41558
Jul 10 23:38:23.844186 sshd-session[4831]: pam_unix(sshd:session): session closed for user core
Jul 10 23:38:23.847201 systemd-logind[1707]: Session 13 logged out. Waiting for processes to exit.
Jul 10 23:38:23.847376 systemd[1]: sshd@10-10.200.20.36:22-10.200.16.10:41558.service: Deactivated successfully.
Jul 10 23:38:23.850017 systemd[1]: session-13.scope: Deactivated successfully.
Jul 10 23:38:23.852088 systemd-logind[1707]: Removed session 13.
Jul 10 23:38:23.931846 systemd[1]: Started sshd@11-10.200.20.36:22-10.200.16.10:41564.service - OpenSSH per-connection server daemon (10.200.16.10:41564).
Jul 10 23:38:24.388956 sshd[4846]: Accepted publickey for core from 10.200.16.10 port 41564 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo
Jul 10 23:38:24.390199 sshd-session[4846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:38:24.394720 systemd-logind[1707]: New session 14 of user core.
Jul 10 23:38:24.396753 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 10 23:38:24.817198 sshd[4849]: Connection closed by 10.200.16.10 port 41564
Jul 10 23:38:24.817788 sshd-session[4846]: pam_unix(sshd:session): session closed for user core
Jul 10 23:38:24.822249 systemd[1]: sshd@11-10.200.20.36:22-10.200.16.10:41564.service: Deactivated successfully.
Jul 10 23:38:24.822401 systemd-logind[1707]: Session 14 logged out. Waiting for processes to exit.
Jul 10 23:38:24.825092 systemd[1]: session-14.scope: Deactivated successfully.
Jul 10 23:38:24.826236 systemd-logind[1707]: Removed session 14.
Jul 10 23:38:24.908852 systemd[1]: Started sshd@12-10.200.20.36:22-10.200.16.10:41574.service - OpenSSH per-connection server daemon (10.200.16.10:41574).
Jul 10 23:38:25.387230 sshd[4859]: Accepted publickey for core from 10.200.16.10 port 41574 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo
Jul 10 23:38:25.388506 sshd-session[4859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:38:25.392988 systemd-logind[1707]: New session 15 of user core.
Jul 10 23:38:25.403751 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 10 23:38:25.789483 sshd[4861]: Connection closed by 10.200.16.10 port 41574
Jul 10 23:38:25.789329 sshd-session[4859]: pam_unix(sshd:session): session closed for user core
Jul 10 23:38:25.793079 systemd[1]: sshd@12-10.200.20.36:22-10.200.16.10:41574.service: Deactivated successfully.
Jul 10 23:38:25.795966 systemd[1]: session-15.scope: Deactivated successfully.
Jul 10 23:38:25.796698 systemd-logind[1707]: Session 15 logged out. Waiting for processes to exit.
Jul 10 23:38:25.797906 systemd-logind[1707]: Removed session 15.
Jul 10 23:38:30.881860 systemd[1]: Started sshd@13-10.200.20.36:22-10.200.16.10:42292.service - OpenSSH per-connection server daemon (10.200.16.10:42292).
Jul 10 23:38:31.337384 sshd[4874]: Accepted publickey for core from 10.200.16.10 port 42292 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo
Jul 10 23:38:31.338665 sshd-session[4874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:38:31.342704 systemd-logind[1707]: New session 16 of user core.
Jul 10 23:38:31.346782 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 10 23:38:31.726125 sshd[4876]: Connection closed by 10.200.16.10 port 42292
Jul 10 23:38:31.726708 sshd-session[4874]: pam_unix(sshd:session): session closed for user core
Jul 10 23:38:31.729971 systemd-logind[1707]: Session 16 logged out. Waiting for processes to exit.
Jul 10 23:38:31.730609 systemd[1]: sshd@13-10.200.20.36:22-10.200.16.10:42292.service: Deactivated successfully.
Jul 10 23:38:31.733312 systemd[1]: session-16.scope: Deactivated successfully.
Jul 10 23:38:31.734685 systemd-logind[1707]: Removed session 16.
Jul 10 23:38:36.820859 systemd[1]: Started sshd@14-10.200.20.36:22-10.200.16.10:42306.service - OpenSSH per-connection server daemon (10.200.16.10:42306).
Jul 10 23:38:37.313126 sshd[4890]: Accepted publickey for core from 10.200.16.10 port 42306 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo
Jul 10 23:38:37.314411 sshd-session[4890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:38:37.318829 systemd-logind[1707]: New session 17 of user core.
Jul 10 23:38:37.328769 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 10 23:38:37.737836 sshd[4892]: Connection closed by 10.200.16.10 port 42306
Jul 10 23:38:37.738354 sshd-session[4890]: pam_unix(sshd:session): session closed for user core
Jul 10 23:38:37.741587 systemd[1]: sshd@14-10.200.20.36:22-10.200.16.10:42306.service: Deactivated successfully.
Jul 10 23:38:37.744356 systemd[1]: session-17.scope: Deactivated successfully.
Jul 10 23:38:37.746166 systemd-logind[1707]: Session 17 logged out. Waiting for processes to exit.
Jul 10 23:38:37.747430 systemd-logind[1707]: Removed session 17.
Jul 10 23:38:37.830848 systemd[1]: Started sshd@15-10.200.20.36:22-10.200.16.10:42322.service - OpenSSH per-connection server daemon (10.200.16.10:42322).
Jul 10 23:38:38.324388 sshd[4904]: Accepted publickey for core from 10.200.16.10 port 42322 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo
Jul 10 23:38:38.325758 sshd-session[4904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:38:38.330172 systemd-logind[1707]: New session 18 of user core.
Jul 10 23:38:38.336832 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 10 23:38:38.780556 sshd[4906]: Connection closed by 10.200.16.10 port 42322
Jul 10 23:38:38.781283 sshd-session[4904]: pam_unix(sshd:session): session closed for user core
Jul 10 23:38:38.784010 systemd-logind[1707]: Session 18 logged out. Waiting for processes to exit.
Jul 10 23:38:38.785678 systemd[1]: sshd@15-10.200.20.36:22-10.200.16.10:42322.service: Deactivated successfully.
Jul 10 23:38:38.788089 systemd[1]: session-18.scope: Deactivated successfully.
Jul 10 23:38:38.790392 systemd-logind[1707]: Removed session 18.
Jul 10 23:38:38.887892 systemd[1]: Started sshd@16-10.200.20.36:22-10.200.16.10:42332.service - OpenSSH per-connection server daemon (10.200.16.10:42332).
Jul 10 23:38:39.380811 sshd[4916]: Accepted publickey for core from 10.200.16.10 port 42332 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo
Jul 10 23:38:39.382072 sshd-session[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:38:39.386818 systemd-logind[1707]: New session 19 of user core.
Jul 10 23:38:39.396904 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 10 23:38:40.192448 sshd[4918]: Connection closed by 10.200.16.10 port 42332
Jul 10 23:38:40.192875 sshd-session[4916]: pam_unix(sshd:session): session closed for user core
Jul 10 23:38:40.197110 systemd-logind[1707]: Session 19 logged out. Waiting for processes to exit.
Jul 10 23:38:40.197730 systemd[1]: sshd@16-10.200.20.36:22-10.200.16.10:42332.service: Deactivated successfully.
Jul 10 23:38:40.200046 systemd[1]: session-19.scope: Deactivated successfully.
Jul 10 23:38:40.201031 systemd-logind[1707]: Removed session 19.
Jul 10 23:38:40.288842 systemd[1]: Started sshd@17-10.200.20.36:22-10.200.16.10:42330.service - OpenSSH per-connection server daemon (10.200.16.10:42330).
Jul 10 23:38:40.743595 sshd[4935]: Accepted publickey for core from 10.200.16.10 port 42330 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo
Jul 10 23:38:40.744884 sshd-session[4935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:38:40.748751 systemd-logind[1707]: New session 20 of user core.
Jul 10 23:38:40.759761 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 10 23:38:41.251748 sshd[4937]: Connection closed by 10.200.16.10 port 42330
Jul 10 23:38:41.252363 sshd-session[4935]: pam_unix(sshd:session): session closed for user core
Jul 10 23:38:41.255523 systemd[1]: sshd@17-10.200.20.36:22-10.200.16.10:42330.service: Deactivated successfully.
Jul 10 23:38:41.257169 systemd[1]: session-20.scope: Deactivated successfully.
Jul 10 23:38:41.257869 systemd-logind[1707]: Session 20 logged out. Waiting for processes to exit.
Jul 10 23:38:41.258817 systemd-logind[1707]: Removed session 20.
Jul 10 23:38:41.349979 systemd[1]: Started sshd@18-10.200.20.36:22-10.200.16.10:42334.service - OpenSSH per-connection server daemon (10.200.16.10:42334).
Jul 10 23:38:41.841475 sshd[4946]: Accepted publickey for core from 10.200.16.10 port 42334 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo
Jul 10 23:38:41.842785 sshd-session[4946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:38:41.846469 systemd-logind[1707]: New session 21 of user core.
Jul 10 23:38:41.851772 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 10 23:38:42.259817 sshd[4948]: Connection closed by 10.200.16.10 port 42334
Jul 10 23:38:42.260360 sshd-session[4946]: pam_unix(sshd:session): session closed for user core
Jul 10 23:38:42.263591 systemd[1]: sshd@18-10.200.20.36:22-10.200.16.10:42334.service: Deactivated successfully.
Jul 10 23:38:42.265230 systemd[1]: session-21.scope: Deactivated successfully.
Jul 10 23:38:42.266004 systemd-logind[1707]: Session 21 logged out. Waiting for processes to exit.
Jul 10 23:38:42.266873 systemd-logind[1707]: Removed session 21.
Jul 10 23:38:47.344935 systemd[1]: Started sshd@19-10.200.20.36:22-10.200.16.10:42342.service - OpenSSH per-connection server daemon (10.200.16.10:42342).
Jul 10 23:38:47.824705 sshd[4962]: Accepted publickey for core from 10.200.16.10 port 42342 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo
Jul 10 23:38:47.826112 sshd-session[4962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:38:47.830709 systemd-logind[1707]: New session 22 of user core.
Jul 10 23:38:47.836757 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 10 23:38:48.231999 sshd[4964]: Connection closed by 10.200.16.10 port 42342
Jul 10 23:38:48.232526 sshd-session[4962]: pam_unix(sshd:session): session closed for user core
Jul 10 23:38:48.235434 systemd[1]: sshd@19-10.200.20.36:22-10.200.16.10:42342.service: Deactivated successfully.
Jul 10 23:38:48.235524 systemd-logind[1707]: Session 22 logged out. Waiting for processes to exit.
Jul 10 23:38:48.237508 systemd[1]: session-22.scope: Deactivated successfully.
Jul 10 23:38:48.239140 systemd-logind[1707]: Removed session 22.
Jul 10 23:38:53.333919 systemd[1]: Started sshd@20-10.200.20.36:22-10.200.16.10:36238.service - OpenSSH per-connection server daemon (10.200.16.10:36238).
Jul 10 23:38:53.827450 sshd[4975]: Accepted publickey for core from 10.200.16.10 port 36238 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo
Jul 10 23:38:53.828760 sshd-session[4975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:38:53.833574 systemd-logind[1707]: New session 23 of user core.
Jul 10 23:38:53.838764 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 10 23:38:54.248660 sshd[4977]: Connection closed by 10.200.16.10 port 36238
Jul 10 23:38:54.249208 sshd-session[4975]: pam_unix(sshd:session): session closed for user core
Jul 10 23:38:54.252934 systemd[1]: sshd@20-10.200.20.36:22-10.200.16.10:36238.service: Deactivated successfully.
Jul 10 23:38:54.254832 systemd[1]: session-23.scope: Deactivated successfully.
Jul 10 23:38:54.256261 systemd-logind[1707]: Session 23 logged out. Waiting for processes to exit.
Jul 10 23:38:54.257418 systemd-logind[1707]: Removed session 23.
Jul 10 23:38:54.330854 systemd[1]: Started sshd@21-10.200.20.36:22-10.200.16.10:36248.service - OpenSSH per-connection server daemon (10.200.16.10:36248).
Jul 10 23:38:54.772357 sshd[4988]: Accepted publickey for core from 10.200.16.10 port 36248 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo
Jul 10 23:38:54.773592 sshd-session[4988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:38:54.778668 systemd-logind[1707]: New session 24 of user core.
Jul 10 23:38:54.788795 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 10 23:38:56.805265 containerd[1734]: time="2025-07-10T23:38:56.805219389Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 10 23:38:56.813265 containerd[1734]: time="2025-07-10T23:38:56.813227552Z" level=info msg="StopContainer for \"ba692e3f9e8c8a2d005e654aa1ce5716063dbdc60378f742f34138856ea9cb97\" with timeout 30 (s)"
Jul 10 23:38:56.813867 containerd[1734]: time="2025-07-10T23:38:56.813614392Z" level=info msg="Stop container \"ba692e3f9e8c8a2d005e654aa1ce5716063dbdc60378f742f34138856ea9cb97\" with signal terminated"
Jul 10 23:38:56.814823 containerd[1734]: time="2025-07-10T23:38:56.814788552Z" level=info msg="StopContainer for \"a4e10cb75bd26a0c59fd113fa91c76895fb67b88c310bf231f4250f9a9cf14de\" with timeout 2 (s)"
Jul 10 23:38:56.816157 containerd[1734]: time="2025-07-10T23:38:56.815183312Z" level=info msg="Stop container \"a4e10cb75bd26a0c59fd113fa91c76895fb67b88c310bf231f4250f9a9cf14de\" with signal terminated"
Jul 10 23:38:56.824636 systemd[1]: cri-containerd-ba692e3f9e8c8a2d005e654aa1ce5716063dbdc60378f742f34138856ea9cb97.scope: Deactivated successfully.
Jul 10 23:38:56.828723 systemd-networkd[1457]: lxc_health: Link DOWN
Jul 10 23:38:56.828733 systemd-networkd[1457]: lxc_health: Lost carrier
Jul 10 23:38:56.847394 systemd[1]: cri-containerd-a4e10cb75bd26a0c59fd113fa91c76895fb67b88c310bf231f4250f9a9cf14de.scope: Deactivated successfully.
Jul 10 23:38:56.848139 systemd[1]: cri-containerd-a4e10cb75bd26a0c59fd113fa91c76895fb67b88c310bf231f4250f9a9cf14de.scope: Consumed 6.123s CPU time, 122.1M memory peak, 136K read from disk, 12.9M written to disk.
Jul 10 23:38:56.858502 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba692e3f9e8c8a2d005e654aa1ce5716063dbdc60378f742f34138856ea9cb97-rootfs.mount: Deactivated successfully.
Jul 10 23:38:56.869455 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4e10cb75bd26a0c59fd113fa91c76895fb67b88c310bf231f4250f9a9cf14de-rootfs.mount: Deactivated successfully.
Jul 10 23:38:56.940785 containerd[1734]: time="2025-07-10T23:38:56.940604999Z" level=info msg="shim disconnected" id=a4e10cb75bd26a0c59fd113fa91c76895fb67b88c310bf231f4250f9a9cf14de namespace=k8s.io
Jul 10 23:38:56.940785 containerd[1734]: time="2025-07-10T23:38:56.940777919Z" level=warning msg="cleaning up after shim disconnected" id=a4e10cb75bd26a0c59fd113fa91c76895fb67b88c310bf231f4250f9a9cf14de namespace=k8s.io
Jul 10 23:38:56.941016 containerd[1734]: time="2025-07-10T23:38:56.940789639Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 23:38:56.941016 containerd[1734]: time="2025-07-10T23:38:56.940826799Z" level=info msg="shim disconnected" id=ba692e3f9e8c8a2d005e654aa1ce5716063dbdc60378f742f34138856ea9cb97 namespace=k8s.io
Jul 10 23:38:56.941016 containerd[1734]: time="2025-07-10T23:38:56.940901199Z" level=warning msg="cleaning up after shim disconnected" id=ba692e3f9e8c8a2d005e654aa1ce5716063dbdc60378f742f34138856ea9cb97 namespace=k8s.io
Jul 10 23:38:56.941016 containerd[1734]: time="2025-07-10T23:38:56.940909239Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 23:38:56.960370 containerd[1734]: time="2025-07-10T23:38:56.960309886Z" level=info msg="StopContainer for \"ba692e3f9e8c8a2d005e654aa1ce5716063dbdc60378f742f34138856ea9cb97\" returns successfully"
Jul 10 23:38:56.961045 containerd[1734]: time="2025-07-10T23:38:56.960905367Z" level=info msg="StopPodSandbox for \"01424d04aa3575f0af73c62cfe749f09a42e0ba91da8010cc75a9810944668d4\""
Jul 10 23:38:56.961045 containerd[1734]: time="2025-07-10T23:38:56.960942447Z" level=info msg="Container to stop \"ba692e3f9e8c8a2d005e654aa1ce5716063dbdc60378f742f34138856ea9cb97\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 23:38:56.962402 containerd[1734]: time="2025-07-10T23:38:56.962362367Z" level=info msg="StopContainer for \"a4e10cb75bd26a0c59fd113fa91c76895fb67b88c310bf231f4250f9a9cf14de\" returns successfully"
Jul 10 23:38:56.963512 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-01424d04aa3575f0af73c62cfe749f09a42e0ba91da8010cc75a9810944668d4-shm.mount: Deactivated successfully.
Jul 10 23:38:56.965719 containerd[1734]: time="2025-07-10T23:38:56.964023488Z" level=info msg="StopPodSandbox for \"84d2aaa01230727e912d2d8873db6959f6fa26c891766eb44acd018296349329\""
Jul 10 23:38:56.965719 containerd[1734]: time="2025-07-10T23:38:56.964924688Z" level=info msg="Container to stop \"230a8395203da75139c999cc12cc6e825bfc0cf62acef17414c5d923288fb4e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 23:38:56.965719 containerd[1734]: time="2025-07-10T23:38:56.964953848Z" level=info msg="Container to stop \"8b5cff66b2fbea5fa60f84db78588a051317c80f05d98f5d2c136a60488eeeeb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 23:38:56.965719 containerd[1734]: time="2025-07-10T23:38:56.964963728Z" level=info msg="Container to stop \"59f9f6caf38ba4962fe978916e0e4fc37dbc1b63f034046a82dd21d85def0a63\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 23:38:56.965719 containerd[1734]: time="2025-07-10T23:38:56.964972288Z" level=info msg="Container to stop \"75e341bdf856d2735b117337454fba8acc6c200560c509cb899cf2c33bc40cab\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 23:38:56.965719 containerd[1734]: time="2025-07-10T23:38:56.964981808Z" level=info msg="Container to stop \"a4e10cb75bd26a0c59fd113fa91c76895fb67b88c310bf231f4250f9a9cf14de\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 23:38:56.970455 systemd[1]: cri-containerd-84d2aaa01230727e912d2d8873db6959f6fa26c891766eb44acd018296349329.scope: Deactivated successfully.
Jul 10 23:38:56.973220 systemd[1]: cri-containerd-01424d04aa3575f0af73c62cfe749f09a42e0ba91da8010cc75a9810944668d4.scope: Deactivated successfully.
Jul 10 23:38:57.019424 containerd[1734]: time="2025-07-10T23:38:57.019238228Z" level=info msg="shim disconnected" id=01424d04aa3575f0af73c62cfe749f09a42e0ba91da8010cc75a9810944668d4 namespace=k8s.io
Jul 10 23:38:57.019424 containerd[1734]: time="2025-07-10T23:38:57.019290468Z" level=warning msg="cleaning up after shim disconnected" id=01424d04aa3575f0af73c62cfe749f09a42e0ba91da8010cc75a9810944668d4 namespace=k8s.io
Jul 10 23:38:57.019424 containerd[1734]: time="2025-07-10T23:38:57.019298188Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 23:38:57.020365 containerd[1734]: time="2025-07-10T23:38:57.020048389Z" level=info msg="shim disconnected" id=84d2aaa01230727e912d2d8873db6959f6fa26c891766eb44acd018296349329 namespace=k8s.io
Jul 10 23:38:57.020365 containerd[1734]: time="2025-07-10T23:38:57.020092909Z" level=warning msg="cleaning up after shim disconnected" id=84d2aaa01230727e912d2d8873db6959f6fa26c891766eb44acd018296349329 namespace=k8s.io
Jul 10 23:38:57.020365 containerd[1734]: time="2025-07-10T23:38:57.020101789Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 23:38:57.034043 containerd[1734]: time="2025-07-10T23:38:57.033749634Z" level=info msg="TearDown network for sandbox \"01424d04aa3575f0af73c62cfe749f09a42e0ba91da8010cc75a9810944668d4\" successfully"
Jul 10 23:38:57.034043 containerd[1734]: time="2025-07-10T23:38:57.033783234Z" level=info msg="StopPodSandbox for \"01424d04aa3575f0af73c62cfe749f09a42e0ba91da8010cc75a9810944668d4\" returns successfully"
Jul 10 23:38:57.041313 containerd[1734]: time="2025-07-10T23:38:57.040786476Z" level=warning msg="cleanup warnings time=\"2025-07-10T23:38:57Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 10 23:38:57.043035 containerd[1734]: time="2025-07-10T23:38:57.042988837Z" level=info msg="TearDown network for sandbox \"84d2aaa01230727e912d2d8873db6959f6fa26c891766eb44acd018296349329\" successfully"
Jul 10 23:38:57.043035 containerd[1734]: time="2025-07-10T23:38:57.043024557Z" level=info msg="StopPodSandbox for \"84d2aaa01230727e912d2d8873db6959f6fa26c891766eb44acd018296349329\" returns successfully"
Jul 10 23:38:57.178256 kubelet[3384]: I0710 23:38:57.177636 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-cilium-cgroup\") pod \"198ef552-ea7f-4a87-9962-8b677436d88a\" (UID: \"198ef552-ea7f-4a87-9962-8b677436d88a\") "
Jul 10 23:38:57.178256 kubelet[3384]: I0710 23:38:57.177666 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-host-proc-sys-net\") pod \"198ef552-ea7f-4a87-9962-8b677436d88a\" (UID: \"198ef552-ea7f-4a87-9962-8b677436d88a\") "
Jul 10 23:38:57.178256 kubelet[3384]: I0710 23:38:57.177686 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/198ef552-ea7f-4a87-9962-8b677436d88a-cilium-config-path\") pod \"198ef552-ea7f-4a87-9962-8b677436d88a\" (UID: \"198ef552-ea7f-4a87-9962-8b677436d88a\") "
Jul 10 23:38:57.178256 kubelet[3384]: I0710 23:38:57.177703 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2d64835-7430-4efc-a1c4-2e7a5427bd19-cilium-config-path\") pod \"c2d64835-7430-4efc-a1c4-2e7a5427bd19\" (UID: \"c2d64835-7430-4efc-a1c4-2e7a5427bd19\") "
Jul 10 23:38:57.178256 kubelet[3384]: I0710 23:38:57.177718 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-xtables-lock\") pod \"198ef552-ea7f-4a87-9962-8b677436d88a\" (UID: \"198ef552-ea7f-4a87-9962-8b677436d88a\") "
Jul 10 23:38:57.178256 kubelet[3384]: I0710 23:38:57.177733 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/198ef552-ea7f-4a87-9962-8b677436d88a-clustermesh-secrets\") pod \"198ef552-ea7f-4a87-9962-8b677436d88a\" (UID: \"198ef552-ea7f-4a87-9962-8b677436d88a\") "
Jul 10 23:38:57.178694 kubelet[3384]: I0710 23:38:57.177752 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-cilium-run\") pod \"198ef552-ea7f-4a87-9962-8b677436d88a\" (UID: \"198ef552-ea7f-4a87-9962-8b677436d88a\") "
Jul 10 23:38:57.178694 kubelet[3384]: I0710 23:38:57.177768 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-hostproc\") pod \"198ef552-ea7f-4a87-9962-8b677436d88a\" (UID: \"198ef552-ea7f-4a87-9962-8b677436d88a\") "
Jul 10 23:38:57.178694 kubelet[3384]: I0710 23:38:57.177785 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vqf8\" (UniqueName: \"kubernetes.io/projected/c2d64835-7430-4efc-a1c4-2e7a5427bd19-kube-api-access-6vqf8\") pod \"c2d64835-7430-4efc-a1c4-2e7a5427bd19\" (UID: \"c2d64835-7430-4efc-a1c4-2e7a5427bd19\") "
Jul 10 23:38:57.178694 kubelet[3384]: I0710 23:38:57.177803 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-host-proc-sys-kernel\") pod \"198ef552-ea7f-4a87-9962-8b677436d88a\" (UID: \"198ef552-ea7f-4a87-9962-8b677436d88a\") "
Jul 10 23:38:57.178694 kubelet[3384]: I0710 23:38:57.177816 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-bpf-maps\") pod \"198ef552-ea7f-4a87-9962-8b677436d88a\" (UID: \"198ef552-ea7f-4a87-9962-8b677436d88a\") "
Jul 10 23:38:57.178694 kubelet[3384]: I0710 23:38:57.177831 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-cni-path\") pod \"198ef552-ea7f-4a87-9962-8b677436d88a\" (UID: \"198ef552-ea7f-4a87-9962-8b677436d88a\") "
Jul 10 23:38:57.178817 kubelet[3384]: I0710 23:38:57.177846 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhmhs\" (UniqueName: \"kubernetes.io/projected/198ef552-ea7f-4a87-9962-8b677436d88a-kube-api-access-bhmhs\") pod \"198ef552-ea7f-4a87-9962-8b677436d88a\" (UID: \"198ef552-ea7f-4a87-9962-8b677436d88a\") "
Jul 10 23:38:57.178817 kubelet[3384]: I0710 23:38:57.177858 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-lib-modules\") pod \"198ef552-ea7f-4a87-9962-8b677436d88a\" (UID: \"198ef552-ea7f-4a87-9962-8b677436d88a\") "
Jul 10 23:38:57.178817 kubelet[3384]: I0710 23:38:57.177873 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-etc-cni-netd\") pod \"198ef552-ea7f-4a87-9962-8b677436d88a\" (UID: \"198ef552-ea7f-4a87-9962-8b677436d88a\") "
Jul 10 23:38:57.178817 kubelet[3384]: I0710 23:38:57.177889 3384 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/198ef552-ea7f-4a87-9962-8b677436d88a-hubble-tls\") pod \"198ef552-ea7f-4a87-9962-8b677436d88a\" (UID: \"198ef552-ea7f-4a87-9962-8b677436d88a\") "
Jul 10 23:38:57.178817 kubelet[3384]: I0710
23:38:57.178184 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-hostproc" (OuterVolumeSpecName: "hostproc") pod "198ef552-ea7f-4a87-9962-8b677436d88a" (UID: "198ef552-ea7f-4a87-9962-8b677436d88a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:38:57.178817 kubelet[3384]: I0710 23:38:57.178225 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "198ef552-ea7f-4a87-9962-8b677436d88a" (UID: "198ef552-ea7f-4a87-9962-8b677436d88a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:38:57.178936 kubelet[3384]: I0710 23:38:57.178240 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "198ef552-ea7f-4a87-9962-8b677436d88a" (UID: "198ef552-ea7f-4a87-9962-8b677436d88a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:38:57.180687 kubelet[3384]: I0710 23:38:57.180353 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/198ef552-ea7f-4a87-9962-8b677436d88a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "198ef552-ea7f-4a87-9962-8b677436d88a" (UID: "198ef552-ea7f-4a87-9962-8b677436d88a"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 23:38:57.181808 kubelet[3384]: I0710 23:38:57.181600 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "198ef552-ea7f-4a87-9962-8b677436d88a" (UID: "198ef552-ea7f-4a87-9962-8b677436d88a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:38:57.182208 kubelet[3384]: I0710 23:38:57.182153 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "198ef552-ea7f-4a87-9962-8b677436d88a" (UID: "198ef552-ea7f-4a87-9962-8b677436d88a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:38:57.182208 kubelet[3384]: I0710 23:38:57.182185 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "198ef552-ea7f-4a87-9962-8b677436d88a" (UID: "198ef552-ea7f-4a87-9962-8b677436d88a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:38:57.182373 kubelet[3384]: I0710 23:38:57.182303 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-cni-path" (OuterVolumeSpecName: "cni-path") pod "198ef552-ea7f-4a87-9962-8b677436d88a" (UID: "198ef552-ea7f-4a87-9962-8b677436d88a"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:38:57.182826 kubelet[3384]: I0710 23:38:57.182711 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "198ef552-ea7f-4a87-9962-8b677436d88a" (UID: "198ef552-ea7f-4a87-9962-8b677436d88a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:38:57.182983 kubelet[3384]: I0710 23:38:57.182925 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/198ef552-ea7f-4a87-9962-8b677436d88a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "198ef552-ea7f-4a87-9962-8b677436d88a" (UID: "198ef552-ea7f-4a87-9962-8b677436d88a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 23:38:57.183623 kubelet[3384]: I0710 23:38:57.183482 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/198ef552-ea7f-4a87-9962-8b677436d88a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "198ef552-ea7f-4a87-9962-8b677436d88a" (UID: "198ef552-ea7f-4a87-9962-8b677436d88a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 23:38:57.183830 kubelet[3384]: I0710 23:38:57.183812 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "198ef552-ea7f-4a87-9962-8b677436d88a" (UID: "198ef552-ea7f-4a87-9962-8b677436d88a"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:38:57.184039 kubelet[3384]: I0710 23:38:57.183896 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "198ef552-ea7f-4a87-9962-8b677436d88a" (UID: "198ef552-ea7f-4a87-9962-8b677436d88a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:38:57.184909 kubelet[3384]: I0710 23:38:57.184879 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2d64835-7430-4efc-a1c4-2e7a5427bd19-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c2d64835-7430-4efc-a1c4-2e7a5427bd19" (UID: "c2d64835-7430-4efc-a1c4-2e7a5427bd19"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 23:38:57.186803 kubelet[3384]: I0710 23:38:57.186778 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/198ef552-ea7f-4a87-9962-8b677436d88a-kube-api-access-bhmhs" (OuterVolumeSpecName: "kube-api-access-bhmhs") pod "198ef552-ea7f-4a87-9962-8b677436d88a" (UID: "198ef552-ea7f-4a87-9962-8b677436d88a"). InnerVolumeSpecName "kube-api-access-bhmhs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 23:38:57.187231 kubelet[3384]: I0710 23:38:57.187211 3384 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2d64835-7430-4efc-a1c4-2e7a5427bd19-kube-api-access-6vqf8" (OuterVolumeSpecName: "kube-api-access-6vqf8") pod "c2d64835-7430-4efc-a1c4-2e7a5427bd19" (UID: "c2d64835-7430-4efc-a1c4-2e7a5427bd19"). InnerVolumeSpecName "kube-api-access-6vqf8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 23:38:57.278718 kubelet[3384]: I0710 23:38:57.278677 3384 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-lib-modules\") on node \"ci-4230.2.1-n-2b36f27b4a\" DevicePath \"\"" Jul 10 23:38:57.278894 kubelet[3384]: I0710 23:38:57.278882 3384 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-etc-cni-netd\") on node \"ci-4230.2.1-n-2b36f27b4a\" DevicePath \"\"" Jul 10 23:38:57.278992 kubelet[3384]: I0710 23:38:57.278979 3384 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/198ef552-ea7f-4a87-9962-8b677436d88a-hubble-tls\") on node \"ci-4230.2.1-n-2b36f27b4a\" DevicePath \"\"" Jul 10 23:38:57.279066 kubelet[3384]: I0710 23:38:57.279057 3384 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-cilium-cgroup\") on node \"ci-4230.2.1-n-2b36f27b4a\" DevicePath \"\"" Jul 10 23:38:57.279134 kubelet[3384]: I0710 23:38:57.279124 3384 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-host-proc-sys-net\") on node \"ci-4230.2.1-n-2b36f27b4a\" DevicePath \"\"" Jul 10 23:38:57.279189 kubelet[3384]: I0710 23:38:57.279178 3384 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/198ef552-ea7f-4a87-9962-8b677436d88a-cilium-config-path\") on node \"ci-4230.2.1-n-2b36f27b4a\" DevicePath \"\"" Jul 10 23:38:57.279250 kubelet[3384]: I0710 23:38:57.279240 3384 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/c2d64835-7430-4efc-a1c4-2e7a5427bd19-cilium-config-path\") on node \"ci-4230.2.1-n-2b36f27b4a\" DevicePath \"\"" Jul 10 23:38:57.279322 kubelet[3384]: I0710 23:38:57.279298 3384 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-xtables-lock\") on node \"ci-4230.2.1-n-2b36f27b4a\" DevicePath \"\"" Jul 10 23:38:57.279377 kubelet[3384]: I0710 23:38:57.279367 3384 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/198ef552-ea7f-4a87-9962-8b677436d88a-clustermesh-secrets\") on node \"ci-4230.2.1-n-2b36f27b4a\" DevicePath \"\"" Jul 10 23:38:57.279499 kubelet[3384]: I0710 23:38:57.279431 3384 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-cilium-run\") on node \"ci-4230.2.1-n-2b36f27b4a\" DevicePath \"\"" Jul 10 23:38:57.279499 kubelet[3384]: I0710 23:38:57.279444 3384 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-hostproc\") on node \"ci-4230.2.1-n-2b36f27b4a\" DevicePath \"\"" Jul 10 23:38:57.279499 kubelet[3384]: I0710 23:38:57.279452 3384 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6vqf8\" (UniqueName: \"kubernetes.io/projected/c2d64835-7430-4efc-a1c4-2e7a5427bd19-kube-api-access-6vqf8\") on node \"ci-4230.2.1-n-2b36f27b4a\" DevicePath \"\"" Jul 10 23:38:57.279499 kubelet[3384]: I0710 23:38:57.279461 3384 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-host-proc-sys-kernel\") on node \"ci-4230.2.1-n-2b36f27b4a\" DevicePath \"\"" Jul 10 23:38:57.279499 kubelet[3384]: I0710 23:38:57.279470 3384 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-bpf-maps\") on node \"ci-4230.2.1-n-2b36f27b4a\" DevicePath \"\"" Jul 10 23:38:57.279499 kubelet[3384]: I0710 23:38:57.279480 3384 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/198ef552-ea7f-4a87-9962-8b677436d88a-cni-path\") on node \"ci-4230.2.1-n-2b36f27b4a\" DevicePath \"\"" Jul 10 23:38:57.279682 kubelet[3384]: I0710 23:38:57.279488 3384 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bhmhs\" (UniqueName: \"kubernetes.io/projected/198ef552-ea7f-4a87-9962-8b677436d88a-kube-api-access-bhmhs\") on node \"ci-4230.2.1-n-2b36f27b4a\" DevicePath \"\"" Jul 10 23:38:57.556132 kubelet[3384]: I0710 23:38:57.556049 3384 scope.go:117] "RemoveContainer" containerID="ba692e3f9e8c8a2d005e654aa1ce5716063dbdc60378f742f34138856ea9cb97" Jul 10 23:38:57.560026 containerd[1734]: time="2025-07-10T23:38:57.559563190Z" level=info msg="RemoveContainer for \"ba692e3f9e8c8a2d005e654aa1ce5716063dbdc60378f742f34138856ea9cb97\"" Jul 10 23:38:57.565153 systemd[1]: Removed slice kubepods-besteffort-podc2d64835_7430_4efc_a1c4_2e7a5427bd19.slice - libcontainer container kubepods-besteffort-podc2d64835_7430_4efc_a1c4_2e7a5427bd19.slice. Jul 10 23:38:57.571592 systemd[1]: Removed slice kubepods-burstable-pod198ef552_ea7f_4a87_9962_8b677436d88a.slice - libcontainer container kubepods-burstable-pod198ef552_ea7f_4a87_9962_8b677436d88a.slice. Jul 10 23:38:57.571938 systemd[1]: kubepods-burstable-pod198ef552_ea7f_4a87_9962_8b677436d88a.slice: Consumed 6.190s CPU time, 122.5M memory peak, 136K read from disk, 12.9M written to disk. 
Jul 10 23:38:57.574584 containerd[1734]: time="2025-07-10T23:38:57.574429316Z" level=info msg="RemoveContainer for \"ba692e3f9e8c8a2d005e654aa1ce5716063dbdc60378f742f34138856ea9cb97\" returns successfully"
Jul 10 23:38:57.575226 kubelet[3384]: I0710 23:38:57.574812 3384 scope.go:117] "RemoveContainer" containerID="ba692e3f9e8c8a2d005e654aa1ce5716063dbdc60378f742f34138856ea9cb97"
Jul 10 23:38:57.575317 containerd[1734]: time="2025-07-10T23:38:57.575017756Z" level=error msg="ContainerStatus for \"ba692e3f9e8c8a2d005e654aa1ce5716063dbdc60378f742f34138856ea9cb97\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ba692e3f9e8c8a2d005e654aa1ce5716063dbdc60378f742f34138856ea9cb97\": not found"
Jul 10 23:38:57.575586 kubelet[3384]: E0710 23:38:57.575435 3384 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ba692e3f9e8c8a2d005e654aa1ce5716063dbdc60378f742f34138856ea9cb97\": not found" containerID="ba692e3f9e8c8a2d005e654aa1ce5716063dbdc60378f742f34138856ea9cb97"
Jul 10 23:38:57.575586 kubelet[3384]: I0710 23:38:57.575463 3384 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ba692e3f9e8c8a2d005e654aa1ce5716063dbdc60378f742f34138856ea9cb97"} err="failed to get container status \"ba692e3f9e8c8a2d005e654aa1ce5716063dbdc60378f742f34138856ea9cb97\": rpc error: code = NotFound desc = an error occurred when try to find container \"ba692e3f9e8c8a2d005e654aa1ce5716063dbdc60378f742f34138856ea9cb97\": not found"
Jul 10 23:38:57.575586 kubelet[3384]: I0710 23:38:57.575495 3384 scope.go:117] "RemoveContainer" containerID="a4e10cb75bd26a0c59fd113fa91c76895fb67b88c310bf231f4250f9a9cf14de"
Jul 10 23:38:57.576961 containerd[1734]: time="2025-07-10T23:38:57.576930197Z" level=info msg="RemoveContainer for \"a4e10cb75bd26a0c59fd113fa91c76895fb67b88c310bf231f4250f9a9cf14de\""
Jul 10 23:38:57.590663 containerd[1734]: time="2025-07-10T23:38:57.590543682Z" level=info msg="RemoveContainer for \"a4e10cb75bd26a0c59fd113fa91c76895fb67b88c310bf231f4250f9a9cf14de\" returns successfully"
Jul 10 23:38:57.591049 kubelet[3384]: I0710 23:38:57.591024 3384 scope.go:117] "RemoveContainer" containerID="75e341bdf856d2735b117337454fba8acc6c200560c509cb899cf2c33bc40cab"
Jul 10 23:38:57.593132 containerd[1734]: time="2025-07-10T23:38:57.592811003Z" level=info msg="RemoveContainer for \"75e341bdf856d2735b117337454fba8acc6c200560c509cb899cf2c33bc40cab\""
Jul 10 23:38:57.603458 containerd[1734]: time="2025-07-10T23:38:57.603366046Z" level=info msg="RemoveContainer for \"75e341bdf856d2735b117337454fba8acc6c200560c509cb899cf2c33bc40cab\" returns successfully"
Jul 10 23:38:57.603649 kubelet[3384]: I0710 23:38:57.603539 3384 scope.go:117] "RemoveContainer" containerID="59f9f6caf38ba4962fe978916e0e4fc37dbc1b63f034046a82dd21d85def0a63"
Jul 10 23:38:57.605040 containerd[1734]: time="2025-07-10T23:38:57.604985807Z" level=info msg="RemoveContainer for \"59f9f6caf38ba4962fe978916e0e4fc37dbc1b63f034046a82dd21d85def0a63\""
Jul 10 23:38:57.619092 containerd[1734]: time="2025-07-10T23:38:57.619053212Z" level=info msg="RemoveContainer for \"59f9f6caf38ba4962fe978916e0e4fc37dbc1b63f034046a82dd21d85def0a63\" returns successfully"
Jul 10 23:38:57.619291 kubelet[3384]: I0710 23:38:57.619267 3384 scope.go:117] "RemoveContainer" containerID="8b5cff66b2fbea5fa60f84db78588a051317c80f05d98f5d2c136a60488eeeeb"
Jul 10 23:38:57.620638 containerd[1734]: time="2025-07-10T23:38:57.620403733Z" level=info msg="RemoveContainer for \"8b5cff66b2fbea5fa60f84db78588a051317c80f05d98f5d2c136a60488eeeeb\""
Jul 10 23:38:57.638400 containerd[1734]: time="2025-07-10T23:38:57.638372020Z" level=info msg="RemoveContainer for \"8b5cff66b2fbea5fa60f84db78588a051317c80f05d98f5d2c136a60488eeeeb\" returns successfully"
Jul 10 23:38:57.638837 kubelet[3384]: I0710 23:38:57.638740 3384 scope.go:117] "RemoveContainer" containerID="230a8395203da75139c999cc12cc6e825bfc0cf62acef17414c5d923288fb4e3"
Jul 10 23:38:57.640125 containerd[1734]: time="2025-07-10T23:38:57.640093860Z" level=info msg="RemoveContainer for \"230a8395203da75139c999cc12cc6e825bfc0cf62acef17414c5d923288fb4e3\""
Jul 10 23:38:57.657636 containerd[1734]: time="2025-07-10T23:38:57.657101627Z" level=info msg="RemoveContainer for \"230a8395203da75139c999cc12cc6e825bfc0cf62acef17414c5d923288fb4e3\" returns successfully"
Jul 10 23:38:57.662425 kubelet[3384]: I0710 23:38:57.662290 3384 scope.go:117] "RemoveContainer" containerID="a4e10cb75bd26a0c59fd113fa91c76895fb67b88c310bf231f4250f9a9cf14de"
Jul 10 23:38:57.662793 containerd[1734]: time="2025-07-10T23:38:57.662741589Z" level=error msg="ContainerStatus for \"a4e10cb75bd26a0c59fd113fa91c76895fb67b88c310bf231f4250f9a9cf14de\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a4e10cb75bd26a0c59fd113fa91c76895fb67b88c310bf231f4250f9a9cf14de\": not found"
Jul 10 23:38:57.662983 kubelet[3384]: E0710 23:38:57.662929 3384 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a4e10cb75bd26a0c59fd113fa91c76895fb67b88c310bf231f4250f9a9cf14de\": not found" containerID="a4e10cb75bd26a0c59fd113fa91c76895fb67b88c310bf231f4250f9a9cf14de"
Jul 10 23:38:57.663149 kubelet[3384]: I0710 23:38:57.662959 3384 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a4e10cb75bd26a0c59fd113fa91c76895fb67b88c310bf231f4250f9a9cf14de"} err="failed to get container status \"a4e10cb75bd26a0c59fd113fa91c76895fb67b88c310bf231f4250f9a9cf14de\": rpc error: code = NotFound desc = an error occurred when try to find container \"a4e10cb75bd26a0c59fd113fa91c76895fb67b88c310bf231f4250f9a9cf14de\": not found"
Jul 10 23:38:57.663149 kubelet[3384]: I0710 23:38:57.663085 3384 scope.go:117] "RemoveContainer" containerID="75e341bdf856d2735b117337454fba8acc6c200560c509cb899cf2c33bc40cab"
Jul 10 23:38:57.663897 containerd[1734]: time="2025-07-10T23:38:57.663836389Z" level=error msg="ContainerStatus for \"75e341bdf856d2735b117337454fba8acc6c200560c509cb899cf2c33bc40cab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"75e341bdf856d2735b117337454fba8acc6c200560c509cb899cf2c33bc40cab\": not found"
Jul 10 23:38:57.664304 kubelet[3384]: E0710 23:38:57.664094 3384 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"75e341bdf856d2735b117337454fba8acc6c200560c509cb899cf2c33bc40cab\": not found" containerID="75e341bdf856d2735b117337454fba8acc6c200560c509cb899cf2c33bc40cab"
Jul 10 23:38:57.664304 kubelet[3384]: I0710 23:38:57.664122 3384 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"75e341bdf856d2735b117337454fba8acc6c200560c509cb899cf2c33bc40cab"} err="failed to get container status \"75e341bdf856d2735b117337454fba8acc6c200560c509cb899cf2c33bc40cab\": rpc error: code = NotFound desc = an error occurred when try to find container \"75e341bdf856d2735b117337454fba8acc6c200560c509cb899cf2c33bc40cab\": not found"
Jul 10 23:38:57.664304 kubelet[3384]: I0710 23:38:57.664141 3384 scope.go:117] "RemoveContainer" containerID="59f9f6caf38ba4962fe978916e0e4fc37dbc1b63f034046a82dd21d85def0a63"
Jul 10 23:38:57.664636 containerd[1734]: time="2025-07-10T23:38:57.664535509Z" level=error msg="ContainerStatus for \"59f9f6caf38ba4962fe978916e0e4fc37dbc1b63f034046a82dd21d85def0a63\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"59f9f6caf38ba4962fe978916e0e4fc37dbc1b63f034046a82dd21d85def0a63\": not found"
Jul 10 23:38:57.664929 containerd[1734]: time="2025-07-10T23:38:57.664842309Z" level=error msg="ContainerStatus for \"8b5cff66b2fbea5fa60f84db78588a051317c80f05d98f5d2c136a60488eeeeb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8b5cff66b2fbea5fa60f84db78588a051317c80f05d98f5d2c136a60488eeeeb\": not found"
Jul 10 23:38:57.664974 kubelet[3384]: E0710 23:38:57.664645 3384 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"59f9f6caf38ba4962fe978916e0e4fc37dbc1b63f034046a82dd21d85def0a63\": not found" containerID="59f9f6caf38ba4962fe978916e0e4fc37dbc1b63f034046a82dd21d85def0a63"
Jul 10 23:38:57.664974 kubelet[3384]: I0710 23:38:57.664666 3384 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"59f9f6caf38ba4962fe978916e0e4fc37dbc1b63f034046a82dd21d85def0a63"} err="failed to get container status \"59f9f6caf38ba4962fe978916e0e4fc37dbc1b63f034046a82dd21d85def0a63\": rpc error: code = NotFound desc = an error occurred when try to find container \"59f9f6caf38ba4962fe978916e0e4fc37dbc1b63f034046a82dd21d85def0a63\": not found"
Jul 10 23:38:57.664974 kubelet[3384]: I0710 23:38:57.664679 3384 scope.go:117] "RemoveContainer" containerID="8b5cff66b2fbea5fa60f84db78588a051317c80f05d98f5d2c136a60488eeeeb"
Jul 10 23:38:57.664974 kubelet[3384]: E0710 23:38:57.664925 3384 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8b5cff66b2fbea5fa60f84db78588a051317c80f05d98f5d2c136a60488eeeeb\": not found" containerID="8b5cff66b2fbea5fa60f84db78588a051317c80f05d98f5d2c136a60488eeeeb"
Jul 10 23:38:57.664974 kubelet[3384]: I0710 23:38:57.664939 3384 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8b5cff66b2fbea5fa60f84db78588a051317c80f05d98f5d2c136a60488eeeeb"} err="failed to get container status \"8b5cff66b2fbea5fa60f84db78588a051317c80f05d98f5d2c136a60488eeeeb\": rpc error: code = NotFound desc = an error occurred when try to find container \"8b5cff66b2fbea5fa60f84db78588a051317c80f05d98f5d2c136a60488eeeeb\": not found"
Jul 10 23:38:57.664974 kubelet[3384]: I0710 23:38:57.664949 3384 scope.go:117] "RemoveContainer" containerID="230a8395203da75139c999cc12cc6e825bfc0cf62acef17414c5d923288fb4e3"
Jul 10 23:38:57.665258 containerd[1734]: time="2025-07-10T23:38:57.665204870Z" level=error msg="ContainerStatus for \"230a8395203da75139c999cc12cc6e825bfc0cf62acef17414c5d923288fb4e3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"230a8395203da75139c999cc12cc6e825bfc0cf62acef17414c5d923288fb4e3\": not found"
Jul 10 23:38:57.665378 kubelet[3384]: E0710 23:38:57.665289 3384 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"230a8395203da75139c999cc12cc6e825bfc0cf62acef17414c5d923288fb4e3\": not found" containerID="230a8395203da75139c999cc12cc6e825bfc0cf62acef17414c5d923288fb4e3"
Jul 10 23:38:57.665378 kubelet[3384]: I0710 23:38:57.665303 3384 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"230a8395203da75139c999cc12cc6e825bfc0cf62acef17414c5d923288fb4e3"} err="failed to get container status \"230a8395203da75139c999cc12cc6e825bfc0cf62acef17414c5d923288fb4e3\": rpc error: code = NotFound desc = an error occurred when try to find container \"230a8395203da75139c999cc12cc6e825bfc0cf62acef17414c5d923288fb4e3\": not found"
Jul 10 23:38:57.791894 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01424d04aa3575f0af73c62cfe749f09a42e0ba91da8010cc75a9810944668d4-rootfs.mount: Deactivated successfully.
Jul 10 23:38:57.791984 systemd[1]: var-lib-kubelet-pods-c2d64835\x2d7430\x2d4efc\x2da1c4\x2d2e7a5427bd19-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6vqf8.mount: Deactivated successfully.
Jul 10 23:38:57.792041 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84d2aaa01230727e912d2d8873db6959f6fa26c891766eb44acd018296349329-rootfs.mount: Deactivated successfully.
Jul 10 23:38:57.792093 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-84d2aaa01230727e912d2d8873db6959f6fa26c891766eb44acd018296349329-shm.mount: Deactivated successfully.
Jul 10 23:38:57.792140 systemd[1]: var-lib-kubelet-pods-198ef552\x2dea7f\x2d4a87\x2d9962\x2d8b677436d88a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbhmhs.mount: Deactivated successfully.
Jul 10 23:38:57.792188 systemd[1]: var-lib-kubelet-pods-198ef552\x2dea7f\x2d4a87\x2d9962\x2d8b677436d88a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 10 23:38:57.792242 systemd[1]: var-lib-kubelet-pods-198ef552\x2dea7f\x2d4a87\x2d9962\x2d8b677436d88a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 10 23:38:58.790702 sshd[4990]: Connection closed by 10.200.16.10 port 36248
Jul 10 23:38:58.791320 sshd-session[4988]: pam_unix(sshd:session): session closed for user core
Jul 10 23:38:58.795045 systemd[1]: sshd@21-10.200.20.36:22-10.200.16.10:36248.service: Deactivated successfully.
Jul 10 23:38:58.797256 systemd[1]: session-24.scope: Deactivated successfully.
Jul 10 23:38:58.797597 systemd[1]: session-24.scope: Consumed 1.122s CPU time, 22.2M memory peak.
Jul 10 23:38:58.798174 systemd-logind[1707]: Session 24 logged out. Waiting for processes to exit.
Jul 10 23:38:58.799463 systemd-logind[1707]: Removed session 24.
Jul 10 23:38:58.888918 systemd[1]: Started sshd@22-10.200.20.36:22-10.200.16.10:36256.service - OpenSSH per-connection server daemon (10.200.16.10:36256).
Jul 10 23:38:59.171271 kubelet[3384]: I0710 23:38:59.170976 3384 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="198ef552-ea7f-4a87-9962-8b677436d88a" path="/var/lib/kubelet/pods/198ef552-ea7f-4a87-9962-8b677436d88a/volumes"
Jul 10 23:38:59.172345 kubelet[3384]: I0710 23:38:59.172108 3384 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2d64835-7430-4efc-a1c4-2e7a5427bd19" path="/var/lib/kubelet/pods/c2d64835-7430-4efc-a1c4-2e7a5427bd19/volumes"
Jul 10 23:38:59.194030 containerd[1734]: time="2025-07-10T23:38:59.193986080Z" level=info msg="StopPodSandbox for \"01424d04aa3575f0af73c62cfe749f09a42e0ba91da8010cc75a9810944668d4\""
Jul 10 23:38:59.194363 containerd[1734]: time="2025-07-10T23:38:59.194077040Z" level=info msg="TearDown network for sandbox \"01424d04aa3575f0af73c62cfe749f09a42e0ba91da8010cc75a9810944668d4\" successfully"
Jul 10 23:38:59.194363 containerd[1734]: time="2025-07-10T23:38:59.194090120Z" level=info msg="StopPodSandbox for \"01424d04aa3575f0af73c62cfe749f09a42e0ba91da8010cc75a9810944668d4\" returns successfully"
Jul 10 23:38:59.194888 containerd[1734]: time="2025-07-10T23:38:59.194857281Z" level=info msg="RemovePodSandbox for \"01424d04aa3575f0af73c62cfe749f09a42e0ba91da8010cc75a9810944668d4\""
Jul 10 23:38:59.194944 containerd[1734]: time="2025-07-10T23:38:59.194890761Z" level=info msg="Forcibly stopping sandbox \"01424d04aa3575f0af73c62cfe749f09a42e0ba91da8010cc75a9810944668d4\""
Jul 10 23:38:59.194944 containerd[1734]: time="2025-07-10T23:38:59.194939561Z" level=info msg="TearDown network for sandbox \"01424d04aa3575f0af73c62cfe749f09a42e0ba91da8010cc75a9810944668d4\" successfully"
Jul 10 23:38:59.216833 containerd[1734]: time="2025-07-10T23:38:59.216721209Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"01424d04aa3575f0af73c62cfe749f09a42e0ba91da8010cc75a9810944668d4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 10 23:38:59.216899 containerd[1734]: time="2025-07-10T23:38:59.216851809Z" level=info msg="RemovePodSandbox \"01424d04aa3575f0af73c62cfe749f09a42e0ba91da8010cc75a9810944668d4\" returns successfully"
Jul 10 23:38:59.217512 containerd[1734]: time="2025-07-10T23:38:59.217397009Z" level=info msg="StopPodSandbox for \"84d2aaa01230727e912d2d8873db6959f6fa26c891766eb44acd018296349329\""
Jul 10 23:38:59.217512 containerd[1734]: time="2025-07-10T23:38:59.217462129Z" level=info msg="TearDown network for sandbox \"84d2aaa01230727e912d2d8873db6959f6fa26c891766eb44acd018296349329\" successfully"
Jul 10 23:38:59.217512 containerd[1734]: time="2025-07-10T23:38:59.217471049Z" level=info msg="StopPodSandbox for \"84d2aaa01230727e912d2d8873db6959f6fa26c891766eb44acd018296349329\" returns successfully"
Jul 10 23:38:59.218576 containerd[1734]: time="2025-07-10T23:38:59.218424689Z" level=info msg="RemovePodSandbox for \"84d2aaa01230727e912d2d8873db6959f6fa26c891766eb44acd018296349329\""
Jul 10 23:38:59.218576 containerd[1734]: time="2025-07-10T23:38:59.218452689Z" level=info msg="Forcibly stopping sandbox \"84d2aaa01230727e912d2d8873db6959f6fa26c891766eb44acd018296349329\""
Jul 10 23:38:59.219275 containerd[1734]: time="2025-07-10T23:38:59.218751810Z" level=info msg="TearDown network for sandbox \"84d2aaa01230727e912d2d8873db6959f6fa26c891766eb44acd018296349329\" successfully"
Jul 10 23:38:59.230333 containerd[1734]: time="2025-07-10T23:38:59.230297854Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"84d2aaa01230727e912d2d8873db6959f6fa26c891766eb44acd018296349329\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 10 23:38:59.230487 containerd[1734]: time="2025-07-10T23:38:59.230469174Z" level=info msg="RemovePodSandbox \"84d2aaa01230727e912d2d8873db6959f6fa26c891766eb44acd018296349329\" returns successfully"
Jul 10 23:38:59.302139 kubelet[3384]: E0710 23:38:59.302096 3384 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 10 23:38:59.384935 sshd[5146]: Accepted publickey for core from 10.200.16.10 port 36256 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo
Jul 10 23:38:59.386265 sshd-session[5146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:38:59.390690 systemd-logind[1707]: New session 25 of user core.
Jul 10 23:38:59.394788 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 10 23:39:00.809870 systemd[1]: Created slice kubepods-burstable-pod125303e8_7f9f_44ba_ba2c_c47a6999a390.slice - libcontainer container kubepods-burstable-pod125303e8_7f9f_44ba_ba2c_c47a6999a390.slice.
Jul 10 23:39:00.835390 sshd[5150]: Connection closed by 10.200.16.10 port 36256
Jul 10 23:39:00.837091 sshd-session[5146]: pam_unix(sshd:session): session closed for user core
Jul 10 23:39:00.839603 systemd[1]: sshd@22-10.200.20.36:22-10.200.16.10:36256.service: Deactivated successfully.
Jul 10 23:39:00.843514 systemd[1]: session-25.scope: Deactivated successfully.
Jul 10 23:39:00.846218 systemd-logind[1707]: Session 25 logged out. Waiting for processes to exit.
Jul 10 23:39:00.847123 systemd-logind[1707]: Removed session 25.
Jul 10 23:39:00.900951 kubelet[3384]: I0710 23:39:00.900869 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/125303e8-7f9f-44ba-ba2c-c47a6999a390-cilium-run\") pod \"cilium-kp2p2\" (UID: \"125303e8-7f9f-44ba-ba2c-c47a6999a390\") " pod="kube-system/cilium-kp2p2" Jul 10 23:39:00.900951 kubelet[3384]: I0710 23:39:00.900908 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/125303e8-7f9f-44ba-ba2c-c47a6999a390-lib-modules\") pod \"cilium-kp2p2\" (UID: \"125303e8-7f9f-44ba-ba2c-c47a6999a390\") " pod="kube-system/cilium-kp2p2" Jul 10 23:39:00.900951 kubelet[3384]: I0710 23:39:00.900925 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/125303e8-7f9f-44ba-ba2c-c47a6999a390-etc-cni-netd\") pod \"cilium-kp2p2\" (UID: \"125303e8-7f9f-44ba-ba2c-c47a6999a390\") " pod="kube-system/cilium-kp2p2" Jul 10 23:39:00.901344 kubelet[3384]: I0710 23:39:00.900964 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fzfn\" (UniqueName: \"kubernetes.io/projected/125303e8-7f9f-44ba-ba2c-c47a6999a390-kube-api-access-6fzfn\") pod \"cilium-kp2p2\" (UID: \"125303e8-7f9f-44ba-ba2c-c47a6999a390\") " pod="kube-system/cilium-kp2p2" Jul 10 23:39:00.901344 kubelet[3384]: I0710 23:39:00.900993 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/125303e8-7f9f-44ba-ba2c-c47a6999a390-cilium-config-path\") pod \"cilium-kp2p2\" (UID: \"125303e8-7f9f-44ba-ba2c-c47a6999a390\") " pod="kube-system/cilium-kp2p2" Jul 10 23:39:00.901344 kubelet[3384]: I0710 23:39:00.901012 3384 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/125303e8-7f9f-44ba-ba2c-c47a6999a390-cilium-ipsec-secrets\") pod \"cilium-kp2p2\" (UID: \"125303e8-7f9f-44ba-ba2c-c47a6999a390\") " pod="kube-system/cilium-kp2p2" Jul 10 23:39:00.901344 kubelet[3384]: I0710 23:39:00.901027 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/125303e8-7f9f-44ba-ba2c-c47a6999a390-bpf-maps\") pod \"cilium-kp2p2\" (UID: \"125303e8-7f9f-44ba-ba2c-c47a6999a390\") " pod="kube-system/cilium-kp2p2" Jul 10 23:39:00.901344 kubelet[3384]: I0710 23:39:00.901044 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/125303e8-7f9f-44ba-ba2c-c47a6999a390-host-proc-sys-net\") pod \"cilium-kp2p2\" (UID: \"125303e8-7f9f-44ba-ba2c-c47a6999a390\") " pod="kube-system/cilium-kp2p2" Jul 10 23:39:00.901462 kubelet[3384]: I0710 23:39:00.901059 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/125303e8-7f9f-44ba-ba2c-c47a6999a390-hubble-tls\") pod \"cilium-kp2p2\" (UID: \"125303e8-7f9f-44ba-ba2c-c47a6999a390\") " pod="kube-system/cilium-kp2p2" Jul 10 23:39:00.901462 kubelet[3384]: I0710 23:39:00.901073 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/125303e8-7f9f-44ba-ba2c-c47a6999a390-xtables-lock\") pod \"cilium-kp2p2\" (UID: \"125303e8-7f9f-44ba-ba2c-c47a6999a390\") " pod="kube-system/cilium-kp2p2" Jul 10 23:39:00.901462 kubelet[3384]: I0710 23:39:00.901089 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/125303e8-7f9f-44ba-ba2c-c47a6999a390-clustermesh-secrets\") pod \"cilium-kp2p2\" (UID: \"125303e8-7f9f-44ba-ba2c-c47a6999a390\") " pod="kube-system/cilium-kp2p2" Jul 10 23:39:00.901462 kubelet[3384]: I0710 23:39:00.901105 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/125303e8-7f9f-44ba-ba2c-c47a6999a390-hostproc\") pod \"cilium-kp2p2\" (UID: \"125303e8-7f9f-44ba-ba2c-c47a6999a390\") " pod="kube-system/cilium-kp2p2" Jul 10 23:39:00.901462 kubelet[3384]: I0710 23:39:00.901119 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/125303e8-7f9f-44ba-ba2c-c47a6999a390-cni-path\") pod \"cilium-kp2p2\" (UID: \"125303e8-7f9f-44ba-ba2c-c47a6999a390\") " pod="kube-system/cilium-kp2p2" Jul 10 23:39:00.901462 kubelet[3384]: I0710 23:39:00.901134 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/125303e8-7f9f-44ba-ba2c-c47a6999a390-cilium-cgroup\") pod \"cilium-kp2p2\" (UID: \"125303e8-7f9f-44ba-ba2c-c47a6999a390\") " pod="kube-system/cilium-kp2p2" Jul 10 23:39:00.901578 kubelet[3384]: I0710 23:39:00.901147 3384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/125303e8-7f9f-44ba-ba2c-c47a6999a390-host-proc-sys-kernel\") pod \"cilium-kp2p2\" (UID: \"125303e8-7f9f-44ba-ba2c-c47a6999a390\") " pod="kube-system/cilium-kp2p2" Jul 10 23:39:00.927839 systemd[1]: Started sshd@23-10.200.20.36:22-10.200.16.10:60350.service - OpenSSH per-connection server daemon (10.200.16.10:60350). 
Jul 10 23:39:01.114893 containerd[1734]: time="2025-07-10T23:39:01.114743750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kp2p2,Uid:125303e8-7f9f-44ba-ba2c-c47a6999a390,Namespace:kube-system,Attempt:0,}" Jul 10 23:39:01.183284 containerd[1734]: time="2025-07-10T23:39:01.183157251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 23:39:01.183284 containerd[1734]: time="2025-07-10T23:39:01.183230491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 23:39:01.183284 containerd[1734]: time="2025-07-10T23:39:01.183245811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:39:01.183981 containerd[1734]: time="2025-07-10T23:39:01.183722691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 23:39:01.204784 systemd[1]: Started cri-containerd-d071522bf595acbd6a6471139aa4cd3e9a3b63100539e018faebd189e5a0327f.scope - libcontainer container d071522bf595acbd6a6471139aa4cd3e9a3b63100539e018faebd189e5a0327f. 
Jul 10 23:39:01.224739 containerd[1734]: time="2025-07-10T23:39:01.224693864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kp2p2,Uid:125303e8-7f9f-44ba-ba2c-c47a6999a390,Namespace:kube-system,Attempt:0,} returns sandbox id \"d071522bf595acbd6a6471139aa4cd3e9a3b63100539e018faebd189e5a0327f\"" Jul 10 23:39:01.235646 containerd[1734]: time="2025-07-10T23:39:01.235541108Z" level=info msg="CreateContainer within sandbox \"d071522bf595acbd6a6471139aa4cd3e9a3b63100539e018faebd189e5a0327f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 23:39:01.281490 containerd[1734]: time="2025-07-10T23:39:01.281444322Z" level=info msg="CreateContainer within sandbox \"d071522bf595acbd6a6471139aa4cd3e9a3b63100539e018faebd189e5a0327f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"809ae0cca038af07242b7dcbf02b902da5817f38a7af8767a7bff0ad9a00ebc3\"" Jul 10 23:39:01.283013 containerd[1734]: time="2025-07-10T23:39:01.282097242Z" level=info msg="StartContainer for \"809ae0cca038af07242b7dcbf02b902da5817f38a7af8767a7bff0ad9a00ebc3\"" Jul 10 23:39:01.306779 systemd[1]: Started cri-containerd-809ae0cca038af07242b7dcbf02b902da5817f38a7af8767a7bff0ad9a00ebc3.scope - libcontainer container 809ae0cca038af07242b7dcbf02b902da5817f38a7af8767a7bff0ad9a00ebc3. Jul 10 23:39:01.332290 containerd[1734]: time="2025-07-10T23:39:01.332162858Z" level=info msg="StartContainer for \"809ae0cca038af07242b7dcbf02b902da5817f38a7af8767a7bff0ad9a00ebc3\" returns successfully" Jul 10 23:39:01.337849 systemd[1]: cri-containerd-809ae0cca038af07242b7dcbf02b902da5817f38a7af8767a7bff0ad9a00ebc3.scope: Deactivated successfully. 
Jul 10 23:39:01.421511 containerd[1734]: time="2025-07-10T23:39:01.421028805Z" level=info msg="shim disconnected" id=809ae0cca038af07242b7dcbf02b902da5817f38a7af8767a7bff0ad9a00ebc3 namespace=k8s.io Jul 10 23:39:01.421511 containerd[1734]: time="2025-07-10T23:39:01.421203885Z" level=warning msg="cleaning up after shim disconnected" id=809ae0cca038af07242b7dcbf02b902da5817f38a7af8767a7bff0ad9a00ebc3 namespace=k8s.io Jul 10 23:39:01.421511 containerd[1734]: time="2025-07-10T23:39:01.421215045Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:39:01.422286 sshd[5162]: Accepted publickey for core from 10.200.16.10 port 60350 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo Jul 10 23:39:01.423693 sshd-session[5162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:39:01.434566 systemd-logind[1707]: New session 26 of user core. Jul 10 23:39:01.440023 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 10 23:39:01.584002 containerd[1734]: time="2025-07-10T23:39:01.583869376Z" level=info msg="CreateContainer within sandbox \"d071522bf595acbd6a6471139aa4cd3e9a3b63100539e018faebd189e5a0327f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 23:39:01.646706 containerd[1734]: time="2025-07-10T23:39:01.646658155Z" level=info msg="CreateContainer within sandbox \"d071522bf595acbd6a6471139aa4cd3e9a3b63100539e018faebd189e5a0327f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d50a1701027bd38f5ff83143513ec7e66596e5b4c30363ef7d0e99b42aded116\"" Jul 10 23:39:01.648687 containerd[1734]: time="2025-07-10T23:39:01.648066076Z" level=info msg="StartContainer for \"d50a1701027bd38f5ff83143513ec7e66596e5b4c30363ef7d0e99b42aded116\"" Jul 10 23:39:01.675878 systemd[1]: Started cri-containerd-d50a1701027bd38f5ff83143513ec7e66596e5b4c30363ef7d0e99b42aded116.scope - libcontainer container 
d50a1701027bd38f5ff83143513ec7e66596e5b4c30363ef7d0e99b42aded116. Jul 10 23:39:01.704552 containerd[1734]: time="2025-07-10T23:39:01.704500013Z" level=info msg="StartContainer for \"d50a1701027bd38f5ff83143513ec7e66596e5b4c30363ef7d0e99b42aded116\" returns successfully" Jul 10 23:39:01.708566 systemd[1]: cri-containerd-d50a1701027bd38f5ff83143513ec7e66596e5b4c30363ef7d0e99b42aded116.scope: Deactivated successfully. Jul 10 23:39:01.749220 containerd[1734]: time="2025-07-10T23:39:01.749153027Z" level=info msg="shim disconnected" id=d50a1701027bd38f5ff83143513ec7e66596e5b4c30363ef7d0e99b42aded116 namespace=k8s.io Jul 10 23:39:01.749220 containerd[1734]: time="2025-07-10T23:39:01.749213307Z" level=warning msg="cleaning up after shim disconnected" id=d50a1701027bd38f5ff83143513ec7e66596e5b4c30363ef7d0e99b42aded116 namespace=k8s.io Jul 10 23:39:01.749220 containerd[1734]: time="2025-07-10T23:39:01.749232067Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:39:01.784711 sshd[5270]: Connection closed by 10.200.16.10 port 60350 Jul 10 23:39:01.785070 sshd-session[5162]: pam_unix(sshd:session): session closed for user core Jul 10 23:39:01.787954 systemd-logind[1707]: Session 26 logged out. Waiting for processes to exit. Jul 10 23:39:01.788729 systemd[1]: sshd@23-10.200.20.36:22-10.200.16.10:60350.service: Deactivated successfully. Jul 10 23:39:01.790519 systemd[1]: session-26.scope: Deactivated successfully. Jul 10 23:39:01.793060 systemd-logind[1707]: Removed session 26. Jul 10 23:39:01.879318 systemd[1]: Started sshd@24-10.200.20.36:22-10.200.16.10:60362.service - OpenSSH per-connection server daemon (10.200.16.10:60362). 
Jul 10 23:39:02.374486 sshd[5337]: Accepted publickey for core from 10.200.16.10 port 60362 ssh2: RSA SHA256:MRWiT2m5xbwSE4Nwnya++Fyqw3vZsniZxlGhvbxVqjo Jul 10 23:39:02.375772 sshd-session[5337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:39:02.379985 systemd-logind[1707]: New session 27 of user core. Jul 10 23:39:02.382860 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 10 23:39:02.440210 kubelet[3384]: I0710 23:39:02.440150 3384 setters.go:618] "Node became not ready" node="ci-4230.2.1-n-2b36f27b4a" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-10T23:39:02Z","lastTransitionTime":"2025-07-10T23:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 10 23:39:02.591746 containerd[1734]: time="2025-07-10T23:39:02.591599489Z" level=info msg="CreateContainer within sandbox \"d071522bf595acbd6a6471139aa4cd3e9a3b63100539e018faebd189e5a0327f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 23:39:02.670139 containerd[1734]: time="2025-07-10T23:39:02.670025313Z" level=info msg="CreateContainer within sandbox \"d071522bf595acbd6a6471139aa4cd3e9a3b63100539e018faebd189e5a0327f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7e87ad6d4b6132b0690e6d2eb3a8fb07e223e6bea339ccde20ca021718089086\"" Jul 10 23:39:02.671464 containerd[1734]: time="2025-07-10T23:39:02.670848994Z" level=info msg="StartContainer for \"7e87ad6d4b6132b0690e6d2eb3a8fb07e223e6bea339ccde20ca021718089086\"" Jul 10 23:39:02.708794 systemd[1]: Started cri-containerd-7e87ad6d4b6132b0690e6d2eb3a8fb07e223e6bea339ccde20ca021718089086.scope - libcontainer container 7e87ad6d4b6132b0690e6d2eb3a8fb07e223e6bea339ccde20ca021718089086. 
Jul 10 23:39:02.747128 systemd[1]: cri-containerd-7e87ad6d4b6132b0690e6d2eb3a8fb07e223e6bea339ccde20ca021718089086.scope: Deactivated successfully. Jul 10 23:39:02.748192 containerd[1734]: time="2025-07-10T23:39:02.748153298Z" level=info msg="StartContainer for \"7e87ad6d4b6132b0690e6d2eb3a8fb07e223e6bea339ccde20ca021718089086\" returns successfully" Jul 10 23:39:02.798443 containerd[1734]: time="2025-07-10T23:39:02.798383953Z" level=info msg="shim disconnected" id=7e87ad6d4b6132b0690e6d2eb3a8fb07e223e6bea339ccde20ca021718089086 namespace=k8s.io Jul 10 23:39:02.798443 containerd[1734]: time="2025-07-10T23:39:02.798437313Z" level=warning msg="cleaning up after shim disconnected" id=7e87ad6d4b6132b0690e6d2eb3a8fb07e223e6bea339ccde20ca021718089086 namespace=k8s.io Jul 10 23:39:02.798443 containerd[1734]: time="2025-07-10T23:39:02.798446073Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:39:03.007235 systemd[1]: run-containerd-runc-k8s.io-7e87ad6d4b6132b0690e6d2eb3a8fb07e223e6bea339ccde20ca021718089086-runc.Mi6qUU.mount: Deactivated successfully. Jul 10 23:39:03.007339 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e87ad6d4b6132b0690e6d2eb3a8fb07e223e6bea339ccde20ca021718089086-rootfs.mount: Deactivated successfully. 
Jul 10 23:39:03.591811 containerd[1734]: time="2025-07-10T23:39:03.591774720Z" level=info msg="CreateContainer within sandbox \"d071522bf595acbd6a6471139aa4cd3e9a3b63100539e018faebd189e5a0327f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 23:39:03.633477 containerd[1734]: time="2025-07-10T23:39:03.633433293Z" level=info msg="CreateContainer within sandbox \"d071522bf595acbd6a6471139aa4cd3e9a3b63100539e018faebd189e5a0327f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"76074c0aa88d33f64f06001cd296b28fea02e8b5fd9d939293895cc538937fd9\"" Jul 10 23:39:03.634473 containerd[1734]: time="2025-07-10T23:39:03.634272573Z" level=info msg="StartContainer for \"76074c0aa88d33f64f06001cd296b28fea02e8b5fd9d939293895cc538937fd9\"" Jul 10 23:39:03.660776 systemd[1]: Started cri-containerd-76074c0aa88d33f64f06001cd296b28fea02e8b5fd9d939293895cc538937fd9.scope - libcontainer container 76074c0aa88d33f64f06001cd296b28fea02e8b5fd9d939293895cc538937fd9. Jul 10 23:39:03.680919 systemd[1]: cri-containerd-76074c0aa88d33f64f06001cd296b28fea02e8b5fd9d939293895cc538937fd9.scope: Deactivated successfully. 
Jul 10 23:39:03.689533 containerd[1734]: time="2025-07-10T23:39:03.689443350Z" level=info msg="StartContainer for \"76074c0aa88d33f64f06001cd296b28fea02e8b5fd9d939293895cc538937fd9\" returns successfully" Jul 10 23:39:03.739578 containerd[1734]: time="2025-07-10T23:39:03.739515366Z" level=info msg="shim disconnected" id=76074c0aa88d33f64f06001cd296b28fea02e8b5fd9d939293895cc538937fd9 namespace=k8s.io Jul 10 23:39:03.739578 containerd[1734]: time="2025-07-10T23:39:03.739570566Z" level=warning msg="cleaning up after shim disconnected" id=76074c0aa88d33f64f06001cd296b28fea02e8b5fd9d939293895cc538937fd9 namespace=k8s.io Jul 10 23:39:03.739578 containerd[1734]: time="2025-07-10T23:39:03.739579406Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:39:04.007229 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76074c0aa88d33f64f06001cd296b28fea02e8b5fd9d939293895cc538937fd9-rootfs.mount: Deactivated successfully. Jul 10 23:39:04.303641 kubelet[3384]: E0710 23:39:04.303475 3384 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 10 23:39:04.594438 containerd[1734]: time="2025-07-10T23:39:04.594335071Z" level=info msg="CreateContainer within sandbox \"d071522bf595acbd6a6471139aa4cd3e9a3b63100539e018faebd189e5a0327f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 23:39:04.646678 containerd[1734]: time="2025-07-10T23:39:04.646605447Z" level=info msg="CreateContainer within sandbox \"d071522bf595acbd6a6471139aa4cd3e9a3b63100539e018faebd189e5a0327f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2c8f7554eb265e994f9c480b2850fd729c7a60f6af7efc2d6ba5ec70a0813675\"" Jul 10 23:39:04.647356 containerd[1734]: time="2025-07-10T23:39:04.647208848Z" level=info msg="StartContainer for \"2c8f7554eb265e994f9c480b2850fd729c7a60f6af7efc2d6ba5ec70a0813675\"" Jul 10 
23:39:04.675761 systemd[1]: Started cri-containerd-2c8f7554eb265e994f9c480b2850fd729c7a60f6af7efc2d6ba5ec70a0813675.scope - libcontainer container 2c8f7554eb265e994f9c480b2850fd729c7a60f6af7efc2d6ba5ec70a0813675. Jul 10 23:39:04.703609 containerd[1734]: time="2025-07-10T23:39:04.703495705Z" level=info msg="StartContainer for \"2c8f7554eb265e994f9c480b2850fd729c7a60f6af7efc2d6ba5ec70a0813675\" returns successfully" Jul 10 23:39:05.148648 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jul 10 23:39:05.618000 kubelet[3384]: I0710 23:39:05.617868 3384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kp2p2" podStartSLOduration=5.617854069 podStartE2EDuration="5.617854069s" podCreationTimestamp="2025-07-10 23:39:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:39:05.617853029 +0000 UTC m=+186.534267246" watchObservedRunningTime="2025-07-10 23:39:05.617854069 +0000 UTC m=+186.534268246" Jul 10 23:39:06.859446 systemd[1]: run-containerd-runc-k8s.io-2c8f7554eb265e994f9c480b2850fd729c7a60f6af7efc2d6ba5ec70a0813675-runc.qo2Vx2.mount: Deactivated successfully. 
Jul 10 23:39:07.887452 systemd-networkd[1457]: lxc_health: Link UP Jul 10 23:39:07.903211 systemd-networkd[1457]: lxc_health: Gained carrier Jul 10 23:39:08.169380 kubelet[3384]: E0710 23:39:08.168990 3384 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-5kpbj" podUID="d3e8f56f-8960-4ab8-813d-1fae3ba4cc24" Jul 10 23:39:09.754831 systemd-networkd[1457]: lxc_health: Gained IPv6LL Jul 10 23:39:13.263374 systemd[1]: run-containerd-runc-k8s.io-2c8f7554eb265e994f9c480b2850fd729c7a60f6af7efc2d6ba5ec70a0813675-runc.Mwt6Iw.mount: Deactivated successfully. Jul 10 23:39:13.396576 sshd[5339]: Connection closed by 10.200.16.10 port 60362 Jul 10 23:39:13.397214 sshd-session[5337]: pam_unix(sshd:session): session closed for user core Jul 10 23:39:13.400556 systemd[1]: sshd@24-10.200.20.36:22-10.200.16.10:60362.service: Deactivated successfully. Jul 10 23:39:13.403179 systemd[1]: session-27.scope: Deactivated successfully. Jul 10 23:39:13.404102 systemd-logind[1707]: Session 27 logged out. Waiting for processes to exit. Jul 10 23:39:13.405145 systemd-logind[1707]: Removed session 27.