May 13 23:42:17.413627 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 13 23:42:17.413652 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue May 13 22:16:18 -00 2025 May 13 23:42:17.413684 kernel: KASLR enabled May 13 23:42:17.413706 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') May 13 23:42:17.413714 kernel: printk: bootconsole [pl11] enabled May 13 23:42:17.413720 kernel: efi: EFI v2.7 by EDK II May 13 23:42:17.413728 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3eac7018 RNG=0x3fd5f998 MEMRESERVE=0x3e477598 May 13 23:42:17.413734 kernel: random: crng init done May 13 23:42:17.413740 kernel: secureboot: Secure boot disabled May 13 23:42:17.413746 kernel: ACPI: Early table checksum verification disabled May 13 23:42:17.413753 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) May 13 23:42:17.413759 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:42:17.413766 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:42:17.413774 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) May 13 23:42:17.413782 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:42:17.413800 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:42:17.413833 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:42:17.413842 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:42:17.413849 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:42:17.413855 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 
VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:42:17.413862 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) May 13 23:42:17.413869 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:42:17.413875 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 May 13 23:42:17.413882 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] May 13 23:42:17.413888 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] May 13 23:42:17.413895 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] May 13 23:42:17.413910 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] May 13 23:42:17.413961 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] May 13 23:42:17.413972 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] May 13 23:42:17.413979 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] May 13 23:42:17.413985 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] May 13 23:42:17.413992 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] May 13 23:42:17.413999 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] May 13 23:42:17.414005 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] May 13 23:42:17.414012 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] May 13 23:42:17.414018 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] May 13 23:42:17.414024 kernel: Zone ranges: May 13 23:42:17.414031 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] May 13 23:42:17.414058 kernel: DMA32 empty May 13 23:42:17.414082 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] May 13 23:42:17.414093 kernel: Movable zone start for each node May 13 23:42:17.414100 kernel: Early memory node ranges May 13 23:42:17.414106 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] May 13 23:42:17.414113 kernel: node 0: [mem 
0x0000000000824000-0x000000003e45ffff] May 13 23:42:17.414142 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff] May 13 23:42:17.414167 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff] May 13 23:42:17.414174 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] May 13 23:42:17.414181 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] May 13 23:42:17.414187 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] May 13 23:42:17.414194 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] May 13 23:42:17.414201 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] May 13 23:42:17.414208 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] May 13 23:42:17.414215 kernel: On node 0, zone DMA: 36 pages in unavailable ranges May 13 23:42:17.414222 kernel: psci: probing for conduit method from ACPI. May 13 23:42:17.414229 kernel: psci: PSCIv1.1 detected in firmware. May 13 23:42:17.414235 kernel: psci: Using standard PSCI v0.2 function IDs May 13 23:42:17.414250 kernel: psci: MIGRATE_INFO_TYPE not supported. 
May 13 23:42:17.414290 kernel: psci: SMC Calling Convention v1.4 May 13 23:42:17.414297 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 May 13 23:42:17.414304 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 May 13 23:42:17.414311 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 13 23:42:17.414318 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 13 23:42:17.414325 kernel: pcpu-alloc: [0] 0 [0] 1 May 13 23:42:17.414332 kernel: Detected PIPT I-cache on CPU0 May 13 23:42:17.414340 kernel: CPU features: detected: GIC system register CPU interface May 13 23:42:17.414347 kernel: CPU features: detected: Hardware dirty bit management May 13 23:42:17.414353 kernel: CPU features: detected: Spectre-BHB May 13 23:42:17.414360 kernel: CPU features: kernel page table isolation forced ON by KASLR May 13 23:42:17.414369 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 13 23:42:17.414382 kernel: CPU features: detected: ARM erratum 1418040 May 13 23:42:17.414418 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) May 13 23:42:17.414427 kernel: CPU features: detected: SSBS not fully self-synchronizing May 13 23:42:17.414434 kernel: alternatives: applying boot alternatives May 13 23:42:17.414442 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=3174b2682629aa8ad4069807ed6fd62c10f62266ee1e150a1104f2a2fb6489b5 May 13 23:42:17.414450 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
May 13 23:42:17.414457 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 13 23:42:17.414464 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 13 23:42:17.414471 kernel: Fallback order for Node 0: 0 May 13 23:42:17.414477 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 May 13 23:42:17.414486 kernel: Policy zone: Normal May 13 23:42:17.414493 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 13 23:42:17.414500 kernel: software IO TLB: area num 2. May 13 23:42:17.414530 kernel: software IO TLB: mapped [mem 0x0000000036520000-0x000000003a520000] (64MB) May 13 23:42:17.414551 kernel: Memory: 3983464K/4194160K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38464K init, 897K bss, 210696K reserved, 0K cma-reserved) May 13 23:42:17.414558 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 13 23:42:17.414565 kernel: rcu: Preemptible hierarchical RCU implementation. May 13 23:42:17.414573 kernel: rcu: RCU event tracing is enabled. May 13 23:42:17.414580 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 13 23:42:17.414587 kernel: Trampoline variant of Tasks RCU enabled. May 13 23:42:17.414594 kernel: Tracing variant of Tasks RCU enabled. May 13 23:42:17.414603 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 13 23:42:17.414610 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 13 23:42:17.414638 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 13 23:42:17.414660 kernel: GICv3: 960 SPIs implemented May 13 23:42:17.414667 kernel: GICv3: 0 Extended SPIs implemented May 13 23:42:17.414674 kernel: Root IRQ handler: gic_handle_irq May 13 23:42:17.414681 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 13 23:42:17.414688 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 May 13 23:42:17.414694 kernel: ITS: No ITS available, not enabling LPIs May 13 23:42:17.414713 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 13 23:42:17.414742 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:42:17.414764 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 13 23:42:17.414773 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 13 23:42:17.414780 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 13 23:42:17.414787 kernel: Console: colour dummy device 80x25 May 13 23:42:17.414795 kernel: printk: console [tty1] enabled May 13 23:42:17.414802 kernel: ACPI: Core revision 20230628 May 13 23:42:17.414810 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 13 23:42:17.414817 kernel: pid_max: default: 32768 minimum: 301 May 13 23:42:17.414824 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 13 23:42:17.414831 kernel: landlock: Up and running. May 13 23:42:17.414840 kernel: SELinux: Initializing. 
May 13 23:42:17.414851 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 23:42:17.414889 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 23:42:17.414898 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 13 23:42:17.414906 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 13 23:42:17.414913 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 May 13 23:42:17.421212 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 May 13 23:42:17.421261 kernel: Hyper-V: enabling crash_kexec_post_notifiers May 13 23:42:17.421271 kernel: rcu: Hierarchical SRCU implementation. May 13 23:42:17.421281 kernel: rcu: Max phase no-delay instances is 400. May 13 23:42:17.421289 kernel: Remapping and enabling EFI services. May 13 23:42:17.421297 kernel: smp: Bringing up secondary CPUs ... May 13 23:42:17.421307 kernel: Detected PIPT I-cache on CPU1 May 13 23:42:17.421315 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 May 13 23:42:17.421323 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:42:17.421331 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 13 23:42:17.421339 kernel: smp: Brought up 1 node, 2 CPUs May 13 23:42:17.421349 kernel: SMP: Total of 2 processors activated. 
May 13 23:42:17.421357 kernel: CPU features: detected: 32-bit EL0 Support May 13 23:42:17.421365 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence May 13 23:42:17.421373 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 13 23:42:17.421381 kernel: CPU features: detected: CRC32 instructions May 13 23:42:17.421388 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 13 23:42:17.421396 kernel: CPU features: detected: LSE atomic instructions May 13 23:42:17.421404 kernel: CPU features: detected: Privileged Access Never May 13 23:42:17.421412 kernel: CPU: All CPU(s) started at EL1 May 13 23:42:17.421422 kernel: alternatives: applying system-wide alternatives May 13 23:42:17.421430 kernel: devtmpfs: initialized May 13 23:42:17.421438 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 23:42:17.421446 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 13 23:42:17.421454 kernel: pinctrl core: initialized pinctrl subsystem May 13 23:42:17.421462 kernel: SMBIOS 3.1.0 present. 
May 13 23:42:17.421470 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 May 13 23:42:17.421479 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 13 23:42:17.421493 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 13 23:42:17.421503 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 13 23:42:17.421511 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 13 23:42:17.421519 kernel: audit: initializing netlink subsys (disabled) May 13 23:42:17.421527 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 May 13 23:42:17.421535 kernel: thermal_sys: Registered thermal governor 'step_wise' May 13 23:42:17.421543 kernel: cpuidle: using governor menu May 13 23:42:17.421550 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 13 23:42:17.421558 kernel: ASID allocator initialised with 32768 entries May 13 23:42:17.421577 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 13 23:42:17.421587 kernel: Serial: AMBA PL011 UART driver May 13 23:42:17.421595 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 13 23:42:17.421603 kernel: Modules: 0 pages in range for non-PLT usage May 13 23:42:17.421612 kernel: Modules: 509232 pages in range for PLT usage May 13 23:42:17.421619 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 13 23:42:17.421627 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 13 23:42:17.421635 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 13 23:42:17.421643 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 13 23:42:17.421651 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 13 23:42:17.421661 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 13 23:42:17.421669 kernel: HugeTLB: 
registered 64.0 KiB page size, pre-allocated 0 pages May 13 23:42:17.421677 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 13 23:42:17.421685 kernel: ACPI: Added _OSI(Module Device) May 13 23:42:17.421692 kernel: ACPI: Added _OSI(Processor Device) May 13 23:42:17.421700 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 13 23:42:17.421709 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 13 23:42:17.421716 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 13 23:42:17.421724 kernel: ACPI: Interpreter enabled May 13 23:42:17.421735 kernel: ACPI: Using GIC for interrupt routing May 13 23:42:17.421743 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA May 13 23:42:17.421750 kernel: printk: console [ttyAMA0] enabled May 13 23:42:17.421758 kernel: printk: bootconsole [pl11] disabled May 13 23:42:17.421766 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA May 13 23:42:17.421774 kernel: iommu: Default domain type: Translated May 13 23:42:17.421782 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 13 23:42:17.421790 kernel: efivars: Registered efivars operations May 13 23:42:17.421798 kernel: vgaarb: loaded May 13 23:42:17.421808 kernel: clocksource: Switched to clocksource arch_sys_counter May 13 23:42:17.421816 kernel: VFS: Disk quotas dquot_6.6.0 May 13 23:42:17.421824 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 23:42:17.421832 kernel: pnp: PnP ACPI init May 13 23:42:17.421840 kernel: pnp: PnP ACPI: found 0 devices May 13 23:42:17.421847 kernel: NET: Registered PF_INET protocol family May 13 23:42:17.421855 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 13 23:42:17.421863 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 13 23:42:17.421871 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 
23:42:17.421881 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 13 23:42:17.421889 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 13 23:42:17.421896 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 13 23:42:17.421904 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 23:42:17.421912 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 23:42:17.421919 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 23:42:17.421944 kernel: PCI: CLS 0 bytes, default 64 May 13 23:42:17.421952 kernel: kvm [1]: HYP mode not available May 13 23:42:17.421960 kernel: Initialise system trusted keyrings May 13 23:42:17.421969 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 13 23:42:17.421977 kernel: Key type asymmetric registered May 13 23:42:17.421984 kernel: Asymmetric key parser 'x509' registered May 13 23:42:17.421992 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 13 23:42:17.421999 kernel: io scheduler mq-deadline registered May 13 23:42:17.422007 kernel: io scheduler kyber registered May 13 23:42:17.422015 kernel: io scheduler bfq registered May 13 23:42:17.422022 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 23:42:17.422029 kernel: thunder_xcv, ver 1.0 May 13 23:42:17.422039 kernel: thunder_bgx, ver 1.0 May 13 23:42:17.422047 kernel: nicpf, ver 1.0 May 13 23:42:17.422054 kernel: nicvf, ver 1.0 May 13 23:42:17.422235 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 13 23:42:17.422313 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T23:42:16 UTC (1747179736) May 13 23:42:17.422324 kernel: efifb: probing for efifb May 13 23:42:17.422332 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k May 13 23:42:17.422339 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 May 13 23:42:17.422349 kernel: efifb: scrolling: 
redraw May 13 23:42:17.422357 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 13 23:42:17.422364 kernel: Console: switching to colour frame buffer device 128x48 May 13 23:42:17.422371 kernel: fb0: EFI VGA frame buffer device May 13 23:42:17.422379 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... May 13 23:42:17.422386 kernel: hid: raw HID events driver (C) Jiri Kosina May 13 23:42:17.422393 kernel: No ACPI PMU IRQ for CPU0 May 13 23:42:17.422401 kernel: No ACPI PMU IRQ for CPU1 May 13 23:42:17.422408 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available May 13 23:42:17.422418 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 13 23:42:17.422425 kernel: watchdog: Hard watchdog permanently disabled May 13 23:42:17.422433 kernel: NET: Registered PF_INET6 protocol family May 13 23:42:17.422440 kernel: Segment Routing with IPv6 May 13 23:42:17.422448 kernel: In-situ OAM (IOAM) with IPv6 May 13 23:42:17.422455 kernel: NET: Registered PF_PACKET protocol family May 13 23:42:17.422463 kernel: Key type dns_resolver registered May 13 23:42:17.422470 kernel: registered taskstats version 1 May 13 23:42:17.422478 kernel: Loading compiled-in X.509 certificates May 13 23:42:17.422488 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 568a15bbab977599d8f910f319ba50c03c8a57bd' May 13 23:42:17.422495 kernel: Key type .fscrypt registered May 13 23:42:17.422503 kernel: Key type fscrypt-provisioning registered May 13 23:42:17.422510 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 13 23:42:17.422518 kernel: ima: Allocated hash algorithm: sha1 May 13 23:42:17.422526 kernel: ima: No architecture policies found May 13 23:42:17.422533 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 13 23:42:17.422541 kernel: clk: Disabling unused clocks May 13 23:42:17.422548 kernel: Freeing unused kernel memory: 38464K May 13 23:42:17.422559 kernel: Run /init as init process May 13 23:42:17.422566 kernel: with arguments: May 13 23:42:17.422574 kernel: /init May 13 23:42:17.422581 kernel: with environment: May 13 23:42:17.422588 kernel: HOME=/ May 13 23:42:17.422595 kernel: TERM=linux May 13 23:42:17.422603 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 23:42:17.422611 systemd[1]: Successfully made /usr/ read-only. May 13 23:42:17.422650 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 23:42:17.422665 systemd[1]: Detected virtualization microsoft. May 13 23:42:17.422673 systemd[1]: Detected architecture arm64. May 13 23:42:17.422681 systemd[1]: Running in initrd. May 13 23:42:17.422689 systemd[1]: No hostname configured, using default hostname. May 13 23:42:17.422697 systemd[1]: Hostname set to . May 13 23:42:17.422705 systemd[1]: Initializing machine ID from random generator. May 13 23:42:17.422713 systemd[1]: Queued start job for default target initrd.target. May 13 23:42:17.422724 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:42:17.422732 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
May 13 23:42:17.422742 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 13 23:42:17.422750 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 23:42:17.422758 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 13 23:42:17.422767 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 13 23:42:17.422778 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 13 23:42:17.422786 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 13 23:42:17.422794 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:42:17.422802 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 23:42:17.422811 systemd[1]: Reached target paths.target - Path Units. May 13 23:42:17.422818 systemd[1]: Reached target slices.target - Slice Units. May 13 23:42:17.422826 systemd[1]: Reached target swap.target - Swaps. May 13 23:42:17.422834 systemd[1]: Reached target timers.target - Timer Units. May 13 23:42:17.422843 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 13 23:42:17.422853 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 23:42:17.422861 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 13 23:42:17.422869 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 13 23:42:17.422878 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 23:42:17.422886 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 23:42:17.422894 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
May 13 23:42:17.422903 systemd[1]: Reached target sockets.target - Socket Units. May 13 23:42:17.422911 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 13 23:42:17.422919 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 23:42:17.422945 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 13 23:42:17.422954 systemd[1]: Starting systemd-fsck-usr.service... May 13 23:42:17.422962 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 23:42:17.422970 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 23:42:17.423005 systemd-journald[218]: Collecting audit messages is disabled. May 13 23:42:17.423029 systemd-journald[218]: Journal started May 13 23:42:17.423049 systemd-journald[218]: Runtime Journal (/run/log/journal/dc6669d456ad47b69ee587d96fda64ee) is 8M, max 78.5M, 70.5M free. May 13 23:42:17.421540 systemd-modules-load[220]: Inserted module 'overlay' May 13 23:42:17.440959 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:42:17.441033 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 23:42:17.464502 systemd-modules-load[220]: Inserted module 'br_netfilter' May 13 23:42:17.469697 kernel: Bridge firewalling registered May 13 23:42:17.487953 systemd[1]: Started systemd-journald.service - Journal Service. May 13 23:42:17.488605 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 13 23:42:17.498005 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:42:17.507435 systemd[1]: Finished systemd-fsck-usr.service. May 13 23:42:17.514953 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 23:42:17.525785 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 13 23:42:17.543072 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:42:17.575947 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 23:42:17.586095 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 23:42:17.612335 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 23:42:17.641401 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:42:17.659091 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 23:42:17.667916 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 23:42:17.682721 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:42:17.702107 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 13 23:42:17.732267 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 23:42:17.750768 dracut-cmdline[252]: dracut-dracut-053 May 13 23:42:17.757986 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=3174b2682629aa8ad4069807ed6fd62c10f62266ee1e150a1104f2a2fb6489b5 May 13 23:42:17.755115 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 23:42:17.824818 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
May 13 23:42:17.851720 systemd-resolved[253]: Positive Trust Anchors: May 13 23:42:17.851747 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 23:42:17.851778 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 23:42:17.854232 systemd-resolved[253]: Defaulting to hostname 'linux'. May 13 23:42:17.856773 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 23:42:17.866126 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 23:42:17.992957 kernel: SCSI subsystem initialized May 13 23:42:18.002964 kernel: Loading iSCSI transport class v2.0-870. May 13 23:42:18.011958 kernel: iscsi: registered transport (tcp) May 13 23:42:18.030962 kernel: iscsi: registered transport (qla4xxx) May 13 23:42:18.031035 kernel: QLogic iSCSI HBA Driver May 13 23:42:18.077124 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 13 23:42:18.089161 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 13 23:42:18.138553 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 13 23:42:18.138622 kernel: device-mapper: uevent: version 1.0.3 May 13 23:42:18.145355 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 13 23:42:18.196961 kernel: raid6: neonx8 gen() 15751 MB/s May 13 23:42:18.216937 kernel: raid6: neonx4 gen() 15804 MB/s May 13 23:42:18.236934 kernel: raid6: neonx2 gen() 13201 MB/s May 13 23:42:18.257934 kernel: raid6: neonx1 gen() 10520 MB/s May 13 23:42:18.277933 kernel: raid6: int64x8 gen() 6795 MB/s May 13 23:42:18.297932 kernel: raid6: int64x4 gen() 7357 MB/s May 13 23:42:18.318935 kernel: raid6: int64x2 gen() 6111 MB/s May 13 23:42:18.342553 kernel: raid6: int64x1 gen() 5059 MB/s May 13 23:42:18.342564 kernel: raid6: using algorithm neonx4 gen() 15804 MB/s May 13 23:42:18.366551 kernel: raid6: .... xor() 12417 MB/s, rmw enabled May 13 23:42:18.366576 kernel: raid6: using neon recovery algorithm May 13 23:42:18.379285 kernel: xor: measuring software checksum speed May 13 23:42:18.379317 kernel: 8regs : 21630 MB/sec May 13 23:42:18.382866 kernel: 32regs : 21601 MB/sec May 13 23:42:18.386378 kernel: arm64_neon : 27917 MB/sec May 13 23:42:18.392390 kernel: xor: using function: arm64_neon (27917 MB/sec) May 13 23:42:18.443953 kernel: Btrfs loaded, zoned=no, fsverity=no May 13 23:42:18.456137 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 13 23:42:18.469454 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:42:18.505375 systemd-udevd[438]: Using default interface naming scheme 'v255'. May 13 23:42:18.511384 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:42:18.530472 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 13 23:42:18.562312 dracut-pre-trigger[450]: rd.md=0: removing MD RAID activation May 13 23:42:18.594487 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
May 13 23:42:18.605133 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 23:42:18.665042 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:42:18.686332 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 23:42:18.721981 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 23:42:18.742756 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 23:42:18.751909 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:42:18.769997 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 23:42:18.792168 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 23:42:18.838002 kernel: hv_vmbus: Vmbus version:5.3
May 13 23:42:18.845826 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 23:42:18.846048 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:42:18.870245 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:42:18.912690 kernel: hv_vmbus: registering driver hyperv_keyboard
May 13 23:42:18.912720 kernel: hv_vmbus: registering driver hid_hyperv
May 13 23:42:18.912732 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
May 13 23:42:18.904897 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:42:18.969767 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
May 13 23:42:18.969796 kernel: pps_core: LinuxPPS API ver. 1 registered
May 13 23:42:18.969807 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
May 13 23:42:18.970004 kernel: hv_vmbus: registering driver hv_netvsc
May 13 23:42:18.970016 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 13 23:42:18.912590 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:42:18.989016 kernel: hv_vmbus: registering driver hv_storvsc
May 13 23:42:18.989060 kernel: PTP clock support registered
May 13 23:42:18.989070 kernel: scsi host1: storvsc_host_t
May 13 23:42:18.989265 kernel: scsi host0: storvsc_host_t
May 13 23:42:18.961060 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:42:19.015864 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
May 13 23:42:19.015918 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
May 13 23:42:19.006550 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:42:19.033763 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 23:42:19.050716 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 23:42:19.154230 kernel: hv_utils: Registering HyperV Utility Driver
May 13 23:42:19.154255 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
May 13 23:42:19.154444 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 13 23:42:19.154456 kernel: hv_vmbus: registering driver hv_utils
May 13 23:42:19.154474 kernel: hv_utils: Heartbeat IC version 3.0
May 13 23:42:19.154485 kernel: hv_utils: Shutdown IC version 3.2
May 13 23:42:19.154494 kernel: hv_utils: TimeSync IC version 4.0
May 13 23:42:19.154504 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
May 13 23:42:19.154595 kernel: hv_netvsc 000d3af7-7448-000d-3af7-7448000d3af7 eth0: VF slot 1 added
May 13 23:42:19.135473 systemd-resolved[253]: Clock change detected. Flushing caches.
May 13 23:42:19.178384 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:42:19.219032 kernel: hv_vmbus: registering driver hv_pci
May 13 23:42:19.219060 kernel: hv_pci e0b40d6c-2a4e-4e8b-b803-0e34447c6c74: PCI VMBus probing: Using version 0x10004
May 13 23:42:19.178483 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:42:19.287408 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
May 13 23:42:19.291161 kernel: hv_pci e0b40d6c-2a4e-4e8b-b803-0e34447c6c74: PCI host bridge to bus 2a4e:00
May 13 23:42:19.291308 kernel: pci_bus 2a4e:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
May 13 23:42:19.291426 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
May 13 23:42:19.291518 kernel: pci_bus 2a4e:00: No busn resource found for root bus, will use [bus 00-ff]
May 13 23:42:19.291598 kernel: pci 2a4e:00:02.0: [15b3:1018] type 00 class 0x020000
May 13 23:42:19.291621 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 13 23:42:19.291704 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
May 13 23:42:19.291788 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
May 13 23:42:19.291875 kernel: pci 2a4e:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
May 13 23:42:19.291892 kernel: pci 2a4e:00:02.0: enabling Extended Tags
May 13 23:42:19.201811 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 23:42:19.345159 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 13 23:42:19.345204 kernel: pci 2a4e:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 2a4e:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
May 13 23:42:19.345608 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 13 23:42:19.345746 kernel: pci_bus 2a4e:00: busn_res: [bus 00-ff] end is updated to 00
May 13 23:42:19.345873 kernel: pci 2a4e:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
May 13 23:42:19.204511 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:42:19.308597 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:42:19.373462 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:42:19.400200 kernel: mlx5_core 2a4e:00:02.0: enabling device (0000 -> 0002)
May 13 23:42:19.407291 kernel: mlx5_core 2a4e:00:02.0: firmware version: 16.30.1284
May 13 23:42:19.426313 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:42:19.613237 kernel: hv_netvsc 000d3af7-7448-000d-3af7-7448000d3af7 eth0: VF registering: eth1
May 13 23:42:19.613497 kernel: mlx5_core 2a4e:00:02.0 eth1: joined to eth0
May 13 23:42:19.621280 kernel: mlx5_core 2a4e:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
May 13 23:42:19.635290 kernel: mlx5_core 2a4e:00:02.0 enP10830s1: renamed from eth1
May 13 23:42:19.938693 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
May 13 23:42:19.963821 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (496)
May 13 23:42:19.989217 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
May 13 23:42:20.026298 kernel: BTRFS: device fsid ee830c17-a93d-4109-bd12-3fec8ef6763d devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (483)
May 13 23:42:20.039444 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
May 13 23:42:20.054696 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
May 13 23:42:20.062374 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
May 13 23:42:20.082739 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 23:42:20.128292 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 13 23:42:20.138296 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 13 23:42:21.147348 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 13 23:42:21.149545 disk-uuid[606]: The operation has completed successfully.
May 13 23:42:21.231568 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 23:42:21.231707 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 23:42:21.281180 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 23:42:21.310127 sh[692]: Success
May 13 23:42:21.351429 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 13 23:42:21.555040 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 23:42:21.578748 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 23:42:21.589772 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 23:42:21.617120 kernel: BTRFS info (device dm-0): first mount of filesystem ee830c17-a93d-4109-bd12-3fec8ef6763d
May 13 23:42:21.617176 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 13 23:42:21.617188 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 13 23:42:21.629552 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 13 23:42:21.634103 kernel: BTRFS info (device dm-0): using free space tree
May 13 23:42:21.939598 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 23:42:21.946680 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 23:42:21.948413 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 23:42:21.969088 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 23:42:22.026461 kernel: BTRFS info (device sda6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:42:22.026534 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 13 23:42:22.032226 kernel: BTRFS info (device sda6): using free space tree
May 13 23:42:22.064324 kernel: BTRFS info (device sda6): auto enabling async discard
May 13 23:42:22.084439 kernel: BTRFS info (device sda6): last unmount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:42:22.090311 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 23:42:22.111445 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 23:42:22.168970 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 23:42:22.178452 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 23:42:22.221406 systemd-networkd[873]: lo: Link UP
May 13 23:42:22.221416 systemd-networkd[873]: lo: Gained carrier
May 13 23:42:22.223131 systemd-networkd[873]: Enumeration completed
May 13 23:42:22.225819 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 23:42:22.232998 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:42:22.233002 systemd-networkd[873]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 23:42:22.233649 systemd[1]: Reached target network.target - Network.
May 13 23:42:22.326293 kernel: mlx5_core 2a4e:00:02.0 enP10830s1: Link up
May 13 23:42:22.368300 kernel: hv_netvsc 000d3af7-7448-000d-3af7-7448000d3af7 eth0: Data path switched to VF: enP10830s1
May 13 23:42:22.368869 systemd-networkd[873]: enP10830s1: Link UP
May 13 23:42:22.368953 systemd-networkd[873]: eth0: Link UP
May 13 23:42:22.369056 systemd-networkd[873]: eth0: Gained carrier
May 13 23:42:22.369067 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:42:22.385102 systemd-networkd[873]: enP10830s1: Gained carrier
May 13 23:42:22.405376 systemd-networkd[873]: eth0: DHCPv4 address 10.200.20.10/24, gateway 10.200.20.1 acquired from 168.63.129.16
May 13 23:42:23.094832 ignition[814]: Ignition 2.20.0
May 13 23:42:23.094845 ignition[814]: Stage: fetch-offline
May 13 23:42:23.094880 ignition[814]: no configs at "/usr/lib/ignition/base.d"
May 13 23:42:23.102388 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 23:42:23.094888 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 13 23:42:23.112477 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 13 23:42:23.094993 ignition[814]: parsed url from cmdline: ""
May 13 23:42:23.094997 ignition[814]: no config URL provided
May 13 23:42:23.095001 ignition[814]: reading system config file "/usr/lib/ignition/user.ign"
May 13 23:42:23.095009 ignition[814]: no config at "/usr/lib/ignition/user.ign"
May 13 23:42:23.095014 ignition[814]: failed to fetch config: resource requires networking
May 13 23:42:23.095196 ignition[814]: Ignition finished successfully
May 13 23:42:23.152924 ignition[884]: Ignition 2.20.0
May 13 23:42:23.152931 ignition[884]: Stage: fetch
May 13 23:42:23.153155 ignition[884]: no configs at "/usr/lib/ignition/base.d"
May 13 23:42:23.153167 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 13 23:42:23.153309 ignition[884]: parsed url from cmdline: ""
May 13 23:42:23.153312 ignition[884]: no config URL provided
May 13 23:42:23.153318 ignition[884]: reading system config file "/usr/lib/ignition/user.ign"
May 13 23:42:23.153327 ignition[884]: no config at "/usr/lib/ignition/user.ign"
May 13 23:42:23.153363 ignition[884]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
May 13 23:42:23.279716 ignition[884]: GET result: OK
May 13 23:42:23.279800 ignition[884]: config has been read from IMDS userdata
May 13 23:42:23.279839 ignition[884]: parsing config with SHA512: 2920b682d5070f2e8f82e9ae39f4f522571a5ed1f52474db695bebffc0881b74a3d1ac0c28c095c98720f18215783338392a03cade7ea19d8c2b6cad742648f9
May 13 23:42:23.284483 unknown[884]: fetched base config from "system"
May 13 23:42:23.284887 ignition[884]: fetch: fetch complete
May 13 23:42:23.284491 unknown[884]: fetched base config from "system"
May 13 23:42:23.284893 ignition[884]: fetch: fetch passed
May 13 23:42:23.284496 unknown[884]: fetched user config from "azure"
May 13 23:42:23.284945 ignition[884]: Ignition finished successfully
May 13 23:42:23.294673 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 13 23:42:23.305516 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 23:42:23.348255 ignition[890]: Ignition 2.20.0
May 13 23:42:23.348283 ignition[890]: Stage: kargs
May 13 23:42:23.348550 ignition[890]: no configs at "/usr/lib/ignition/base.d"
May 13 23:42:23.348564 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 13 23:42:23.368349 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 23:42:23.357178 ignition[890]: kargs: kargs passed
May 13 23:42:23.379493 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 23:42:23.357251 ignition[890]: Ignition finished successfully
May 13 23:42:23.414639 systemd-networkd[873]: enP10830s1: Gained IPv6LL
May 13 23:42:23.421690 ignition[896]: Ignition 2.20.0
May 13 23:42:23.421697 ignition[896]: Stage: disks
May 13 23:42:23.427533 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 23:42:23.421902 ignition[896]: no configs at "/usr/lib/ignition/base.d"
May 13 23:42:23.437563 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 23:42:23.421914 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 13 23:42:23.449650 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 23:42:23.422969 ignition[896]: disks: disks passed
May 13 23:42:23.462324 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 23:42:23.423025 ignition[896]: Ignition finished successfully
May 13 23:42:23.474788 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 23:42:23.486832 systemd[1]: Reached target basic.target - Basic System.
May 13 23:42:23.502543 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 23:42:23.642086 systemd-fsck[905]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
May 13 23:42:23.660069 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 23:42:23.676438 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 23:42:23.768345 kernel: EXT4-fs (sda9): mounted filesystem 9f8d74e6-c079-469f-823a-18a62077a2c7 r/w with ordered data mode. Quota mode: none.
May 13 23:42:23.769004 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 23:42:23.775572 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 23:42:23.823170 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 23:42:23.851418 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 23:42:23.874307 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (916)
May 13 23:42:23.894165 kernel: BTRFS info (device sda6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:42:23.894203 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 13 23:42:23.899512 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 13 23:42:23.915415 kernel: BTRFS info (device sda6): using free space tree
May 13 23:42:23.927793 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 23:42:23.955181 kernel: BTRFS info (device sda6): auto enabling async discard
May 13 23:42:23.927890 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 23:42:23.957667 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 23:42:23.969704 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 23:42:23.986480 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 23:42:24.182657 systemd-networkd[873]: eth0: Gained IPv6LL
May 13 23:42:24.503706 coreos-metadata[918]: May 13 23:42:24.503 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
May 13 23:42:24.516746 coreos-metadata[918]: May 13 23:42:24.516 INFO Fetch successful
May 13 23:42:24.522597 coreos-metadata[918]: May 13 23:42:24.522 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
May 13 23:42:24.534915 coreos-metadata[918]: May 13 23:42:24.534 INFO Fetch successful
May 13 23:42:24.557772 coreos-metadata[918]: May 13 23:42:24.557 INFO wrote hostname ci-4284.0.0-n-791441f790 to /sysroot/etc/hostname
May 13 23:42:24.567737 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 13 23:42:24.774605 initrd-setup-root[947]: cut: /sysroot/etc/passwd: No such file or directory
May 13 23:42:24.808634 initrd-setup-root[954]: cut: /sysroot/etc/group: No such file or directory
May 13 23:42:24.822180 initrd-setup-root[961]: cut: /sysroot/etc/shadow: No such file or directory
May 13 23:42:24.836549 initrd-setup-root[968]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 23:42:25.694941 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 23:42:25.705406 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 23:42:25.721411 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 23:42:25.734866 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 23:42:25.748196 kernel: BTRFS info (device sda6): last unmount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:42:25.772386 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 23:42:25.786347 ignition[1036]: INFO : Ignition 2.20.0
May 13 23:42:25.790995 ignition[1036]: INFO : Stage: mount
May 13 23:42:25.790995 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:42:25.790995 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 13 23:42:25.821715 ignition[1036]: INFO : mount: mount passed
May 13 23:42:25.821715 ignition[1036]: INFO : Ignition finished successfully
May 13 23:42:25.796443 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 23:42:25.810429 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 23:42:25.851699 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 23:42:25.883288 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1048)
May 13 23:42:25.883348 kernel: BTRFS info (device sda6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:42:25.897083 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 13 23:42:25.901686 kernel: BTRFS info (device sda6): using free space tree
May 13 23:42:25.908280 kernel: BTRFS info (device sda6): auto enabling async discard
May 13 23:42:25.910661 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 23:42:25.945353 ignition[1065]: INFO : Ignition 2.20.0
May 13 23:42:25.945353 ignition[1065]: INFO : Stage: files
May 13 23:42:25.954172 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:42:25.954172 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 13 23:42:25.954172 ignition[1065]: DEBUG : files: compiled without relabeling support, skipping
May 13 23:42:25.974219 ignition[1065]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 23:42:25.974219 ignition[1065]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 23:42:26.040436 ignition[1065]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 23:42:26.048726 ignition[1065]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 23:42:26.048726 ignition[1065]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 23:42:26.040864 unknown[1065]: wrote ssh authorized keys file for user: core
May 13 23:42:26.084600 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 13 23:42:26.099493 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 13 23:42:26.132071 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 23:42:26.195758 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 13 23:42:26.195758 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 23:42:26.218214 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 13 23:42:26.559633 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 13 23:42:26.628620 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 23:42:26.639731 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 13 23:42:26.639731 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 13 23:42:26.639731 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 23:42:26.639731 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 23:42:26.639731 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 23:42:26.639731 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 23:42:26.639731 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 23:42:26.639731 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 23:42:26.639731 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 23:42:26.639731 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 23:42:26.639731 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 13 23:42:26.639731 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 13 23:42:26.639731 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 13 23:42:26.639731 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
May 13 23:42:26.898960 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 13 23:42:27.103500 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 13 23:42:27.103500 ignition[1065]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 13 23:42:27.139652 ignition[1065]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 23:42:27.139652 ignition[1065]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 23:42:27.139652 ignition[1065]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 13 23:42:27.139652 ignition[1065]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
May 13 23:42:27.139652 ignition[1065]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
May 13 23:42:27.139652 ignition[1065]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 23:42:27.139652 ignition[1065]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 23:42:27.139652 ignition[1065]: INFO : files: files passed
May 13 23:42:27.139652 ignition[1065]: INFO : Ignition finished successfully
May 13 23:42:27.106062 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 23:42:27.126462 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 23:42:27.185858 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 23:42:27.218810 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 23:42:27.308993 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:42:27.308993 initrd-setup-root-after-ignition[1094]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:42:27.218960 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 23:42:27.336977 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:42:27.243295 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 23:42:27.252792 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 23:42:27.262453 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 23:42:27.339227 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 23:42:27.339373 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 23:42:27.354367 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 23:42:27.360917 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 23:42:27.374215 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 23:42:27.377446 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 23:42:27.419814 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 23:42:27.428431 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 23:42:27.469592 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 23:42:27.469722 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 13 23:42:27.482229 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 23:42:27.494939 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:42:27.508883 systemd[1]: Stopped target timers.target - Timer Units.
May 13 23:42:27.521416 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 23:42:27.521495 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 23:42:27.538825 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 23:42:27.551191 systemd[1]: Stopped target basic.target - Basic System.
May 13 23:42:27.561917 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 23:42:27.572528 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 23:42:27.584998 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 23:42:27.598354 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 23:42:27.611059 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 23:42:27.623816 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 23:42:27.636344 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 23:42:27.759563 systemd[1]: Stopped target swap.target - Swaps.
May 13 23:42:27.770308 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 23:42:27.770404 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 23:42:27.786225 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 23:42:27.792962 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:42:27.806191 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 13 23:42:27.806239 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:42:27.820242 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 23:42:27.820331 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 13 23:42:27.839125 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 23:42:27.839190 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 23:42:27.847011 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 23:42:27.847058 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 13 23:42:27.861909 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 13 23:42:27.941352 ignition[1120]: INFO : Ignition 2.20.0
May 13 23:42:27.941352 ignition[1120]: INFO : Stage: umount
May 13 23:42:27.941352 ignition[1120]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:42:27.941352 ignition[1120]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 13 23:42:27.941352 ignition[1120]: INFO : umount: umount passed
May 13 23:42:27.941352 ignition[1120]: INFO : Ignition finished successfully
May 13 23:42:27.861976 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 13 23:42:27.876403 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 13 23:42:27.890415 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 13 23:42:27.906485 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 23:42:27.906783 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:42:27.915099 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 23:42:27.915174 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 23:42:27.938008 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 23:42:27.941515 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 23:42:27.941904 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 13 23:42:27.953053 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 23:42:27.953116 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 13 23:42:27.965142 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 23:42:27.965213 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 13 23:42:27.978523 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 13 23:42:27.978589 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 13 23:42:27.992364 systemd[1]: Stopped target network.target - Network.
May 13 23:42:28.002329 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 23:42:28.002404 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 23:42:28.015045 systemd[1]: Stopped target paths.target - Path Units.
May 13 23:42:28.029369 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 23:42:28.033295 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:42:28.043195 systemd[1]: Stopped target slices.target - Slice Units.
May 13 23:42:28.057356 systemd[1]: Stopped target sockets.target - Socket Units.
May 13 23:42:28.071394 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 23:42:28.071455 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 13 23:42:28.085609 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 23:42:28.085645 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 23:42:28.099141 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 23:42:28.099199 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 13 23:42:28.111736 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 13 23:42:28.111783 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 13 23:42:28.123869 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 13 23:42:28.417512 kernel: hv_netvsc 000d3af7-7448-000d-3af7-7448000d3af7 eth0: Data path switched from VF: enP10830s1
May 13 23:42:28.135911 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 13 23:42:28.149379 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 23:42:28.149520 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 13 23:42:28.161612 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 23:42:28.161736 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 13 23:42:28.181431 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 13 23:42:28.181698 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 23:42:28.181827 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 13 23:42:28.200890 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 13 23:42:28.202215 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 23:42:28.202328 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:42:28.213405 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 23:42:28.213486 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 13 23:42:28.234411 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 13 23:42:28.251636 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 23:42:28.251718 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 23:42:28.265797 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 23:42:28.265854 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 23:42:28.282387 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 23:42:28.282454 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 13 23:42:28.295921 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 13 23:42:28.296003 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:42:28.317042 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:42:28.331002 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 13 23:42:28.331106 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 13 23:42:28.352695 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 23:42:28.352872 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:42:28.365023 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 23:42:28.365082 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 13 23:42:28.376787 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 23:42:28.376833 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:42:28.388983 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 23:42:28.389042 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 13 23:42:28.417651 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 23:42:28.417728 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 13 23:42:28.778319 systemd-journald[218]: Received SIGTERM from PID 1 (systemd).
May 13 23:42:28.431943 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 23:42:28.432015 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:42:28.454472 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 13 23:42:28.467410 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 13 23:42:28.467492 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:42:28.488417 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 13 23:42:28.488479 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 23:42:28.498928 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 23:42:28.499010 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:42:28.519840 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:42:28.519914 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:42:28.541976 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 13 23:42:28.542053 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 23:42:28.542525 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 23:42:28.542643 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 13 23:42:28.553674 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 23:42:28.553775 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 13 23:42:28.568728 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 13 23:42:28.586096 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 13 23:42:28.649685 systemd[1]: Switching root.
May 13 23:42:28.926073 systemd-journald[218]: Journal stopped
May 13 23:42:35.352703 kernel: SELinux: policy capability network_peer_controls=1
May 13 23:42:35.352734 kernel: SELinux: policy capability open_perms=1
May 13 23:42:35.352745 kernel: SELinux: policy capability extended_socket_class=1
May 13 23:42:35.352753 kernel: SELinux: policy capability always_check_network=0
May 13 23:42:35.352766 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 23:42:35.352774 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 23:42:35.352782 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 23:42:35.352790 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 23:42:35.352798 kernel: audit: type=1403 audit(1747179750.191:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 23:42:35.352812 systemd[1]: Successfully loaded SELinux policy in 144.712ms.
May 13 23:42:35.352824 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 18.645ms.
May 13 23:42:35.352834 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 23:42:35.352842 systemd[1]: Detected virtualization microsoft.
May 13 23:42:35.352851 systemd[1]: Detected architecture arm64.
May 13 23:42:35.352860 systemd[1]: Detected first boot.
May 13 23:42:35.352871 systemd[1]: Hostname set to .
May 13 23:42:35.352880 systemd[1]: Initializing machine ID from random generator.
May 13 23:42:35.352889 zram_generator::config[1163]: No configuration found.
May 13 23:42:35.352898 kernel: NET: Registered PF_VSOCK protocol family
May 13 23:42:35.352907 systemd[1]: Populated /etc with preset unit settings.
May 13 23:42:35.352916 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 13 23:42:35.352925 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 13 23:42:35.352936 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 13 23:42:35.352945 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 13 23:42:35.352954 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 13 23:42:35.352963 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 13 23:42:35.352973 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 13 23:42:35.352982 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 13 23:42:35.352991 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 13 23:42:35.353002 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 13 23:42:35.353012 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 13 23:42:35.353021 systemd[1]: Created slice user.slice - User and Session Slice.
May 13 23:42:35.353030 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:42:35.353039 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:42:35.353049 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 13 23:42:35.353058 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 13 23:42:35.353067 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 13 23:42:35.353078 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 23:42:35.353087 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 13 23:42:35.353096 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:42:35.353108 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 13 23:42:35.353117 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 13 23:42:35.353127 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 13 23:42:35.353136 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 13 23:42:35.353145 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:42:35.353156 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 23:42:35.353165 systemd[1]: Reached target slices.target - Slice Units.
May 13 23:42:35.353174 systemd[1]: Reached target swap.target - Swaps.
May 13 23:42:35.353183 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 13 23:42:35.353193 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 13 23:42:35.353202 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 13 23:42:35.353218 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:42:35.353228 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 23:42:35.353237 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:42:35.353246 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 13 23:42:35.353255 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 13 23:42:35.353279 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 13 23:42:35.353290 systemd[1]: Mounting media.mount - External Media Directory...
May 13 23:42:35.353301 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 13 23:42:35.353311 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 13 23:42:35.353320 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 13 23:42:35.353330 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 23:42:35.353340 systemd[1]: Reached target machines.target - Containers.
May 13 23:42:35.353349 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 13 23:42:35.353360 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:42:35.353369 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 23:42:35.353380 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 13 23:42:35.353390 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:42:35.353399 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 23:42:35.353408 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:42:35.353417 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 13 23:42:35.353427 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:42:35.353439 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 23:42:35.353448 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 13 23:42:35.353459 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 13 23:42:35.353469 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 13 23:42:35.353478 systemd[1]: Stopped systemd-fsck-usr.service.
May 13 23:42:35.353488 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:42:35.353500 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 23:42:35.353509 kernel: loop: module loaded
May 13 23:42:35.353517 kernel: fuse: init (API version 7.39)
May 13 23:42:35.353526 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 23:42:35.353535 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 23:42:35.353547 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 13 23:42:35.353556 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 13 23:42:35.353566 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 23:42:35.353602 systemd-journald[1247]: Collecting audit messages is disabled.
May 13 23:42:35.353625 systemd[1]: verity-setup.service: Deactivated successfully.
May 13 23:42:35.353635 systemd[1]: Stopped verity-setup.service.
May 13 23:42:35.353645 systemd-journald[1247]: Journal started
May 13 23:42:35.353666 systemd-journald[1247]: Runtime Journal (/run/log/journal/da5576c7253945fb98c68b1592230d0d) is 8M, max 78.5M, 70.5M free.
May 13 23:42:34.054780 systemd[1]: Queued start job for default target multi-user.target.
May 13 23:42:34.066314 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
May 13 23:42:34.066741 systemd[1]: systemd-journald.service: Deactivated successfully.
May 13 23:42:34.067105 systemd[1]: systemd-journald.service: Consumed 3.849s CPU time.
May 13 23:42:35.371310 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 23:42:35.371383 kernel: ACPI: bus type drm_connector registered
May 13 23:42:35.383527 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 13 23:42:35.390014 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 13 23:42:35.396906 systemd[1]: Mounted media.mount - External Media Directory.
May 13 23:42:35.403549 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 13 23:42:35.410671 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 13 23:42:35.417576 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 13 23:42:35.424495 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 13 23:42:35.431831 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:42:35.440001 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 23:42:35.440212 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 13 23:42:35.447816 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:42:35.448037 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:42:35.456037 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 23:42:35.456247 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 23:42:35.463113 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:42:35.463468 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:42:35.472113 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 23:42:35.472338 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 13 23:42:35.479390 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:42:35.479586 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:42:35.490345 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 23:42:35.498318 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 23:42:35.506604 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 13 23:42:35.527152 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 13 23:42:35.545339 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 23:42:35.559479 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 13 23:42:35.581452 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 13 23:42:35.589676 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 23:42:35.589727 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 23:42:35.597172 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 13 23:42:35.612846 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 13 23:42:35.621258 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 13 23:42:35.627713 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:42:35.634387 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 13 23:42:35.648531 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 13 23:42:35.656390 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 23:42:35.657915 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 13 23:42:35.665260 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 23:42:35.666773 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 23:42:35.678480 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 13 23:42:35.688475 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 23:42:35.698111 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:42:35.706102 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 13 23:42:35.714031 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 13 23:42:35.722778 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 13 23:42:35.730193 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 13 23:42:35.744589 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 13 23:42:35.748079 systemd-journald[1247]: Time spent on flushing to /var/log/journal/da5576c7253945fb98c68b1592230d0d is 20.427ms for 922 entries.
May 13 23:42:35.748079 systemd-journald[1247]: System Journal (/var/log/journal/da5576c7253945fb98c68b1592230d0d) is 8M, max 2.6G, 2.6G free.
May 13 23:42:35.792978 systemd-journald[1247]: Received client request to flush runtime journal.
May 13 23:42:35.761460 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 13 23:42:35.786887 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 13 23:42:35.799696 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 13 23:42:35.809325 kernel: loop0: detected capacity change from 0 to 126448
May 13 23:42:35.829213 udevadm[1313]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 13 23:42:36.167972 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 23:42:36.172315 systemd-tmpfiles[1305]: ACLs are not supported, ignoring.
May 13 23:42:36.172330 systemd-tmpfiles[1305]: ACLs are not supported, ignoring.
May 13 23:42:36.177980 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 23:42:36.187451 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 13 23:42:36.721713 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 23:42:36.723914 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 13 23:42:37.541306 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 23:42:37.581317 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 13 23:42:37.593304 kernel: loop1: detected capacity change from 0 to 28888
May 13 23:42:37.595190 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 23:42:37.626104 systemd-tmpfiles[1325]: ACLs are not supported, ignoring.
May 13 23:42:37.626126 systemd-tmpfiles[1325]: ACLs are not supported, ignoring.
May 13 23:42:37.633577 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:42:39.625415 kernel: loop2: detected capacity change from 0 to 103832
May 13 23:42:39.656922 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 13 23:42:39.665762 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:42:39.702151 systemd-udevd[1330]: Using default interface naming scheme 'v255'.
May 13 23:42:40.039835 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:42:40.058063 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 23:42:40.119220 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 13 23:42:40.139543 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 13 23:42:40.469716 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 13 23:42:40.491299 kernel: mousedev: PS/2 mouse device common for all mice
May 13 23:42:40.528324 kernel: hv_vmbus: registering driver hv_balloon
May 13 23:42:40.529393 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
May 13 23:42:40.537551 kernel: hv_balloon: Memory hot add disabled on ARM64
May 13 23:42:40.547380 kernel: hv_vmbus: registering driver hyperv_fb
May 13 23:42:40.544906 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:42:40.558323 kernel: hyperv_fb: Synthvid Version major 3, minor 5
May 13 23:42:40.558424 kernel: loop3: detected capacity change from 0 to 189592
May 13 23:42:40.573289 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
May 13 23:42:40.579435 kernel: Console: switching to colour dummy device 80x25
May 13 23:42:40.586942 kernel: Console: switching to colour frame buffer device 128x48
May 13 23:42:40.622504 kernel: loop4: detected capacity change from 0 to 126448
May 13 23:42:40.633375 kernel: loop5: detected capacity change from 0 to 28888
May 13 23:42:40.642303 kernel: loop6: detected capacity change from 0 to 103832
May 13 23:42:40.651321 kernel: loop7: detected capacity change from 0 to 189592
May 13 23:42:40.656316 (sd-merge)[1386]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
May 13 23:42:40.656813 (sd-merge)[1386]: Merged extensions into '/usr'.
May 13 23:42:40.660203 systemd[1]: Reload requested from client PID 1303 ('systemd-sysext') (unit systemd-sysext.service)...
May 13 23:42:40.660218 systemd[1]: Reloading...
May 13 23:42:40.711329 zram_generator::config[1416]: No configuration found.
May 13 23:42:40.936843 systemd-networkd[1342]: lo: Link UP
May 13 23:42:40.936852 systemd-networkd[1342]: lo: Gained carrier
May 13 23:42:40.939388 systemd-networkd[1342]: Enumeration completed
May 13 23:42:40.939884 systemd-networkd[1342]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:42:40.939964 systemd-networkd[1342]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 23:42:40.999405 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1339)
May 13 23:42:41.031389 kernel: mlx5_core 2a4e:00:02.0 enP10830s1: Link up
May 13 23:42:41.060301 kernel: hv_netvsc 000d3af7-7448-000d-3af7-7448000d3af7 eth0: Data path switched to VF: enP10830s1
May 13 23:42:41.060820 systemd-networkd[1342]: enP10830s1: Link UP
May 13 23:42:41.060915 systemd-networkd[1342]: eth0: Link UP
May 13 23:42:41.060919 systemd-networkd[1342]: eth0: Gained carrier
May 13 23:42:41.060934 systemd-networkd[1342]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:42:41.065667 systemd-networkd[1342]: enP10830s1: Gained carrier
May 13 23:42:41.076312 systemd-networkd[1342]: eth0: DHCPv4 address 10.200.20.10/24, gateway 10.200.20.1 acquired from 168.63.129.16
May 13 23:42:41.201121 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:42:41.299650 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
May 13 23:42:41.308896 systemd[1]: Reloading finished in 648 ms.
May 13 23:42:41.324337 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 23:42:41.331733 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 13 23:42:41.343392 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:42:41.343812 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:42:41.352169 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 23:42:41.360311 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 13 23:42:41.396680 systemd[1]: Starting ensure-sysext.service...
May 13 23:42:41.408633 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 13 23:42:41.425720 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 13 23:42:41.433885 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 13 23:42:41.442675 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 13 23:42:41.456843 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 23:42:41.467487 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:42:41.485062 systemd-tmpfiles[1535]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 23:42:41.486830 systemd-tmpfiles[1535]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 13 23:42:41.487576 systemd-tmpfiles[1535]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 23:42:41.487782 systemd-tmpfiles[1535]: ACLs are not supported, ignoring.
May 13 23:42:41.487825 systemd-tmpfiles[1535]: ACLs are not supported, ignoring.
May 13 23:42:41.497354 systemd[1]: Reload requested from client PID 1530 ('systemctl') (unit ensure-sysext.service)...
May 13 23:42:41.497380 systemd[1]: Reloading...
May 13 23:42:41.567296 zram_generator::config[1573]: No configuration found.
May 13 23:42:41.630217 systemd-tmpfiles[1535]: Detected autofs mount point /boot during canonicalization of boot.
May 13 23:42:41.630234 systemd-tmpfiles[1535]: Skipping /boot
May 13 23:42:41.641047 systemd-tmpfiles[1535]: Detected autofs mount point /boot during canonicalization of boot.
May 13 23:42:41.641070 systemd-tmpfiles[1535]: Skipping /boot
May 13 23:42:41.694338 lvm[1531]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 23:42:41.706615 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:42:41.810123 systemd[1]: Reloading finished in 312 ms.
May 13 23:42:41.833136 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 13 23:42:41.840580 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 13 23:42:41.848995 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 13 23:42:41.856912 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:42:41.871815 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 23:42:41.879820 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 23:42:41.891236 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 13 23:42:41.900727 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 13 23:42:41.918379 lvm[1637]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 23:42:41.911603 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 13 23:42:41.931640 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 23:42:41.940396 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 13 23:42:41.958910 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 13 23:42:41.980030 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 13 23:42:41.992337 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:42:41.996449 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:42:42.006500 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 23:42:42.019797 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:42:42.032591 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:42:42.038836 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:42:42.038885 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:42:42.038944 systemd[1]: Reached target time-set.target - System Time Set. May 13 23:42:42.053113 systemd[1]: Finished ensure-sysext.service. May 13 23:42:42.062618 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:42:42.062807 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:42:42.071868 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 23:42:42.072205 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 23:42:42.079835 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:42:42.080153 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:42:42.090948 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:42:42.091225 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
May 13 23:42:42.102941 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 23:42:42.103196 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 23:42:42.144968 systemd-resolved[1639]: Positive Trust Anchors: May 13 23:42:42.144994 systemd-resolved[1639]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 23:42:42.145026 systemd-resolved[1639]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 23:42:42.177618 systemd-resolved[1639]: Using system hostname 'ci-4284.0.0-n-791441f790'. May 13 23:42:42.180733 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 23:42:42.189474 systemd[1]: Reached target network.target - Network. May 13 23:42:42.195979 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 23:42:42.204692 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 13 23:42:42.578823 augenrules[1673]: No rules May 13 23:42:42.580402 systemd[1]: audit-rules.service: Deactivated successfully. May 13 23:42:42.581365 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 23:42:42.742412 systemd-networkd[1342]: enP10830s1: Gained IPv6LL May 13 23:42:42.874735 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 13 23:42:42.934407 systemd-networkd[1342]: eth0: Gained IPv6LL May 13 23:42:42.937856 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 23:42:42.945442 systemd[1]: Reached target network-online.target - Network is Online. May 13 23:42:43.964924 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 13 23:42:43.973280 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 23:42:51.260949 ldconfig[1298]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 23:42:51.277480 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 13 23:42:51.287448 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 13 23:42:51.308154 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 13 23:42:51.317310 systemd[1]: Reached target sysinit.target - System Initialization. May 13 23:42:51.323673 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 23:42:51.330984 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 23:42:51.338953 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 23:42:51.345299 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 23:42:51.353080 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 23:42:51.360721 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 23:42:51.360758 systemd[1]: Reached target paths.target - Path Units. 
May 13 23:42:51.366252 systemd[1]: Reached target timers.target - Timer Units. May 13 23:42:51.386737 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 23:42:51.394793 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 23:42:51.402985 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 13 23:42:51.410778 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 13 23:42:51.418388 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 13 23:42:51.426762 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 23:42:51.433138 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 13 23:42:51.440729 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 23:42:51.447399 systemd[1]: Reached target sockets.target - Socket Units. May 13 23:42:51.452812 systemd[1]: Reached target basic.target - Basic System. May 13 23:42:51.458662 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 23:42:51.458692 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 23:42:51.461303 systemd[1]: Starting chronyd.service - NTP client/server... May 13 23:42:51.476398 systemd[1]: Starting containerd.service - containerd container runtime... May 13 23:42:51.490000 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 13 23:42:51.500577 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 23:42:51.509393 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
May 13 23:42:51.516753 (chronyd)[1688]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS May 13 23:42:51.518499 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 23:42:51.527615 jq[1692]: false May 13 23:42:51.527846 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 23:42:51.527889 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). May 13 23:42:51.530690 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. May 13 23:42:51.538223 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). May 13 23:42:51.541857 chronyd[1700]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) May 13 23:42:51.545255 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:42:51.546145 KVP[1697]: KVP starting; pid is:1697 May 13 23:42:51.556474 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 23:42:51.563551 kernel: hv_utils: KVP IC version 4.0 May 13 23:42:51.557589 KVP[1697]: KVP LIC Version: 3.1 May 13 23:42:51.563435 chronyd[1700]: Timezone right/UTC failed leap second check, ignoring May 13 23:42:51.563625 chronyd[1700]: Loaded seccomp filter (level 2) May 13 23:42:51.566434 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
May 13 23:42:51.573488 extend-filesystems[1693]: Found loop4 May 13 23:42:51.573488 extend-filesystems[1693]: Found loop5 May 13 23:42:51.573488 extend-filesystems[1693]: Found loop6 May 13 23:42:51.573488 extend-filesystems[1693]: Found loop7 May 13 23:42:51.573488 extend-filesystems[1693]: Found sda May 13 23:42:51.638765 extend-filesystems[1693]: Found sda1 May 13 23:42:51.638765 extend-filesystems[1693]: Found sda2 May 13 23:42:51.638765 extend-filesystems[1693]: Found sda3 May 13 23:42:51.638765 extend-filesystems[1693]: Found usr May 13 23:42:51.638765 extend-filesystems[1693]: Found sda4 May 13 23:42:51.638765 extend-filesystems[1693]: Found sda6 May 13 23:42:51.638765 extend-filesystems[1693]: Found sda7 May 13 23:42:51.638765 extend-filesystems[1693]: Found sda9 May 13 23:42:51.638765 extend-filesystems[1693]: Checking size of /dev/sda9 May 13 23:42:51.638765 extend-filesystems[1693]: Old size kept for /dev/sda9 May 13 23:42:51.638765 extend-filesystems[1693]: Found sr0 May 13 23:42:51.583414 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
May 13 23:42:51.703125 dbus-daemon[1691]: [system] SELinux support is enabled May 13 23:42:51.882860 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1743) May 13 23:42:51.882914 coreos-metadata[1690]: May 13 23:42:51.802 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 13 23:42:51.882914 coreos-metadata[1690]: May 13 23:42:51.806 INFO Fetch successful May 13 23:42:51.882914 coreos-metadata[1690]: May 13 23:42:51.806 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 May 13 23:42:51.882914 coreos-metadata[1690]: May 13 23:42:51.820 INFO Fetch successful May 13 23:42:51.882914 coreos-metadata[1690]: May 13 23:42:51.820 INFO Fetching http://168.63.129.16/machine/5f7bee34-5c99-4b1c-82d6-704fcac0c989/fc87f28b%2D45ee%2D458b%2D9844%2Dd4b306792b1f.%5Fci%2D4284.0.0%2Dn%2D791441f790?comp=config&type=sharedConfig&incarnation=1: Attempt #1 May 13 23:42:51.882914 coreos-metadata[1690]: May 13 23:42:51.823 INFO Fetch successful May 13 23:42:51.882914 coreos-metadata[1690]: May 13 23:42:51.824 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 May 13 23:42:51.882914 coreos-metadata[1690]: May 13 23:42:51.841 INFO Fetch successful May 13 23:42:51.596500 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 23:42:51.612729 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 23:42:51.650045 systemd[1]: Starting systemd-logind.service - User Login Management... May 13 23:42:51.662025 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 23:42:51.662613 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
May 13 23:42:51.663539 systemd[1]: Starting update-engine.service - Update Engine... May 13 23:42:51.883895 update_engine[1726]: I20250513 23:42:51.778450 1726 main.cc:92] Flatcar Update Engine starting May 13 23:42:51.883895 update_engine[1726]: I20250513 23:42:51.791464 1726 update_check_scheduler.cc:74] Next update check in 7m59s May 13 23:42:51.672783 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 23:42:51.884180 jq[1729]: true May 13 23:42:51.691431 systemd[1]: Started chronyd.service - NTP client/server. May 13 23:42:51.707861 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 23:42:51.731773 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 23:42:51.732004 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 23:42:51.732333 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 23:42:51.732504 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 23:42:51.764229 systemd[1]: motdgen.service: Deactivated successfully. May 13 23:42:51.765086 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 23:42:51.787680 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 23:42:51.816511 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 23:42:51.816711 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 13 23:42:51.887215 systemd-logind[1722]: New seat seat0. May 13 23:42:51.893820 systemd-logind[1722]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 13 23:42:51.894700 systemd[1]: Started systemd-logind.service - User Login Management. 
May 13 23:42:51.907575 (ntainerd)[1762]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 23:42:51.923656 jq[1761]: true May 13 23:42:51.946973 dbus-daemon[1691]: [system] Successfully activated service 'org.freedesktop.systemd1' May 13 23:42:51.948072 systemd[1]: Started update-engine.service - Update Engine. May 13 23:42:51.950981 tar[1748]: linux-arm64/helm May 13 23:42:51.963859 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 23:42:51.964049 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 23:42:51.976515 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 23:42:51.976645 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 23:42:52.067399 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 23:42:52.080393 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 13 23:42:52.094701 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 23:42:52.230399 bash[1832]: Updated "/home/core/.ssh/authorized_keys" May 13 23:42:52.233814 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 23:42:52.254756 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
May 13 23:42:52.472065 locksmithd[1808]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 23:42:52.534381 sshd_keygen[1725]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 23:42:52.559302 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 23:42:52.572670 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 23:42:52.584118 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... May 13 23:42:52.611148 systemd[1]: issuegen.service: Deactivated successfully. May 13 23:42:52.612819 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 23:42:52.630749 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 23:42:52.665295 tar[1748]: linux-arm64/LICENSE May 13 23:42:52.665295 tar[1748]: linux-arm64/README.md May 13 23:42:52.671941 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. May 13 23:42:52.682014 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 23:42:52.701596 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 23:42:52.714046 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 13 23:42:52.723907 systemd[1]: Reached target getty.target - Login Prompts. May 13 23:42:52.731756 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
May 13 23:42:52.773102 containerd[1762]: time="2025-05-13T23:42:52Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 13 23:42:52.775625 containerd[1762]: time="2025-05-13T23:42:52.775575100Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 13 23:42:52.788633 containerd[1762]: time="2025-05-13T23:42:52.788489340Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.2µs" May 13 23:42:52.788766 containerd[1762]: time="2025-05-13T23:42:52.788745700Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 13 23:42:52.788833 containerd[1762]: time="2025-05-13T23:42:52.788819580Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 13 23:42:52.789305 containerd[1762]: time="2025-05-13T23:42:52.789106300Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 13 23:42:52.789305 containerd[1762]: time="2025-05-13T23:42:52.789136660Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 13 23:42:52.789305 containerd[1762]: time="2025-05-13T23:42:52.789174900Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 23:42:52.789305 containerd[1762]: time="2025-05-13T23:42:52.789243980Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 23:42:52.789305 containerd[1762]: time="2025-05-13T23:42:52.789277700Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 23:42:52.789868 
containerd[1762]: time="2025-05-13T23:42:52.789834420Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 23:42:52.789974 containerd[1762]: time="2025-05-13T23:42:52.789957940Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 23:42:52.790306 containerd[1762]: time="2025-05-13T23:42:52.790022180Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 23:42:52.790306 containerd[1762]: time="2025-05-13T23:42:52.790036740Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 13 23:42:52.790306 containerd[1762]: time="2025-05-13T23:42:52.790143300Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 13 23:42:52.790640 containerd[1762]: time="2025-05-13T23:42:52.790615780Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 23:42:52.790776 containerd[1762]: time="2025-05-13T23:42:52.790757020Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 23:42:52.790856 containerd[1762]: time="2025-05-13T23:42:52.790842540Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 13 23:42:52.790957 containerd[1762]: time="2025-05-13T23:42:52.790942460Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 13 23:42:52.791381 containerd[1762]: 
time="2025-05-13T23:42:52.791339820Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 13 23:42:52.791471 containerd[1762]: time="2025-05-13T23:42:52.791448540Z" level=info msg="metadata content store policy set" policy=shared May 13 23:42:52.821497 containerd[1762]: time="2025-05-13T23:42:52.821392580Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 13 23:42:52.821497 containerd[1762]: time="2025-05-13T23:42:52.821483220Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 13 23:42:52.821497 containerd[1762]: time="2025-05-13T23:42:52.821502300Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 13 23:42:52.821667 containerd[1762]: time="2025-05-13T23:42:52.821516820Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 13 23:42:52.821667 containerd[1762]: time="2025-05-13T23:42:52.821531740Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 13 23:42:52.821667 containerd[1762]: time="2025-05-13T23:42:52.821544740Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 13 23:42:52.821667 containerd[1762]: time="2025-05-13T23:42:52.821557540Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 13 23:42:52.821667 containerd[1762]: time="2025-05-13T23:42:52.821571100Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 13 23:42:52.821667 containerd[1762]: time="2025-05-13T23:42:52.821582660Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 13 23:42:52.821667 containerd[1762]: time="2025-05-13T23:42:52.821596580Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 13 23:42:52.821667 containerd[1762]: time="2025-05-13T23:42:52.821606100Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 13 23:42:52.821667 containerd[1762]: time="2025-05-13T23:42:52.821618700Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 13 23:42:52.821818 containerd[1762]: time="2025-05-13T23:42:52.821800980Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 13 23:42:52.821836 containerd[1762]: time="2025-05-13T23:42:52.821824420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 13 23:42:52.821854 containerd[1762]: time="2025-05-13T23:42:52.821837140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 13 23:42:52.821854 containerd[1762]: time="2025-05-13T23:42:52.821850380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 13 23:42:52.821886 containerd[1762]: time="2025-05-13T23:42:52.821862220Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 13 23:42:52.821886 containerd[1762]: time="2025-05-13T23:42:52.821873460Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 13 23:42:52.821924 containerd[1762]: time="2025-05-13T23:42:52.821884620Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 13 23:42:52.821924 containerd[1762]: time="2025-05-13T23:42:52.821895380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 13 23:42:52.821924 containerd[1762]: time="2025-05-13T23:42:52.821910380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces 
type=io.containerd.grpc.v1 May 13 23:42:52.821978 containerd[1762]: time="2025-05-13T23:42:52.821924420Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 13 23:42:52.821978 containerd[1762]: time="2025-05-13T23:42:52.821935580Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 13 23:42:52.822077 containerd[1762]: time="2025-05-13T23:42:52.822019260Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 13 23:42:52.822077 containerd[1762]: time="2025-05-13T23:42:52.822044860Z" level=info msg="Start snapshots syncer" May 13 23:42:52.822077 containerd[1762]: time="2025-05-13T23:42:52.822087740Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 13 23:42:52.822381 containerd[1762]: time="2025-05-13T23:42:52.822344580Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMSco
reAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 13 23:42:52.822506 containerd[1762]: time="2025-05-13T23:42:52.822402860Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 13 23:42:52.822506 containerd[1762]: time="2025-05-13T23:42:52.822479460Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 13 23:42:52.823285 containerd[1762]: time="2025-05-13T23:42:52.822762700Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 13 23:42:52.823285 containerd[1762]: time="2025-05-13T23:42:52.822817580Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 13 23:42:52.823285 containerd[1762]: time="2025-05-13T23:42:52.822831180Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 13 23:42:52.823285 containerd[1762]: time="2025-05-13T23:42:52.822842660Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 13 23:42:52.823285 containerd[1762]: time="2025-05-13T23:42:52.822856060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 13 
23:42:52.823285 containerd[1762]: time="2025-05-13T23:42:52.822867380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 13 23:42:52.823285 containerd[1762]: time="2025-05-13T23:42:52.822879780Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 13 23:42:52.823285 containerd[1762]: time="2025-05-13T23:42:52.822910380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 13 23:42:52.823285 containerd[1762]: time="2025-05-13T23:42:52.822927060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 13 23:42:52.823285 containerd[1762]: time="2025-05-13T23:42:52.822937180Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 13 23:42:52.823285 containerd[1762]: time="2025-05-13T23:42:52.822962020Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 23:42:52.823285 containerd[1762]: time="2025-05-13T23:42:52.822975700Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 23:42:52.823285 containerd[1762]: time="2025-05-13T23:42:52.822984980Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 23:42:52.823591 containerd[1762]: time="2025-05-13T23:42:52.822994540Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 23:42:52.823591 containerd[1762]: time="2025-05-13T23:42:52.823002580Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 13 23:42:52.823591 containerd[1762]: time="2025-05-13T23:42:52.823013180Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 13 23:42:52.823591 containerd[1762]: time="2025-05-13T23:42:52.823025660Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 13 23:42:52.823591 containerd[1762]: time="2025-05-13T23:42:52.823045340Z" level=info msg="runtime interface created" May 13 23:42:52.823591 containerd[1762]: time="2025-05-13T23:42:52.823050620Z" level=info msg="created NRI interface" May 13 23:42:52.823591 containerd[1762]: time="2025-05-13T23:42:52.823060380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 13 23:42:52.823591 containerd[1762]: time="2025-05-13T23:42:52.823075100Z" level=info msg="Connect containerd service" May 13 23:42:52.823591 containerd[1762]: time="2025-05-13T23:42:52.823105540Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 23:42:52.824200 containerd[1762]: time="2025-05-13T23:42:52.823820300Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 23:42:52.879707 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 13 23:42:52.890729 (kubelet)[1884]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:42:53.300341 kubelet[1884]: E0513 23:42:53.300255 1884 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:42:53.303209 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:42:53.303414 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:42:53.305385 systemd[1]: kubelet.service: Consumed 696ms CPU time, 232.7M memory peak. May 13 23:42:53.627907 containerd[1762]: time="2025-05-13T23:42:53.627279260Z" level=info msg="Start subscribing containerd event" May 13 23:42:53.627907 containerd[1762]: time="2025-05-13T23:42:53.627375700Z" level=info msg="Start recovering state" May 13 23:42:53.627907 containerd[1762]: time="2025-05-13T23:42:53.627471500Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 23:42:53.627907 containerd[1762]: time="2025-05-13T23:42:53.627477140Z" level=info msg="Start event monitor" May 13 23:42:53.627907 containerd[1762]: time="2025-05-13T23:42:53.627517140Z" level=info msg="Start cni network conf syncer for default" May 13 23:42:53.627907 containerd[1762]: time="2025-05-13T23:42:53.627524020Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 13 23:42:53.627907 containerd[1762]: time="2025-05-13T23:42:53.627526060Z" level=info msg="Start streaming server" May 13 23:42:53.627907 containerd[1762]: time="2025-05-13T23:42:53.627549540Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 13 23:42:53.627907 containerd[1762]: time="2025-05-13T23:42:53.627559100Z" level=info msg="runtime interface starting up..." May 13 23:42:53.627907 containerd[1762]: time="2025-05-13T23:42:53.627565740Z" level=info msg="starting plugins..." May 13 23:42:53.627907 containerd[1762]: time="2025-05-13T23:42:53.627585500Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 13 23:42:53.627907 containerd[1762]: time="2025-05-13T23:42:53.627720620Z" level=info msg="containerd successfully booted in 0.855054s" May 13 23:42:53.628418 systemd[1]: Started containerd.service - containerd container runtime. May 13 23:42:53.638077 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 23:42:53.651360 systemd[1]: Startup finished in 743ms (kernel) + 13.217s (initrd) + 23.602s (userspace) = 37.564s. May 13 23:42:53.911571 login[1872]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 13 23:42:53.912836 login[1873]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 13 23:42:53.920460 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 23:42:53.921597 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 23:42:53.930150 systemd-logind[1722]: New session 2 of user core. May 13 23:42:53.941509 systemd-logind[1722]: New session 1 of user core. May 13 23:42:53.951393 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 23:42:53.954922 systemd[1]: Starting user@500.service - User Manager for UID 500... 
May 13 23:42:53.985213 (systemd)[1906]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 23:42:53.988108 systemd-logind[1722]: New session c1 of user core. May 13 23:42:54.148078 systemd[1906]: Queued start job for default target default.target. May 13 23:42:54.153532 systemd[1906]: Created slice app.slice - User Application Slice. May 13 23:42:54.153569 systemd[1906]: Reached target paths.target - Paths. May 13 23:42:54.153614 systemd[1906]: Reached target timers.target - Timers. May 13 23:42:54.154938 systemd[1906]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 23:42:54.167010 systemd[1906]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 23:42:54.167127 systemd[1906]: Reached target sockets.target - Sockets. May 13 23:42:54.167172 systemd[1906]: Reached target basic.target - Basic System. May 13 23:42:54.167204 systemd[1906]: Reached target default.target - Main User Target. May 13 23:42:54.167230 systemd[1906]: Startup finished in 171ms. May 13 23:42:54.167394 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 23:42:54.178467 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 23:42:54.179480 systemd[1]: Started session-2.scope - Session 2 of User core. 
May 13 23:42:54.330616 waagent[1869]: 2025-05-13T23:42:54.330517Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 May 13 23:42:54.336523 waagent[1869]: 2025-05-13T23:42:54.336454Z INFO Daemon Daemon OS: flatcar 4284.0.0 May 13 23:42:54.341604 waagent[1869]: 2025-05-13T23:42:54.341549Z INFO Daemon Daemon Python: 3.11.11 May 13 23:42:54.346173 waagent[1869]: 2025-05-13T23:42:54.346114Z INFO Daemon Daemon Run daemon May 13 23:42:54.350223 waagent[1869]: 2025-05-13T23:42:54.350173Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4284.0.0' May 13 23:42:54.360004 waagent[1869]: 2025-05-13T23:42:54.359938Z INFO Daemon Daemon Using waagent for provisioning May 13 23:42:54.365535 waagent[1869]: 2025-05-13T23:42:54.365480Z INFO Daemon Daemon Activate resource disk May 13 23:42:54.370180 waagent[1869]: 2025-05-13T23:42:54.370122Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb May 13 23:42:54.381457 waagent[1869]: 2025-05-13T23:42:54.381394Z INFO Daemon Daemon Found device: None May 13 23:42:54.386104 waagent[1869]: 2025-05-13T23:42:54.386052Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology May 13 23:42:54.394591 waagent[1869]: 2025-05-13T23:42:54.394541Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 May 13 23:42:54.407116 waagent[1869]: 2025-05-13T23:42:54.407060Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 13 23:42:54.413354 waagent[1869]: 2025-05-13T23:42:54.413302Z INFO Daemon Daemon Running default provisioning handler May 13 23:42:54.425810 waagent[1869]: 2025-05-13T23:42:54.425664Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
May 13 23:42:54.440355 waagent[1869]: 2025-05-13T23:42:54.440291Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 13 23:42:54.450811 waagent[1869]: 2025-05-13T23:42:54.450758Z INFO Daemon Daemon cloud-init is enabled: False May 13 23:42:54.456473 waagent[1869]: 2025-05-13T23:42:54.456410Z INFO Daemon Daemon Copying ovf-env.xml May 13 23:42:54.548574 waagent[1869]: 2025-05-13T23:42:54.548486Z INFO Daemon Daemon Successfully mounted dvd May 13 23:42:54.576979 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. May 13 23:42:54.579349 waagent[1869]: 2025-05-13T23:42:54.579255Z INFO Daemon Daemon Detect protocol endpoint May 13 23:42:54.584859 waagent[1869]: 2025-05-13T23:42:54.584798Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 13 23:42:54.590976 waagent[1869]: 2025-05-13T23:42:54.590927Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler May 13 23:42:54.597982 waagent[1869]: 2025-05-13T23:42:54.597927Z INFO Daemon Daemon Test for route to 168.63.129.16 May 13 23:42:54.603610 waagent[1869]: 2025-05-13T23:42:54.603561Z INFO Daemon Daemon Route to 168.63.129.16 exists May 13 23:42:54.609336 waagent[1869]: 2025-05-13T23:42:54.609284Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 May 13 23:42:54.659009 waagent[1869]: 2025-05-13T23:42:54.658964Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 May 13 23:42:54.667156 waagent[1869]: 2025-05-13T23:42:54.667120Z INFO Daemon Daemon Wire protocol version:2012-11-30 May 13 23:42:54.673248 waagent[1869]: 2025-05-13T23:42:54.673191Z INFO Daemon Daemon Server preferred version:2015-04-05 May 13 23:42:54.886399 waagent[1869]: 2025-05-13T23:42:54.884234Z INFO Daemon Daemon Initializing goal state during protocol detection May 13 23:42:54.895249 waagent[1869]: 2025-05-13T23:42:54.895145Z INFO Daemon Daemon Forcing an update of the goal state. 
May 13 23:42:54.908445 waagent[1869]: 2025-05-13T23:42:54.908367Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] May 13 23:42:54.953311 waagent[1869]: 2025-05-13T23:42:54.953220Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 May 13 23:42:54.959905 waagent[1869]: 2025-05-13T23:42:54.959834Z INFO Daemon May 13 23:42:54.963006 waagent[1869]: 2025-05-13T23:42:54.962926Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: b7f53dd7-14a4-437c-b4f9-1ef6a1a4e1d8 eTag: 3135579633423811496 source: Fabric] May 13 23:42:54.975349 waagent[1869]: 2025-05-13T23:42:54.975296Z INFO Daemon The vmSettings originated via Fabric; will ignore them. May 13 23:42:54.982139 waagent[1869]: 2025-05-13T23:42:54.982060Z INFO Daemon May 13 23:42:54.985380 waagent[1869]: 2025-05-13T23:42:54.985313Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] May 13 23:42:54.997005 waagent[1869]: 2025-05-13T23:42:54.996917Z INFO Daemon Daemon Downloading artifacts profile blob May 13 23:42:55.171380 waagent[1869]: 2025-05-13T23:42:55.166696Z INFO Daemon Downloaded certificate {'thumbprint': 'FA3E64C6543546AA28A907D95152631C0CE9FDC5', 'hasPrivateKey': False} May 13 23:42:55.178668 waagent[1869]: 2025-05-13T23:42:55.178610Z INFO Daemon Downloaded certificate {'thumbprint': '0D41467C239788173A09B48CD7CD88EECF5826A5', 'hasPrivateKey': True} May 13 23:42:55.192053 waagent[1869]: 2025-05-13T23:42:55.191988Z INFO Daemon Fetch goal state completed May 13 23:42:55.244066 waagent[1869]: 2025-05-13T23:42:55.243985Z INFO Daemon Daemon Starting provisioning May 13 23:42:55.250079 waagent[1869]: 2025-05-13T23:42:55.249953Z INFO Daemon Daemon Handle ovf-env.xml. 
May 13 23:42:55.255603 waagent[1869]: 2025-05-13T23:42:55.255496Z INFO Daemon Daemon Set hostname [ci-4284.0.0-n-791441f790] May 13 23:42:55.280316 waagent[1869]: 2025-05-13T23:42:55.277876Z INFO Daemon Daemon Publish hostname [ci-4284.0.0-n-791441f790] May 13 23:42:55.285681 waagent[1869]: 2025-05-13T23:42:55.285578Z INFO Daemon Daemon Examine /proc/net/route for primary interface May 13 23:42:55.294527 waagent[1869]: 2025-05-13T23:42:55.294446Z INFO Daemon Daemon Primary interface is [eth0] May 13 23:42:55.310933 systemd-networkd[1342]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:42:55.310942 systemd-networkd[1342]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 23:42:55.310969 systemd-networkd[1342]: eth0: DHCP lease lost May 13 23:42:55.313228 waagent[1869]: 2025-05-13T23:42:55.313149Z INFO Daemon Daemon Create user account if not exists May 13 23:42:55.320776 waagent[1869]: 2025-05-13T23:42:55.320693Z INFO Daemon Daemon User core already exists, skip useradd May 13 23:42:55.327070 waagent[1869]: 2025-05-13T23:42:55.326992Z INFO Daemon Daemon Configure sudoer May 13 23:42:55.333756 waagent[1869]: 2025-05-13T23:42:55.333503Z INFO Daemon Daemon Configure sshd May 13 23:42:55.339762 waagent[1869]: 2025-05-13T23:42:55.339680Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. May 13 23:42:55.355980 waagent[1869]: 2025-05-13T23:42:55.355377Z INFO Daemon Daemon Deploy ssh public key. 
May 13 23:42:55.367338 systemd-networkd[1342]: eth0: DHCPv4 address 10.200.20.10/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 13 23:42:56.458743 waagent[1869]: 2025-05-13T23:42:56.453015Z INFO Daemon Daemon Provisioning complete May 13 23:42:56.477165 waagent[1869]: 2025-05-13T23:42:56.477099Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping May 13 23:42:56.484395 waagent[1869]: 2025-05-13T23:42:56.484297Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. May 13 23:42:56.494473 waagent[1869]: 2025-05-13T23:42:56.494390Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent May 13 23:42:56.639253 waagent[1962]: 2025-05-13T23:42:56.638674Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) May 13 23:42:56.639253 waagent[1962]: 2025-05-13T23:42:56.638827Z INFO ExtHandler ExtHandler OS: flatcar 4284.0.0 May 13 23:42:56.639253 waagent[1962]: 2025-05-13T23:42:56.638872Z INFO ExtHandler ExtHandler Python: 3.11.11 May 13 23:42:56.639253 waagent[1962]: 2025-05-13T23:42:56.638920Z INFO ExtHandler ExtHandler CPU Arch: aarch64 May 13 23:42:56.705560 waagent[1962]: 2025-05-13T23:42:56.705470Z INFO ExtHandler ExtHandler Distro: flatcar-4284.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; May 13 23:42:56.705760 waagent[1962]: 2025-05-13T23:42:56.705723Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 13 23:42:56.705821 waagent[1962]: 2025-05-13T23:42:56.705795Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 May 13 23:42:56.713395 waagent[1962]: 2025-05-13T23:42:56.713226Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] May 13 23:42:56.719934 waagent[1962]: 2025-05-13T23:42:56.719870Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 May 13 
23:42:56.720520 waagent[1962]: 2025-05-13T23:42:56.720483Z INFO ExtHandler May 13 23:42:56.720597 waagent[1962]: 2025-05-13T23:42:56.720570Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: eb737130-cf8b-4ccf-883e-1bd8eef52302 eTag: 3135579633423811496 source: Fabric] May 13 23:42:56.720901 waagent[1962]: 2025-05-13T23:42:56.720867Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. May 13 23:42:56.721453 waagent[1962]: 2025-05-13T23:42:56.721416Z INFO ExtHandler May 13 23:42:56.721508 waagent[1962]: 2025-05-13T23:42:56.721484Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] May 13 23:42:56.728342 waagent[1962]: 2025-05-13T23:42:56.728297Z INFO ExtHandler ExtHandler Downloading artifacts profile blob May 13 23:42:56.824482 waagent[1962]: 2025-05-13T23:42:56.824378Z INFO ExtHandler Downloaded certificate {'thumbprint': 'FA3E64C6543546AA28A907D95152631C0CE9FDC5', 'hasPrivateKey': False} May 13 23:42:56.825000 waagent[1962]: 2025-05-13T23:42:56.824949Z INFO ExtHandler Downloaded certificate {'thumbprint': '0D41467C239788173A09B48CD7CD88EECF5826A5', 'hasPrivateKey': True} May 13 23:42:56.825530 waagent[1962]: 2025-05-13T23:42:56.825481Z INFO ExtHandler Fetch goal state completed May 13 23:42:56.843960 waagent[1962]: 2025-05-13T23:42:56.843870Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) May 13 23:42:56.849399 waagent[1962]: 2025-05-13T23:42:56.849321Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1962 May 13 23:42:56.849536 waagent[1962]: 2025-05-13T23:42:56.849517Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** May 13 23:42:56.849915 waagent[1962]: 2025-05-13T23:42:56.849872Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** May 13 23:42:56.851465 waagent[1962]: 
2025-05-13T23:42:56.851420Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4284.0.0', '', 'Flatcar Container Linux by Kinvolk'] May 13 23:42:56.851890 waagent[1962]: 2025-05-13T23:42:56.851848Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4284.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported May 13 23:42:56.852037 waagent[1962]: 2025-05-13T23:42:56.852008Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False May 13 23:42:56.852659 waagent[1962]: 2025-05-13T23:42:56.852619Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules May 13 23:42:56.921217 waagent[1962]: 2025-05-13T23:42:56.921168Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 13 23:42:56.921462 waagent[1962]: 2025-05-13T23:42:56.921422Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup May 13 23:42:56.929316 waagent[1962]: 2025-05-13T23:42:56.929005Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now May 13 23:42:56.936750 systemd[1]: Reload requested from client PID 1979 ('systemctl') (unit waagent.service)... May 13 23:42:56.937046 systemd[1]: Reloading... May 13 23:42:57.062306 zram_generator::config[2028]: No configuration found. May 13 23:42:57.173348 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:42:57.278396 systemd[1]: Reloading finished in 340 ms. 
May 13 23:42:57.297750 waagent[1962]: 2025-05-13T23:42:57.297658Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service May 13 23:42:57.299320 waagent[1962]: 2025-05-13T23:42:57.298554Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully May 13 23:42:58.141309 waagent[1962]: 2025-05-13T23:42:58.140983Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. May 13 23:42:58.141842 waagent[1962]: 2025-05-13T23:42:58.141542Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] May 13 23:42:58.142786 waagent[1962]: 2025-05-13T23:42:58.142689Z INFO ExtHandler ExtHandler Starting env monitor service. May 13 23:42:58.143403 waagent[1962]: 2025-05-13T23:42:58.143239Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. May 13 23:42:58.143774 waagent[1962]: 2025-05-13T23:42:58.143685Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 13 23:42:58.144038 waagent[1962]: 2025-05-13T23:42:58.143931Z INFO ExtHandler ExtHandler Start Extension Telemetry service. May 13 23:42:58.144737 waagent[1962]: 2025-05-13T23:42:58.144600Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 13 23:42:58.144840 waagent[1962]: 2025-05-13T23:42:58.144741Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
May 13 23:42:58.146315 waagent[1962]: 2025-05-13T23:42:58.146068Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 13 23:42:58.146315 waagent[1962]: 2025-05-13T23:42:58.146218Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
May 13 23:42:58.146618 waagent[1962]: 2025-05-13T23:42:58.146546Z INFO EnvHandler ExtHandler Configure routes
May 13 23:42:58.146741 waagent[1962]: 2025-05-13T23:42:58.146705Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 13 23:42:58.146858 waagent[1962]: 2025-05-13T23:42:58.146828Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
May 13 23:42:58.147252 waagent[1962]: 2025-05-13T23:42:58.147094Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
May 13 23:42:58.147582 waagent[1962]: 2025-05-13T23:42:58.147530Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
May 13 23:42:58.147957 waagent[1962]: 2025-05-13T23:42:58.147880Z INFO EnvHandler ExtHandler Gateway:None
May 13 23:42:58.148015 waagent[1962]: 2025-05-13T23:42:58.147986Z INFO EnvHandler ExtHandler Routes:None
May 13 23:42:58.153019 waagent[1962]: 2025-05-13T23:42:58.152933Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
May 13 23:42:58.153019 waagent[1962]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
May 13 23:42:58.153019 waagent[1962]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
May 13 23:42:58.153019 waagent[1962]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
May 13 23:42:58.153019 waagent[1962]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
May 13 23:42:58.153019 waagent[1962]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
May 13 23:42:58.153019 waagent[1962]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
May 13 23:42:58.159326 waagent[1962]: 2025-05-13T23:42:58.158924Z INFO ExtHandler ExtHandler
May 13 23:42:58.159326 waagent[1962]: 2025-05-13T23:42:58.159063Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: bc0ba454-f9fe-437e-95c4-e422cb5683ea correlation d79dbac2-895d-4a18-84ad-685bf0faadcc created: 2025-05-13T23:41:32.425179Z]
May 13 23:42:58.159899 waagent[1962]: 2025-05-13T23:42:58.159853Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
May 13 23:42:58.161390 waagent[1962]: 2025-05-13T23:42:58.161323Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms]
May 13 23:42:58.209280 waagent[1962]: 2025-05-13T23:42:58.209186Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 0357A722-01C0-4B6C-8962-C2827921C070;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
May 13 23:42:58.221868 waagent[1962]: 2025-05-13T23:42:58.221765Z INFO MonitorHandler ExtHandler Network interfaces:
May 13 23:42:58.221868 waagent[1962]: Executing ['ip', '-a', '-o', 'link']:
May 13 23:42:58.221868 waagent[1962]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
May 13 23:42:58.221868 waagent[1962]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f7:74:48 brd ff:ff:ff:ff:ff:ff
May 13 23:42:58.221868 waagent[1962]: 3: enP10830s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f7:74:48 brd ff:ff:ff:ff:ff:ff\ altname enP10830p0s2
May 13 23:42:58.221868 waagent[1962]: Executing ['ip', '-4', '-a', '-o', 'address']:
May 13 23:42:58.221868 waagent[1962]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
May 13 23:42:58.221868 waagent[1962]: 2: eth0 inet 10.200.20.10/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
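The /proc/net/route dump above encodes each address field as little-endian hex. Decoding the eth0 rows (a stdlib-only sketch; the hex strings are copied from the table in the log) confirms the default route's gateway is 10.200.20.1, the same gateway the DHCP lease reported, and that the 10813FA8 host route points at the Azure wireserver, 168.63.129.16:

```python
import socket
import struct

def decode_route_hex(hex_addr: str) -> str:
    """Decode a little-endian hex field from /proc/net/route into dotted-quad form."""
    return socket.inet_ntoa(struct.pack("<L", int(hex_addr, 16)))

# Values copied from the eth0 rows of the routing table above.
print(decode_route_hex("0114C80A"))  # 10.200.20.1 (default gateway)
print(decode_route_hex("0014C80A"))  # 10.200.20.0 (on-link subnet)
print(decode_route_hex("00FFFFFF"))  # 255.255.255.0 (its netmask)
print(decode_route_hex("10813FA8"))  # 168.63.129.16 (Azure wireserver host route)
```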
May 13 23:42:58.221868 waagent[1962]: Executing ['ip', '-6', '-a', '-o', 'address']:
May 13 23:42:58.221868 waagent[1962]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
May 13 23:42:58.221868 waagent[1962]: 2: eth0 inet6 fe80::20d:3aff:fef7:7448/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
May 13 23:42:58.221868 waagent[1962]: 3: enP10830s1 inet6 fe80::20d:3aff:fef7:7448/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
May 13 23:42:58.268470 waagent[1962]: 2025-05-13T23:42:58.268382Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
May 13 23:42:58.268470 waagent[1962]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
May 13 23:42:58.268470 waagent[1962]: pkts bytes target prot opt in out source destination
May 13 23:42:58.268470 waagent[1962]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
May 13 23:42:58.268470 waagent[1962]: pkts bytes target prot opt in out source destination
May 13 23:42:58.268470 waagent[1962]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
May 13 23:42:58.268470 waagent[1962]: pkts bytes target prot opt in out source destination
May 13 23:42:58.268470 waagent[1962]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
May 13 23:42:58.268470 waagent[1962]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
May 13 23:42:58.268470 waagent[1962]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
May 13 23:42:58.273159 waagent[1962]: 2025-05-13T23:42:58.273093Z INFO EnvHandler ExtHandler Current Firewall rules:
May 13 23:42:58.273159 waagent[1962]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
May 13 23:42:58.273159 waagent[1962]: pkts bytes target prot opt in out source destination
May 13 23:42:58.273159 waagent[1962]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
May 13 23:42:58.273159 waagent[1962]: pkts bytes target prot opt in out source destination
May 13 23:42:58.273159 waagent[1962]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
May 13 23:42:58.273159 waagent[1962]: pkts bytes target prot opt in out source destination
May 13 23:42:58.273159 waagent[1962]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
May 13 23:42:58.273159 waagent[1962]: 3 534 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
May 13 23:42:58.273159 waagent[1962]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
May 13 23:42:58.273523 waagent[1962]: 2025-05-13T23:42:58.273438Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
May 13 23:43:03.554147 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 13 23:43:03.555833 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:43:03.686819 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:43:03.697671 (kubelet)[2116]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 23:43:03.744326 kubelet[2116]: E0513 23:43:03.744248 2116 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 23:43:03.747790 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 23:43:03.748105 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 23:43:03.748762 systemd[1]: kubelet.service: Consumed 153ms CPU time, 95M memory peak.
May 13 23:43:13.841920 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 13 23:43:13.843509 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:43:13.962130 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:43:13.972663 (kubelet)[2130]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:43:14.012651 kubelet[2130]: E0513 23:43:14.012590 2130 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:43:14.015466 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:43:14.015754 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:43:14.016308 systemd[1]: kubelet.service: Consumed 139ms CPU time, 93.6M memory peak. May 13 23:43:15.361718 chronyd[1700]: Selected source PHC0 May 13 23:43:19.282900 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 23:43:19.285337 systemd[1]: Started sshd@0-10.200.20.10:22-10.200.16.10:45306.service - OpenSSH per-connection server daemon (10.200.16.10:45306). May 13 23:43:19.814568 sshd[2139]: Accepted publickey for core from 10.200.16.10 port 45306 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:43:19.815922 sshd-session[2139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:43:19.820139 systemd-logind[1722]: New session 3 of user core. May 13 23:43:19.824465 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 23:43:20.208206 systemd[1]: Started sshd@1-10.200.20.10:22-10.200.16.10:45308.service - OpenSSH per-connection server daemon (10.200.16.10:45308). 
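Every kubelet start above dies the same way: /var/lib/kubelet/config.yaml does not exist, which is expected on a node that has not yet been through kubeadm init or kubeadm join (those commands write that file). For orientation only, a minimal KubeletConfiguration of the sort that normally lands at that path; every value below is an illustrative placeholder, not recovered from this host:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
authorization:
  mode: Webhook
clusterDNS:
  - 10.96.0.10        # placeholder cluster DNS service address
clusterDomain: cluster.local
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
```

Once kubeadm writes the real file, the same restart job that keeps failing here would bring the unit up normally.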
May 13 23:43:20.640664 sshd[2144]: Accepted publickey for core from 10.200.16.10 port 45308 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:43:20.642080 sshd-session[2144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:43:20.646365 systemd-logind[1722]: New session 4 of user core. May 13 23:43:20.652509 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 23:43:20.961701 sshd[2146]: Connection closed by 10.200.16.10 port 45308 May 13 23:43:20.962344 sshd-session[2144]: pam_unix(sshd:session): session closed for user core May 13 23:43:20.966614 systemd[1]: sshd@1-10.200.20.10:22-10.200.16.10:45308.service: Deactivated successfully. May 13 23:43:20.968642 systemd[1]: session-4.scope: Deactivated successfully. May 13 23:43:20.969477 systemd-logind[1722]: Session 4 logged out. Waiting for processes to exit. May 13 23:43:20.970875 systemd-logind[1722]: Removed session 4. May 13 23:43:21.044183 systemd[1]: Started sshd@2-10.200.20.10:22-10.200.16.10:45322.service - OpenSSH per-connection server daemon (10.200.16.10:45322). May 13 23:43:21.496915 sshd[2152]: Accepted publickey for core from 10.200.16.10 port 45322 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:43:21.498296 sshd-session[2152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:43:21.502908 systemd-logind[1722]: New session 5 of user core. May 13 23:43:21.511459 systemd[1]: Started session-5.scope - Session 5 of User core. May 13 23:43:21.831757 sshd[2154]: Connection closed by 10.200.16.10 port 45322 May 13 23:43:21.832679 sshd-session[2152]: pam_unix(sshd:session): session closed for user core May 13 23:43:21.837315 systemd[1]: sshd@2-10.200.20.10:22-10.200.16.10:45322.service: Deactivated successfully. May 13 23:43:21.839689 systemd[1]: session-5.scope: Deactivated successfully. May 13 23:43:21.842016 systemd-logind[1722]: Session 5 logged out. 
Waiting for processes to exit. May 13 23:43:21.842980 systemd-logind[1722]: Removed session 5. May 13 23:43:21.914004 systemd[1]: Started sshd@3-10.200.20.10:22-10.200.16.10:45332.service - OpenSSH per-connection server daemon (10.200.16.10:45332). May 13 23:43:22.344122 sshd[2160]: Accepted publickey for core from 10.200.16.10 port 45332 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:43:22.345524 sshd-session[2160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:43:22.350642 systemd-logind[1722]: New session 6 of user core. May 13 23:43:22.357464 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 23:43:22.665535 sshd[2162]: Connection closed by 10.200.16.10 port 45332 May 13 23:43:22.666111 sshd-session[2160]: pam_unix(sshd:session): session closed for user core May 13 23:43:22.669582 systemd-logind[1722]: Session 6 logged out. Waiting for processes to exit. May 13 23:43:22.671571 systemd[1]: sshd@3-10.200.20.10:22-10.200.16.10:45332.service: Deactivated successfully. May 13 23:43:22.674884 systemd[1]: session-6.scope: Deactivated successfully. May 13 23:43:22.676132 systemd-logind[1722]: Removed session 6. May 13 23:43:22.741929 systemd[1]: Started sshd@4-10.200.20.10:22-10.200.16.10:45346.service - OpenSSH per-connection server daemon (10.200.16.10:45346). May 13 23:43:23.171226 sshd[2168]: Accepted publickey for core from 10.200.16.10 port 45346 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:43:23.172706 sshd-session[2168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:43:23.178717 systemd-logind[1722]: New session 7 of user core. May 13 23:43:23.185494 systemd[1]: Started session-7.scope - Session 7 of User core. 
May 13 23:43:23.526671 sudo[2171]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 13 23:43:23.526972 sudo[2171]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 23:43:23.556701 sudo[2171]: pam_unix(sudo:session): session closed for user root
May 13 23:43:23.638631 sshd[2170]: Connection closed by 10.200.16.10 port 45346
May 13 23:43:23.639541 sshd-session[2168]: pam_unix(sshd:session): session closed for user core
May 13 23:43:23.644199 systemd-logind[1722]: Session 7 logged out. Waiting for processes to exit.
May 13 23:43:23.644958 systemd[1]: sshd@4-10.200.20.10:22-10.200.16.10:45346.service: Deactivated successfully.
May 13 23:43:23.648219 systemd[1]: session-7.scope: Deactivated successfully.
May 13 23:43:23.649602 systemd-logind[1722]: Removed session 7.
May 13 23:43:23.717592 systemd[1]: Started sshd@5-10.200.20.10:22-10.200.16.10:45356.service - OpenSSH per-connection server daemon (10.200.16.10:45356).
May 13 23:43:24.070559 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 13 23:43:24.072916 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:43:24.146931 sshd[2177]: Accepted publickey for core from 10.200.16.10 port 45356 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:43:24.149045 sshd-session[2177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:43:24.158098 systemd-logind[1722]: New session 8 of user core.
May 13 23:43:24.164506 systemd[1]: Started session-8.scope - Session 8 of User core.
May 13 23:43:24.217697 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:43:24.231637 (kubelet)[2188]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 23:43:24.274578 kubelet[2188]: E0513 23:43:24.274515 2188 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 23:43:24.277642 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 23:43:24.277942 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 23:43:24.278695 systemd[1]: kubelet.service: Consumed 151ms CPU time, 94.5M memory peak.
May 13 23:43:24.386385 sudo[2196]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 13 23:43:24.386698 sudo[2196]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 23:43:24.392339 sudo[2196]: pam_unix(sudo:session): session closed for user root
May 13 23:43:24.398373 sudo[2195]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 13 23:43:24.398671 sudo[2195]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 23:43:24.409588 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 23:43:24.449119 augenrules[2218]: No rules
May 13 23:43:24.450631 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 23:43:24.450838 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 23:43:24.452234 sudo[2195]: pam_unix(sudo:session): session closed for user root
May 13 23:43:24.517552 sshd[2182]: Connection closed by 10.200.16.10 port 45356
May 13 23:43:24.518123 sshd-session[2177]: pam_unix(sshd:session): session closed for user core
May 13 23:43:24.521947 systemd[1]: sshd@5-10.200.20.10:22-10.200.16.10:45356.service: Deactivated successfully.
May 13 23:43:24.524460 systemd[1]: session-8.scope: Deactivated successfully.
May 13 23:43:24.526199 systemd-logind[1722]: Session 8 logged out. Waiting for processes to exit.
May 13 23:43:24.527557 systemd-logind[1722]: Removed session 8.
May 13 23:43:24.601983 systemd[1]: Started sshd@6-10.200.20.10:22-10.200.16.10:45364.service - OpenSSH per-connection server daemon (10.200.16.10:45364).
May 13 23:43:25.059350 sshd[2227]: Accepted publickey for core from 10.200.16.10 port 45364 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:43:25.060790 sshd-session[2227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:43:25.067369 systemd-logind[1722]: New session 9 of user core.
May 13 23:43:25.075532 systemd[1]: Started session-9.scope - Session 9 of User core.
May 13 23:43:25.312512 sudo[2230]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 13 23:43:25.312800 sudo[2230]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 23:43:26.808925 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 13 23:43:26.818652 (dockerd)[2248]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 13 23:43:27.450486 dockerd[2248]: time="2025-05-13T23:43:27.450425111Z" level=info msg="Starting up"
May 13 23:43:27.453762 dockerd[2248]: time="2025-05-13T23:43:27.453676389Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 13 23:43:27.591396 dockerd[2248]: time="2025-05-13T23:43:27.591332320Z" level=info msg="Loading containers: start."
May 13 23:43:27.814326 kernel: Initializing XFRM netlink socket
May 13 23:43:27.913661 systemd-networkd[1342]: docker0: Link UP
May 13 23:43:27.978947 dockerd[2248]: time="2025-05-13T23:43:27.978873326Z" level=info msg="Loading containers: done."
May 13 23:43:28.038534 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2364660388-merged.mount: Deactivated successfully.
May 13 23:43:28.048650 dockerd[2248]: time="2025-05-13T23:43:28.048591891Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 13 23:43:28.048786 dockerd[2248]: time="2025-05-13T23:43:28.048714371Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1
May 13 23:43:28.048890 dockerd[2248]: time="2025-05-13T23:43:28.048863091Z" level=info msg="Daemon has completed initialization"
May 13 23:43:28.215979 dockerd[2248]: time="2025-05-13T23:43:28.215152407Z" level=info msg="API listen on /run/docker.sock"
May 13 23:43:28.215681 systemd[1]: Started docker.service - Docker Application Container Engine.
May 13 23:43:28.679245 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
May 13 23:43:29.116048 containerd[1762]: time="2025-05-13T23:43:29.115993476Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\""
May 13 23:43:29.968506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3792302968.mount: Deactivated successfully.
May 13 23:43:34.341790 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
May 13 23:43:34.343551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:43:34.503109 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:43:34.515583 (kubelet)[2496]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 23:43:34.556427 kubelet[2496]: E0513 23:43:34.556358 2496 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 23:43:34.559512 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 23:43:34.559846 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 23:43:34.560294 systemd[1]: kubelet.service: Consumed 145ms CPU time, 95M memory peak.
May 13 23:43:37.096944 update_engine[1726]: I20250513 23:43:37.096819 1726 update_attempter.cc:509] Updating boot flags...
May 13 23:43:37.368297 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2518)
May 13 23:43:37.582491 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2517)
May 13 23:43:38.046871 containerd[1762]: time="2025-05-13T23:43:38.046791132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:43:38.062530 containerd[1762]: time="2025-05-13T23:43:38.062436646Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554608"
May 13 23:43:38.064922 containerd[1762]: time="2025-05-13T23:43:38.064375205Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:43:38.068788 containerd[1762]: time="2025-05-13T23:43:38.068728843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:43:38.070101 containerd[1762]: time="2025-05-13T23:43:38.070052523Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 8.954006327s"
May 13 23:43:38.070101 containerd[1762]: time="2025-05-13T23:43:38.070101523Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\""
May 13 23:43:38.071633 containerd[1762]: time="2025-05-13T23:43:38.071370042Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\""
May 13 23:43:39.970298 containerd[1762]: time="2025-05-13T23:43:39.969291128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:43:39.972055 containerd[1762]: time="2025-05-13T23:43:39.971981327Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458978"
May 13 23:43:39.974383 containerd[1762]: time="2025-05-13T23:43:39.974308686Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:43:39.980021 containerd[1762]: time="2025-05-13T23:43:39.979915323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:43:39.981124 containerd[1762]: time="2025-05-13T23:43:39.980979523Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 1.909561961s"
May 13 23:43:39.981124 containerd[1762]: time="2025-05-13T23:43:39.981024643Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\""
May 13 23:43:39.981816 containerd[1762]: time="2025-05-13T23:43:39.981773602Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\""
May 13 23:43:41.344313 containerd[1762]: time="2025-05-13T23:43:41.344200124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:43:41.347321 containerd[1762]: time="2025-05-13T23:43:41.347205803Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125813"
May 13 23:43:41.351249 containerd[1762]: time="2025-05-13T23:43:41.351190881Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:43:41.358366 containerd[1762]: time="2025-05-13T23:43:41.358233318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:43:41.359348 containerd[1762]: time="2025-05-13T23:43:41.359185717Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 1.377364795s"
May 13 23:43:41.359348 containerd[1762]: time="2025-05-13T23:43:41.359227517Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\""
May 13 23:43:41.360062 containerd[1762]: time="2025-05-13T23:43:41.359817117Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\""
May 13 23:43:42.814576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2269136233.mount: Deactivated successfully.
May 13 23:43:43.192799 containerd[1762]: time="2025-05-13T23:43:43.192744698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:43:43.195908 containerd[1762]: time="2025-05-13T23:43:43.195828977Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871917"
May 13 23:43:43.198990 containerd[1762]: time="2025-05-13T23:43:43.198951936Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:43:43.202465 containerd[1762]: time="2025-05-13T23:43:43.202376414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:43:43.203132 containerd[1762]: time="2025-05-13T23:43:43.202903334Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.843049497s"
May 13 23:43:43.203132 containerd[1762]: time="2025-05-13T23:43:43.202949814Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\""
May 13 23:43:43.203499 containerd[1762]: time="2025-05-13T23:43:43.203445093Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 13 23:43:43.931124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3168557071.mount: Deactivated successfully.
May 13 23:43:44.591780 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
May 13 23:43:44.593554 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:43:45.163706 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:43:45.173725 (kubelet)[2686]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 23:43:45.220720 kubelet[2686]: E0513 23:43:45.220628 2686 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 23:43:45.223440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 23:43:45.223594 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 23:43:45.225378 systemd[1]: kubelet.service: Consumed 167ms CPU time, 96.3M memory peak.
May 13 23:43:45.870330 containerd[1762]: time="2025-05-13T23:43:45.869597845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:43:45.873822 containerd[1762]: time="2025-05-13T23:43:45.873747323Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
May 13 23:43:45.882441 containerd[1762]: time="2025-05-13T23:43:45.882335839Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:43:45.888796 containerd[1762]: time="2025-05-13T23:43:45.888694196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:43:45.890028 containerd[1762]: time="2025-05-13T23:43:45.889876395Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.686397822s"
May 13 23:43:45.890028 containerd[1762]: time="2025-05-13T23:43:45.889922715Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 13 23:43:45.890753 containerd[1762]: time="2025-05-13T23:43:45.890713755Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 13 23:43:46.516546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4091654749.mount: Deactivated successfully.
May 13 23:43:46.542196 containerd[1762]: time="2025-05-13T23:43:46.542133610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 23:43:46.544467 containerd[1762]: time="2025-05-13T23:43:46.544395649Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
May 13 23:43:46.549710 containerd[1762]: time="2025-05-13T23:43:46.549636246Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 23:43:46.554387 containerd[1762]: time="2025-05-13T23:43:46.554340364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 23:43:46.555159 containerd[1762]: time="2025-05-13T23:43:46.555000804Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 664.236449ms"
May 13 23:43:46.555159 containerd[1762]: time="2025-05-13T23:43:46.555041324Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 13 23:43:46.555836 containerd[1762]: time="2025-05-13T23:43:46.555678284Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 13 23:43:47.229853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3087706819.mount: Deactivated successfully.
May 13 23:43:55.341829 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
May 13 23:43:55.344154 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:43:59.230537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:43:59.241635 (kubelet)[2726]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 23:43:59.290729 kubelet[2726]: E0513 23:43:59.290655 2726 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 23:43:59.293680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 23:43:59.293896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 23:43:59.294967 systemd[1]: kubelet.service: Consumed 154ms CPU time, 96.3M memory peak.
May 13 23:44:04.369219 containerd[1762]: time="2025-05-13T23:44:04.369145487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:44:04.374835 containerd[1762]: time="2025-05-13T23:44:04.374444445Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465"
May 13 23:44:04.380289 containerd[1762]: time="2025-05-13T23:44:04.380196044Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:44:04.388185 containerd[1762]: time="2025-05-13T23:44:04.388087442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:44:04.389553 containerd[1762]: time="2025-05-13T23:44:04.389372321Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 17.833652477s"
May 13 23:44:04.389553 containerd[1762]: time="2025-05-13T23:44:04.389427201Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
May 13 23:44:09.341766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
May 13 23:44:09.345502 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:44:09.620111 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:44:09.629851 (kubelet)[2803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 23:44:09.671296 kubelet[2803]: E0513 23:44:09.671225 2803 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 23:44:09.675175 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 23:44:09.675434 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 23:44:09.677354 systemd[1]: kubelet.service: Consumed 139ms CPU time, 94.3M memory peak.
May 13 23:44:09.817312 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:44:09.817783 systemd[1]: kubelet.service: Consumed 139ms CPU time, 94.3M memory peak.
May 13 23:44:09.820096 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:44:09.846584 systemd[1]: Reload requested from client PID 2817 ('systemctl') (unit session-9.scope)...
May 13 23:44:09.846744 systemd[1]: Reloading...
May 13 23:44:09.965501 zram_generator::config[2864]: No configuration found.
May 13 23:44:10.076219 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:44:10.181211 systemd[1]: Reloading finished in 334 ms.
May 13 23:44:10.215700 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 13 23:44:10.215773 systemd[1]: kubelet.service: Failed with result 'signal'.
May 13 23:44:10.217336 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:44:10.217391 systemd[1]: kubelet.service: Consumed 83ms CPU time, 82.4M memory peak.
May 13 23:44:10.219951 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:44:10.362234 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:44:10.373602 (kubelet)[2930]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 13 23:44:10.415293 kubelet[2930]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 23:44:10.415293 kubelet[2930]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 13 23:44:10.415293 kubelet[2930]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 23:44:10.415293 kubelet[2930]: I0513 23:44:10.414039 2930 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 13 23:44:10.762006 kubelet[2930]: I0513 23:44:10.761965 2930 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 13 23:44:10.762162 kubelet[2930]: I0513 23:44:10.762153 2930 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 23:44:10.762535 kubelet[2930]: I0513 23:44:10.762519 2930 server.go:929] "Client rotation is on, will bootstrap in background"
May 13 23:44:10.974519 kubelet[2930]: E0513 23:44:10.974446 2930 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError"
May 13 23:44:10.976735 kubelet[2930]: I0513 23:44:10.976700 2930 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 23:44:10.987469 kubelet[2930]: I0513 23:44:10.987438 2930 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 13 23:44:10.991912 kubelet[2930]: I0513 23:44:10.991879 2930 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 23:44:10.992781 kubelet[2930]: I0513 23:44:10.992758 2930 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 13 23:44:10.993097 kubelet[2930]: I0513 23:44:10.993063 2930 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 23:44:10.993392 kubelet[2930]: I0513 23:44:10.993170 2930 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284.0.0-n-791441f790","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 13 23:44:10.993554 kubelet[2930]: I0513 23:44:10.993538 2930 topology_manager.go:138] "Creating topology manager with none policy"
May 13 23:44:10.993610 kubelet[2930]: I0513 23:44:10.993602 2930 container_manager_linux.go:300] "Creating device plugin manager"
May 13 23:44:10.993785 kubelet[2930]: I0513 23:44:10.993770 2930 state_mem.go:36] "Initialized new in-memory state store"
May 13 23:44:10.995746 kubelet[2930]: I0513 23:44:10.995719 2930 kubelet.go:408] "Attempting to sync node with API server"
May 13 23:44:10.995977 kubelet[2930]: I0513 23:44:10.995864 2930 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 23:44:10.995977 kubelet[2930]: I0513 23:44:10.995902 2930 kubelet.go:314] "Adding apiserver pod source"
May 13 23:44:10.995977 kubelet[2930]: I0513 23:44:10.995914 2930 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 23:44:10.998240 kubelet[2930]: W0513 23:44:10.998031 2930 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-791441f790&limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused
May 13 23:44:10.998240 kubelet[2930]: E0513 23:44:10.998108 2930 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-791441f790&limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError"
May 13 23:44:10.999764 kubelet[2930]: W0513 23:44:10.999471 2930 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused
May 13 23:44:10.999764 kubelet[2930]: E0513 23:44:10.999534 2930 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError"
May 13 23:44:10.999764 kubelet[2930]: I0513 23:44:10.999631 2930 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
May 13 23:44:11.001458 kubelet[2930]: I0513 23:44:11.001375 2930 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 23:44:11.002335 kubelet[2930]: W0513 23:44:11.001996 2930 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 13 23:44:11.003055 kubelet[2930]: I0513 23:44:11.003030 2930 server.go:1269] "Started kubelet"
May 13 23:44:11.004839 kubelet[2930]: I0513 23:44:11.004781 2930 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 13 23:44:11.006056 kubelet[2930]: I0513 23:44:11.005798 2930 server.go:460] "Adding debug handlers to kubelet server"
May 13 23:44:11.006879 kubelet[2930]: I0513 23:44:11.006822 2930 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 23:44:11.007210 kubelet[2930]: I0513 23:44:11.007193 2930 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 23:44:11.008352 kubelet[2930]: I0513 23:44:11.008318 2930 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 23:44:11.009830 kubelet[2930]: E0513 23:44:11.008700 2930 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.10:6443/api/v1/namespaces/default/events\": dial tcp
10.200.20.10:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284.0.0-n-791441f790.183f3ad149c10cef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284.0.0-n-791441f790,UID:ci-4284.0.0-n-791441f790,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284.0.0-n-791441f790,},FirstTimestamp:2025-05-13 23:44:11.003006191 +0000 UTC m=+0.626337749,LastTimestamp:2025-05-13 23:44:11.003006191 +0000 UTC m=+0.626337749,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284.0.0-n-791441f790,}" May 13 23:44:11.011794 kubelet[2930]: I0513 23:44:11.010332 2930 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 23:44:11.013958 kubelet[2930]: I0513 23:44:11.013849 2930 volume_manager.go:289] "Starting Kubelet Volume Manager" May 13 23:44:11.014638 kubelet[2930]: E0513 23:44:11.014097 2930 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-791441f790\" not found" May 13 23:44:11.015726 kubelet[2930]: I0513 23:44:11.015693 2930 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 13 23:44:11.015834 kubelet[2930]: I0513 23:44:11.015772 2930 reconciler.go:26] "Reconciler: start to sync state" May 13 23:44:11.015988 kubelet[2930]: I0513 23:44:11.015957 2930 factory.go:221] Registration of the systemd container factory successfully May 13 23:44:11.016309 kubelet[2930]: I0513 23:44:11.016059 2930 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 23:44:11.017061 kubelet[2930]: E0513 23:44:11.016335 2930 
kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 23:44:11.017606 kubelet[2930]: E0513 23:44:11.017560 2930 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-791441f790?timeout=10s\": dial tcp 10.200.20.10:6443: connect: connection refused" interval="200ms" May 13 23:44:11.017965 kubelet[2930]: W0513 23:44:11.017930 2930 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused May 13 23:44:11.018043 kubelet[2930]: E0513 23:44:11.017971 2930 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" May 13 23:44:11.018137 kubelet[2930]: I0513 23:44:11.018108 2930 factory.go:221] Registration of the containerd container factory successfully May 13 23:44:11.044087 kubelet[2930]: I0513 23:44:11.044045 2930 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 23:44:11.044087 kubelet[2930]: I0513 23:44:11.044067 2930 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 23:44:11.044087 kubelet[2930]: I0513 23:44:11.044087 2930 state_mem.go:36] "Initialized new in-memory state store" May 13 23:44:11.114594 kubelet[2930]: E0513 23:44:11.114529 2930 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-791441f790\" not found" May 13 23:44:11.214906 kubelet[2930]: E0513 23:44:11.214880 2930 kubelet_node_status.go:453] "Error getting the 
current node from lister" err="node \"ci-4284.0.0-n-791441f790\" not found" May 13 23:44:11.218556 kubelet[2930]: E0513 23:44:11.218506 2930 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-791441f790?timeout=10s\": dial tcp 10.200.20.10:6443: connect: connection refused" interval="400ms" May 13 23:44:11.316029 kubelet[2930]: E0513 23:44:11.315624 2930 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-791441f790\" not found" May 13 23:44:11.416677 kubelet[2930]: E0513 23:44:11.416622 2930 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-791441f790\" not found" May 13 23:44:11.427660 kubelet[2930]: I0513 23:44:11.427543 2930 policy_none.go:49] "None policy: Start" May 13 23:44:11.428947 kubelet[2930]: I0513 23:44:11.428590 2930 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 23:44:11.428947 kubelet[2930]: I0513 23:44:11.428632 2930 state_mem.go:35] "Initializing new in-memory state store" May 13 23:44:11.441665 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 23:44:11.450400 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 23:44:11.454498 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 13 23:44:11.470005 kubelet[2930]: I0513 23:44:11.469340 2930 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:44:11.470005 kubelet[2930]: I0513 23:44:11.469562 2930 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 23:44:11.470005 kubelet[2930]: I0513 23:44:11.469574 2930 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:44:11.470005 kubelet[2930]: I0513 23:44:11.469864 2930 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:44:11.473297 kubelet[2930]: E0513 23:44:11.473239 2930 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4284.0.0-n-791441f790\" not found" May 13 23:44:11.478212 kubelet[2930]: I0513 23:44:11.478146 2930 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 23:44:11.479947 kubelet[2930]: I0513 23:44:11.479598 2930 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 23:44:11.479947 kubelet[2930]: I0513 23:44:11.479627 2930 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 23:44:11.479947 kubelet[2930]: I0513 23:44:11.479645 2930 kubelet.go:2321] "Starting kubelet main sync loop" May 13 23:44:11.479947 kubelet[2930]: E0513 23:44:11.479690 2930 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" May 13 23:44:11.482359 kubelet[2930]: W0513 23:44:11.481957 2930 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused May 13 23:44:11.482477 kubelet[2930]: E0513 23:44:11.482377 2930 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" May 13 23:44:11.571899 kubelet[2930]: I0513 23:44:11.571724 2930 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284.0.0-n-791441f790" May 13 23:44:11.572158 kubelet[2930]: E0513 23:44:11.572092 2930 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.10:6443/api/v1/nodes\": dial tcp 10.200.20.10:6443: connect: connection refused" node="ci-4284.0.0-n-791441f790" May 13 23:44:11.594860 systemd[1]: Created slice kubepods-burstable-pod94ff95aa0db22d12864bef4e9a194967.slice - libcontainer container kubepods-burstable-pod94ff95aa0db22d12864bef4e9a194967.slice. May 13 23:44:11.608879 systemd[1]: Created slice kubepods-burstable-pod8e65f9f363ee162671b86888a06fc8d0.slice - libcontainer container kubepods-burstable-pod8e65f9f363ee162671b86888a06fc8d0.slice. 
May 13 23:44:11.617838 systemd[1]: Created slice kubepods-burstable-pod183faa966135729e76bef1f35664f755.slice - libcontainer container kubepods-burstable-pod183faa966135729e76bef1f35664f755.slice. May 13 23:44:11.619136 kubelet[2930]: E0513 23:44:11.619089 2930 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-791441f790?timeout=10s\": dial tcp 10.200.20.10:6443: connect: connection refused" interval="800ms" May 13 23:44:11.619755 kubelet[2930]: I0513 23:44:11.619536 2930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/94ff95aa0db22d12864bef4e9a194967-kubeconfig\") pod \"kube-scheduler-ci-4284.0.0-n-791441f790\" (UID: \"94ff95aa0db22d12864bef4e9a194967\") " pod="kube-system/kube-scheduler-ci-4284.0.0-n-791441f790" May 13 23:44:11.619755 kubelet[2930]: I0513 23:44:11.619564 2930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8e65f9f363ee162671b86888a06fc8d0-k8s-certs\") pod \"kube-apiserver-ci-4284.0.0-n-791441f790\" (UID: \"8e65f9f363ee162671b86888a06fc8d0\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-791441f790" May 13 23:44:11.619755 kubelet[2930]: I0513 23:44:11.619582 2930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8e65f9f363ee162671b86888a06fc8d0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284.0.0-n-791441f790\" (UID: \"8e65f9f363ee162671b86888a06fc8d0\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-791441f790" May 13 23:44:11.619755 kubelet[2930]: I0513 23:44:11.619603 2930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/183faa966135729e76bef1f35664f755-flexvolume-dir\") pod \"kube-controller-manager-ci-4284.0.0-n-791441f790\" (UID: \"183faa966135729e76bef1f35664f755\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-791441f790" May 13 23:44:11.619755 kubelet[2930]: I0513 23:44:11.619619 2930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/183faa966135729e76bef1f35664f755-k8s-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-791441f790\" (UID: \"183faa966135729e76bef1f35664f755\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-791441f790" May 13 23:44:11.619909 kubelet[2930]: I0513 23:44:11.619635 2930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8e65f9f363ee162671b86888a06fc8d0-ca-certs\") pod \"kube-apiserver-ci-4284.0.0-n-791441f790\" (UID: \"8e65f9f363ee162671b86888a06fc8d0\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-791441f790" May 13 23:44:11.619909 kubelet[2930]: I0513 23:44:11.619652 2930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/183faa966135729e76bef1f35664f755-ca-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-791441f790\" (UID: \"183faa966135729e76bef1f35664f755\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-791441f790" May 13 23:44:11.619909 kubelet[2930]: I0513 23:44:11.619670 2930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/183faa966135729e76bef1f35664f755-kubeconfig\") pod \"kube-controller-manager-ci-4284.0.0-n-791441f790\" (UID: \"183faa966135729e76bef1f35664f755\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-791441f790" May 13 23:44:11.619909 kubelet[2930]: I0513 
23:44:11.619687 2930 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/183faa966135729e76bef1f35664f755-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284.0.0-n-791441f790\" (UID: \"183faa966135729e76bef1f35664f755\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-791441f790" May 13 23:44:11.774567 kubelet[2930]: I0513 23:44:11.774510 2930 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284.0.0-n-791441f790" May 13 23:44:11.774932 kubelet[2930]: E0513 23:44:11.774895 2930 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.10:6443/api/v1/nodes\": dial tcp 10.200.20.10:6443: connect: connection refused" node="ci-4284.0.0-n-791441f790" May 13 23:44:11.799762 kubelet[2930]: E0513 23:44:11.799619 2930 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.10:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.10:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284.0.0-n-791441f790.183f3ad149c10cef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284.0.0-n-791441f790,UID:ci-4284.0.0-n-791441f790,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284.0.0-n-791441f790,},FirstTimestamp:2025-05-13 23:44:11.003006191 +0000 UTC m=+0.626337749,LastTimestamp:2025-05-13 23:44:11.003006191 +0000 UTC m=+0.626337749,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284.0.0-n-791441f790,}" May 13 23:44:11.868634 kubelet[2930]: W0513 23:44:11.868484 2930 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.200.20.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused May 13 23:44:11.868634 kubelet[2930]: E0513 23:44:11.868595 2930 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" May 13 23:44:11.907793 containerd[1762]: time="2025-05-13T23:44:11.907689486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284.0.0-n-791441f790,Uid:94ff95aa0db22d12864bef4e9a194967,Namespace:kube-system,Attempt:0,}" May 13 23:44:11.914927 containerd[1762]: time="2025-05-13T23:44:11.914669047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284.0.0-n-791441f790,Uid:8e65f9f363ee162671b86888a06fc8d0,Namespace:kube-system,Attempt:0,}" May 13 23:44:11.920956 containerd[1762]: time="2025-05-13T23:44:11.920741728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284.0.0-n-791441f790,Uid:183faa966135729e76bef1f35664f755,Namespace:kube-system,Attempt:0,}" May 13 23:44:12.013158 containerd[1762]: time="2025-05-13T23:44:12.013106219Z" level=info msg="connecting to shim 1a9d8437ea001faa5e725e0725cb162abffe9f33d2dd17a67dfa843607283242" address="unix:///run/containerd/s/600b85188ebb156f73858e923fc8b7aa50216afbc388c954658d0bf87bda53d0" namespace=k8s.io protocol=ttrpc version=3 May 13 23:44:12.016477 containerd[1762]: time="2025-05-13T23:44:12.016428420Z" level=info msg="connecting to shim 35ca59c2cd8a41568d810fd33d73436d9b6fae46a374053da4472a41b63b7552" address="unix:///run/containerd/s/d9bd6590624c0c7b711be1677ee819b60c06fd8fa73041d255e63a90f4bc9e53" namespace=k8s.io protocol=ttrpc version=3 May 13 23:44:12.049211 containerd[1762]: 
time="2025-05-13T23:44:12.049081344Z" level=info msg="connecting to shim c2d9821341090df96f5ff4d9d03f79e2c94d5f0aea651d9ac06eee1603b48f25" address="unix:///run/containerd/s/ee37078274afa74ee30db88e1ac3d47ee5fb7812d726918e1b6add218adfcc53" namespace=k8s.io protocol=ttrpc version=3 May 13 23:44:12.051477 systemd[1]: Started cri-containerd-35ca59c2cd8a41568d810fd33d73436d9b6fae46a374053da4472a41b63b7552.scope - libcontainer container 35ca59c2cd8a41568d810fd33d73436d9b6fae46a374053da4472a41b63b7552. May 13 23:44:12.056137 systemd[1]: Started cri-containerd-1a9d8437ea001faa5e725e0725cb162abffe9f33d2dd17a67dfa843607283242.scope - libcontainer container 1a9d8437ea001faa5e725e0725cb162abffe9f33d2dd17a67dfa843607283242. May 13 23:44:12.088478 systemd[1]: Started cri-containerd-c2d9821341090df96f5ff4d9d03f79e2c94d5f0aea651d9ac06eee1603b48f25.scope - libcontainer container c2d9821341090df96f5ff4d9d03f79e2c94d5f0aea651d9ac06eee1603b48f25. May 13 23:44:12.123579 containerd[1762]: time="2025-05-13T23:44:12.123392673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284.0.0-n-791441f790,Uid:8e65f9f363ee162671b86888a06fc8d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"35ca59c2cd8a41568d810fd33d73436d9b6fae46a374053da4472a41b63b7552\"" May 13 23:44:12.129157 containerd[1762]: time="2025-05-13T23:44:12.129025514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284.0.0-n-791441f790,Uid:94ff95aa0db22d12864bef4e9a194967,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a9d8437ea001faa5e725e0725cb162abffe9f33d2dd17a67dfa843607283242\"" May 13 23:44:12.130783 containerd[1762]: time="2025-05-13T23:44:12.130670154Z" level=info msg="CreateContainer within sandbox \"35ca59c2cd8a41568d810fd33d73436d9b6fae46a374053da4472a41b63b7552\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 23:44:12.133953 containerd[1762]: time="2025-05-13T23:44:12.133419954Z" level=info msg="CreateContainer within 
sandbox \"1a9d8437ea001faa5e725e0725cb162abffe9f33d2dd17a67dfa843607283242\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 23:44:12.159200 containerd[1762]: time="2025-05-13T23:44:12.159140477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284.0.0-n-791441f790,Uid:183faa966135729e76bef1f35664f755,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2d9821341090df96f5ff4d9d03f79e2c94d5f0aea651d9ac06eee1603b48f25\"" May 13 23:44:12.162514 containerd[1762]: time="2025-05-13T23:44:12.162473318Z" level=info msg="CreateContainer within sandbox \"c2d9821341090df96f5ff4d9d03f79e2c94d5f0aea651d9ac06eee1603b48f25\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 23:44:12.165501 containerd[1762]: time="2025-05-13T23:44:12.165459078Z" level=info msg="Container 10e959918c4ea4727867b70718c670671d94b1c8dffa87ba2b397b813001e451: CDI devices from CRI Config.CDIDevices: []" May 13 23:44:12.173498 containerd[1762]: time="2025-05-13T23:44:12.173456479Z" level=info msg="Container 62ad9bc48324928b335d1f73bff3ac3922498e6cdfa6499bde3494d050efba0b: CDI devices from CRI Config.CDIDevices: []" May 13 23:44:12.179865 kubelet[2930]: I0513 23:44:12.179482 2930 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284.0.0-n-791441f790" May 13 23:44:12.179865 kubelet[2930]: E0513 23:44:12.179820 2930 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.10:6443/api/v1/nodes\": dial tcp 10.200.20.10:6443: connect: connection refused" node="ci-4284.0.0-n-791441f790" May 13 23:44:12.206831 kubelet[2930]: W0513 23:44:12.206773 2930 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-791441f790&limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused May 13 23:44:12.207040 kubelet[2930]: 
E0513 23:44:12.207005 2930 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-791441f790&limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" May 13 23:44:12.211653 containerd[1762]: time="2025-05-13T23:44:12.211610084Z" level=info msg="CreateContainer within sandbox \"35ca59c2cd8a41568d810fd33d73436d9b6fae46a374053da4472a41b63b7552\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"10e959918c4ea4727867b70718c670671d94b1c8dffa87ba2b397b813001e451\"" May 13 23:44:12.212420 containerd[1762]: time="2025-05-13T23:44:12.212326604Z" level=info msg="StartContainer for \"10e959918c4ea4727867b70718c670671d94b1c8dffa87ba2b397b813001e451\"" May 13 23:44:12.213523 containerd[1762]: time="2025-05-13T23:44:12.213486204Z" level=info msg="connecting to shim 10e959918c4ea4727867b70718c670671d94b1c8dffa87ba2b397b813001e451" address="unix:///run/containerd/s/d9bd6590624c0c7b711be1677ee819b60c06fd8fa73041d255e63a90f4bc9e53" protocol=ttrpc version=3 May 13 23:44:12.219783 kubelet[2930]: W0513 23:44:12.219571 2930 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.10:6443: connect: connection refused May 13 23:44:12.219783 kubelet[2930]: E0513 23:44:12.219738 2930 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="UnhandledError" May 13 23:44:12.229802 containerd[1762]: 
time="2025-05-13T23:44:12.229757246Z" level=info msg="CreateContainer within sandbox \"1a9d8437ea001faa5e725e0725cb162abffe9f33d2dd17a67dfa843607283242\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"62ad9bc48324928b335d1f73bff3ac3922498e6cdfa6499bde3494d050efba0b\"" May 13 23:44:12.231842 containerd[1762]: time="2025-05-13T23:44:12.231665846Z" level=info msg="StartContainer for \"62ad9bc48324928b335d1f73bff3ac3922498e6cdfa6499bde3494d050efba0b\"" May 13 23:44:12.232811 containerd[1762]: time="2025-05-13T23:44:12.232754846Z" level=info msg="connecting to shim 62ad9bc48324928b335d1f73bff3ac3922498e6cdfa6499bde3494d050efba0b" address="unix:///run/containerd/s/600b85188ebb156f73858e923fc8b7aa50216afbc388c954658d0bf87bda53d0" protocol=ttrpc version=3 May 13 23:44:12.235076 containerd[1762]: time="2025-05-13T23:44:12.234981287Z" level=info msg="Container 5c56b44691f4752ebfd998085e6ea937b4be264c8c29b7568798bb76b604bc71: CDI devices from CRI Config.CDIDevices: []" May 13 23:44:12.236008 systemd[1]: Started cri-containerd-10e959918c4ea4727867b70718c670671d94b1c8dffa87ba2b397b813001e451.scope - libcontainer container 10e959918c4ea4727867b70718c670671d94b1c8dffa87ba2b397b813001e451. May 13 23:44:12.255474 systemd[1]: Started cri-containerd-62ad9bc48324928b335d1f73bff3ac3922498e6cdfa6499bde3494d050efba0b.scope - libcontainer container 62ad9bc48324928b335d1f73bff3ac3922498e6cdfa6499bde3494d050efba0b. 
May 13 23:44:12.263201 containerd[1762]: time="2025-05-13T23:44:12.262323250Z" level=info msg="CreateContainer within sandbox \"c2d9821341090df96f5ff4d9d03f79e2c94d5f0aea651d9ac06eee1603b48f25\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5c56b44691f4752ebfd998085e6ea937b4be264c8c29b7568798bb76b604bc71\"" May 13 23:44:12.265474 containerd[1762]: time="2025-05-13T23:44:12.263450770Z" level=info msg="StartContainer for \"5c56b44691f4752ebfd998085e6ea937b4be264c8c29b7568798bb76b604bc71\"" May 13 23:44:12.266092 containerd[1762]: time="2025-05-13T23:44:12.266066370Z" level=info msg="connecting to shim 5c56b44691f4752ebfd998085e6ea937b4be264c8c29b7568798bb76b604bc71" address="unix:///run/containerd/s/ee37078274afa74ee30db88e1ac3d47ee5fb7812d726918e1b6add218adfcc53" protocol=ttrpc version=3 May 13 23:44:12.291602 systemd[1]: Started cri-containerd-5c56b44691f4752ebfd998085e6ea937b4be264c8c29b7568798bb76b604bc71.scope - libcontainer container 5c56b44691f4752ebfd998085e6ea937b4be264c8c29b7568798bb76b604bc71. 
May 13 23:44:12.310590 containerd[1762]: time="2025-05-13T23:44:12.310454536Z" level=info msg="StartContainer for \"10e959918c4ea4727867b70718c670671d94b1c8dffa87ba2b397b813001e451\" returns successfully" May 13 23:44:12.385390 containerd[1762]: time="2025-05-13T23:44:12.384016505Z" level=info msg="StartContainer for \"62ad9bc48324928b335d1f73bff3ac3922498e6cdfa6499bde3494d050efba0b\" returns successfully" May 13 23:44:12.386531 containerd[1762]: time="2025-05-13T23:44:12.386484625Z" level=info msg="StartContainer for \"5c56b44691f4752ebfd998085e6ea937b4be264c8c29b7568798bb76b604bc71\" returns successfully" May 13 23:44:12.983681 kubelet[2930]: I0513 23:44:12.983106 2930 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284.0.0-n-791441f790" May 13 23:44:14.257940 kubelet[2930]: E0513 23:44:14.257897 2930 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4284.0.0-n-791441f790\" not found" node="ci-4284.0.0-n-791441f790" May 13 23:44:14.334468 kubelet[2930]: I0513 23:44:14.334413 2930 kubelet_node_status.go:75] "Successfully registered node" node="ci-4284.0.0-n-791441f790" May 13 23:44:14.334468 kubelet[2930]: E0513 23:44:14.334468 2930 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4284.0.0-n-791441f790\": node \"ci-4284.0.0-n-791441f790\" not found" May 13 23:44:14.454078 kubelet[2930]: E0513 23:44:14.454016 2930 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-791441f790\" not found" May 13 23:44:14.555243 kubelet[2930]: E0513 23:44:14.554921 2930 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-791441f790\" not found" May 13 23:44:14.655742 kubelet[2930]: E0513 23:44:14.655700 2930 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-791441f790\" not found" May 13 23:44:14.756367 kubelet[2930]: E0513 
23:44:14.756322 2930 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-791441f790\" not found" May 13 23:44:15.000492 kubelet[2930]: I0513 23:44:15.000434 2930 apiserver.go:52] "Watching apiserver" May 13 23:44:15.016234 kubelet[2930]: I0513 23:44:15.016168 2930 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 23:44:15.302208 kubelet[2930]: W0513 23:44:15.302061 2930 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:44:16.803989 systemd[1]: Reload requested from client PID 3201 ('systemctl') (unit session-9.scope)... May 13 23:44:16.804005 systemd[1]: Reloading... May 13 23:44:16.900295 zram_generator::config[3254]: No configuration found. May 13 23:44:17.001599 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:44:17.022530 kubelet[2930]: W0513 23:44:17.022255 2930 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:44:17.123898 systemd[1]: Reloading finished in 319 ms. May 13 23:44:17.150167 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:44:17.161767 systemd[1]: kubelet.service: Deactivated successfully. May 13 23:44:17.162147 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:44:17.162302 systemd[1]: kubelet.service: Consumed 769ms CPU time, 113.9M memory peak. May 13 23:44:17.165121 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:44:17.351870 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
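The restarted kubelet below warns that `--container-runtime-endpoint` and `--volume-plugin-dir` are deprecated and should be set "via the config file specified by the Kubelet's --config flag." A hedged sketch of the equivalent `KubeletConfiguration` fragment — the endpoint value is an assumption for a containerd node, and the plugin directory reuses the Flexvolume path the kubelet probed earlier in this log; neither is a verified copy of this node's actual config:

```yaml
# Carry the deprecated CLI flags' settings in the kubelet config file
# (the file passed with --config) instead of on the command line.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces --container-runtime-endpoint (assumed containerd socket)
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
# replaces --volume-plugin-dir (path from the Flexvolume probe above)
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
# matches the "Adding static pod path" line earlier in this log
staticPodPath: /etc/kubernetes/manifests
```

`--pod-infra-container-image` has no config-file replacement; per the warning it will simply be removed once the image GC reads the sandbox image from CRI.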
May 13 23:44:17.362629 (kubelet)[3312]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 23:44:17.564913 kubelet[3312]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:44:17.564913 kubelet[3312]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 23:44:17.564913 kubelet[3312]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:44:17.564913 kubelet[3312]: I0513 23:44:17.408161 3312 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:44:17.564913 kubelet[3312]: I0513 23:44:17.418565 3312 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 13 23:44:17.564913 kubelet[3312]: I0513 23:44:17.418587 3312 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:44:17.564913 kubelet[3312]: I0513 23:44:17.418807 3312 server.go:929] "Client rotation is on, will bootstrap in background" May 13 23:44:17.568079 kubelet[3312]: I0513 23:44:17.567562 3312 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 13 23:44:17.570713 kubelet[3312]: I0513 23:44:17.570447 3312 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:44:17.576442 kubelet[3312]: I0513 23:44:17.576403 3312 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 23:44:17.580760 kubelet[3312]: I0513 23:44:17.580714 3312 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 23:44:17.581090 kubelet[3312]: I0513 23:44:17.580860 3312 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 13 23:44:17.581090 kubelet[3312]: I0513 23:44:17.580974 3312 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 23:44:17.581519 kubelet[3312]: I0513 23:44:17.581000 3312 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284.0.0-n-791441f790","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}
,{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 23:44:17.581519 kubelet[3312]: I0513 23:44:17.581172 3312 topology_manager.go:138] "Creating topology manager with none policy" May 13 23:44:17.581519 kubelet[3312]: I0513 23:44:17.581181 3312 container_manager_linux.go:300] "Creating device plugin manager" May 13 23:44:17.581519 kubelet[3312]: I0513 23:44:17.581211 3312 state_mem.go:36] "Initialized new in-memory state store" May 13 23:44:17.581519 kubelet[3312]: I0513 23:44:17.581344 3312 kubelet.go:408] "Attempting to sync node with API server" May 13 23:44:17.581778 kubelet[3312]: I0513 23:44:17.581357 3312 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 23:44:17.585098 kubelet[3312]: I0513 23:44:17.582468 3312 kubelet.go:314] "Adding apiserver pod source" May 13 23:44:17.585098 kubelet[3312]: I0513 23:44:17.582501 3312 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 23:44:17.587978 kubelet[3312]: I0513 23:44:17.587947 3312 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 13 23:44:17.590035 kubelet[3312]: I0513 23:44:17.589999 3312 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 23:44:17.592768 kubelet[3312]: I0513 23:44:17.592735 3312 server.go:1269] "Started kubelet" May 13 23:44:17.596397 kubelet[3312]: I0513 23:44:17.596338 3312 server.go:163] "Starting to listen" address="0.0.0.0" 
port=10250 May 13 23:44:17.597128 kubelet[3312]: I0513 23:44:17.597084 3312 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 23:44:17.600874 kubelet[3312]: I0513 23:44:17.600846 3312 server.go:460] "Adding debug handlers to kubelet server" May 13 23:44:17.604280 kubelet[3312]: I0513 23:44:17.601831 3312 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 23:44:17.604570 kubelet[3312]: I0513 23:44:17.604548 3312 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 23:44:17.613450 kubelet[3312]: I0513 23:44:17.613409 3312 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 23:44:17.615302 kubelet[3312]: I0513 23:44:17.614816 3312 volume_manager.go:289] "Starting Kubelet Volume Manager" May 13 23:44:17.615302 kubelet[3312]: E0513 23:44:17.615179 3312 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-791441f790\" not found" May 13 23:44:17.617273 kubelet[3312]: I0513 23:44:17.617236 3312 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 13 23:44:17.617472 kubelet[3312]: I0513 23:44:17.617455 3312 reconciler.go:26] "Reconciler: start to sync state" May 13 23:44:17.619471 kubelet[3312]: I0513 23:44:17.619041 3312 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 23:44:17.620478 kubelet[3312]: I0513 23:44:17.619997 3312 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 23:44:17.620478 kubelet[3312]: I0513 23:44:17.620026 3312 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 23:44:17.620478 kubelet[3312]: I0513 23:44:17.620041 3312 kubelet.go:2321] "Starting kubelet main sync loop" May 13 23:44:17.620478 kubelet[3312]: E0513 23:44:17.620078 3312 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 23:44:17.628306 kubelet[3312]: I0513 23:44:17.627449 3312 factory.go:221] Registration of the systemd container factory successfully May 13 23:44:17.628577 kubelet[3312]: I0513 23:44:17.628557 3312 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 23:44:17.637286 kubelet[3312]: I0513 23:44:17.636565 3312 factory.go:221] Registration of the containerd container factory successfully May 13 23:44:17.721354 kubelet[3312]: E0513 23:44:17.721160 3312 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 23:44:17.727818 kubelet[3312]: I0513 23:44:17.727780 3312 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 23:44:17.727818 kubelet[3312]: I0513 23:44:17.727802 3312 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 23:44:17.727818 kubelet[3312]: I0513 23:44:17.727823 3312 state_mem.go:36] "Initialized new in-memory state store" May 13 23:44:17.727982 kubelet[3312]: I0513 23:44:17.727975 3312 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 23:44:17.728005 kubelet[3312]: I0513 23:44:17.727986 3312 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 23:44:17.728025 kubelet[3312]: I0513 23:44:17.728006 3312 policy_none.go:49] "None policy: Start" May 13 23:44:17.729092 kubelet[3312]: I0513 23:44:17.729065 3312 
memory_manager.go:170] "Starting memorymanager" policy="None" May 13 23:44:17.729183 kubelet[3312]: I0513 23:44:17.729100 3312 state_mem.go:35] "Initializing new in-memory state store" May 13 23:44:17.729345 kubelet[3312]: I0513 23:44:17.729323 3312 state_mem.go:75] "Updated machine memory state" May 13 23:44:17.737101 kubelet[3312]: I0513 23:44:17.736877 3312 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:44:17.737101 kubelet[3312]: I0513 23:44:17.737058 3312 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 23:44:17.737101 kubelet[3312]: I0513 23:44:17.737070 3312 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:44:17.739236 kubelet[3312]: I0513 23:44:17.739208 3312 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:44:17.851882 kubelet[3312]: I0513 23:44:17.851785 3312 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284.0.0-n-791441f790" May 13 23:44:17.865645 kubelet[3312]: I0513 23:44:17.865605 3312 kubelet_node_status.go:111] "Node was previously registered" node="ci-4284.0.0-n-791441f790" May 13 23:44:17.865758 kubelet[3312]: I0513 23:44:17.865686 3312 kubelet_node_status.go:75] "Successfully registered node" node="ci-4284.0.0-n-791441f790" May 13 23:44:17.870432 sudo[3344]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 13 23:44:17.871208 sudo[3344]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 13 23:44:17.935972 kubelet[3312]: W0513 23:44:17.935931 3312 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:44:17.937781 kubelet[3312]: W0513 23:44:17.937173 3312 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in 
surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:44:17.937781 kubelet[3312]: E0513 23:44:17.937231 3312 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4284.0.0-n-791441f790\" already exists" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-791441f790" May 13 23:44:17.937781 kubelet[3312]: W0513 23:44:17.937240 3312 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:44:17.937781 kubelet[3312]: E0513 23:44:17.937558 3312 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4284.0.0-n-791441f790\" already exists" pod="kube-system/kube-apiserver-ci-4284.0.0-n-791441f790" May 13 23:44:18.021388 kubelet[3312]: I0513 23:44:18.021353 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/94ff95aa0db22d12864bef4e9a194967-kubeconfig\") pod \"kube-scheduler-ci-4284.0.0-n-791441f790\" (UID: \"94ff95aa0db22d12864bef4e9a194967\") " pod="kube-system/kube-scheduler-ci-4284.0.0-n-791441f790" May 13 23:44:18.021772 kubelet[3312]: I0513 23:44:18.021555 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8e65f9f363ee162671b86888a06fc8d0-ca-certs\") pod \"kube-apiserver-ci-4284.0.0-n-791441f790\" (UID: \"8e65f9f363ee162671b86888a06fc8d0\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-791441f790" May 13 23:44:18.021772 kubelet[3312]: I0513 23:44:18.021581 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/183faa966135729e76bef1f35664f755-k8s-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-791441f790\" (UID: \"183faa966135729e76bef1f35664f755\") " 
pod="kube-system/kube-controller-manager-ci-4284.0.0-n-791441f790" May 13 23:44:18.021772 kubelet[3312]: I0513 23:44:18.021601 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/183faa966135729e76bef1f35664f755-kubeconfig\") pod \"kube-controller-manager-ci-4284.0.0-n-791441f790\" (UID: \"183faa966135729e76bef1f35664f755\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-791441f790" May 13 23:44:18.021772 kubelet[3312]: I0513 23:44:18.021635 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8e65f9f363ee162671b86888a06fc8d0-k8s-certs\") pod \"kube-apiserver-ci-4284.0.0-n-791441f790\" (UID: \"8e65f9f363ee162671b86888a06fc8d0\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-791441f790" May 13 23:44:18.021772 kubelet[3312]: I0513 23:44:18.021651 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8e65f9f363ee162671b86888a06fc8d0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284.0.0-n-791441f790\" (UID: \"8e65f9f363ee162671b86888a06fc8d0\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-791441f790" May 13 23:44:18.021908 kubelet[3312]: I0513 23:44:18.021669 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/183faa966135729e76bef1f35664f755-ca-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-791441f790\" (UID: \"183faa966135729e76bef1f35664f755\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-791441f790" May 13 23:44:18.021908 kubelet[3312]: I0513 23:44:18.021695 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/183faa966135729e76bef1f35664f755-flexvolume-dir\") pod \"kube-controller-manager-ci-4284.0.0-n-791441f790\" (UID: \"183faa966135729e76bef1f35664f755\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-791441f790" May 13 23:44:18.021908 kubelet[3312]: I0513 23:44:18.021714 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/183faa966135729e76bef1f35664f755-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284.0.0-n-791441f790\" (UID: \"183faa966135729e76bef1f35664f755\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-791441f790" May 13 23:44:18.334945 sudo[3344]: pam_unix(sudo:session): session closed for user root May 13 23:44:18.583993 kubelet[3312]: I0513 23:44:18.583747 3312 apiserver.go:52] "Watching apiserver" May 13 23:44:18.617941 kubelet[3312]: I0513 23:44:18.617893 3312 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 23:44:18.735093 kubelet[3312]: I0513 23:44:18.734537 3312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-791441f790" podStartSLOduration=1.7345212920000002 podStartE2EDuration="1.734521292s" podCreationTimestamp="2025-05-13 23:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:44:18.722535337 +0000 UTC m=+1.355777741" watchObservedRunningTime="2025-05-13 23:44:18.734521292 +0000 UTC m=+1.367763696" May 13 23:44:18.735781 kubelet[3312]: I0513 23:44:18.735492 3312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4284.0.0-n-791441f790" podStartSLOduration=1.7354820119999999 podStartE2EDuration="1.735482012s" podCreationTimestamp="2025-05-13 23:44:17 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:44:18.734887892 +0000 UTC m=+1.368130296" watchObservedRunningTime="2025-05-13 23:44:18.735482012 +0000 UTC m=+1.368724416" May 13 23:44:18.747564 kubelet[3312]: I0513 23:44:18.747338 3312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4284.0.0-n-791441f790" podStartSLOduration=3.747321887 podStartE2EDuration="3.747321887s" podCreationTimestamp="2025-05-13 23:44:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:44:18.746122727 +0000 UTC m=+1.379365211" watchObservedRunningTime="2025-05-13 23:44:18.747321887 +0000 UTC m=+1.380564291" May 13 23:44:19.797552 sudo[2230]: pam_unix(sudo:session): session closed for user root May 13 23:44:19.883683 sshd[2229]: Connection closed by 10.200.16.10 port 45364 May 13 23:44:19.884315 sshd-session[2227]: pam_unix(sshd:session): session closed for user core May 13 23:44:19.888181 systemd[1]: sshd@6-10.200.20.10:22-10.200.16.10:45364.service: Deactivated successfully. May 13 23:44:19.890596 systemd[1]: session-9.scope: Deactivated successfully. May 13 23:44:19.890854 systemd[1]: session-9.scope: Consumed 6.875s CPU time, 260.5M memory peak. May 13 23:44:19.892811 systemd-logind[1722]: Session 9 logged out. Waiting for processes to exit. May 13 23:44:19.893783 systemd-logind[1722]: Removed session 9. May 13 23:44:22.214024 kubelet[3312]: I0513 23:44:22.213977 3312 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 23:44:22.214789 containerd[1762]: time="2025-05-13T23:44:22.214482557Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 13 23:44:22.216314 kubelet[3312]: I0513 23:44:22.215215 3312 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 23:44:23.302521 systemd[1]: Created slice kubepods-besteffort-pod54fac63e_5b97_484b_95d2_66ef2f21b7ef.slice - libcontainer container kubepods-besteffort-pod54fac63e_5b97_484b_95d2_66ef2f21b7ef.slice. May 13 23:44:23.321761 systemd[1]: Created slice kubepods-burstable-pod787c05ce_63c6_4617_ac05_aa7f2868d100.slice - libcontainer container kubepods-burstable-pod787c05ce_63c6_4617_ac05_aa7f2868d100.slice. May 13 23:44:23.357607 kubelet[3312]: I0513 23:44:23.356559 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54fac63e-5b97-484b-95d2-66ef2f21b7ef-lib-modules\") pod \"kube-proxy-lfkch\" (UID: \"54fac63e-5b97-484b-95d2-66ef2f21b7ef\") " pod="kube-system/kube-proxy-lfkch" May 13 23:44:23.357607 kubelet[3312]: I0513 23:44:23.356634 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-bpf-maps\") pod \"cilium-lgrxz\" (UID: \"787c05ce-63c6-4617-ac05-aa7f2868d100\") " pod="kube-system/cilium-lgrxz" May 13 23:44:23.357607 kubelet[3312]: I0513 23:44:23.356658 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/54fac63e-5b97-484b-95d2-66ef2f21b7ef-kube-proxy\") pod \"kube-proxy-lfkch\" (UID: \"54fac63e-5b97-484b-95d2-66ef2f21b7ef\") " pod="kube-system/kube-proxy-lfkch" May 13 23:44:23.357607 kubelet[3312]: I0513 23:44:23.356673 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-lib-modules\") pod \"cilium-lgrxz\" (UID: 
\"787c05ce-63c6-4617-ac05-aa7f2868d100\") " pod="kube-system/cilium-lgrxz" May 13 23:44:23.357607 kubelet[3312]: I0513 23:44:23.356688 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/787c05ce-63c6-4617-ac05-aa7f2868d100-hubble-tls\") pod \"cilium-lgrxz\" (UID: \"787c05ce-63c6-4617-ac05-aa7f2868d100\") " pod="kube-system/cilium-lgrxz" May 13 23:44:23.357607 kubelet[3312]: I0513 23:44:23.356706 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkmlh\" (UniqueName: \"kubernetes.io/projected/54fac63e-5b97-484b-95d2-66ef2f21b7ef-kube-api-access-wkmlh\") pod \"kube-proxy-lfkch\" (UID: \"54fac63e-5b97-484b-95d2-66ef2f21b7ef\") " pod="kube-system/kube-proxy-lfkch" May 13 23:44:23.358109 kubelet[3312]: I0513 23:44:23.356722 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-cni-path\") pod \"cilium-lgrxz\" (UID: \"787c05ce-63c6-4617-ac05-aa7f2868d100\") " pod="kube-system/cilium-lgrxz" May 13 23:44:23.358109 kubelet[3312]: I0513 23:44:23.356738 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-cilium-cgroup\") pod \"cilium-lgrxz\" (UID: \"787c05ce-63c6-4617-ac05-aa7f2868d100\") " pod="kube-system/cilium-lgrxz" May 13 23:44:23.358109 kubelet[3312]: I0513 23:44:23.356752 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-xtables-lock\") pod \"cilium-lgrxz\" (UID: \"787c05ce-63c6-4617-ac05-aa7f2868d100\") " pod="kube-system/cilium-lgrxz" May 13 23:44:23.358109 kubelet[3312]: I0513 
23:44:23.356766 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-etc-cni-netd\") pod \"cilium-lgrxz\" (UID: \"787c05ce-63c6-4617-ac05-aa7f2868d100\") " pod="kube-system/cilium-lgrxz" May 13 23:44:23.358109 kubelet[3312]: I0513 23:44:23.356781 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn7td\" (UniqueName: \"kubernetes.io/projected/787c05ce-63c6-4617-ac05-aa7f2868d100-kube-api-access-kn7td\") pod \"cilium-lgrxz\" (UID: \"787c05ce-63c6-4617-ac05-aa7f2868d100\") " pod="kube-system/cilium-lgrxz" May 13 23:44:23.358109 kubelet[3312]: I0513 23:44:23.356796 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-cilium-run\") pod \"cilium-lgrxz\" (UID: \"787c05ce-63c6-4617-ac05-aa7f2868d100\") " pod="kube-system/cilium-lgrxz" May 13 23:44:23.358240 kubelet[3312]: I0513 23:44:23.356855 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54fac63e-5b97-484b-95d2-66ef2f21b7ef-xtables-lock\") pod \"kube-proxy-lfkch\" (UID: \"54fac63e-5b97-484b-95d2-66ef2f21b7ef\") " pod="kube-system/kube-proxy-lfkch" May 13 23:44:23.358240 kubelet[3312]: I0513 23:44:23.356871 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-host-proc-sys-net\") pod \"cilium-lgrxz\" (UID: \"787c05ce-63c6-4617-ac05-aa7f2868d100\") " pod="kube-system/cilium-lgrxz" May 13 23:44:23.358240 kubelet[3312]: I0513 23:44:23.356887 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/787c05ce-63c6-4617-ac05-aa7f2868d100-clustermesh-secrets\") pod \"cilium-lgrxz\" (UID: \"787c05ce-63c6-4617-ac05-aa7f2868d100\") " pod="kube-system/cilium-lgrxz" May 13 23:44:23.358240 kubelet[3312]: I0513 23:44:23.356903 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/787c05ce-63c6-4617-ac05-aa7f2868d100-cilium-config-path\") pod \"cilium-lgrxz\" (UID: \"787c05ce-63c6-4617-ac05-aa7f2868d100\") " pod="kube-system/cilium-lgrxz" May 13 23:44:23.358240 kubelet[3312]: I0513 23:44:23.356917 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-host-proc-sys-kernel\") pod \"cilium-lgrxz\" (UID: \"787c05ce-63c6-4617-ac05-aa7f2868d100\") " pod="kube-system/cilium-lgrxz" May 13 23:44:23.358367 kubelet[3312]: I0513 23:44:23.356934 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-hostproc\") pod \"cilium-lgrxz\" (UID: \"787c05ce-63c6-4617-ac05-aa7f2868d100\") " pod="kube-system/cilium-lgrxz" May 13 23:44:23.372981 systemd[1]: Created slice kubepods-besteffort-podecc7e9a2_db2c_46dc_bca5_4cad3f619693.slice - libcontainer container kubepods-besteffort-podecc7e9a2_db2c_46dc_bca5_4cad3f619693.slice. 
May 13 23:44:23.458310 kubelet[3312]: I0513 23:44:23.457218 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ecc7e9a2-db2c-46dc-bca5-4cad3f619693-cilium-config-path\") pod \"cilium-operator-5d85765b45-vwsd6\" (UID: \"ecc7e9a2-db2c-46dc-bca5-4cad3f619693\") " pod="kube-system/cilium-operator-5d85765b45-vwsd6" May 13 23:44:23.458310 kubelet[3312]: I0513 23:44:23.457311 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlbcq\" (UniqueName: \"kubernetes.io/projected/ecc7e9a2-db2c-46dc-bca5-4cad3f619693-kube-api-access-wlbcq\") pod \"cilium-operator-5d85765b45-vwsd6\" (UID: \"ecc7e9a2-db2c-46dc-bca5-4cad3f619693\") " pod="kube-system/cilium-operator-5d85765b45-vwsd6" May 13 23:44:23.613062 containerd[1762]: time="2025-05-13T23:44:23.613018744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lfkch,Uid:54fac63e-5b97-484b-95d2-66ef2f21b7ef,Namespace:kube-system,Attempt:0,}" May 13 23:44:23.632547 containerd[1762]: time="2025-05-13T23:44:23.632289668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lgrxz,Uid:787c05ce-63c6-4617-ac05-aa7f2868d100,Namespace:kube-system,Attempt:0,}" May 13 23:44:23.685001 containerd[1762]: time="2025-05-13T23:44:23.684838038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-vwsd6,Uid:ecc7e9a2-db2c-46dc-bca5-4cad3f619693,Namespace:kube-system,Attempt:0,}" May 13 23:44:23.685833 containerd[1762]: time="2025-05-13T23:44:23.685779118Z" level=info msg="connecting to shim 85487c21abd0b2e76bc11f061881f474430650540814df8eb05dcf96331b32be" address="unix:///run/containerd/s/ce14069e4e325f26a0217b3efcad2070a7f8163f951b3b61f45b57bb739c4971" namespace=k8s.io protocol=ttrpc version=3 May 13 23:44:23.707507 systemd[1]: Started 
cri-containerd-85487c21abd0b2e76bc11f061881f474430650540814df8eb05dcf96331b32be.scope - libcontainer container 85487c21abd0b2e76bc11f061881f474430650540814df8eb05dcf96331b32be. May 13 23:44:23.725405 containerd[1762]: time="2025-05-13T23:44:23.725259405Z" level=info msg="connecting to shim b983f11939ba18052f6cdc98ddba969935be51187f5791bc26b0774179f8ccdc" address="unix:///run/containerd/s/e266c0f7312ebf694b32c825ee0acaafbbf8eaf40bdd34b0c25394147bfb6ce5" namespace=k8s.io protocol=ttrpc version=3 May 13 23:44:23.757685 containerd[1762]: time="2025-05-13T23:44:23.757648812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lfkch,Uid:54fac63e-5b97-484b-95d2-66ef2f21b7ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"85487c21abd0b2e76bc11f061881f474430650540814df8eb05dcf96331b32be\"" May 13 23:44:23.760494 systemd[1]: Started cri-containerd-b983f11939ba18052f6cdc98ddba969935be51187f5791bc26b0774179f8ccdc.scope - libcontainer container b983f11939ba18052f6cdc98ddba969935be51187f5791bc26b0774179f8ccdc. 
May 13 23:44:23.764951 containerd[1762]: time="2025-05-13T23:44:23.763573773Z" level=info msg="CreateContainer within sandbox \"85487c21abd0b2e76bc11f061881f474430650540814df8eb05dcf96331b32be\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 23:44:23.784706 containerd[1762]: time="2025-05-13T23:44:23.784655337Z" level=info msg="connecting to shim d6d4e21b51fab399a57e782ed309724c04245f02dc9b2b18da6c3dd33b1456ee" address="unix:///run/containerd/s/d41a2e0d5dba063214245c6ed2cec6daa3f2d4a36958b0c2799450ac979421b7" namespace=k8s.io protocol=ttrpc version=3 May 13 23:44:23.802815 containerd[1762]: time="2025-05-13T23:44:23.802626780Z" level=info msg="Container ee71f62ea5374f1f84d0d8794ffd0c051fe487f4c6507eb6a068a01980c3347c: CDI devices from CRI Config.CDIDevices: []" May 13 23:44:23.805993 containerd[1762]: time="2025-05-13T23:44:23.805909061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lgrxz,Uid:787c05ce-63c6-4617-ac05-aa7f2868d100,Namespace:kube-system,Attempt:0,} returns sandbox id \"b983f11939ba18052f6cdc98ddba969935be51187f5791bc26b0774179f8ccdc\"" May 13 23:44:23.812437 containerd[1762]: time="2025-05-13T23:44:23.809969062Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 13 23:44:23.817509 systemd[1]: Started cri-containerd-d6d4e21b51fab399a57e782ed309724c04245f02dc9b2b18da6c3dd33b1456ee.scope - libcontainer container d6d4e21b51fab399a57e782ed309724c04245f02dc9b2b18da6c3dd33b1456ee. 
May 13 23:44:23.829617 containerd[1762]: time="2025-05-13T23:44:23.829406785Z" level=info msg="CreateContainer within sandbox \"85487c21abd0b2e76bc11f061881f474430650540814df8eb05dcf96331b32be\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ee71f62ea5374f1f84d0d8794ffd0c051fe487f4c6507eb6a068a01980c3347c\"" May 13 23:44:23.830284 containerd[1762]: time="2025-05-13T23:44:23.830218225Z" level=info msg="StartContainer for \"ee71f62ea5374f1f84d0d8794ffd0c051fe487f4c6507eb6a068a01980c3347c\"" May 13 23:44:23.832324 containerd[1762]: time="2025-05-13T23:44:23.832205586Z" level=info msg="connecting to shim ee71f62ea5374f1f84d0d8794ffd0c051fe487f4c6507eb6a068a01980c3347c" address="unix:///run/containerd/s/ce14069e4e325f26a0217b3efcad2070a7f8163f951b3b61f45b57bb739c4971" protocol=ttrpc version=3 May 13 23:44:23.862500 systemd[1]: Started cri-containerd-ee71f62ea5374f1f84d0d8794ffd0c051fe487f4c6507eb6a068a01980c3347c.scope - libcontainer container ee71f62ea5374f1f84d0d8794ffd0c051fe487f4c6507eb6a068a01980c3347c. 
May 13 23:44:23.875662 containerd[1762]: time="2025-05-13T23:44:23.875514954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-vwsd6,Uid:ecc7e9a2-db2c-46dc-bca5-4cad3f619693,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6d4e21b51fab399a57e782ed309724c04245f02dc9b2b18da6c3dd33b1456ee\"" May 13 23:44:23.910097 containerd[1762]: time="2025-05-13T23:44:23.910034281Z" level=info msg="StartContainer for \"ee71f62ea5374f1f84d0d8794ffd0c051fe487f4c6507eb6a068a01980c3347c\" returns successfully" May 13 23:44:26.398942 kubelet[3312]: I0513 23:44:26.398365 3312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lfkch" podStartSLOduration=3.398344321 podStartE2EDuration="3.398344321s" podCreationTimestamp="2025-05-13 23:44:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:44:24.733471838 +0000 UTC m=+7.366714242" watchObservedRunningTime="2025-05-13 23:44:26.398344321 +0000 UTC m=+9.031586725" May 13 23:44:31.679475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3551544325.mount: Deactivated successfully. 
May 13 23:44:36.683560 containerd[1762]: time="2025-05-13T23:44:36.683488468Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:44:36.688307 containerd[1762]: time="2025-05-13T23:44:36.688099026Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 13 23:44:36.695419 containerd[1762]: time="2025-05-13T23:44:36.695308424Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:44:36.696723 containerd[1762]: time="2025-05-13T23:44:36.696408103Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 12.886396681s" May 13 23:44:36.696723 containerd[1762]: time="2025-05-13T23:44:36.696454583Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 13 23:44:36.700632 containerd[1762]: time="2025-05-13T23:44:36.700588862Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 13 23:44:36.701324 containerd[1762]: time="2025-05-13T23:44:36.701291701Z" level=info msg="CreateContainer within sandbox \"b983f11939ba18052f6cdc98ddba969935be51187f5791bc26b0774179f8ccdc\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 23:44:36.757788 containerd[1762]: time="2025-05-13T23:44:36.757747121Z" level=info msg="Container 9e6882e419d22f90b4c6de549232f832d6e81779c278b8dc94282dc0e032dc55: CDI devices from CRI Config.CDIDevices: []" May 13 23:44:36.763030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount992014649.mount: Deactivated successfully. May 13 23:44:36.778936 containerd[1762]: time="2025-05-13T23:44:36.778868393Z" level=info msg="CreateContainer within sandbox \"b983f11939ba18052f6cdc98ddba969935be51187f5791bc26b0774179f8ccdc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9e6882e419d22f90b4c6de549232f832d6e81779c278b8dc94282dc0e032dc55\"" May 13 23:44:36.780288 containerd[1762]: time="2025-05-13T23:44:36.779493833Z" level=info msg="StartContainer for \"9e6882e419d22f90b4c6de549232f832d6e81779c278b8dc94282dc0e032dc55\"" May 13 23:44:36.781637 containerd[1762]: time="2025-05-13T23:44:36.781305033Z" level=info msg="connecting to shim 9e6882e419d22f90b4c6de549232f832d6e81779c278b8dc94282dc0e032dc55" address="unix:///run/containerd/s/e266c0f7312ebf694b32c825ee0acaafbbf8eaf40bdd34b0c25394147bfb6ce5" protocol=ttrpc version=3 May 13 23:44:36.802541 systemd[1]: Started cri-containerd-9e6882e419d22f90b4c6de549232f832d6e81779c278b8dc94282dc0e032dc55.scope - libcontainer container 9e6882e419d22f90b4c6de549232f832d6e81779c278b8dc94282dc0e032dc55. May 13 23:44:36.835567 containerd[1762]: time="2025-05-13T23:44:36.835403933Z" level=info msg="StartContainer for \"9e6882e419d22f90b4c6de549232f832d6e81779c278b8dc94282dc0e032dc55\" returns successfully" May 13 23:44:36.846879 systemd[1]: cri-containerd-9e6882e419d22f90b4c6de549232f832d6e81779c278b8dc94282dc0e032dc55.scope: Deactivated successfully. 
May 13 23:44:36.851022 containerd[1762]: time="2025-05-13T23:44:36.850957368Z" level=info msg="received exit event container_id:\"9e6882e419d22f90b4c6de549232f832d6e81779c278b8dc94282dc0e032dc55\" id:\"9e6882e419d22f90b4c6de549232f832d6e81779c278b8dc94282dc0e032dc55\" pid:3721 exited_at:{seconds:1747179876 nanos:850041528}" May 13 23:44:36.851520 containerd[1762]: time="2025-05-13T23:44:36.851007287Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e6882e419d22f90b4c6de549232f832d6e81779c278b8dc94282dc0e032dc55\" id:\"9e6882e419d22f90b4c6de549232f832d6e81779c278b8dc94282dc0e032dc55\" pid:3721 exited_at:{seconds:1747179876 nanos:850041528}" May 13 23:44:36.875463 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e6882e419d22f90b4c6de549232f832d6e81779c278b8dc94282dc0e032dc55-rootfs.mount: Deactivated successfully. May 13 23:44:38.758730 containerd[1762]: time="2025-05-13T23:44:38.757384361Z" level=info msg="CreateContainer within sandbox \"b983f11939ba18052f6cdc98ddba969935be51187f5791bc26b0774179f8ccdc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 23:44:38.796836 containerd[1762]: time="2025-05-13T23:44:38.796464547Z" level=info msg="Container 6c97ac43ae21de8e2bd0fffbd29ba6e1dd0abc6c0f5352ea5cea774554726c88: CDI devices from CRI Config.CDIDevices: []" May 13 23:44:38.820883 containerd[1762]: time="2025-05-13T23:44:38.820814618Z" level=info msg="CreateContainer within sandbox \"b983f11939ba18052f6cdc98ddba969935be51187f5791bc26b0774179f8ccdc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6c97ac43ae21de8e2bd0fffbd29ba6e1dd0abc6c0f5352ea5cea774554726c88\"" May 13 23:44:38.822302 containerd[1762]: time="2025-05-13T23:44:38.821926258Z" level=info msg="StartContainer for \"6c97ac43ae21de8e2bd0fffbd29ba6e1dd0abc6c0f5352ea5cea774554726c88\"" May 13 23:44:38.824183 containerd[1762]: time="2025-05-13T23:44:38.824009457Z" level=info msg="connecting to shim 
6c97ac43ae21de8e2bd0fffbd29ba6e1dd0abc6c0f5352ea5cea774554726c88" address="unix:///run/containerd/s/e266c0f7312ebf694b32c825ee0acaafbbf8eaf40bdd34b0c25394147bfb6ce5" protocol=ttrpc version=3 May 13 23:44:38.851511 systemd[1]: Started cri-containerd-6c97ac43ae21de8e2bd0fffbd29ba6e1dd0abc6c0f5352ea5cea774554726c88.scope - libcontainer container 6c97ac43ae21de8e2bd0fffbd29ba6e1dd0abc6c0f5352ea5cea774554726c88. May 13 23:44:38.889219 containerd[1762]: time="2025-05-13T23:44:38.889182794Z" level=info msg="StartContainer for \"6c97ac43ae21de8e2bd0fffbd29ba6e1dd0abc6c0f5352ea5cea774554726c88\" returns successfully" May 13 23:44:38.898589 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 23:44:38.898949 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 23:44:38.900729 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 13 23:44:38.903603 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 23:44:38.907872 containerd[1762]: time="2025-05-13T23:44:38.906579148Z" level=info msg="received exit event container_id:\"6c97ac43ae21de8e2bd0fffbd29ba6e1dd0abc6c0f5352ea5cea774554726c88\" id:\"6c97ac43ae21de8e2bd0fffbd29ba6e1dd0abc6c0f5352ea5cea774554726c88\" pid:3764 exited_at:{seconds:1747179878 nanos:905724828}" May 13 23:44:38.906819 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 23:44:38.907302 systemd[1]: cri-containerd-6c97ac43ae21de8e2bd0fffbd29ba6e1dd0abc6c0f5352ea5cea774554726c88.scope: Deactivated successfully. 
May 13 23:44:38.910891 containerd[1762]: time="2025-05-13T23:44:38.910812466Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6c97ac43ae21de8e2bd0fffbd29ba6e1dd0abc6c0f5352ea5cea774554726c88\" id:\"6c97ac43ae21de8e2bd0fffbd29ba6e1dd0abc6c0f5352ea5cea774554726c88\" pid:3764 exited_at:{seconds:1747179878 nanos:905724828}" May 13 23:44:38.929368 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 23:44:38.941041 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c97ac43ae21de8e2bd0fffbd29ba6e1dd0abc6c0f5352ea5cea774554726c88-rootfs.mount: Deactivated successfully. May 13 23:44:39.765513 containerd[1762]: time="2025-05-13T23:44:39.764045719Z" level=info msg="CreateContainer within sandbox \"b983f11939ba18052f6cdc98ddba969935be51187f5791bc26b0774179f8ccdc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 23:44:39.805088 containerd[1762]: time="2025-05-13T23:44:39.805040904Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:44:39.814729 containerd[1762]: time="2025-05-13T23:44:39.814493861Z" level=info msg="Container 4f11759d0b2dcc9ca0f0abb199a07ebfd5dde33ddf7c3a7a0ba9e9e2c00ffddf: CDI devices from CRI Config.CDIDevices: []" May 13 23:44:39.815785 containerd[1762]: time="2025-05-13T23:44:39.815724540Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 13 23:44:39.821180 containerd[1762]: time="2025-05-13T23:44:39.820448379Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:44:39.831889 containerd[1762]: time="2025-05-13T23:44:39.831831055Z" level=info 
msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.130953753s" May 13 23:44:39.831889 containerd[1762]: time="2025-05-13T23:44:39.831887975Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 13 23:44:39.834445 containerd[1762]: time="2025-05-13T23:44:39.834030334Z" level=info msg="CreateContainer within sandbox \"d6d4e21b51fab399a57e782ed309724c04245f02dc9b2b18da6c3dd33b1456ee\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 13 23:44:39.847634 containerd[1762]: time="2025-05-13T23:44:39.847589449Z" level=info msg="CreateContainer within sandbox \"b983f11939ba18052f6cdc98ddba969935be51187f5791bc26b0774179f8ccdc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4f11759d0b2dcc9ca0f0abb199a07ebfd5dde33ddf7c3a7a0ba9e9e2c00ffddf\"" May 13 23:44:39.848736 containerd[1762]: time="2025-05-13T23:44:39.848590289Z" level=info msg="StartContainer for \"4f11759d0b2dcc9ca0f0abb199a07ebfd5dde33ddf7c3a7a0ba9e9e2c00ffddf\"" May 13 23:44:39.851365 containerd[1762]: time="2025-05-13T23:44:39.851325168Z" level=info msg="connecting to shim 4f11759d0b2dcc9ca0f0abb199a07ebfd5dde33ddf7c3a7a0ba9e9e2c00ffddf" address="unix:///run/containerd/s/e266c0f7312ebf694b32c825ee0acaafbbf8eaf40bdd34b0c25394147bfb6ce5" protocol=ttrpc version=3 May 13 23:44:39.870838 containerd[1762]: time="2025-05-13T23:44:39.870053761Z" level=info msg="Container 95f2b8d28401d3d3df62ca8f618132a08a62ec8f3fd396fe0888d3894549475f: CDI devices from CRI 
Config.CDIDevices: []" May 13 23:44:39.873517 systemd[1]: Started cri-containerd-4f11759d0b2dcc9ca0f0abb199a07ebfd5dde33ddf7c3a7a0ba9e9e2c00ffddf.scope - libcontainer container 4f11759d0b2dcc9ca0f0abb199a07ebfd5dde33ddf7c3a7a0ba9e9e2c00ffddf. May 13 23:44:39.889044 containerd[1762]: time="2025-05-13T23:44:39.888980874Z" level=info msg="CreateContainer within sandbox \"d6d4e21b51fab399a57e782ed309724c04245f02dc9b2b18da6c3dd33b1456ee\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"95f2b8d28401d3d3df62ca8f618132a08a62ec8f3fd396fe0888d3894549475f\"" May 13 23:44:39.891220 containerd[1762]: time="2025-05-13T23:44:39.891154233Z" level=info msg="StartContainer for \"95f2b8d28401d3d3df62ca8f618132a08a62ec8f3fd396fe0888d3894549475f\"" May 13 23:44:39.893085 containerd[1762]: time="2025-05-13T23:44:39.892815033Z" level=info msg="connecting to shim 95f2b8d28401d3d3df62ca8f618132a08a62ec8f3fd396fe0888d3894549475f" address="unix:///run/containerd/s/d41a2e0d5dba063214245c6ed2cec6daa3f2d4a36958b0c2799450ac979421b7" protocol=ttrpc version=3 May 13 23:44:39.918486 systemd[1]: Started cri-containerd-95f2b8d28401d3d3df62ca8f618132a08a62ec8f3fd396fe0888d3894549475f.scope - libcontainer container 95f2b8d28401d3d3df62ca8f618132a08a62ec8f3fd396fe0888d3894549475f. May 13 23:44:39.928725 systemd[1]: cri-containerd-4f11759d0b2dcc9ca0f0abb199a07ebfd5dde33ddf7c3a7a0ba9e9e2c00ffddf.scope: Deactivated successfully. 
May 13 23:44:39.933927 containerd[1762]: time="2025-05-13T23:44:39.933627178Z" level=info msg="received exit event container_id:\"4f11759d0b2dcc9ca0f0abb199a07ebfd5dde33ddf7c3a7a0ba9e9e2c00ffddf\" id:\"4f11759d0b2dcc9ca0f0abb199a07ebfd5dde33ddf7c3a7a0ba9e9e2c00ffddf\" pid:3826 exited_at:{seconds:1747179879 nanos:933392098}" May 13 23:44:39.933927 containerd[1762]: time="2025-05-13T23:44:39.933897138Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4f11759d0b2dcc9ca0f0abb199a07ebfd5dde33ddf7c3a7a0ba9e9e2c00ffddf\" id:\"4f11759d0b2dcc9ca0f0abb199a07ebfd5dde33ddf7c3a7a0ba9e9e2c00ffddf\" pid:3826 exited_at:{seconds:1747179879 nanos:933392098}" May 13 23:44:39.936363 containerd[1762]: time="2025-05-13T23:44:39.936326697Z" level=info msg="StartContainer for \"4f11759d0b2dcc9ca0f0abb199a07ebfd5dde33ddf7c3a7a0ba9e9e2c00ffddf\" returns successfully" May 13 23:44:40.272839 containerd[1762]: time="2025-05-13T23:44:40.272726336Z" level=info msg="StartContainer for \"95f2b8d28401d3d3df62ca8f618132a08a62ec8f3fd396fe0888d3894549475f\" returns successfully" May 13 23:44:40.771562 containerd[1762]: time="2025-05-13T23:44:40.771455356Z" level=info msg="CreateContainer within sandbox \"b983f11939ba18052f6cdc98ddba969935be51187f5791bc26b0774179f8ccdc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 23:44:40.802620 containerd[1762]: time="2025-05-13T23:44:40.802573025Z" level=info msg="Container 8dcba35987490e1729f746fdf63fc25613ec5ff07c6e95f0cf0426c0f6d60bcd: CDI devices from CRI Config.CDIDevices: []" May 13 23:44:40.805390 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f11759d0b2dcc9ca0f0abb199a07ebfd5dde33ddf7c3a7a0ba9e9e2c00ffddf-rootfs.mount: Deactivated successfully. 
May 13 23:44:40.819961 containerd[1762]: time="2025-05-13T23:44:40.819905979Z" level=info msg="CreateContainer within sandbox \"b983f11939ba18052f6cdc98ddba969935be51187f5791bc26b0774179f8ccdc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8dcba35987490e1729f746fdf63fc25613ec5ff07c6e95f0cf0426c0f6d60bcd\"" May 13 23:44:40.822614 containerd[1762]: time="2025-05-13T23:44:40.822572298Z" level=info msg="StartContainer for \"8dcba35987490e1729f746fdf63fc25613ec5ff07c6e95f0cf0426c0f6d60bcd\"" May 13 23:44:40.824552 containerd[1762]: time="2025-05-13T23:44:40.824498017Z" level=info msg="connecting to shim 8dcba35987490e1729f746fdf63fc25613ec5ff07c6e95f0cf0426c0f6d60bcd" address="unix:///run/containerd/s/e266c0f7312ebf694b32c825ee0acaafbbf8eaf40bdd34b0c25394147bfb6ce5" protocol=ttrpc version=3 May 13 23:44:40.858614 systemd[1]: Started cri-containerd-8dcba35987490e1729f746fdf63fc25613ec5ff07c6e95f0cf0426c0f6d60bcd.scope - libcontainer container 8dcba35987490e1729f746fdf63fc25613ec5ff07c6e95f0cf0426c0f6d60bcd. May 13 23:44:40.935223 systemd[1]: cri-containerd-8dcba35987490e1729f746fdf63fc25613ec5ff07c6e95f0cf0426c0f6d60bcd.scope: Deactivated successfully. 
May 13 23:44:40.937761 containerd[1762]: time="2025-05-13T23:44:40.936520457Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8dcba35987490e1729f746fdf63fc25613ec5ff07c6e95f0cf0426c0f6d60bcd\" id:\"8dcba35987490e1729f746fdf63fc25613ec5ff07c6e95f0cf0426c0f6d60bcd\" pid:3898 exited_at:{seconds:1747179880 nanos:935974337}" May 13 23:44:40.940347 containerd[1762]: time="2025-05-13T23:44:40.939783936Z" level=info msg="received exit event container_id:\"8dcba35987490e1729f746fdf63fc25613ec5ff07c6e95f0cf0426c0f6d60bcd\" id:\"8dcba35987490e1729f746fdf63fc25613ec5ff07c6e95f0cf0426c0f6d60bcd\" pid:3898 exited_at:{seconds:1747179880 nanos:935974337}" May 13 23:44:40.946775 containerd[1762]: time="2025-05-13T23:44:40.946642773Z" level=info msg="StartContainer for \"8dcba35987490e1729f746fdf63fc25613ec5ff07c6e95f0cf0426c0f6d60bcd\" returns successfully" May 13 23:44:40.955454 kubelet[3312]: I0513 23:44:40.955056 3312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-vwsd6" podStartSLOduration=2.000483431 podStartE2EDuration="17.95503409s" podCreationTimestamp="2025-05-13 23:44:23 +0000 UTC" firstStartedPulling="2025-05-13 23:44:23.878036595 +0000 UTC m=+6.511278999" lastFinishedPulling="2025-05-13 23:44:39.832587254 +0000 UTC m=+22.465829658" observedRunningTime="2025-05-13 23:44:40.786466391 +0000 UTC m=+23.419708835" watchObservedRunningTime="2025-05-13 23:44:40.95503409 +0000 UTC m=+23.588276494" May 13 23:44:40.975497 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8dcba35987490e1729f746fdf63fc25613ec5ff07c6e95f0cf0426c0f6d60bcd-rootfs.mount: Deactivated successfully. 
May 13 23:44:41.787296 containerd[1762]: time="2025-05-13T23:44:41.787227951Z" level=info msg="CreateContainer within sandbox \"b983f11939ba18052f6cdc98ddba969935be51187f5791bc26b0774179f8ccdc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 23:44:41.816971 containerd[1762]: time="2025-05-13T23:44:41.816563540Z" level=info msg="Container eff60ec307c65f3fdc8a64c3647d4d6ab60f5b58b0e8e1d093142ec6ecb39c5e: CDI devices from CRI Config.CDIDevices: []" May 13 23:44:41.819943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2751846362.mount: Deactivated successfully. May 13 23:44:41.832948 containerd[1762]: time="2025-05-13T23:44:41.832885974Z" level=info msg="CreateContainer within sandbox \"b983f11939ba18052f6cdc98ddba969935be51187f5791bc26b0774179f8ccdc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"eff60ec307c65f3fdc8a64c3647d4d6ab60f5b58b0e8e1d093142ec6ecb39c5e\"" May 13 23:44:41.834071 containerd[1762]: time="2025-05-13T23:44:41.834005494Z" level=info msg="StartContainer for \"eff60ec307c65f3fdc8a64c3647d4d6ab60f5b58b0e8e1d093142ec6ecb39c5e\"" May 13 23:44:41.835709 containerd[1762]: time="2025-05-13T23:44:41.835663853Z" level=info msg="connecting to shim eff60ec307c65f3fdc8a64c3647d4d6ab60f5b58b0e8e1d093142ec6ecb39c5e" address="unix:///run/containerd/s/e266c0f7312ebf694b32c825ee0acaafbbf8eaf40bdd34b0c25394147bfb6ce5" protocol=ttrpc version=3 May 13 23:44:41.859512 systemd[1]: Started cri-containerd-eff60ec307c65f3fdc8a64c3647d4d6ab60f5b58b0e8e1d093142ec6ecb39c5e.scope - libcontainer container eff60ec307c65f3fdc8a64c3647d4d6ab60f5b58b0e8e1d093142ec6ecb39c5e. 
May 13 23:44:41.902481 containerd[1762]: time="2025-05-13T23:44:41.902429989Z" level=info msg="StartContainer for \"eff60ec307c65f3fdc8a64c3647d4d6ab60f5b58b0e8e1d093142ec6ecb39c5e\" returns successfully" May 13 23:44:41.998181 containerd[1762]: time="2025-05-13T23:44:41.996948315Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eff60ec307c65f3fdc8a64c3647d4d6ab60f5b58b0e8e1d093142ec6ecb39c5e\" id:\"0a199d7f4881891624227a94c6599f6ee0bc3c3b8f31dbdf9801a9e1e5c5483d\" pid:3962 exited_at:{seconds:1747179881 nanos:996330235}" May 13 23:44:42.091255 kubelet[3312]: I0513 23:44:42.090926 3312 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 13 23:44:42.141567 systemd[1]: Created slice kubepods-burstable-poda32f9734_fe38_48bb_879b_94db70e8351e.slice - libcontainer container kubepods-burstable-poda32f9734_fe38_48bb_879b_94db70e8351e.slice. May 13 23:44:42.157107 systemd[1]: Created slice kubepods-burstable-pod49c9f577_0cb1_4955_8dec_ac00bab33b4c.slice - libcontainer container kubepods-burstable-pod49c9f577_0cb1_4955_8dec_ac00bab33b4c.slice. 
May 13 23:44:42.191608 kubelet[3312]: I0513 23:44:42.191553 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49c9f577-0cb1-4955-8dec-ac00bab33b4c-config-volume\") pod \"coredns-6f6b679f8f-vt492\" (UID: \"49c9f577-0cb1-4955-8dec-ac00bab33b4c\") " pod="kube-system/coredns-6f6b679f8f-vt492" May 13 23:44:42.192230 kubelet[3312]: I0513 23:44:42.191925 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a32f9734-fe38-48bb-879b-94db70e8351e-config-volume\") pod \"coredns-6f6b679f8f-bn8cj\" (UID: \"a32f9734-fe38-48bb-879b-94db70e8351e\") " pod="kube-system/coredns-6f6b679f8f-bn8cj" May 13 23:44:42.192230 kubelet[3312]: I0513 23:44:42.192084 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7lcb\" (UniqueName: \"kubernetes.io/projected/a32f9734-fe38-48bb-879b-94db70e8351e-kube-api-access-x7lcb\") pod \"coredns-6f6b679f8f-bn8cj\" (UID: \"a32f9734-fe38-48bb-879b-94db70e8351e\") " pod="kube-system/coredns-6f6b679f8f-bn8cj" May 13 23:44:42.192757 kubelet[3312]: I0513 23:44:42.192118 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7kqw\" (UniqueName: \"kubernetes.io/projected/49c9f577-0cb1-4955-8dec-ac00bab33b4c-kube-api-access-w7kqw\") pod \"coredns-6f6b679f8f-vt492\" (UID: \"49c9f577-0cb1-4955-8dec-ac00bab33b4c\") " pod="kube-system/coredns-6f6b679f8f-vt492" May 13 23:44:42.451932 containerd[1762]: time="2025-05-13T23:44:42.451480792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bn8cj,Uid:a32f9734-fe38-48bb-879b-94db70e8351e,Namespace:kube-system,Attempt:0,}" May 13 23:44:42.477203 containerd[1762]: time="2025-05-13T23:44:42.476723743Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-vt492,Uid:49c9f577-0cb1-4955-8dec-ac00bab33b4c,Namespace:kube-system,Attempt:0,}" May 13 23:44:44.193787 systemd-networkd[1342]: cilium_host: Link UP May 13 23:44:44.193912 systemd-networkd[1342]: cilium_net: Link UP May 13 23:44:44.193914 systemd-networkd[1342]: cilium_net: Gained carrier May 13 23:44:44.194048 systemd-networkd[1342]: cilium_host: Gained carrier May 13 23:44:44.194147 systemd-networkd[1342]: cilium_net: Gained IPv6LL May 13 23:44:44.354807 systemd-networkd[1342]: cilium_vxlan: Link UP May 13 23:44:44.354821 systemd-networkd[1342]: cilium_vxlan: Gained carrier May 13 23:44:44.629494 kernel: NET: Registered PF_ALG protocol family May 13 23:44:44.791355 systemd-networkd[1342]: cilium_host: Gained IPv6LL May 13 23:44:45.391044 systemd-networkd[1342]: lxc_health: Link UP May 13 23:44:45.397086 systemd-networkd[1342]: lxc_health: Gained carrier May 13 23:44:45.533949 systemd-networkd[1342]: lxc72663f5994fb: Link UP May 13 23:44:45.543402 kernel: eth0: renamed from tmpfc291 May 13 23:44:45.550058 systemd-networkd[1342]: lxc72663f5994fb: Gained carrier May 13 23:44:45.624403 systemd-networkd[1342]: cilium_vxlan: Gained IPv6LL May 13 23:44:45.658303 kubelet[3312]: I0513 23:44:45.657653 3312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lgrxz" podStartSLOduration=9.768191765 podStartE2EDuration="22.657634087s" podCreationTimestamp="2025-05-13 23:44:23 +0000 UTC" firstStartedPulling="2025-05-13 23:44:23.808454541 +0000 UTC m=+6.441696945" lastFinishedPulling="2025-05-13 23:44:36.697896863 +0000 UTC m=+19.331139267" observedRunningTime="2025-05-13 23:44:42.809947703 +0000 UTC m=+25.443190107" watchObservedRunningTime="2025-05-13 23:44:45.657634087 +0000 UTC m=+28.290876491" May 13 23:44:46.000288 systemd-networkd[1342]: lxc1e5071be5f43: Link UP May 13 23:44:46.010322 kernel: eth0: renamed from tmp7171b May 13 23:44:46.015852 systemd-networkd[1342]: lxc1e5071be5f43: 
Gained carrier May 13 23:44:47.096955 systemd-networkd[1342]: lxc1e5071be5f43: Gained IPv6LL May 13 23:44:47.223508 systemd-networkd[1342]: lxc_health: Gained IPv6LL May 13 23:44:47.478504 systemd-networkd[1342]: lxc72663f5994fb: Gained IPv6LL May 13 23:44:50.066604 containerd[1762]: time="2025-05-13T23:44:50.065857764Z" level=info msg="connecting to shim 7171b5d78df67afab97de95a40bafb44b2cfe04fd9e495ad0a3e6c73e92cd4e5" address="unix:///run/containerd/s/ca744e818bc929ae1001ad58594936a3a991e0c666059776fd9865083a8ff329" namespace=k8s.io protocol=ttrpc version=3 May 13 23:44:50.107623 containerd[1762]: time="2025-05-13T23:44:50.107075032Z" level=info msg="connecting to shim fc291feecfe1f1f2d3e9b9c78f1b65ec047562bab2134e512cd265a926cc8a48" address="unix:///run/containerd/s/df8dd13ac6fae422fc79ffeb6e441710a63b2a4b65b85ddbe76db9a51b9b2a79" namespace=k8s.io protocol=ttrpc version=3 May 13 23:44:50.108795 systemd[1]: Started cri-containerd-7171b5d78df67afab97de95a40bafb44b2cfe04fd9e495ad0a3e6c73e92cd4e5.scope - libcontainer container 7171b5d78df67afab97de95a40bafb44b2cfe04fd9e495ad0a3e6c73e92cd4e5. May 13 23:44:50.152528 systemd[1]: Started cri-containerd-fc291feecfe1f1f2d3e9b9c78f1b65ec047562bab2134e512cd265a926cc8a48.scope - libcontainer container fc291feecfe1f1f2d3e9b9c78f1b65ec047562bab2134e512cd265a926cc8a48. 
May 13 23:44:50.195474 containerd[1762]: time="2025-05-13T23:44:50.195421165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bn8cj,Uid:a32f9734-fe38-48bb-879b-94db70e8351e,Namespace:kube-system,Attempt:0,} returns sandbox id \"7171b5d78df67afab97de95a40bafb44b2cfe04fd9e495ad0a3e6c73e92cd4e5\"" May 13 23:44:50.199346 containerd[1762]: time="2025-05-13T23:44:50.199296244Z" level=info msg="CreateContainer within sandbox \"7171b5d78df67afab97de95a40bafb44b2cfe04fd9e495ad0a3e6c73e92cd4e5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 23:44:50.232496 containerd[1762]: time="2025-05-13T23:44:50.232248874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vt492,Uid:49c9f577-0cb1-4955-8dec-ac00bab33b4c,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc291feecfe1f1f2d3e9b9c78f1b65ec047562bab2134e512cd265a926cc8a48\"" May 13 23:44:50.237067 containerd[1762]: time="2025-05-13T23:44:50.237013633Z" level=info msg="CreateContainer within sandbox \"fc291feecfe1f1f2d3e9b9c78f1b65ec047562bab2134e512cd265a926cc8a48\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 23:44:50.241881 containerd[1762]: time="2025-05-13T23:44:50.241827751Z" level=info msg="Container ba9aab648f80f1d93215dc057c787959699a34a95e2f4bc60fa1d6bffa94b325: CDI devices from CRI Config.CDIDevices: []" May 13 23:44:50.269689 containerd[1762]: time="2025-05-13T23:44:50.269631823Z" level=info msg="Container 513f8ba034b268b7d09be5a86e52a2ef2ae794e598c7d848d36b1b4abf8166ea: CDI devices from CRI Config.CDIDevices: []" May 13 23:44:50.277571 containerd[1762]: time="2025-05-13T23:44:50.277437821Z" level=info msg="CreateContainer within sandbox \"7171b5d78df67afab97de95a40bafb44b2cfe04fd9e495ad0a3e6c73e92cd4e5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ba9aab648f80f1d93215dc057c787959699a34a95e2f4bc60fa1d6bffa94b325\"" May 13 23:44:50.278451 containerd[1762]: 
time="2025-05-13T23:44:50.278216460Z" level=info msg="StartContainer for \"ba9aab648f80f1d93215dc057c787959699a34a95e2f4bc60fa1d6bffa94b325\"" May 13 23:44:50.280442 containerd[1762]: time="2025-05-13T23:44:50.280156620Z" level=info msg="connecting to shim ba9aab648f80f1d93215dc057c787959699a34a95e2f4bc60fa1d6bffa94b325" address="unix:///run/containerd/s/ca744e818bc929ae1001ad58594936a3a991e0c666059776fd9865083a8ff329" protocol=ttrpc version=3 May 13 23:44:50.290819 containerd[1762]: time="2025-05-13T23:44:50.290724417Z" level=info msg="CreateContainer within sandbox \"fc291feecfe1f1f2d3e9b9c78f1b65ec047562bab2134e512cd265a926cc8a48\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"513f8ba034b268b7d09be5a86e52a2ef2ae794e598c7d848d36b1b4abf8166ea\"" May 13 23:44:50.291758 containerd[1762]: time="2025-05-13T23:44:50.291710016Z" level=info msg="StartContainer for \"513f8ba034b268b7d09be5a86e52a2ef2ae794e598c7d848d36b1b4abf8166ea\"" May 13 23:44:50.292851 containerd[1762]: time="2025-05-13T23:44:50.292682336Z" level=info msg="connecting to shim 513f8ba034b268b7d09be5a86e52a2ef2ae794e598c7d848d36b1b4abf8166ea" address="unix:///run/containerd/s/df8dd13ac6fae422fc79ffeb6e441710a63b2a4b65b85ddbe76db9a51b9b2a79" protocol=ttrpc version=3 May 13 23:44:50.307649 systemd[1]: Started cri-containerd-ba9aab648f80f1d93215dc057c787959699a34a95e2f4bc60fa1d6bffa94b325.scope - libcontainer container ba9aab648f80f1d93215dc057c787959699a34a95e2f4bc60fa1d6bffa94b325. May 13 23:44:50.319534 systemd[1]: Started cri-containerd-513f8ba034b268b7d09be5a86e52a2ef2ae794e598c7d848d36b1b4abf8166ea.scope - libcontainer container 513f8ba034b268b7d09be5a86e52a2ef2ae794e598c7d848d36b1b4abf8166ea. 
May 13 23:44:50.368603 containerd[1762]: time="2025-05-13T23:44:50.368463553Z" level=info msg="StartContainer for \"513f8ba034b268b7d09be5a86e52a2ef2ae794e598c7d848d36b1b4abf8166ea\" returns successfully" May 13 23:44:50.374309 containerd[1762]: time="2025-05-13T23:44:50.373359992Z" level=info msg="StartContainer for \"ba9aab648f80f1d93215dc057c787959699a34a95e2f4bc60fa1d6bffa94b325\" returns successfully" May 13 23:44:50.843335 kubelet[3312]: I0513 23:44:50.842867 3312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-bn8cj" podStartSLOduration=27.842837011 podStartE2EDuration="27.842837011s" podCreationTimestamp="2025-05-13 23:44:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:44:50.824833376 +0000 UTC m=+33.458075780" watchObservedRunningTime="2025-05-13 23:44:50.842837011 +0000 UTC m=+33.476079415" May 13 23:44:51.055321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3551600415.mount: Deactivated successfully. May 13 23:46:23.391404 systemd[1]: Started sshd@7-10.200.20.10:22-10.200.16.10:45440.service - OpenSSH per-connection server daemon (10.200.16.10:45440). May 13 23:46:23.819082 sshd[4626]: Accepted publickey for core from 10.200.16.10 port 45440 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:46:23.820458 sshd-session[4626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:46:23.825192 systemd-logind[1722]: New session 10 of user core. May 13 23:46:23.841489 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 23:46:24.214095 sshd[4629]: Connection closed by 10.200.16.10 port 45440 May 13 23:46:24.213927 sshd-session[4626]: pam_unix(sshd:session): session closed for user core May 13 23:46:24.217962 systemd[1]: sshd@7-10.200.20.10:22-10.200.16.10:45440.service: Deactivated successfully. 
May 13 23:46:24.220093 systemd[1]: session-10.scope: Deactivated successfully. May 13 23:46:24.221221 systemd-logind[1722]: Session 10 logged out. Waiting for processes to exit. May 13 23:46:24.222109 systemd-logind[1722]: Removed session 10. May 13 23:46:29.292292 systemd[1]: Started sshd@8-10.200.20.10:22-10.200.16.10:49630.service - OpenSSH per-connection server daemon (10.200.16.10:49630). May 13 23:46:29.721582 sshd[4643]: Accepted publickey for core from 10.200.16.10 port 49630 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:46:29.723150 sshd-session[4643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:46:29.727697 systemd-logind[1722]: New session 11 of user core. May 13 23:46:29.734466 systemd[1]: Started session-11.scope - Session 11 of User core. May 13 23:46:30.118072 sshd[4645]: Connection closed by 10.200.16.10 port 49630 May 13 23:46:30.118882 sshd-session[4643]: pam_unix(sshd:session): session closed for user core May 13 23:46:30.123191 systemd[1]: sshd@8-10.200.20.10:22-10.200.16.10:49630.service: Deactivated successfully. May 13 23:46:30.125961 systemd[1]: session-11.scope: Deactivated successfully. May 13 23:46:30.127386 systemd-logind[1722]: Session 11 logged out. Waiting for processes to exit. May 13 23:46:30.128370 systemd-logind[1722]: Removed session 11. May 13 23:46:35.201407 systemd[1]: Started sshd@9-10.200.20.10:22-10.200.16.10:49646.service - OpenSSH per-connection server daemon (10.200.16.10:49646). May 13 23:46:35.658755 sshd[4658]: Accepted publickey for core from 10.200.16.10 port 49646 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:46:35.660200 sshd-session[4658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:46:35.665283 systemd-logind[1722]: New session 12 of user core. May 13 23:46:35.675495 systemd[1]: Started session-12.scope - Session 12 of User core. 
May 13 23:46:36.061416 sshd[4660]: Connection closed by 10.200.16.10 port 49646 May 13 23:46:36.061884 sshd-session[4658]: pam_unix(sshd:session): session closed for user core May 13 23:46:36.067053 systemd-logind[1722]: Session 12 logged out. Waiting for processes to exit. May 13 23:46:36.067329 systemd[1]: sshd@9-10.200.20.10:22-10.200.16.10:49646.service: Deactivated successfully. May 13 23:46:36.070246 systemd[1]: session-12.scope: Deactivated successfully. May 13 23:46:36.071966 systemd-logind[1722]: Removed session 12. May 13 23:46:41.151125 systemd[1]: Started sshd@10-10.200.20.10:22-10.200.16.10:60724.service - OpenSSH per-connection server daemon (10.200.16.10:60724). May 13 23:46:41.584710 sshd[4673]: Accepted publickey for core from 10.200.16.10 port 60724 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:46:41.586019 sshd-session[4673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:46:41.590750 systemd-logind[1722]: New session 13 of user core. May 13 23:46:41.596502 systemd[1]: Started session-13.scope - Session 13 of User core. May 13 23:46:41.970021 sshd[4675]: Connection closed by 10.200.16.10 port 60724 May 13 23:46:41.970928 sshd-session[4673]: pam_unix(sshd:session): session closed for user core May 13 23:46:41.973988 systemd-logind[1722]: Session 13 logged out. Waiting for processes to exit. May 13 23:46:41.974899 systemd[1]: sshd@10-10.200.20.10:22-10.200.16.10:60724.service: Deactivated successfully. May 13 23:46:41.978885 systemd[1]: session-13.scope: Deactivated successfully. May 13 23:46:41.981597 systemd-logind[1722]: Removed session 13. May 13 23:46:47.047444 systemd[1]: Started sshd@11-10.200.20.10:22-10.200.16.10:60726.service - OpenSSH per-connection server daemon (10.200.16.10:60726). 
May 13 23:46:47.472867 sshd[4689]: Accepted publickey for core from 10.200.16.10 port 60726 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:46:47.475379 sshd-session[4689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:46:47.481301 systemd-logind[1722]: New session 14 of user core.
May 13 23:46:47.489095 systemd[1]: Started session-14.scope - Session 14 of User core.
May 13 23:46:47.844781 sshd[4691]: Connection closed by 10.200.16.10 port 60726
May 13 23:46:47.845397 sshd-session[4689]: pam_unix(sshd:session): session closed for user core
May 13 23:46:47.849008 systemd[1]: sshd@11-10.200.20.10:22-10.200.16.10:60726.service: Deactivated successfully.
May 13 23:46:47.851001 systemd[1]: session-14.scope: Deactivated successfully.
May 13 23:46:47.851977 systemd-logind[1722]: Session 14 logged out. Waiting for processes to exit.
May 13 23:46:47.853119 systemd-logind[1722]: Removed session 14.
May 13 23:46:47.922544 systemd[1]: Started sshd@12-10.200.20.10:22-10.200.16.10:60730.service - OpenSSH per-connection server daemon (10.200.16.10:60730).
May 13 23:46:48.348709 sshd[4704]: Accepted publickey for core from 10.200.16.10 port 60730 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:46:48.350124 sshd-session[4704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:46:48.354673 systemd-logind[1722]: New session 15 of user core.
May 13 23:46:48.359428 systemd[1]: Started session-15.scope - Session 15 of User core.
May 13 23:46:48.751654 sshd[4706]: Connection closed by 10.200.16.10 port 60730
May 13 23:46:48.752377 sshd-session[4704]: pam_unix(sshd:session): session closed for user core
May 13 23:46:48.755993 systemd-logind[1722]: Session 15 logged out. Waiting for processes to exit.
May 13 23:46:48.756869 systemd[1]: sshd@12-10.200.20.10:22-10.200.16.10:60730.service: Deactivated successfully.
May 13 23:46:48.759788 systemd[1]: session-15.scope: Deactivated successfully.
May 13 23:46:48.761159 systemd-logind[1722]: Removed session 15.
May 13 23:46:48.839579 systemd[1]: Started sshd@13-10.200.20.10:22-10.200.16.10:52572.service - OpenSSH per-connection server daemon (10.200.16.10:52572).
May 13 23:46:49.295916 sshd[4716]: Accepted publickey for core from 10.200.16.10 port 52572 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:46:49.297321 sshd-session[4716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:46:49.301489 systemd-logind[1722]: New session 16 of user core.
May 13 23:46:49.309440 systemd[1]: Started session-16.scope - Session 16 of User core.
May 13 23:46:49.679307 sshd[4718]: Connection closed by 10.200.16.10 port 52572
May 13 23:46:49.679885 sshd-session[4716]: pam_unix(sshd:session): session closed for user core
May 13 23:46:49.683802 systemd[1]: sshd@13-10.200.20.10:22-10.200.16.10:52572.service: Deactivated successfully.
May 13 23:46:49.685682 systemd[1]: session-16.scope: Deactivated successfully.
May 13 23:46:49.687054 systemd-logind[1722]: Session 16 logged out. Waiting for processes to exit.
May 13 23:46:49.688917 systemd-logind[1722]: Removed session 16.
May 13 23:46:54.762317 systemd[1]: Started sshd@14-10.200.20.10:22-10.200.16.10:52576.service - OpenSSH per-connection server daemon (10.200.16.10:52576).
May 13 23:46:55.227884 sshd[4732]: Accepted publickey for core from 10.200.16.10 port 52576 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:46:55.229450 sshd-session[4732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:46:55.234776 systemd-logind[1722]: New session 17 of user core.
May 13 23:46:55.238439 systemd[1]: Started session-17.scope - Session 17 of User core.
May 13 23:46:55.625980 sshd[4734]: Connection closed by 10.200.16.10 port 52576
May 13 23:46:55.626602 sshd-session[4732]: pam_unix(sshd:session): session closed for user core
May 13 23:46:55.630538 systemd[1]: sshd@14-10.200.20.10:22-10.200.16.10:52576.service: Deactivated successfully.
May 13 23:46:55.632798 systemd[1]: session-17.scope: Deactivated successfully.
May 13 23:46:55.634741 systemd-logind[1722]: Session 17 logged out. Waiting for processes to exit.
May 13 23:46:55.636003 systemd-logind[1722]: Removed session 17.
May 13 23:47:00.704465 systemd[1]: Started sshd@15-10.200.20.10:22-10.200.16.10:36348.service - OpenSSH per-connection server daemon (10.200.16.10:36348).
May 13 23:47:01.138460 sshd[4745]: Accepted publickey for core from 10.200.16.10 port 36348 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:47:01.139835 sshd-session[4745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:47:01.144311 systemd-logind[1722]: New session 18 of user core.
May 13 23:47:01.148459 systemd[1]: Started session-18.scope - Session 18 of User core.
May 13 23:47:01.540199 sshd[4747]: Connection closed by 10.200.16.10 port 36348
May 13 23:47:01.539946 sshd-session[4745]: pam_unix(sshd:session): session closed for user core
May 13 23:47:01.544622 systemd[1]: sshd@15-10.200.20.10:22-10.200.16.10:36348.service: Deactivated successfully.
May 13 23:47:01.551783 systemd[1]: session-18.scope: Deactivated successfully.
May 13 23:47:01.554158 systemd-logind[1722]: Session 18 logged out. Waiting for processes to exit.
May 13 23:47:01.556709 systemd-logind[1722]: Removed session 18.
May 13 23:47:01.628168 systemd[1]: Started sshd@16-10.200.20.10:22-10.200.16.10:36362.service - OpenSSH per-connection server daemon (10.200.16.10:36362).
May 13 23:47:02.094075 sshd[4759]: Accepted publickey for core from 10.200.16.10 port 36362 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:47:02.095650 sshd-session[4759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:47:02.102397 systemd-logind[1722]: New session 19 of user core.
May 13 23:47:02.106540 systemd[1]: Started session-19.scope - Session 19 of User core.
May 13 23:47:02.551094 sshd[4761]: Connection closed by 10.200.16.10 port 36362
May 13 23:47:02.551881 sshd-session[4759]: pam_unix(sshd:session): session closed for user core
May 13 23:47:02.556048 systemd-logind[1722]: Session 19 logged out. Waiting for processes to exit.
May 13 23:47:02.556706 systemd[1]: sshd@16-10.200.20.10:22-10.200.16.10:36362.service: Deactivated successfully.
May 13 23:47:02.559414 systemd[1]: session-19.scope: Deactivated successfully.
May 13 23:47:02.561245 systemd-logind[1722]: Removed session 19.
May 13 23:47:02.635635 systemd[1]: Started sshd@17-10.200.20.10:22-10.200.16.10:36372.service - OpenSSH per-connection server daemon (10.200.16.10:36372).
May 13 23:47:03.097507 sshd[4771]: Accepted publickey for core from 10.200.16.10 port 36372 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:47:03.099030 sshd-session[4771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:47:03.105019 systemd-logind[1722]: New session 20 of user core.
May 13 23:47:03.113720 systemd[1]: Started session-20.scope - Session 20 of User core.
May 13 23:47:04.931309 sshd[4773]: Connection closed by 10.200.16.10 port 36372
May 13 23:47:04.931932 sshd-session[4771]: pam_unix(sshd:session): session closed for user core
May 13 23:47:04.935191 systemd[1]: sshd@17-10.200.20.10:22-10.200.16.10:36372.service: Deactivated successfully.
May 13 23:47:04.939032 systemd[1]: session-20.scope: Deactivated successfully.
May 13 23:47:04.939825 systemd[1]: session-20.scope: Consumed 469ms CPU time, 65.6M memory peak.
May 13 23:47:04.941414 systemd-logind[1722]: Session 20 logged out. Waiting for processes to exit.
May 13 23:47:04.942845 systemd-logind[1722]: Removed session 20.
May 13 23:47:05.010888 systemd[1]: Started sshd@18-10.200.20.10:22-10.200.16.10:36374.service - OpenSSH per-connection server daemon (10.200.16.10:36374).
May 13 23:47:05.469128 sshd[4790]: Accepted publickey for core from 10.200.16.10 port 36374 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:47:05.471208 sshd-session[4790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:47:05.480289 systemd-logind[1722]: New session 21 of user core.
May 13 23:47:05.495521 systemd[1]: Started session-21.scope - Session 21 of User core.
May 13 23:47:05.995582 sshd[4792]: Connection closed by 10.200.16.10 port 36374
May 13 23:47:05.996220 sshd-session[4790]: pam_unix(sshd:session): session closed for user core
May 13 23:47:06.000301 systemd[1]: sshd@18-10.200.20.10:22-10.200.16.10:36374.service: Deactivated successfully.
May 13 23:47:06.002212 systemd[1]: session-21.scope: Deactivated successfully.
May 13 23:47:06.004655 systemd-logind[1722]: Session 21 logged out. Waiting for processes to exit.
May 13 23:47:06.008940 systemd-logind[1722]: Removed session 21.
May 13 23:47:06.077612 systemd[1]: Started sshd@19-10.200.20.10:22-10.200.16.10:36376.service - OpenSSH per-connection server daemon (10.200.16.10:36376).
May 13 23:47:06.547129 sshd[4802]: Accepted publickey for core from 10.200.16.10 port 36376 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:47:06.548576 sshd-session[4802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:47:06.553258 systemd-logind[1722]: New session 22 of user core.
May 13 23:47:06.561444 systemd[1]: Started session-22.scope - Session 22 of User core.
May 13 23:47:06.932187 sshd[4804]: Connection closed by 10.200.16.10 port 36376
May 13 23:47:06.932847 sshd-session[4802]: pam_unix(sshd:session): session closed for user core
May 13 23:47:06.936281 systemd-logind[1722]: Session 22 logged out. Waiting for processes to exit.
May 13 23:47:06.936689 systemd[1]: sshd@19-10.200.20.10:22-10.200.16.10:36376.service: Deactivated successfully.
May 13 23:47:06.939460 systemd[1]: session-22.scope: Deactivated successfully.
May 13 23:47:06.941891 systemd-logind[1722]: Removed session 22.
May 13 23:47:12.024597 systemd[1]: Started sshd@20-10.200.20.10:22-10.200.16.10:46724.service - OpenSSH per-connection server daemon (10.200.16.10:46724).
May 13 23:47:12.488819 sshd[4815]: Accepted publickey for core from 10.200.16.10 port 46724 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:47:12.490231 sshd-session[4815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:47:12.499878 systemd-logind[1722]: New session 23 of user core.
May 13 23:47:12.507471 systemd[1]: Started session-23.scope - Session 23 of User core.
May 13 23:47:12.881587 sshd[4817]: Connection closed by 10.200.16.10 port 46724
May 13 23:47:12.880682 sshd-session[4815]: pam_unix(sshd:session): session closed for user core
May 13 23:47:12.885190 systemd[1]: sshd@20-10.200.20.10:22-10.200.16.10:46724.service: Deactivated successfully.
May 13 23:47:12.887079 systemd[1]: session-23.scope: Deactivated successfully.
May 13 23:47:12.888634 systemd-logind[1722]: Session 23 logged out. Waiting for processes to exit.
May 13 23:47:12.890317 systemd-logind[1722]: Removed session 23.
May 13 23:47:17.965606 systemd[1]: Started sshd@21-10.200.20.10:22-10.200.16.10:46740.service - OpenSSH per-connection server daemon (10.200.16.10:46740).
May 13 23:47:18.395615 sshd[4833]: Accepted publickey for core from 10.200.16.10 port 46740 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:47:18.396908 sshd-session[4833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:47:18.401372 systemd-logind[1722]: New session 24 of user core.
May 13 23:47:18.407458 systemd[1]: Started session-24.scope - Session 24 of User core.
May 13 23:47:18.781909 sshd[4835]: Connection closed by 10.200.16.10 port 46740
May 13 23:47:18.782645 sshd-session[4833]: pam_unix(sshd:session): session closed for user core
May 13 23:47:18.786380 systemd[1]: sshd@21-10.200.20.10:22-10.200.16.10:46740.service: Deactivated successfully.
May 13 23:47:18.789035 systemd[1]: session-24.scope: Deactivated successfully.
May 13 23:47:18.790700 systemd-logind[1722]: Session 24 logged out. Waiting for processes to exit.
May 13 23:47:18.791883 systemd-logind[1722]: Removed session 24.
May 13 23:47:23.861573 systemd[1]: Started sshd@22-10.200.20.10:22-10.200.16.10:48172.service - OpenSSH per-connection server daemon (10.200.16.10:48172).
May 13 23:47:24.319075 sshd[4847]: Accepted publickey for core from 10.200.16.10 port 48172 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:47:24.320093 sshd-session[4847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:47:24.325764 systemd-logind[1722]: New session 25 of user core.
May 13 23:47:24.334487 systemd[1]: Started session-25.scope - Session 25 of User core.
May 13 23:47:24.699690 sshd[4851]: Connection closed by 10.200.16.10 port 48172
May 13 23:47:24.700386 sshd-session[4847]: pam_unix(sshd:session): session closed for user core
May 13 23:47:24.704206 systemd-logind[1722]: Session 25 logged out. Waiting for processes to exit.
May 13 23:47:24.705594 systemd[1]: sshd@22-10.200.20.10:22-10.200.16.10:48172.service: Deactivated successfully.
May 13 23:47:24.708505 systemd[1]: session-25.scope: Deactivated successfully.
May 13 23:47:24.710103 systemd-logind[1722]: Removed session 25.
May 13 23:47:29.779892 systemd[1]: Started sshd@23-10.200.20.10:22-10.200.16.10:43376.service - OpenSSH per-connection server daemon (10.200.16.10:43376).
May 13 23:47:30.212941 sshd[4863]: Accepted publickey for core from 10.200.16.10 port 43376 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:47:30.214094 sshd-session[4863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:47:30.222363 systemd-logind[1722]: New session 26 of user core.
May 13 23:47:30.228484 systemd[1]: Started session-26.scope - Session 26 of User core.
May 13 23:47:30.592812 sshd[4866]: Connection closed by 10.200.16.10 port 43376
May 13 23:47:30.593384 sshd-session[4863]: pam_unix(sshd:session): session closed for user core
May 13 23:47:30.596599 systemd-logind[1722]: Session 26 logged out. Waiting for processes to exit.
May 13 23:47:30.596835 systemd[1]: sshd@23-10.200.20.10:22-10.200.16.10:43376.service: Deactivated successfully.
May 13 23:47:30.598734 systemd[1]: session-26.scope: Deactivated successfully.
May 13 23:47:30.600394 systemd-logind[1722]: Removed session 26.
May 13 23:47:30.675833 systemd[1]: Started sshd@24-10.200.20.10:22-10.200.16.10:43392.service - OpenSSH per-connection server daemon (10.200.16.10:43392).
May 13 23:47:31.108065 sshd[4879]: Accepted publickey for core from 10.200.16.10 port 43392 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:47:31.109456 sshd-session[4879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:47:31.115012 systemd-logind[1722]: New session 27 of user core.
May 13 23:47:31.124437 systemd[1]: Started session-27.scope - Session 27 of User core.
May 13 23:47:33.226171 kubelet[3312]: I0513 23:47:33.225994 3312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-vt492" podStartSLOduration=190.225974929 podStartE2EDuration="3m10.225974929s" podCreationTimestamp="2025-05-13 23:44:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:44:50.863466765 +0000 UTC m=+33.496709169" watchObservedRunningTime="2025-05-13 23:47:33.225974929 +0000 UTC m=+195.859217333"
May 13 23:47:33.246976 containerd[1762]: time="2025-05-13T23:47:33.246780524Z" level=info msg="StopContainer for \"95f2b8d28401d3d3df62ca8f618132a08a62ec8f3fd396fe0888d3894549475f\" with timeout 30 (s)"
May 13 23:47:33.247962 containerd[1762]: time="2025-05-13T23:47:33.247917124Z" level=info msg="Stop container \"95f2b8d28401d3d3df62ca8f618132a08a62ec8f3fd396fe0888d3894549475f\" with signal terminated"
May 13 23:47:33.256307 containerd[1762]: time="2025-05-13T23:47:33.256247282Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 23:47:33.263456 containerd[1762]: time="2025-05-13T23:47:33.263315080Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eff60ec307c65f3fdc8a64c3647d4d6ab60f5b58b0e8e1d093142ec6ecb39c5e\" id:\"3633a25076c2aa80955b4163cec68b06cdf040f7fdfe868b2f47295b8c95767e\" pid:4902 exited_at:{seconds:1747180053 nanos:261816360}"
May 13 23:47:33.263972 systemd[1]: cri-containerd-95f2b8d28401d3d3df62ca8f618132a08a62ec8f3fd396fe0888d3894549475f.scope: Deactivated successfully.
May 13 23:47:33.265641 containerd[1762]: time="2025-05-13T23:47:33.264507840Z" level=info msg="StopContainer for \"eff60ec307c65f3fdc8a64c3647d4d6ab60f5b58b0e8e1d093142ec6ecb39c5e\" with timeout 2 (s)"
May 13 23:47:33.267906 containerd[1762]: time="2025-05-13T23:47:33.267645399Z" level=info msg="Stop container \"eff60ec307c65f3fdc8a64c3647d4d6ab60f5b58b0e8e1d093142ec6ecb39c5e\" with signal terminated"
May 13 23:47:33.270094 containerd[1762]: time="2025-05-13T23:47:33.270001438Z" level=info msg="received exit event container_id:\"95f2b8d28401d3d3df62ca8f618132a08a62ec8f3fd396fe0888d3894549475f\" id:\"95f2b8d28401d3d3df62ca8f618132a08a62ec8f3fd396fe0888d3894549475f\" pid:3852 exited_at:{seconds:1747180053 nanos:268996678}"
May 13 23:47:33.270881 containerd[1762]: time="2025-05-13T23:47:33.270438238Z" level=info msg="TaskExit event in podsandbox handler container_id:\"95f2b8d28401d3d3df62ca8f618132a08a62ec8f3fd396fe0888d3894549475f\" id:\"95f2b8d28401d3d3df62ca8f618132a08a62ec8f3fd396fe0888d3894549475f\" pid:3852 exited_at:{seconds:1747180053 nanos:268996678}"
May 13 23:47:33.279638 systemd-networkd[1342]: lxc_health: Link DOWN
May 13 23:47:33.279648 systemd-networkd[1342]: lxc_health: Lost carrier
May 13 23:47:33.298899 systemd[1]: cri-containerd-eff60ec307c65f3fdc8a64c3647d4d6ab60f5b58b0e8e1d093142ec6ecb39c5e.scope: Deactivated successfully.
May 13 23:47:33.300305 systemd[1]: cri-containerd-eff60ec307c65f3fdc8a64c3647d4d6ab60f5b58b0e8e1d093142ec6ecb39c5e.scope: Consumed 7.491s CPU time, 125.9M memory peak, 136K read from disk, 12.9M written to disk.
May 13 23:47:33.302682 containerd[1762]: time="2025-05-13T23:47:33.302441910Z" level=info msg="received exit event container_id:\"eff60ec307c65f3fdc8a64c3647d4d6ab60f5b58b0e8e1d093142ec6ecb39c5e\" id:\"eff60ec307c65f3fdc8a64c3647d4d6ab60f5b58b0e8e1d093142ec6ecb39c5e\" pid:3934 exited_at:{seconds:1747180053 nanos:299975511}"
May 13 23:47:33.302682 containerd[1762]: time="2025-05-13T23:47:33.302612830Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eff60ec307c65f3fdc8a64c3647d4d6ab60f5b58b0e8e1d093142ec6ecb39c5e\" id:\"eff60ec307c65f3fdc8a64c3647d4d6ab60f5b58b0e8e1d093142ec6ecb39c5e\" pid:3934 exited_at:{seconds:1747180053 nanos:299975511}"
May 13 23:47:33.315763 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95f2b8d28401d3d3df62ca8f618132a08a62ec8f3fd396fe0888d3894549475f-rootfs.mount: Deactivated successfully.
May 13 23:47:33.330390 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eff60ec307c65f3fdc8a64c3647d4d6ab60f5b58b0e8e1d093142ec6ecb39c5e-rootfs.mount: Deactivated successfully.
May 13 23:47:35.061640 containerd[1762]: time="2025-05-13T23:47:35.061597961Z" level=info msg="StopContainer for \"95f2b8d28401d3d3df62ca8f618132a08a62ec8f3fd396fe0888d3894549475f\" returns successfully"
May 13 23:47:35.063398 containerd[1762]: time="2025-05-13T23:47:35.062926241Z" level=info msg="StopPodSandbox for \"d6d4e21b51fab399a57e782ed309724c04245f02dc9b2b18da6c3dd33b1456ee\""
May 13 23:47:35.063398 containerd[1762]: time="2025-05-13T23:47:35.063000441Z" level=info msg="Container to stop \"95f2b8d28401d3d3df62ca8f618132a08a62ec8f3fd396fe0888d3894549475f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 23:47:35.069202 systemd[1]: cri-containerd-d6d4e21b51fab399a57e782ed309724c04245f02dc9b2b18da6c3dd33b1456ee.scope: Deactivated successfully.
May 13 23:47:35.076057 containerd[1762]: time="2025-05-13T23:47:35.075871398Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d6d4e21b51fab399a57e782ed309724c04245f02dc9b2b18da6c3dd33b1456ee\" id:\"d6d4e21b51fab399a57e782ed309724c04245f02dc9b2b18da6c3dd33b1456ee\" pid:3511 exit_status:137 exited_at:{seconds:1747180055 nanos:75494518}"
May 13 23:47:35.102549 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6d4e21b51fab399a57e782ed309724c04245f02dc9b2b18da6c3dd33b1456ee-rootfs.mount: Deactivated successfully.
May 13 23:47:35.257983 sshd[4881]: Connection closed by 10.200.16.10 port 43392
May 13 23:47:35.262304 systemd-logind[1722]: Session 27 logged out. Waiting for processes to exit.
May 13 23:47:35.258841 sshd-session[4879]: pam_unix(sshd:session): session closed for user core
May 13 23:47:35.417669 containerd[1762]: time="2025-05-13T23:47:35.279387628Z" level=info msg="Kill container \"eff60ec307c65f3fdc8a64c3647d4d6ab60f5b58b0e8e1d093142ec6ecb39c5e\""
May 13 23:47:35.263066 systemd[1]: sshd@24-10.200.20.10:22-10.200.16.10:43392.service: Deactivated successfully.
May 13 23:47:35.265830 systemd[1]: session-27.scope: Deactivated successfully.
May 13 23:47:35.266239 systemd[1]: session-27.scope: Consumed 1.246s CPU time, 23.6M memory peak.
May 13 23:47:35.267126 systemd-logind[1722]: Removed session 27.
May 13 23:47:35.342880 systemd[1]: Started sshd@25-10.200.20.10:22-10.200.16.10:43408.service - OpenSSH per-connection server daemon (10.200.16.10:43408).
May 13 23:47:35.423422 containerd[1762]: time="2025-05-13T23:47:35.421858993Z" level=info msg="StopContainer for \"eff60ec307c65f3fdc8a64c3647d4d6ab60f5b58b0e8e1d093142ec6ecb39c5e\" returns successfully"
May 13 23:47:35.424703 containerd[1762]: time="2025-05-13T23:47:35.424575753Z" level=info msg="StopPodSandbox for \"b983f11939ba18052f6cdc98ddba969935be51187f5791bc26b0774179f8ccdc\""
May 13 23:47:35.424746 containerd[1762]: time="2025-05-13T23:47:35.424692433Z" level=info msg="Container to stop \"4f11759d0b2dcc9ca0f0abb199a07ebfd5dde33ddf7c3a7a0ba9e9e2c00ffddf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 23:47:35.424746 containerd[1762]: time="2025-05-13T23:47:35.424717393Z" level=info msg="Container to stop \"8dcba35987490e1729f746fdf63fc25613ec5ff07c6e95f0cf0426c0f6d60bcd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 23:47:35.424746 containerd[1762]: time="2025-05-13T23:47:35.424727713Z" level=info msg="Container to stop \"9e6882e419d22f90b4c6de549232f832d6e81779c278b8dc94282dc0e032dc55\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 23:47:35.424746 containerd[1762]: time="2025-05-13T23:47:35.424737393Z" level=info msg="Container to stop \"6c97ac43ae21de8e2bd0fffbd29ba6e1dd0abc6c0f5352ea5cea774554726c88\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 23:47:35.424824 containerd[1762]: time="2025-05-13T23:47:35.424750273Z" level=info msg="Container to stop \"eff60ec307c65f3fdc8a64c3647d4d6ab60f5b58b0e8e1d093142ec6ecb39c5e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 23:47:35.441504 systemd[1]: cri-containerd-b983f11939ba18052f6cdc98ddba969935be51187f5791bc26b0774179f8ccdc.scope: Deactivated successfully.
May 13 23:47:35.461258 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b983f11939ba18052f6cdc98ddba969935be51187f5791bc26b0774179f8ccdc-rootfs.mount: Deactivated successfully.
May 13 23:47:35.952001 sshd[4980]: Accepted publickey for core from 10.200.16.10 port 43408 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A
May 13 23:47:35.952364 sshd-session[4980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:47:35.956486 systemd-logind[1722]: New session 28 of user core.
May 13 23:47:35.964522 systemd[1]: Started session-28.scope - Session 28 of User core.
May 13 23:47:36.410500 containerd[1762]: time="2025-05-13T23:47:36.409592433Z" level=info msg="shim disconnected" id=d6d4e21b51fab399a57e782ed309724c04245f02dc9b2b18da6c3dd33b1456ee namespace=k8s.io
May 13 23:47:36.410500 containerd[1762]: time="2025-05-13T23:47:36.409649193Z" level=warning msg="cleaning up after shim disconnected" id=d6d4e21b51fab399a57e782ed309724c04245f02dc9b2b18da6c3dd33b1456ee namespace=k8s.io
May 13 23:47:36.410500 containerd[1762]: time="2025-05-13T23:47:36.409677192Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 23:47:36.418590 containerd[1762]: time="2025-05-13T23:47:36.418129430Z" level=info msg="shim disconnected" id=b983f11939ba18052f6cdc98ddba969935be51187f5791bc26b0774179f8ccdc namespace=k8s.io
May 13 23:47:36.418590 containerd[1762]: time="2025-05-13T23:47:36.418168430Z" level=warning msg="cleaning up after shim disconnected" id=b983f11939ba18052f6cdc98ddba969935be51187f5791bc26b0774179f8ccdc namespace=k8s.io
May 13 23:47:36.418590 containerd[1762]: time="2025-05-13T23:47:36.418202390Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 23:47:36.439665 containerd[1762]: time="2025-05-13T23:47:36.439378505Z" level=error msg="Failed to handle event container_id:\"d6d4e21b51fab399a57e782ed309724c04245f02dc9b2b18da6c3dd33b1456ee\" id:\"d6d4e21b51fab399a57e782ed309724c04245f02dc9b2b18da6c3dd33b1456ee\" pid:3511 exit_status:137 exited_at:{seconds:1747180055 nanos:75494518} for d6d4e21b51fab399a57e782ed309724c04245f02dc9b2b18da6c3dd33b1456ee" error="failed to handle container TaskExit event: failed to stop sandbox: failed to delete task: ttrpc: closed"
May 13 23:47:36.439665 containerd[1762]: time="2025-05-13T23:47:36.439542865Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b983f11939ba18052f6cdc98ddba969935be51187f5791bc26b0774179f8ccdc\" id:\"b983f11939ba18052f6cdc98ddba969935be51187f5791bc26b0774179f8ccdc\" pid:3464 exit_status:137 exited_at:{seconds:1747180055 nanos:444757268}"
May 13 23:47:36.440288 containerd[1762]: time="2025-05-13T23:47:36.440030545Z" level=info msg="received exit event sandbox_id:\"d6d4e21b51fab399a57e782ed309724c04245f02dc9b2b18da6c3dd33b1456ee\" exit_status:137 exited_at:{seconds:1747180055 nanos:75494518}"
May 13 23:47:36.440462 containerd[1762]: time="2025-05-13T23:47:36.440441385Z" level=info msg="received exit event sandbox_id:\"b983f11939ba18052f6cdc98ddba969935be51187f5791bc26b0774179f8ccdc\" exit_status:137 exited_at:{seconds:1747180055 nanos:444757268}"
May 13 23:47:36.446061 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d6d4e21b51fab399a57e782ed309724c04245f02dc9b2b18da6c3dd33b1456ee-shm.mount: Deactivated successfully.
May 13 23:47:36.446160 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b983f11939ba18052f6cdc98ddba969935be51187f5791bc26b0774179f8ccdc-shm.mount: Deactivated successfully.
May 13 23:47:36.446515 containerd[1762]: time="2025-05-13T23:47:36.446421144Z" level=info msg="TearDown network for sandbox \"b983f11939ba18052f6cdc98ddba969935be51187f5791bc26b0774179f8ccdc\" successfully"
May 13 23:47:36.446515 containerd[1762]: time="2025-05-13T23:47:36.446448664Z" level=info msg="StopPodSandbox for \"b983f11939ba18052f6cdc98ddba969935be51187f5791bc26b0774179f8ccdc\" returns successfully"
May 13 23:47:36.452702 containerd[1762]: time="2025-05-13T23:47:36.450691662Z" level=info msg="TearDown network for sandbox \"d6d4e21b51fab399a57e782ed309724c04245f02dc9b2b18da6c3dd33b1456ee\" successfully"
May 13 23:47:36.454954 containerd[1762]: time="2025-05-13T23:47:36.450716542Z" level=info msg="StopPodSandbox for \"d6d4e21b51fab399a57e782ed309724c04245f02dc9b2b18da6c3dd33b1456ee\" returns successfully"
May 13 23:47:36.459562 kubelet[3312]: I0513 23:47:36.459459 3312 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6d4e21b51fab399a57e782ed309724c04245f02dc9b2b18da6c3dd33b1456ee"
May 13 23:47:36.477682 kubelet[3312]: I0513 23:47:36.476961 3312 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b983f11939ba18052f6cdc98ddba969935be51187f5791bc26b0774179f8ccdc"
May 13 23:47:36.506416 kubelet[3312]: I0513 23:47:36.504845 3312 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-host-proc-sys-net\") pod \"787c05ce-63c6-4617-ac05-aa7f2868d100\" (UID: \"787c05ce-63c6-4617-ac05-aa7f2868d100\") "
May 13 23:47:36.506416 kubelet[3312]: I0513 23:47:36.504884 3312 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-etc-cni-netd\") pod \"787c05ce-63c6-4617-ac05-aa7f2868d100\" (UID: \"787c05ce-63c6-4617-ac05-aa7f2868d100\") "
May 13 23:47:36.506416 kubelet[3312]: I0513 23:47:36.504907 3312 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-cilium-run\") pod \"787c05ce-63c6-4617-ac05-aa7f2868d100\" (UID: \"787c05ce-63c6-4617-ac05-aa7f2868d100\") "
May 13 23:47:36.506416 kubelet[3312]: I0513 23:47:36.504921 3312 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-host-proc-sys-kernel\") pod \"787c05ce-63c6-4617-ac05-aa7f2868d100\" (UID: \"787c05ce-63c6-4617-ac05-aa7f2868d100\") "
May 13 23:47:36.506416 kubelet[3312]: I0513 23:47:36.504945 3312 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ecc7e9a2-db2c-46dc-bca5-4cad3f619693-cilium-config-path\") pod \"ecc7e9a2-db2c-46dc-bca5-4cad3f619693\" (UID: \"ecc7e9a2-db2c-46dc-bca5-4cad3f619693\") "
May 13 23:47:36.506416 kubelet[3312]: I0513 23:47:36.504960 3312 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-bpf-maps\") pod \"787c05ce-63c6-4617-ac05-aa7f2868d100\" (UID: \"787c05ce-63c6-4617-ac05-aa7f2868d100\") "
May 13 23:47:36.506662 kubelet[3312]: I0513 23:47:36.504976 3312 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/787c05ce-63c6-4617-ac05-aa7f2868d100-hubble-tls\") pod \"787c05ce-63c6-4617-ac05-aa7f2868d100\" (UID: \"787c05ce-63c6-4617-ac05-aa7f2868d100\") "
May 13 23:47:36.506662 kubelet[3312]: I0513 23:47:36.504993 3312 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlbcq\" (UniqueName: \"kubernetes.io/projected/ecc7e9a2-db2c-46dc-bca5-4cad3f619693-kube-api-access-wlbcq\") pod \"ecc7e9a2-db2c-46dc-bca5-4cad3f619693\" (UID: \"ecc7e9a2-db2c-46dc-bca5-4cad3f619693\") "
May 13 23:47:36.506662 kubelet[3312]: I0513 23:47:36.505009 3312 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-hostproc\") pod \"787c05ce-63c6-4617-ac05-aa7f2868d100\" (UID: \"787c05ce-63c6-4617-ac05-aa7f2868d100\") "
May 13 23:47:36.506662 kubelet[3312]: I0513 23:47:36.505022 3312 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-cni-path\") pod \"787c05ce-63c6-4617-ac05-aa7f2868d100\" (UID: \"787c05ce-63c6-4617-ac05-aa7f2868d100\") "
May 13 23:47:36.506662 kubelet[3312]: I0513 23:47:36.505035 3312 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-xtables-lock\") pod \"787c05ce-63c6-4617-ac05-aa7f2868d100\" (UID: \"787c05ce-63c6-4617-ac05-aa7f2868d100\") "
May 13 23:47:36.506662 kubelet[3312]: I0513 23:47:36.505053 3312 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/787c05ce-63c6-4617-ac05-aa7f2868d100-cilium-config-path\") pod \"787c05ce-63c6-4617-ac05-aa7f2868d100\" (UID: \"787c05ce-63c6-4617-ac05-aa7f2868d100\") "
May 13 23:47:36.506784 kubelet[3312]: I0513 23:47:36.505067 3312 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-lib-modules\") pod \"787c05ce-63c6-4617-ac05-aa7f2868d100\" (UID: \"787c05ce-63c6-4617-ac05-aa7f2868d100\") "
May 13 23:47:36.506784 kubelet[3312]: I0513 23:47:36.505082 3312 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-cilium-cgroup\") pod \"787c05ce-63c6-4617-ac05-aa7f2868d100\" (UID: \"787c05ce-63c6-4617-ac05-aa7f2868d100\") "
May 13 23:47:36.506784 kubelet[3312]: I0513 23:47:36.505099 3312 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn7td\" (UniqueName: \"kubernetes.io/projected/787c05ce-63c6-4617-ac05-aa7f2868d100-kube-api-access-kn7td\") pod \"787c05ce-63c6-4617-ac05-aa7f2868d100\" (UID: \"787c05ce-63c6-4617-ac05-aa7f2868d100\") "
May 13 23:47:36.506784 kubelet[3312]: I0513 23:47:36.505116 3312 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/787c05ce-63c6-4617-ac05-aa7f2868d100-clustermesh-secrets\") pod \"787c05ce-63c6-4617-ac05-aa7f2868d100\" (UID: \"787c05ce-63c6-4617-ac05-aa7f2868d100\") "
May 13 23:47:36.510764 kubelet[3312]: I0513 23:47:36.510491 3312 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "787c05ce-63c6-4617-ac05-aa7f2868d100" (UID: "787c05ce-63c6-4617-ac05-aa7f2868d100"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 23:47:36.515732 kubelet[3312]: I0513 23:47:36.511574 3312 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "787c05ce-63c6-4617-ac05-aa7f2868d100" (UID: "787c05ce-63c6-4617-ac05-aa7f2868d100"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 23:47:36.514946 systemd[1]: var-lib-kubelet-pods-787c05ce\x2d63c6\x2d4617\x2dac05\x2daa7f2868d100-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 13 23:47:36.517534 kubelet[3312]: I0513 23:47:36.513336 3312 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-hostproc" (OuterVolumeSpecName: "hostproc") pod "787c05ce-63c6-4617-ac05-aa7f2868d100" (UID: "787c05ce-63c6-4617-ac05-aa7f2868d100"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:47:36.517662 kubelet[3312]: I0513 23:47:36.513359 3312 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-cni-path" (OuterVolumeSpecName: "cni-path") pod "787c05ce-63c6-4617-ac05-aa7f2868d100" (UID: "787c05ce-63c6-4617-ac05-aa7f2868d100"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:47:36.517738 kubelet[3312]: I0513 23:47:36.513381 3312 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "787c05ce-63c6-4617-ac05-aa7f2868d100" (UID: "787c05ce-63c6-4617-ac05-aa7f2868d100"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:47:36.517806 kubelet[3312]: I0513 23:47:36.516772 3312 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "787c05ce-63c6-4617-ac05-aa7f2868d100" (UID: "787c05ce-63c6-4617-ac05-aa7f2868d100"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:47:36.517878 kubelet[3312]: I0513 23:47:36.516797 3312 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "787c05ce-63c6-4617-ac05-aa7f2868d100" (UID: "787c05ce-63c6-4617-ac05-aa7f2868d100"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:47:36.517968 kubelet[3312]: I0513 23:47:36.517954 3312 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "787c05ce-63c6-4617-ac05-aa7f2868d100" (UID: "787c05ce-63c6-4617-ac05-aa7f2868d100"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:47:36.520422 kubelet[3312]: I0513 23:47:36.520378 3312 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "787c05ce-63c6-4617-ac05-aa7f2868d100" (UID: "787c05ce-63c6-4617-ac05-aa7f2868d100"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:47:36.520509 kubelet[3312]: I0513 23:47:36.520428 3312 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "787c05ce-63c6-4617-ac05-aa7f2868d100" (UID: "787c05ce-63c6-4617-ac05-aa7f2868d100"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:47:36.521467 systemd[1]: var-lib-kubelet-pods-ecc7e9a2\x2ddb2c\x2d46dc\x2dbca5\x2d4cad3f619693-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwlbcq.mount: Deactivated successfully. May 13 23:47:36.527095 kubelet[3312]: I0513 23:47:36.526962 3312 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecc7e9a2-db2c-46dc-bca5-4cad3f619693-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ecc7e9a2-db2c-46dc-bca5-4cad3f619693" (UID: "ecc7e9a2-db2c-46dc-bca5-4cad3f619693"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 23:47:36.527253 kubelet[3312]: I0513 23:47:36.527236 3312 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/787c05ce-63c6-4617-ac05-aa7f2868d100-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "787c05ce-63c6-4617-ac05-aa7f2868d100" (UID: "787c05ce-63c6-4617-ac05-aa7f2868d100"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 23:47:36.529913 systemd[1]: var-lib-kubelet-pods-787c05ce\x2d63c6\x2d4617\x2dac05\x2daa7f2868d100-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 23:47:36.533972 kubelet[3312]: I0513 23:47:36.530860 3312 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/787c05ce-63c6-4617-ac05-aa7f2868d100-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "787c05ce-63c6-4617-ac05-aa7f2868d100" (UID: "787c05ce-63c6-4617-ac05-aa7f2868d100"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 23:47:36.534573 kubelet[3312]: I0513 23:47:36.532282 3312 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecc7e9a2-db2c-46dc-bca5-4cad3f619693-kube-api-access-wlbcq" (OuterVolumeSpecName: "kube-api-access-wlbcq") pod "ecc7e9a2-db2c-46dc-bca5-4cad3f619693" (UID: "ecc7e9a2-db2c-46dc-bca5-4cad3f619693"). InnerVolumeSpecName "kube-api-access-wlbcq". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 23:47:36.535510 kubelet[3312]: I0513 23:47:36.535468 3312 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/787c05ce-63c6-4617-ac05-aa7f2868d100-kube-api-access-kn7td" (OuterVolumeSpecName: "kube-api-access-kn7td") pod "787c05ce-63c6-4617-ac05-aa7f2868d100" (UID: "787c05ce-63c6-4617-ac05-aa7f2868d100"). InnerVolumeSpecName "kube-api-access-kn7td". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 23:47:36.535510 kubelet[3312]: I0513 23:47:36.535509 3312 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/787c05ce-63c6-4617-ac05-aa7f2868d100-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "787c05ce-63c6-4617-ac05-aa7f2868d100" (UID: "787c05ce-63c6-4617-ac05-aa7f2868d100"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 23:47:36.606192 kubelet[3312]: I0513 23:47:36.606157 3312 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/787c05ce-63c6-4617-ac05-aa7f2868d100-clustermesh-secrets\") on node \"ci-4284.0.0-n-791441f790\" DevicePath \"\"" May 13 23:47:36.606525 kubelet[3312]: I0513 23:47:36.606378 3312 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-host-proc-sys-net\") on node \"ci-4284.0.0-n-791441f790\" DevicePath \"\"" May 13 23:47:36.606525 kubelet[3312]: I0513 23:47:36.606396 3312 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-etc-cni-netd\") on node \"ci-4284.0.0-n-791441f790\" DevicePath \"\"" May 13 23:47:36.606525 kubelet[3312]: I0513 23:47:36.606405 3312 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-bpf-maps\") on node \"ci-4284.0.0-n-791441f790\" DevicePath \"\"" May 13 23:47:36.606525 kubelet[3312]: I0513 23:47:36.606416 3312 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/787c05ce-63c6-4617-ac05-aa7f2868d100-hubble-tls\") on node \"ci-4284.0.0-n-791441f790\" DevicePath \"\"" May 13 23:47:36.606525 kubelet[3312]: I0513 23:47:36.606459 3312 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-cilium-run\") on node \"ci-4284.0.0-n-791441f790\" DevicePath \"\"" May 13 23:47:36.606525 kubelet[3312]: I0513 23:47:36.606470 3312 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-host-proc-sys-kernel\") on 
node \"ci-4284.0.0-n-791441f790\" DevicePath \"\"" May 13 23:47:36.606525 kubelet[3312]: I0513 23:47:36.606478 3312 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ecc7e9a2-db2c-46dc-bca5-4cad3f619693-cilium-config-path\") on node \"ci-4284.0.0-n-791441f790\" DevicePath \"\"" May 13 23:47:36.606525 kubelet[3312]: I0513 23:47:36.606488 3312 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-hostproc\") on node \"ci-4284.0.0-n-791441f790\" DevicePath \"\"" May 13 23:47:36.606712 kubelet[3312]: I0513 23:47:36.606497 3312 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-cni-path\") on node \"ci-4284.0.0-n-791441f790\" DevicePath \"\"" May 13 23:47:36.606712 kubelet[3312]: I0513 23:47:36.606505 3312 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-xtables-lock\") on node \"ci-4284.0.0-n-791441f790\" DevicePath \"\"" May 13 23:47:36.606817 kubelet[3312]: I0513 23:47:36.606768 3312 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wlbcq\" (UniqueName: \"kubernetes.io/projected/ecc7e9a2-db2c-46dc-bca5-4cad3f619693-kube-api-access-wlbcq\") on node \"ci-4284.0.0-n-791441f790\" DevicePath \"\"" May 13 23:47:36.606817 kubelet[3312]: I0513 23:47:36.606781 3312 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-lib-modules\") on node \"ci-4284.0.0-n-791441f790\" DevicePath \"\"" May 13 23:47:36.606817 kubelet[3312]: I0513 23:47:36.606789 3312 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/787c05ce-63c6-4617-ac05-aa7f2868d100-cilium-cgroup\") on node 
\"ci-4284.0.0-n-791441f790\" DevicePath \"\"" May 13 23:47:36.606817 kubelet[3312]: I0513 23:47:36.606797 3312 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/787c05ce-63c6-4617-ac05-aa7f2868d100-cilium-config-path\") on node \"ci-4284.0.0-n-791441f790\" DevicePath \"\"" May 13 23:47:36.606817 kubelet[3312]: I0513 23:47:36.606805 3312 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-kn7td\" (UniqueName: \"kubernetes.io/projected/787c05ce-63c6-4617-ac05-aa7f2868d100-kube-api-access-kn7td\") on node \"ci-4284.0.0-n-791441f790\" DevicePath \"\"" May 13 23:47:37.031331 kubelet[3312]: E0513 23:47:37.030757 3312 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="787c05ce-63c6-4617-ac05-aa7f2868d100" containerName="mount-cgroup" May 13 23:47:37.031331 kubelet[3312]: E0513 23:47:37.030790 3312 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="787c05ce-63c6-4617-ac05-aa7f2868d100" containerName="apply-sysctl-overwrites" May 13 23:47:37.031331 kubelet[3312]: E0513 23:47:37.030798 3312 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="787c05ce-63c6-4617-ac05-aa7f2868d100" containerName="mount-bpf-fs" May 13 23:47:37.031331 kubelet[3312]: E0513 23:47:37.030804 3312 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ecc7e9a2-db2c-46dc-bca5-4cad3f619693" containerName="cilium-operator" May 13 23:47:37.031331 kubelet[3312]: E0513 23:47:37.030810 3312 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="787c05ce-63c6-4617-ac05-aa7f2868d100" containerName="clean-cilium-state" May 13 23:47:37.031331 kubelet[3312]: E0513 23:47:37.030816 3312 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="787c05ce-63c6-4617-ac05-aa7f2868d100" containerName="cilium-agent" May 13 23:47:37.031331 kubelet[3312]: I0513 23:47:37.030841 3312 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="787c05ce-63c6-4617-ac05-aa7f2868d100" containerName="cilium-agent" May 13 23:47:37.031331 kubelet[3312]: I0513 23:47:37.030847 3312 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecc7e9a2-db2c-46dc-bca5-4cad3f619693" containerName="cilium-operator" May 13 23:47:37.038233 systemd[1]: Created slice kubepods-burstable-pod450d2aed_ff94_476b_a2cb_36181c610eb0.slice - libcontainer container kubepods-burstable-pod450d2aed_ff94_476b_a2cb_36181c610eb0.slice. May 13 23:47:37.086296 sshd[5002]: Connection closed by 10.200.16.10 port 43408 May 13 23:47:37.086872 sshd-session[4980]: pam_unix(sshd:session): session closed for user core May 13 23:47:37.091920 systemd[1]: sshd@25-10.200.20.10:22-10.200.16.10:43408.service: Deactivated successfully. May 13 23:47:37.096177 systemd[1]: session-28.scope: Deactivated successfully. May 13 23:47:37.098578 systemd-logind[1722]: Session 28 logged out. Waiting for processes to exit. May 13 23:47:37.100593 systemd-logind[1722]: Removed session 28. 
May 13 23:47:37.109696 kubelet[3312]: I0513 23:47:37.109655 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/450d2aed-ff94-476b-a2cb-36181c610eb0-cilium-run\") pod \"cilium-2h2dc\" (UID: \"450d2aed-ff94-476b-a2cb-36181c610eb0\") " pod="kube-system/cilium-2h2dc" May 13 23:47:37.109826 kubelet[3312]: I0513 23:47:37.109701 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/450d2aed-ff94-476b-a2cb-36181c610eb0-cilium-ipsec-secrets\") pod \"cilium-2h2dc\" (UID: \"450d2aed-ff94-476b-a2cb-36181c610eb0\") " pod="kube-system/cilium-2h2dc" May 13 23:47:37.109826 kubelet[3312]: I0513 23:47:37.109723 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/450d2aed-ff94-476b-a2cb-36181c610eb0-cni-path\") pod \"cilium-2h2dc\" (UID: \"450d2aed-ff94-476b-a2cb-36181c610eb0\") " pod="kube-system/cilium-2h2dc" May 13 23:47:37.109826 kubelet[3312]: I0513 23:47:37.109737 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/450d2aed-ff94-476b-a2cb-36181c610eb0-etc-cni-netd\") pod \"cilium-2h2dc\" (UID: \"450d2aed-ff94-476b-a2cb-36181c610eb0\") " pod="kube-system/cilium-2h2dc" May 13 23:47:37.109826 kubelet[3312]: I0513 23:47:37.109754 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/450d2aed-ff94-476b-a2cb-36181c610eb0-xtables-lock\") pod \"cilium-2h2dc\" (UID: \"450d2aed-ff94-476b-a2cb-36181c610eb0\") " pod="kube-system/cilium-2h2dc" May 13 23:47:37.109826 kubelet[3312]: I0513 23:47:37.109771 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/450d2aed-ff94-476b-a2cb-36181c610eb0-cilium-config-path\") pod \"cilium-2h2dc\" (UID: \"450d2aed-ff94-476b-a2cb-36181c610eb0\") " pod="kube-system/cilium-2h2dc" May 13 23:47:37.109826 kubelet[3312]: I0513 23:47:37.109786 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/450d2aed-ff94-476b-a2cb-36181c610eb0-hubble-tls\") pod \"cilium-2h2dc\" (UID: \"450d2aed-ff94-476b-a2cb-36181c610eb0\") " pod="kube-system/cilium-2h2dc" May 13 23:47:37.109976 kubelet[3312]: I0513 23:47:37.109803 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/450d2aed-ff94-476b-a2cb-36181c610eb0-host-proc-sys-net\") pod \"cilium-2h2dc\" (UID: \"450d2aed-ff94-476b-a2cb-36181c610eb0\") " pod="kube-system/cilium-2h2dc" May 13 23:47:37.109976 kubelet[3312]: I0513 23:47:37.109817 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/450d2aed-ff94-476b-a2cb-36181c610eb0-host-proc-sys-kernel\") pod \"cilium-2h2dc\" (UID: \"450d2aed-ff94-476b-a2cb-36181c610eb0\") " pod="kube-system/cilium-2h2dc" May 13 23:47:37.109976 kubelet[3312]: I0513 23:47:37.109831 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/450d2aed-ff94-476b-a2cb-36181c610eb0-clustermesh-secrets\") pod \"cilium-2h2dc\" (UID: \"450d2aed-ff94-476b-a2cb-36181c610eb0\") " pod="kube-system/cilium-2h2dc" May 13 23:47:37.109976 kubelet[3312]: I0513 23:47:37.109846 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/450d2aed-ff94-476b-a2cb-36181c610eb0-hostproc\") pod \"cilium-2h2dc\" (UID: \"450d2aed-ff94-476b-a2cb-36181c610eb0\") " pod="kube-system/cilium-2h2dc" May 13 23:47:37.109976 kubelet[3312]: I0513 23:47:37.109862 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/450d2aed-ff94-476b-a2cb-36181c610eb0-cilium-cgroup\") pod \"cilium-2h2dc\" (UID: \"450d2aed-ff94-476b-a2cb-36181c610eb0\") " pod="kube-system/cilium-2h2dc" May 13 23:47:37.109976 kubelet[3312]: I0513 23:47:37.109877 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/450d2aed-ff94-476b-a2cb-36181c610eb0-lib-modules\") pod \"cilium-2h2dc\" (UID: \"450d2aed-ff94-476b-a2cb-36181c610eb0\") " pod="kube-system/cilium-2h2dc" May 13 23:47:37.110097 kubelet[3312]: I0513 23:47:37.109893 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdm5l\" (UniqueName: \"kubernetes.io/projected/450d2aed-ff94-476b-a2cb-36181c610eb0-kube-api-access-wdm5l\") pod \"cilium-2h2dc\" (UID: \"450d2aed-ff94-476b-a2cb-36181c610eb0\") " pod="kube-system/cilium-2h2dc" May 13 23:47:37.110097 kubelet[3312]: I0513 23:47:37.109908 3312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/450d2aed-ff94-476b-a2cb-36181c610eb0-bpf-maps\") pod \"cilium-2h2dc\" (UID: \"450d2aed-ff94-476b-a2cb-36181c610eb0\") " pod="kube-system/cilium-2h2dc" May 13 23:47:37.169530 systemd[1]: Started sshd@26-10.200.20.10:22-10.200.16.10:43416.service - OpenSSH per-connection server daemon (10.200.16.10:43416). 
May 13 23:47:37.342385 containerd[1762]: time="2025-05-13T23:47:37.342336205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2h2dc,Uid:450d2aed-ff94-476b-a2cb-36181c610eb0,Namespace:kube-system,Attempt:0,}" May 13 23:47:37.446808 systemd[1]: var-lib-kubelet-pods-787c05ce\x2d63c6\x2d4617\x2dac05\x2daa7f2868d100-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkn7td.mount: Deactivated successfully. May 13 23:47:37.486129 systemd[1]: Removed slice kubepods-besteffort-podecc7e9a2_db2c_46dc_bca5_4cad3f619693.slice - libcontainer container kubepods-besteffort-podecc7e9a2_db2c_46dc_bca5_4cad3f619693.slice. May 13 23:47:37.488567 systemd[1]: Removed slice kubepods-burstable-pod787c05ce_63c6_4617_ac05_aa7f2868d100.slice - libcontainer container kubepods-burstable-pod787c05ce_63c6_4617_ac05_aa7f2868d100.slice. May 13 23:47:37.488658 systemd[1]: kubepods-burstable-pod787c05ce_63c6_4617_ac05_aa7f2868d100.slice: Consumed 7.582s CPU time, 126.4M memory peak, 136K read from disk, 12.9M written to disk. May 13 23:47:37.623590 kubelet[3312]: I0513 23:47:37.623482 3312 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="787c05ce-63c6-4617-ac05-aa7f2868d100" path="/var/lib/kubelet/pods/787c05ce-63c6-4617-ac05-aa7f2868d100/volumes" May 13 23:47:37.624061 kubelet[3312]: I0513 23:47:37.624040 3312 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecc7e9a2-db2c-46dc-bca5-4cad3f619693" path="/var/lib/kubelet/pods/ecc7e9a2-db2c-46dc-bca5-4cad3f619693/volumes" May 13 23:47:37.625853 sshd[5046]: Accepted publickey for core from 10.200.16.10 port 43416 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:47:37.627430 sshd-session[5046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:37.631916 systemd-logind[1722]: New session 29 of user core. May 13 23:47:37.636473 systemd[1]: Started session-29.scope - Session 29 of User core. 
May 13 23:47:37.663139 containerd[1762]: time="2025-05-13T23:47:37.663095167Z" level=info msg="connecting to shim 90e96aee0157e526b4a5bdcc40ce82cb06c3235e642af0865ee374ccc7043c30" address="unix:///run/containerd/s/b58e6ea9ab03eebce4947b1a10362cb2750b8a449116ae82dcda8655a7b1ed99" namespace=k8s.io protocol=ttrpc version=3 May 13 23:47:37.689505 systemd[1]: Started cri-containerd-90e96aee0157e526b4a5bdcc40ce82cb06c3235e642af0865ee374ccc7043c30.scope - libcontainer container 90e96aee0157e526b4a5bdcc40ce82cb06c3235e642af0865ee374ccc7043c30. May 13 23:47:37.724214 containerd[1762]: time="2025-05-13T23:47:37.723990792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2h2dc,Uid:450d2aed-ff94-476b-a2cb-36181c610eb0,Namespace:kube-system,Attempt:0,} returns sandbox id \"90e96aee0157e526b4a5bdcc40ce82cb06c3235e642af0865ee374ccc7043c30\"" May 13 23:47:37.727718 containerd[1762]: time="2025-05-13T23:47:37.727654031Z" level=info msg="CreateContainer within sandbox \"90e96aee0157e526b4a5bdcc40ce82cb06c3235e642af0865ee374ccc7043c30\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 23:47:37.797991 kubelet[3312]: E0513 23:47:37.797860 3312 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 23:47:37.823578 containerd[1762]: time="2025-05-13T23:47:37.823447448Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d6d4e21b51fab399a57e782ed309724c04245f02dc9b2b18da6c3dd33b1456ee\" id:\"d6d4e21b51fab399a57e782ed309724c04245f02dc9b2b18da6c3dd33b1456ee\" pid:3511 exit_status:137 exited_at:{seconds:1747180055 nanos:75494518}" May 13 23:47:37.868173 containerd[1762]: time="2025-05-13T23:47:37.868082677Z" level=info msg="Container f5dc2c89d831d711cfdbf0e54292ee7e30e58b5be9dceba74b09d08393c82a7b: CDI devices from CRI Config.CDIDevices: []" May 13 23:47:37.871791 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1687763825.mount: Deactivated successfully. May 13 23:47:37.945032 sshd[5052]: Connection closed by 10.200.16.10 port 43416 May 13 23:47:37.945964 sshd-session[5046]: pam_unix(sshd:session): session closed for user core May 13 23:47:37.949785 systemd[1]: sshd@26-10.200.20.10:22-10.200.16.10:43416.service: Deactivated successfully. May 13 23:47:37.953157 systemd[1]: session-29.scope: Deactivated successfully. May 13 23:47:37.954917 systemd-logind[1722]: Session 29 logged out. Waiting for processes to exit. May 13 23:47:37.956071 systemd-logind[1722]: Removed session 29. May 13 23:47:37.959562 containerd[1762]: time="2025-05-13T23:47:37.959403575Z" level=info msg="CreateContainer within sandbox \"90e96aee0157e526b4a5bdcc40ce82cb06c3235e642af0865ee374ccc7043c30\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f5dc2c89d831d711cfdbf0e54292ee7e30e58b5be9dceba74b09d08393c82a7b\"" May 13 23:47:37.960347 containerd[1762]: time="2025-05-13T23:47:37.960314734Z" level=info msg="StartContainer for \"f5dc2c89d831d711cfdbf0e54292ee7e30e58b5be9dceba74b09d08393c82a7b\"" May 13 23:47:37.961643 containerd[1762]: time="2025-05-13T23:47:37.961596814Z" level=info msg="connecting to shim f5dc2c89d831d711cfdbf0e54292ee7e30e58b5be9dceba74b09d08393c82a7b" address="unix:///run/containerd/s/b58e6ea9ab03eebce4947b1a10362cb2750b8a449116ae82dcda8655a7b1ed99" protocol=ttrpc version=3 May 13 23:47:37.986524 systemd[1]: Started cri-containerd-f5dc2c89d831d711cfdbf0e54292ee7e30e58b5be9dceba74b09d08393c82a7b.scope - libcontainer container f5dc2c89d831d711cfdbf0e54292ee7e30e58b5be9dceba74b09d08393c82a7b. 
May 13 23:47:38.025664 containerd[1762]: time="2025-05-13T23:47:38.025604438Z" level=info msg="StartContainer for \"f5dc2c89d831d711cfdbf0e54292ee7e30e58b5be9dceba74b09d08393c82a7b\" returns successfully" May 13 23:47:38.031768 systemd[1]: Started sshd@27-10.200.20.10:22-10.200.16.10:43418.service - OpenSSH per-connection server daemon (10.200.16.10:43418). May 13 23:47:38.032560 systemd[1]: cri-containerd-f5dc2c89d831d711cfdbf0e54292ee7e30e58b5be9dceba74b09d08393c82a7b.scope: Deactivated successfully. May 13 23:47:38.036728 containerd[1762]: time="2025-05-13T23:47:38.036679116Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f5dc2c89d831d711cfdbf0e54292ee7e30e58b5be9dceba74b09d08393c82a7b\" id:\"f5dc2c89d831d711cfdbf0e54292ee7e30e58b5be9dceba74b09d08393c82a7b\" pid:5115 exited_at:{seconds:1747180058 nanos:35829196}" May 13 23:47:38.036846 containerd[1762]: time="2025-05-13T23:47:38.036769356Z" level=info msg="received exit event container_id:\"f5dc2c89d831d711cfdbf0e54292ee7e30e58b5be9dceba74b09d08393c82a7b\" id:\"f5dc2c89d831d711cfdbf0e54292ee7e30e58b5be9dceba74b09d08393c82a7b\" pid:5115 exited_at:{seconds:1747180058 nanos:35829196}" May 13 23:47:38.498307 sshd[5135]: Accepted publickey for core from 10.200.16.10 port 43418 ssh2: RSA SHA256:vkfaD5ZBcZpTdQVgl7gjxJv9L2x8eoUpkC37aWFhQ2A May 13 23:47:38.499596 sshd-session[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:38.507532 systemd-logind[1722]: New session 30 of user core. May 13 23:47:38.517438 systemd[1]: Started session-30.scope - Session 30 of User core. 
May 13 23:47:41.493844 containerd[1762]: time="2025-05-13T23:47:41.493614217Z" level=info msg="CreateContainer within sandbox \"90e96aee0157e526b4a5bdcc40ce82cb06c3235e642af0865ee374ccc7043c30\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 23:47:41.504908 kubelet[3312]: I0513 23:47:41.504853 3312 setters.go:600] "Node became not ready" node="ci-4284.0.0-n-791441f790" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-13T23:47:41Z","lastTransitionTime":"2025-05-13T23:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 13 23:47:41.615295 containerd[1762]: time="2025-05-13T23:47:41.615012420Z" level=info msg="Container 3b0e973ea5acabd0ad3cd080c849aaecfe33a6c2a3736b84ea03be3ec7e55292: CDI devices from CRI Config.CDIDevices: []" May 13 23:47:41.710445 containerd[1762]: time="2025-05-13T23:47:41.710391152Z" level=info msg="CreateContainer within sandbox \"90e96aee0157e526b4a5bdcc40ce82cb06c3235e642af0865ee374ccc7043c30\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3b0e973ea5acabd0ad3cd080c849aaecfe33a6c2a3736b84ea03be3ec7e55292\"" May 13 23:47:41.711238 containerd[1762]: time="2025-05-13T23:47:41.711206831Z" level=info msg="StartContainer for \"3b0e973ea5acabd0ad3cd080c849aaecfe33a6c2a3736b84ea03be3ec7e55292\"" May 13 23:47:41.714702 containerd[1762]: time="2025-05-13T23:47:41.714587110Z" level=info msg="connecting to shim 3b0e973ea5acabd0ad3cd080c849aaecfe33a6c2a3736b84ea03be3ec7e55292" address="unix:///run/containerd/s/b58e6ea9ab03eebce4947b1a10362cb2750b8a449116ae82dcda8655a7b1ed99" protocol=ttrpc version=3 May 13 23:47:41.735441 systemd[1]: Started cri-containerd-3b0e973ea5acabd0ad3cd080c849aaecfe33a6c2a3736b84ea03be3ec7e55292.scope - libcontainer container 
3b0e973ea5acabd0ad3cd080c849aaecfe33a6c2a3736b84ea03be3ec7e55292. May 13 23:47:41.768293 systemd[1]: cri-containerd-3b0e973ea5acabd0ad3cd080c849aaecfe33a6c2a3736b84ea03be3ec7e55292.scope: Deactivated successfully. May 13 23:47:41.773465 containerd[1762]: time="2025-05-13T23:47:41.773420213Z" level=info msg="StartContainer for \"3b0e973ea5acabd0ad3cd080c849aaecfe33a6c2a3736b84ea03be3ec7e55292\" returns successfully" May 13 23:47:41.773685 containerd[1762]: time="2025-05-13T23:47:41.773662973Z" level=info msg="received exit event container_id:\"3b0e973ea5acabd0ad3cd080c849aaecfe33a6c2a3736b84ea03be3ec7e55292\" id:\"3b0e973ea5acabd0ad3cd080c849aaecfe33a6c2a3736b84ea03be3ec7e55292\" pid:5168 exited_at:{seconds:1747180061 nanos:770385974}" May 13 23:47:41.773835 containerd[1762]: time="2025-05-13T23:47:41.773813453Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3b0e973ea5acabd0ad3cd080c849aaecfe33a6c2a3736b84ea03be3ec7e55292\" id:\"3b0e973ea5acabd0ad3cd080c849aaecfe33a6c2a3736b84ea03be3ec7e55292\" pid:5168 exited_at:{seconds:1747180061 nanos:770385974}" May 13 23:47:41.798137 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b0e973ea5acabd0ad3cd080c849aaecfe33a6c2a3736b84ea03be3ec7e55292-rootfs.mount: Deactivated successfully. 
May 13 23:47:42.798977 kubelet[3312]: E0513 23:47:42.798915 3312 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 13 23:47:43.502309 containerd[1762]: time="2025-05-13T23:47:43.499919537Z" level=info msg="CreateContainer within sandbox \"90e96aee0157e526b4a5bdcc40ce82cb06c3235e642af0865ee374ccc7043c30\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 13 23:47:43.625372 kubelet[3312]: E0513 23:47:43.622075 3312 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-bn8cj" podUID="a32f9734-fe38-48bb-879b-94db70e8351e"
May 13 23:47:43.625510 containerd[1762]: time="2025-05-13T23:47:43.624665939Z" level=info msg="Container bbc7fd231d9abc7c70614243d80a0380b0963486cd5d751ea509cc45bf6dcb03: CDI devices from CRI Config.CDIDevices: []"
May 13 23:47:43.764253 containerd[1762]: time="2025-05-13T23:47:43.764131098Z" level=info msg="CreateContainer within sandbox \"90e96aee0157e526b4a5bdcc40ce82cb06c3235e642af0865ee374ccc7043c30\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bbc7fd231d9abc7c70614243d80a0380b0963486cd5d751ea509cc45bf6dcb03\""
May 13 23:47:43.765113 containerd[1762]: time="2025-05-13T23:47:43.765082337Z" level=info msg="StartContainer for \"bbc7fd231d9abc7c70614243d80a0380b0963486cd5d751ea509cc45bf6dcb03\""
May 13 23:47:43.768242 containerd[1762]: time="2025-05-13T23:47:43.768144136Z" level=info msg="connecting to shim bbc7fd231d9abc7c70614243d80a0380b0963486cd5d751ea509cc45bf6dcb03" address="unix:///run/containerd/s/b58e6ea9ab03eebce4947b1a10362cb2750b8a449116ae82dcda8655a7b1ed99" protocol=ttrpc version=3
May 13 23:47:43.793519 systemd[1]: Started cri-containerd-bbc7fd231d9abc7c70614243d80a0380b0963486cd5d751ea509cc45bf6dcb03.scope - libcontainer container bbc7fd231d9abc7c70614243d80a0380b0963486cd5d751ea509cc45bf6dcb03.
May 13 23:47:43.826631 systemd[1]: cri-containerd-bbc7fd231d9abc7c70614243d80a0380b0963486cd5d751ea509cc45bf6dcb03.scope: Deactivated successfully.
May 13 23:47:43.831305 containerd[1762]: time="2025-05-13T23:47:43.830765238Z" level=info msg="received exit event container_id:\"bbc7fd231d9abc7c70614243d80a0380b0963486cd5d751ea509cc45bf6dcb03\" id:\"bbc7fd231d9abc7c70614243d80a0380b0963486cd5d751ea509cc45bf6dcb03\" pid:5209 exited_at:{seconds:1747180063 nanos:830151478}"
May 13 23:47:43.831305 containerd[1762]: time="2025-05-13T23:47:43.831009518Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bbc7fd231d9abc7c70614243d80a0380b0963486cd5d751ea509cc45bf6dcb03\" id:\"bbc7fd231d9abc7c70614243d80a0380b0963486cd5d751ea509cc45bf6dcb03\" pid:5209 exited_at:{seconds:1747180063 nanos:830151478}"
May 13 23:47:43.840109 containerd[1762]: time="2025-05-13T23:47:43.839996195Z" level=info msg="StartContainer for \"bbc7fd231d9abc7c70614243d80a0380b0963486cd5d751ea509cc45bf6dcb03\" returns successfully"
May 13 23:47:43.853515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bbc7fd231d9abc7c70614243d80a0380b0963486cd5d751ea509cc45bf6dcb03-rootfs.mount: Deactivated successfully.
May 13 23:47:45.513989 containerd[1762]: time="2025-05-13T23:47:45.513930574Z" level=info msg="CreateContainer within sandbox \"90e96aee0157e526b4a5bdcc40ce82cb06c3235e642af0865ee374ccc7043c30\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 13 23:47:45.622104 kubelet[3312]: E0513 23:47:45.621534 3312 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-bn8cj" podUID="a32f9734-fe38-48bb-879b-94db70e8351e"
May 13 23:47:45.670554 containerd[1762]: time="2025-05-13T23:47:45.670504088Z" level=info msg="Container f50b9fa562056c6585855fa9361d441f39d52bebe45d901aecd0d553b392b6bb: CDI devices from CRI Config.CDIDevices: []"
May 13 23:47:45.805812 containerd[1762]: time="2025-05-13T23:47:45.805687807Z" level=info msg="CreateContainer within sandbox \"90e96aee0157e526b4a5bdcc40ce82cb06c3235e642af0865ee374ccc7043c30\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f50b9fa562056c6585855fa9361d441f39d52bebe45d901aecd0d553b392b6bb\""
May 13 23:47:45.807835 containerd[1762]: time="2025-05-13T23:47:45.807738047Z" level=info msg="StartContainer for \"f50b9fa562056c6585855fa9361d441f39d52bebe45d901aecd0d553b392b6bb\""
May 13 23:47:45.808628 containerd[1762]: time="2025-05-13T23:47:45.808598766Z" level=info msg="connecting to shim f50b9fa562056c6585855fa9361d441f39d52bebe45d901aecd0d553b392b6bb" address="unix:///run/containerd/s/b58e6ea9ab03eebce4947b1a10362cb2750b8a449116ae82dcda8655a7b1ed99" protocol=ttrpc version=3
May 13 23:47:45.832427 systemd[1]: Started cri-containerd-f50b9fa562056c6585855fa9361d441f39d52bebe45d901aecd0d553b392b6bb.scope - libcontainer container f50b9fa562056c6585855fa9361d441f39d52bebe45d901aecd0d553b392b6bb.
May 13 23:47:45.861453 systemd[1]: cri-containerd-f50b9fa562056c6585855fa9361d441f39d52bebe45d901aecd0d553b392b6bb.scope: Deactivated successfully.
May 13 23:47:45.863113 containerd[1762]: time="2025-05-13T23:47:45.863054390Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f50b9fa562056c6585855fa9361d441f39d52bebe45d901aecd0d553b392b6bb\" id:\"f50b9fa562056c6585855fa9361d441f39d52bebe45d901aecd0d553b392b6bb\" pid:5250 exited_at:{seconds:1747180065 nanos:862410350}"
May 13 23:47:45.867095 containerd[1762]: time="2025-05-13T23:47:45.867049789Z" level=info msg="received exit event container_id:\"f50b9fa562056c6585855fa9361d441f39d52bebe45d901aecd0d553b392b6bb\" id:\"f50b9fa562056c6585855fa9361d441f39d52bebe45d901aecd0d553b392b6bb\" pid:5250 exited_at:{seconds:1747180065 nanos:862410350}"
May 13 23:47:45.874302 containerd[1762]: time="2025-05-13T23:47:45.874130667Z" level=info msg="StartContainer for \"f50b9fa562056c6585855fa9361d441f39d52bebe45d901aecd0d553b392b6bb\" returns successfully"
May 13 23:47:45.887788 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f50b9fa562056c6585855fa9361d441f39d52bebe45d901aecd0d553b392b6bb-rootfs.mount: Deactivated successfully.
May 13 23:47:47.526811 containerd[1762]: time="2025-05-13T23:47:47.526767253Z" level=info msg="CreateContainer within sandbox \"90e96aee0157e526b4a5bdcc40ce82cb06c3235e642af0865ee374ccc7043c30\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 13 23:47:47.622290 kubelet[3312]: E0513 23:47:47.621198 3312 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-bn8cj" podUID="a32f9734-fe38-48bb-879b-94db70e8351e"
May 13 23:47:47.669319 containerd[1762]: time="2025-05-13T23:47:47.668570850Z" level=info msg="Container e92168d4780c378d073989e1430097a101e7fc38c76b169515b1e50c47b8b692: CDI devices from CRI Config.CDIDevices: []"
May 13 23:47:47.764600 containerd[1762]: time="2025-05-13T23:47:47.764545941Z" level=info msg="CreateContainer within sandbox \"90e96aee0157e526b4a5bdcc40ce82cb06c3235e642af0865ee374ccc7043c30\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e92168d4780c378d073989e1430097a101e7fc38c76b169515b1e50c47b8b692\""
May 13 23:47:47.765802 containerd[1762]: time="2025-05-13T23:47:47.765633101Z" level=info msg="StartContainer for \"e92168d4780c378d073989e1430097a101e7fc38c76b169515b1e50c47b8b692\""
May 13 23:47:47.767242 containerd[1762]: time="2025-05-13T23:47:47.767111580Z" level=info msg="connecting to shim e92168d4780c378d073989e1430097a101e7fc38c76b169515b1e50c47b8b692" address="unix:///run/containerd/s/b58e6ea9ab03eebce4947b1a10362cb2750b8a449116ae82dcda8655a7b1ed99" protocol=ttrpc version=3
May 13 23:47:47.798448 systemd[1]: Started cri-containerd-e92168d4780c378d073989e1430097a101e7fc38c76b169515b1e50c47b8b692.scope - libcontainer container e92168d4780c378d073989e1430097a101e7fc38c76b169515b1e50c47b8b692.
May 13 23:47:47.800565 kubelet[3312]: E0513 23:47:47.800495 3312 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 13 23:47:47.839325 containerd[1762]: time="2025-05-13T23:47:47.837791265Z" level=info msg="StartContainer for \"e92168d4780c378d073989e1430097a101e7fc38c76b169515b1e50c47b8b692\" returns successfully"
May 13 23:47:47.915494 containerd[1762]: time="2025-05-13T23:47:47.915456228Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e92168d4780c378d073989e1430097a101e7fc38c76b169515b1e50c47b8b692\" id:\"eacb495d53cc7bd679d451500854c2e3e2ae762ca9df41c03e4ac2d5abdef190\" pid:5317 exited_at:{seconds:1747180067 nanos:915122988}"
May 13 23:47:48.258433 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 13 23:47:48.549580 kubelet[3312]: I0513 23:47:48.548725 3312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2h2dc" podStartSLOduration=11.548707399 podStartE2EDuration="11.548707399s" podCreationTimestamp="2025-05-13 23:47:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:47:48.54809468 +0000 UTC m=+211.181337084" watchObservedRunningTime="2025-05-13 23:47:48.548707399 +0000 UTC m=+211.181949803"
May 13 23:47:49.014295 containerd[1762]: time="2025-05-13T23:47:49.014177613Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e92168d4780c378d073989e1430097a101e7fc38c76b169515b1e50c47b8b692\" id:\"eb38a64714fe1d4ec246c7d3b2fed9b8c61ffb4dab10691db5e9283da11e7781\" pid:5397 exit_status:1 exited_at:{seconds:1747180069 nanos:13390253}"
May 13 23:47:49.017261 kubelet[3312]: E0513 23:47:49.017116 3312 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:44018->127.0.0.1:44541: write tcp 127.0.0.1:44018->127.0.0.1:44541: write: connection reset by peer
May 13 23:47:49.621934 kubelet[3312]: E0513 23:47:49.621472 3312 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-bn8cj" podUID="a32f9734-fe38-48bb-879b-94db70e8351e"
May 13 23:47:50.980580 systemd-networkd[1342]: lxc_health: Link UP
May 13 23:47:51.011407 systemd-networkd[1342]: lxc_health: Gained carrier
May 13 23:47:51.196305 containerd[1762]: time="2025-05-13T23:47:51.195994191Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e92168d4780c378d073989e1430097a101e7fc38c76b169515b1e50c47b8b692\" id:\"4c92afb7cd8b36b8183ea5bb8fc6f3c771a101e74eb0b93141d3fc416665f76c\" pid:5839 exit_status:1 exited_at:{seconds:1747180071 nanos:195006351}"
May 13 23:47:51.623766 kubelet[3312]: E0513 23:47:51.623424 3312 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-bn8cj" podUID="a32f9734-fe38-48bb-879b-94db70e8351e"
May 13 23:47:52.054443 systemd-networkd[1342]: lxc_health: Gained IPv6LL
May 13 23:47:53.314310 containerd[1762]: time="2025-05-13T23:47:53.313664240Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e92168d4780c378d073989e1430097a101e7fc38c76b169515b1e50c47b8b692\" id:\"b6674e75b933a4b356e6e1b45bce1eb9089efbc07c31f6785f9bc55de17a06f2\" pid:5877 exited_at:{seconds:1747180073 nanos:312716321}"
May 13 23:47:55.432461 containerd[1762]: time="2025-05-13T23:47:55.432414539Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e92168d4780c378d073989e1430097a101e7fc38c76b169515b1e50c47b8b692\" id:\"52cb88d6e4d708e44fc445bacfb91d88fc166067f353cc0eb3cfc6694e3201bc\" pid:5911 exited_at:{seconds:1747180075 nanos:432020820}"
May 13 23:47:57.544841 containerd[1762]: time="2025-05-13T23:47:57.544773978Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e92168d4780c378d073989e1430097a101e7fc38c76b169515b1e50c47b8b692\" id:\"6cc13a5d0653c1af3d0ed2f2acc419a018de047fb6f7e1cca593891c2a7840b8\" pid:5934 exited_at:{seconds:1747180077 nanos:544091898}"
May 13 23:47:59.682879 containerd[1762]: time="2025-05-13T23:47:59.682816171Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e92168d4780c378d073989e1430097a101e7fc38c76b169515b1e50c47b8b692\" id:\"cf96768dd5f4105e05e955451e9801cda342fc69c29b39cc29c78c66843655c3\" pid:5959 exited_at:{seconds:1747180079 nanos:682416931}"
May 13 23:48:01.787252 containerd[1762]: time="2025-05-13T23:48:01.787197168Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e92168d4780c378d073989e1430097a101e7fc38c76b169515b1e50c47b8b692\" id:\"294a0885125570a09f709b40319d3cec5588be382cc3a2566480f54404222e54\" pid:5982 exited_at:{seconds:1747180081 nanos:786650488}"
May 13 23:48:03.917604 containerd[1762]: time="2025-05-13T23:48:03.917447352Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e92168d4780c378d073989e1430097a101e7fc38c76b169515b1e50c47b8b692\" id:\"4155c08447903ed4144a2f6b20e5f033156e706fa10df78fb8c3b6ffed2c3e6c\" pid:6005 exited_at:{seconds:1747180083 nanos:917006152}"
May 13 23:48:04.008315 sshd[5149]: Connection closed by 10.200.16.10 port 43418
May 13 23:48:04.008935 sshd-session[5135]: pam_unix(sshd:session): session closed for user core
May 13 23:48:04.011942 systemd[1]: sshd@27-10.200.20.10:22-10.200.16.10:43418.service: Deactivated successfully.
May 13 23:48:04.014139 systemd[1]: session-30.scope: Deactivated successfully.
May 13 23:48:04.015945 systemd-logind[1722]: Session 30 logged out. Waiting for processes to exit.
May 13 23:48:04.016922 systemd-logind[1722]: Removed session 30.