May 15 00:04:20.362853 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 15 00:04:20.362876 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed May 14 22:22:56 -00 2025 May 15 00:04:20.362884 kernel: KASLR enabled May 15 00:04:20.362890 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') May 15 00:04:20.362897 kernel: printk: bootconsole [pl11] enabled May 15 00:04:20.362903 kernel: efi: EFI v2.7 by EDK II May 15 00:04:20.362910 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598 May 15 00:04:20.362916 kernel: random: crng init done May 15 00:04:20.362922 kernel: secureboot: Secure boot disabled May 15 00:04:20.362928 kernel: ACPI: Early table checksum verification disabled May 15 00:04:20.362934 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) May 15 00:04:20.362940 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 15 00:04:20.362945 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) May 15 00:04:20.362953 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) May 15 00:04:20.362960 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) May 15 00:04:20.362966 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) May 15 00:04:20.362973 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 15 00:04:20.362980 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) May 15 00:04:20.362986 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) May 15 00:04:20.362992 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) May 15 00:04:20.362999 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) May 15 00:04:20.363005 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 15 00:04:20.363011 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 May 15 00:04:20.363017 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] May 15 00:04:20.363023 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] May 15 00:04:20.363030 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] May 15 00:04:20.363036 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] May 15 00:04:20.363042 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] May 15 00:04:20.363050 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] May 15 00:04:20.363056 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] May 15 00:04:20.363062 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] May 15 00:04:20.363068 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] May 15 00:04:20.363074 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] May 15 00:04:20.363080 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] May 15 00:04:20.363086 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] May 15 00:04:20.363092 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff] May 15 00:04:20.363098 kernel: Zone ranges: May 15 
00:04:20.363105 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] May 15 00:04:20.363111 kernel: DMA32 empty May 15 00:04:20.363117 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] May 15 00:04:20.363127 kernel: Movable zone start for each node May 15 00:04:20.363133 kernel: Early memory node ranges May 15 00:04:20.363140 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] May 15 00:04:20.363146 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff] May 15 00:04:20.363153 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff] May 15 00:04:20.363161 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff] May 15 00:04:20.363167 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] May 15 00:04:20.363174 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] May 15 00:04:20.363180 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] May 15 00:04:20.363187 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] May 15 00:04:20.365243 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] May 15 00:04:20.365252 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] May 15 00:04:20.365260 kernel: On node 0, zone DMA: 36 pages in unavailable ranges May 15 00:04:20.365266 kernel: psci: probing for conduit method from ACPI. May 15 00:04:20.365273 kernel: psci: PSCIv1.1 detected in firmware. May 15 00:04:20.365282 kernel: psci: Using standard PSCI v0.2 function IDs May 15 00:04:20.365289 kernel: psci: MIGRATE_INFO_TYPE not supported. May 15 00:04:20.365301 kernel: psci: SMC Calling Convention v1.4 May 15 00:04:20.365307 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 May 15 00:04:20.365314 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 May 15 00:04:20.365321 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 15 00:04:20.365331 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 15 00:04:20.365338 kernel: pcpu-alloc: [0] 0 [0] 1 May 15 00:04:20.365345 kernel: Detected PIPT I-cache on CPU0 May 15 00:04:20.365352 kernel: CPU features: detected: GIC system register CPU interface May 15 00:04:20.365358 kernel: CPU features: detected: Hardware dirty bit management May 15 00:04:20.365365 kernel: CPU features: detected: Spectre-BHB May 15 00:04:20.365371 kernel: CPU features: kernel page table isolation forced ON by KASLR May 15 00:04:20.365383 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 15 00:04:20.365390 kernel: CPU features: detected: ARM erratum 1418040 May 15 00:04:20.365396 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) May 15 00:04:20.365403 kernel: CPU features: detected: SSBS not fully self-synchronizing May 15 00:04:20.365409 kernel: alternatives: applying boot alternatives May 15 00:04:20.365420 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=bfa141d6f8686d8fe96245516ecbaee60c938beef41636c397e3939a2c9a6ed9 May 15 00:04:20.365430 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
May 15 00:04:20.365437 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 15 00:04:20.365444 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 15 00:04:20.365450 kernel: Fallback order for Node 0: 0 May 15 00:04:20.365457 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 May 15 00:04:20.365468 kernel: Policy zone: Normal May 15 00:04:20.365474 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 15 00:04:20.365481 kernel: software IO TLB: area num 2. May 15 00:04:20.365487 kernel: software IO TLB: mapped [mem 0x0000000036540000-0x000000003a540000] (64MB) May 15 00:04:20.365494 kernel: Memory: 3983588K/4194160K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38336K init, 897K bss, 210572K reserved, 0K cma-reserved) May 15 00:04:20.365501 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 15 00:04:20.365508 kernel: rcu: Preemptible hierarchical RCU implementation. May 15 00:04:20.365518 kernel: rcu: RCU event tracing is enabled. May 15 00:04:20.365524 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 15 00:04:20.365531 kernel: Trampoline variant of Tasks RCU enabled. May 15 00:04:20.365538 kernel: Tracing variant of Tasks RCU enabled. May 15 00:04:20.365546 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 15 00:04:20.365553 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 15 00:04:20.365559 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 15 00:04:20.365569 kernel: GICv3: 960 SPIs implemented May 15 00:04:20.365576 kernel: GICv3: 0 Extended SPIs implemented May 15 00:04:20.365582 kernel: Root IRQ handler: gic_handle_irq May 15 00:04:20.365589 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 15 00:04:20.365595 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 May 15 00:04:20.365602 kernel: ITS: No ITS available, not enabling LPIs May 15 00:04:20.365609 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 15 00:04:20.365618 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:04:20.365624 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 15 00:04:20.365633 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 15 00:04:20.365639 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 15 00:04:20.365646 kernel: Console: colour dummy device 80x25 May 15 00:04:20.365653 kernel: printk: console [tty1] enabled May 15 00:04:20.365660 kernel: ACPI: Core revision 20230628 May 15 00:04:20.365670 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 15 00:04:20.365677 kernel: pid_max: default: 32768 minimum: 301 May 15 00:04:20.365684 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 15 00:04:20.365691 kernel: landlock: Up and running. May 15 00:04:20.365700 kernel: SELinux: Initializing. May 15 00:04:20.365707 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 00:04:20.365714 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 00:04:20.365724 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
May 15 00:04:20.365730 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 15 00:04:20.365737 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 May 15 00:04:20.365744 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 May 15 00:04:20.365757 kernel: Hyper-V: enabling crash_kexec_post_notifiers May 15 00:04:20.365767 kernel: rcu: Hierarchical SRCU implementation. May 15 00:04:20.365775 kernel: rcu: Max phase no-delay instances is 400. May 15 00:04:20.365782 kernel: Remapping and enabling EFI services. May 15 00:04:20.365789 kernel: smp: Bringing up secondary CPUs ... May 15 00:04:20.365798 kernel: Detected PIPT I-cache on CPU1 May 15 00:04:20.365805 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 May 15 00:04:20.365812 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:04:20.365822 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 15 00:04:20.365829 kernel: smp: Brought up 1 node, 2 CPUs May 15 00:04:20.365838 kernel: SMP: Total of 2 processors activated. May 15 00:04:20.365845 kernel: CPU features: detected: 32-bit EL0 Support May 15 00:04:20.365852 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence May 15 00:04:20.365860 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 15 00:04:20.365867 kernel: CPU features: detected: CRC32 instructions May 15 00:04:20.365874 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 15 00:04:20.365882 kernel: CPU features: detected: LSE atomic instructions May 15 00:04:20.365889 kernel: CPU features: detected: Privileged Access Never May 15 00:04:20.365896 kernel: CPU: All CPU(s) started at EL1 May 15 00:04:20.365905 kernel: alternatives: applying system-wide alternatives May 15 00:04:20.365912 kernel: devtmpfs: initialized May 15 00:04:20.365919 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 15 00:04:20.365926 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 15 00:04:20.365934 kernel: pinctrl core: initialized pinctrl subsystem May 15 00:04:20.365941 kernel: SMBIOS 3.1.0 present. May 15 00:04:20.365948 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 May 15 00:04:20.365955 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 15 00:04:20.365962 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 15 00:04:20.365971 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 15 00:04:20.365979 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 15 00:04:20.365986 kernel: audit: initializing netlink subsys (disabled) May 15 00:04:20.365993 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 May 15 00:04:20.366000 kernel: thermal_sys: Registered thermal governor 'step_wise' May 15 00:04:20.366007 kernel: cpuidle: using governor menu May 15 00:04:20.366015 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
May 15 00:04:20.366022 kernel: ASID allocator initialised with 32768 entries May 15 00:04:20.366029 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 15 00:04:20.366037 kernel: Serial: AMBA PL011 UART driver May 15 00:04:20.366044 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 15 00:04:20.366052 kernel: Modules: 0 pages in range for non-PLT usage May 15 00:04:20.366059 kernel: Modules: 509264 pages in range for PLT usage May 15 00:04:20.366066 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 15 00:04:20.366073 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 15 00:04:20.366080 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 15 00:04:20.366088 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 15 00:04:20.366095 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 15 00:04:20.366103 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 15 00:04:20.366110 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 15 00:04:20.366118 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 15 00:04:20.366125 kernel: ACPI: Added _OSI(Module Device) May 15 00:04:20.366132 kernel: ACPI: Added _OSI(Processor Device) May 15 00:04:20.366139 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 15 00:04:20.366146 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 15 00:04:20.366153 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 15 00:04:20.366161 kernel: ACPI: Interpreter enabled May 15 00:04:20.366169 kernel: ACPI: Using GIC for interrupt routing May 15 00:04:20.366177 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA May 15 00:04:20.366184 kernel: printk: console [ttyAMA0] enabled May 15 00:04:20.373182 kernel: printk: bootconsole [pl11] disabled May 15 00:04:20.373226 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA May 15 00:04:20.373243 kernel: iommu: Default domain type: Translated May 15 00:04:20.373257 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 15 00:04:20.373271 kernel: efivars: Registered efivars operations May 15 00:04:20.373286 kernel: vgaarb: loaded May 15 00:04:20.373310 kernel: clocksource: Switched to clocksource arch_sys_counter May 15 00:04:20.373325 kernel: VFS: Disk quotas dquot_6.6.0 May 15 00:04:20.373340 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 15 00:04:20.373354 kernel: pnp: PnP ACPI init May 15 00:04:20.373368 kernel: pnp: PnP ACPI: found 0 devices May 15 00:04:20.373378 kernel: NET: Registered PF_INET protocol family May 15 00:04:20.373385 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 15 00:04:20.373393 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 15 00:04:20.373400 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 15 00:04:20.373409 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 15 00:04:20.373416 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 15 00:04:20.373424 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 15 00:04:20.373431 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 00:04:20.373439 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 00:04:20.373447 
kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 15 00:04:20.373454 kernel: PCI: CLS 0 bytes, default 64 May 15 00:04:20.373461 kernel: kvm [1]: HYP mode not available May 15 00:04:20.373468 kernel: Initialise system trusted keyrings May 15 00:04:20.373478 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 15 00:04:20.373485 kernel: Key type asymmetric registered May 15 00:04:20.373492 kernel: Asymmetric key parser 'x509' registered May 15 00:04:20.373499 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 15 00:04:20.373506 kernel: io scheduler mq-deadline registered May 15 00:04:20.373514 kernel: io scheduler kyber registered May 15 00:04:20.373521 kernel: io scheduler bfq registered May 15 00:04:20.373528 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 15 00:04:20.373535 kernel: thunder_xcv, ver 1.0 May 15 00:04:20.373544 kernel: thunder_bgx, ver 1.0 May 15 00:04:20.373551 kernel: nicpf, ver 1.0 May 15 00:04:20.373558 kernel: nicvf, ver 1.0 May 15 00:04:20.373726 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 15 00:04:20.373802 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-15T00:04:19 UTC (1747267459) May 15 00:04:20.373812 kernel: efifb: probing for efifb May 15 00:04:20.373820 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k May 15 00:04:20.373827 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 May 15 00:04:20.373838 kernel: efifb: scrolling: redraw May 15 00:04:20.373845 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 15 00:04:20.373852 kernel: Console: switching to colour frame buffer device 128x48 May 15 00:04:20.373860 kernel: fb0: EFI VGA frame buffer device May 15 00:04:20.373867 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... May 15 00:04:20.373874 kernel: hid: raw HID events driver (C) Jiri Kosina May 15 00:04:20.373881 kernel: No ACPI PMU IRQ for CPU0 May 15 00:04:20.373888 kernel: No ACPI PMU IRQ for CPU1 May 15 00:04:20.373896 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available May 15 00:04:20.373904 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 15 00:04:20.373911 kernel: watchdog: Hard watchdog permanently disabled May 15 00:04:20.373919 kernel: NET: Registered PF_INET6 protocol family May 15 00:04:20.373926 kernel: Segment Routing with IPv6 May 15 00:04:20.373933 kernel: In-situ OAM (IOAM) with IPv6 May 15 00:04:20.373940 kernel: NET: Registered PF_PACKET protocol family May 15 00:04:20.373947 kernel: Key type dns_resolver registered May 15 00:04:20.373954 kernel: registered taskstats version 1 May 15 00:04:20.373961 kernel: Loading compiled-in X.509 certificates May 15 00:04:20.373970 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: cdb7ce3984a1665183e8a6ab3419833bc5e4e7f4' May 15 00:04:20.373977 kernel: Key type .fscrypt registered May 15 00:04:20.373984 kernel: Key type fscrypt-provisioning registered May 15 00:04:20.373992 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 15 00:04:20.373999 kernel: ima: Allocated hash algorithm: sha1 May 15 00:04:20.374006 kernel: ima: No architecture policies found May 15 00:04:20.374014 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 15 00:04:20.374021 kernel: clk: Disabling unused clocks May 15 00:04:20.374028 kernel: Freeing unused kernel memory: 38336K May 15 00:04:20.374036 kernel: Run /init as init process May 15 00:04:20.374043 kernel: with arguments: May 15 00:04:20.374051 kernel: /init May 15 00:04:20.374058 kernel: with environment: May 15 00:04:20.374065 kernel: HOME=/ May 15 00:04:20.374072 kernel: TERM=linux May 15 00:04:20.374079 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 15 00:04:20.374087 systemd[1]: Successfully made /usr/ read-only. May 15 00:04:20.374099 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 15 00:04:20.374107 systemd[1]: Detected virtualization microsoft. May 15 00:04:20.374115 systemd[1]: Detected architecture arm64. May 15 00:04:20.374122 systemd[1]: Running in initrd. May 15 00:04:20.374129 systemd[1]: No hostname configured, using default hostname. May 15 00:04:20.374137 systemd[1]: Hostname set to . May 15 00:04:20.374145 systemd[1]: Initializing machine ID from random generator. May 15 00:04:20.374164 systemd[1]: Queued start job for default target initrd.target. May 15 00:04:20.374174 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 00:04:20.374182 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 00:04:20.374202 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 15 00:04:20.374212 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 00:04:20.374219 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 15 00:04:20.374228 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 15 00:04:20.374237 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 15 00:04:20.374248 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 15 00:04:20.374256 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 00:04:20.374264 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 00:04:20.374271 systemd[1]: Reached target paths.target - Path Units. May 15 00:04:20.374279 systemd[1]: Reached target slices.target - Slice Units. May 15 00:04:20.374287 systemd[1]: Reached target swap.target - Swaps. May 15 00:04:20.374294 systemd[1]: Reached target timers.target - Timer Units. May 15 00:04:20.374302 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 15 00:04:20.374311 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 00:04:20.374319 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 15 00:04:20.374327 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
May 15 00:04:20.374335 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 00:04:20.374342 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 00:04:20.374350 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 00:04:20.374358 systemd[1]: Reached target sockets.target - Socket Units. May 15 00:04:20.374366 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 15 00:04:20.374374 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 00:04:20.374383 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 15 00:04:20.374390 systemd[1]: Starting systemd-fsck-usr.service... May 15 00:04:20.374399 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 00:04:20.374407 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 00:04:20.374436 systemd-journald[218]: Collecting audit messages is disabled. May 15 00:04:20.374459 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 00:04:20.374468 systemd-journald[218]: Journal started May 15 00:04:20.374486 systemd-journald[218]: Runtime Journal (/run/log/journal/10ec4f438aa24913a5dc9868275bd5af) is 8M, max 78.5M, 70.5M free. May 15 00:04:20.375144 systemd-modules-load[220]: Inserted module 'overlay' May 15 00:04:20.394897 systemd[1]: Started systemd-journald.service - Journal Service. May 15 00:04:20.405208 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 15 00:04:20.407229 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 15 00:04:20.416253 kernel: Bridge firewalling registered May 15 00:04:20.410013 systemd-modules-load[220]: Inserted module 'br_netfilter' May 15 00:04:20.422864 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 00:04:20.434513 systemd[1]: Finished systemd-fsck-usr.service. May 15 00:04:20.445319 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 00:04:20.451673 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:04:20.469521 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 00:04:20.484408 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 00:04:20.512485 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 15 00:04:20.532418 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 00:04:20.546685 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 00:04:20.572830 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 00:04:20.581416 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 00:04:20.608443 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 15 00:04:20.618437 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 00:04:20.645893 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 00:04:20.654806 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
May 15 00:04:20.682655 dracut-cmdline[251]: dracut-dracut-053 May 15 00:04:20.682655 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=bfa141d6f8686d8fe96245516ecbaee60c938beef41636c397e3939a2c9a6ed9 May 15 00:04:20.682409 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 00:04:20.756570 systemd-resolved[264]: Positive Trust Anchors: May 15 00:04:20.757238 systemd-resolved[264]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 00:04:20.757275 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 00:04:20.759518 systemd-resolved[264]: Defaulting to hostname 'linux'. May 15 00:04:20.761563 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 00:04:20.768153 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 00:04:20.877212 kernel: SCSI subsystem initialized May 15 00:04:20.886208 kernel: Loading iSCSI transport class v2.0-870. May 15 00:04:20.897222 kernel: iscsi: registered transport (tcp) May 15 00:04:20.914494 kernel: iscsi: registered transport (qla4xxx) May 15 00:04:20.914513 kernel: QLogic iSCSI HBA Driver May 15 00:04:20.949148 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 15 00:04:20.963446 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 15 00:04:20.998564 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 15 00:04:20.998632 kernel: device-mapper: uevent: version 1.0.3 May 15 00:04:21.005007 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 15 00:04:21.056226 kernel: raid6: neonx8 gen() 15795 MB/s May 15 00:04:21.076202 kernel: raid6: neonx4 gen() 15823 MB/s May 15 00:04:21.096199 kernel: raid6: neonx2 gen() 13208 MB/s May 15 00:04:21.117202 kernel: raid6: neonx1 gen() 10486 MB/s May 15 00:04:21.137199 kernel: raid6: int64x8 gen() 6792 MB/s May 15 00:04:21.157199 kernel: raid6: int64x4 gen() 7362 MB/s May 15 00:04:21.178201 kernel: raid6: int64x2 gen() 6114 MB/s May 15 00:04:21.202057 kernel: raid6: int64x1 gen() 5058 MB/s May 15 00:04:21.202069 kernel: raid6: using algorithm neonx4 gen() 15823 MB/s May 15 00:04:21.226324 kernel: raid6: .... 
xor() 12423 MB/s, rmw enabled May 15 00:04:21.226335 kernel: raid6: using neon recovery algorithm May 15 00:04:21.239385 kernel: xor: measuring software checksum speed May 15 00:04:21.239410 kernel: 8regs : 21584 MB/sec May 15 00:04:21.243054 kernel: 32regs : 21681 MB/sec May 15 00:04:21.246500 kernel: arm64_neon : 27898 MB/sec May 15 00:04:21.250698 kernel: xor: using function: arm64_neon (27898 MB/sec) May 15 00:04:21.301206 kernel: Btrfs loaded, zoned=no, fsverity=no May 15 00:04:21.311187 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 15 00:04:21.334350 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 00:04:21.358282 systemd-udevd[439]: Using default interface naming scheme 'v255'. May 15 00:04:21.363816 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 00:04:21.384326 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 15 00:04:21.404699 dracut-pre-trigger[452]: rd.md=0: removing MD RAID activation May 15 00:04:21.436895 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 15 00:04:21.450466 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 00:04:21.488382 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 00:04:21.511585 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 15 00:04:21.538382 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 15 00:04:21.553426 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 15 00:04:21.572723 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 00:04:21.591067 kernel: hv_vmbus: Vmbus version:5.3 May 15 00:04:21.592272 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 00:04:21.616224 kernel: pps_core: LinuxPPS API ver. 1 registered May 15 00:04:21.616303 kernel: hv_vmbus: registering driver hid_hyperv May 15 00:04:21.635206 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 May 15 00:04:21.635263 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on May 15 00:04:21.648233 kernel: hv_vmbus: registering driver hv_netvsc May 15 00:04:21.649689 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 15 00:04:21.669718 kernel: hv_vmbus: registering driver hv_storvsc May 15 00:04:21.669757 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 15 00:04:21.669768 kernel: scsi host1: storvsc_host_t May 15 00:04:21.691622 kernel: scsi host0: storvsc_host_t May 15 00:04:21.691878 kernel: hv_vmbus: registering driver hyperv_keyboard May 15 00:04:21.691892 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 May 15 00:04:21.715011 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 May 15 00:04:21.715081 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 May 15 00:04:21.716978 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 00:04:21.717183 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 00:04:21.734792 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
May 15 00:04:21.766610 kernel: PTP clock support registered May 15 00:04:21.742340 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 00:04:21.793096 kernel: hv_utils: Registering HyperV Utility Driver May 15 00:04:21.793122 kernel: hv_vmbus: registering driver hv_utils May 15 00:04:21.742909 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:04:21.788006 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 15 00:04:22.093765 kernel: hv_utils: Shutdown IC version 3.2 May 15 00:04:22.093788 kernel: hv_utils: Heartbeat IC version 3.0 May 15 00:04:22.093799 kernel: hv_utils: TimeSync IC version 4.0 May 15 00:04:22.093808 kernel: hv_netvsc 000d3a07-0632-000d-3a07-0632000d3a07 eth0: VF slot 1 added May 15 00:04:22.093950 kernel: sr 0:0:0:2: [sr0] scsi-1 drive May 15 00:04:22.078868 systemd-resolved[264]: Clock change detected. Flushing caches. May 15 00:04:22.118918 kernel: hv_vmbus: registering driver hv_pci May 15 00:04:22.118946 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 15 00:04:22.096091 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 00:04:22.111119 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 15 00:04:22.333305 kernel: hv_pci 679c5808-1d04-4d94-bb0f-f5c402683d26: PCI VMBus probing: Using version 0x10004 May 15 00:04:22.333503 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 May 15 00:04:22.333647 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) May 15 00:04:22.333776 kernel: hv_pci 679c5808-1d04-4d94-bb0f-f5c402683d26: PCI host bridge to bus 1d04:00 May 15 00:04:22.333878 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks May 15 00:04:22.146388 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:04:22.442731 kernel: pci_bus 1d04:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] May 15 00:04:22.443024 kernel: pci_bus 1d04:00: No busn resource found for root bus, will use [bus 00-ff] May 15 00:04:22.443113 kernel: pci 1d04:00:02.0: [15b3:1018] type 00 class 0x020000 May 15 00:04:22.443236 kernel: pci 1d04:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] May 15 00:04:22.443327 kernel: pci 1d04:00:02.0: enabling Extended Tags May 15 00:04:22.443415 kernel: sd 0:0:0:0: [sda] Write Protect is off May 15 00:04:22.443518 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 May 15 00:04:22.443599 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA May 15 00:04:22.443717 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 15 00:04:22.443729 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 15 00:04:22.443826 kernel: pci 1d04:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 1d04:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) May 15 00:04:22.443920 kernel: pci_bus 1d04:00: busn_res: [bus 00-ff] end is updated to 00 May 15 00:04:22.443996 kernel: pci 1d04:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] May 15 00:04:22.168301 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 00:04:22.168471 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:04:22.313770 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 15 00:04:22.461762 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 15 00:04:22.489978 kernel: mlx5_core 1d04:00:02.0: enabling device (0000 -> 0002) May 15 00:04:22.490347 kernel: mlx5_core 1d04:00:02.0: firmware version: 16.31.2424 May 15 00:04:22.498663 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:04:22.517916 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 00:04:22.559014 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 00:04:22.780095 kernel: hv_netvsc 000d3a07-0632-000d-3a07-0632000d3a07 eth0: VF registering: eth1 May 15 00:04:22.780352 kernel: mlx5_core 1d04:00:02.0 eth1: joined to eth0 May 15 00:04:22.790836 kernel: mlx5_core 1d04:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) May 15 00:04:22.801678 kernel: mlx5_core 1d04:00:02.0 enP7428s1: renamed from eth1 May 15 00:04:23.604655 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (505) May 15 00:04:23.622551 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. May 15 00:04:23.715648 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. May 15 00:04:23.744652 kernel: BTRFS: device fsid 369506fd-904a-45c2-a4ab-2d03e7866799 devid 1 transid 44 /dev/sda3 scanned by (udev-worker) (501) May 15 00:04:23.747554 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. May 15 00:04:23.767567 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. May 15 00:04:23.776607 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. May 15 00:04:23.804860 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 15 00:04:23.834645 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 15 00:04:23.843655 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 15 00:04:24.853420 disk-uuid[607]: The operation has completed successfully. May 15 00:04:24.858757 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 15 00:04:24.912658 systemd[1]: disk-uuid.service: Deactivated successfully. May 15 00:04:24.914651 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 15 00:04:24.965772 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 15 00:04:24.982273 sh[693]: Success May 15 00:04:25.018943 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 15 00:04:25.310521 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 15 00:04:25.331648 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 15 00:04:25.342325 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 15 00:04:25.376559 kernel: BTRFS info (device dm-0): first mount of filesystem 369506fd-904a-45c2-a4ab-2d03e7866799 May 15 00:04:25.376613 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 15 00:04:25.384455 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 15 00:04:25.390152 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 15 00:04:25.394699 kernel: BTRFS info (device dm-0): using free space tree May 15 00:04:25.730883 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
May 15 00:04:25.737428 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 15 00:04:25.752904 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 15 00:04:25.760886 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 15 00:04:25.807986 kernel: BTRFS info (device sda6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01 May 15 00:04:25.808060 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 15 00:04:25.812708 kernel: BTRFS info (device sda6): using free space tree May 15 00:04:25.836744 kernel: BTRFS info (device sda6): auto enabling async discard May 15 00:04:25.847913 kernel: BTRFS info (device sda6): last unmount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01 May 15 00:04:25.854217 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 15 00:04:25.868909 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 15 00:04:25.912292 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 00:04:25.931785 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 00:04:25.965186 systemd-networkd[874]: lo: Link UP May 15 00:04:25.968690 systemd-networkd[874]: lo: Gained carrier May 15 00:04:25.971546 systemd-networkd[874]: Enumeration completed May 15 00:04:25.972738 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 00:04:25.973305 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 00:04:25.973314 systemd-networkd[874]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 00:04:25.979352 systemd[1]: Reached target network.target - Network. May 15 00:04:26.072649 kernel: mlx5_core 1d04:00:02.0 enP7428s1: Link up May 15 00:04:26.161250 kernel: hv_netvsc 000d3a07-0632-000d-3a07-0632000d3a07 eth0: Data path switched to VF: enP7428s1 May 15 00:04:26.160901 systemd-networkd[874]: enP7428s1: Link UP May 15 00:04:26.160975 systemd-networkd[874]: eth0: Link UP May 15 00:04:26.161104 systemd-networkd[874]: eth0: Gained carrier May 15 00:04:26.161112 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 00:04:26.169887 systemd-networkd[874]: enP7428s1: Gained carrier May 15 00:04:26.197686 systemd-networkd[874]: eth0: DHCPv4 address 10.200.20.16/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 15 00:04:26.852699 ignition[819]: Ignition 2.20.0 May 15 00:04:26.852708 ignition[819]: Stage: fetch-offline May 15 00:04:26.852744 ignition[819]: no configs at "/usr/lib/ignition/base.d" May 15 00:04:26.865639 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 15 00:04:26.852752 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 15 00:04:26.856258 ignition[819]: parsed url from cmdline: "" May 15 00:04:26.856262 ignition[819]: no config URL provided May 15 00:04:26.856270 ignition[819]: reading system config file "/usr/lib/ignition/user.ign" May 15 00:04:26.856286 ignition[819]: no config at "/usr/lib/ignition/user.ign" May 15 00:04:26.888967 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
May 15 00:04:26.856292 ignition[819]: failed to fetch config: resource requires networking May 15 00:04:26.856503 ignition[819]: Ignition finished successfully May 15 00:04:26.909483 ignition[884]: Ignition 2.20.0 May 15 00:04:26.909489 ignition[884]: Stage: fetch May 15 00:04:26.909754 ignition[884]: no configs at "/usr/lib/ignition/base.d" May 15 00:04:26.909764 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 15 00:04:26.909925 ignition[884]: parsed url from cmdline: "" May 15 00:04:26.909929 ignition[884]: no config URL provided May 15 00:04:26.909934 ignition[884]: reading system config file "/usr/lib/ignition/user.ign" May 15 00:04:26.909942 ignition[884]: no config at "/usr/lib/ignition/user.ign" May 15 00:04:26.909972 ignition[884]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 May 15 00:04:26.996053 ignition[884]: GET result: OK May 15 00:04:26.996148 ignition[884]: config has been read from IMDS userdata May 15 00:04:26.996189 ignition[884]: parsing config with SHA512: 0d95d71f6d1d536e6bb91ffb841bfd45795f4296cc7bb3e4765028c6b2f35c5989858aeb28e86e50a995c4b954001981abc54d5a4271b7174e223afa23f1fb99 May 15 00:04:27.004947 unknown[884]: fetched base config from "system" May 15 00:04:27.004958 unknown[884]: fetched base config from "system" May 15 00:04:27.005427 ignition[884]: fetch: fetch complete May 15 00:04:27.004964 unknown[884]: fetched user config from "azure" May 15 00:04:27.005432 ignition[884]: fetch: fetch passed May 15 00:04:27.013721 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 15 00:04:27.005476 ignition[884]: Ignition finished successfully May 15 00:04:27.035440 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 15 00:04:27.063401 ignition[891]: Ignition 2.20.0 May 15 00:04:27.063419 ignition[891]: Stage: kargs May 15 00:04:27.068010 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 15 00:04:27.063622 ignition[891]: no configs at "/usr/lib/ignition/base.d" May 15 00:04:27.063650 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 15 00:04:27.064856 ignition[891]: kargs: kargs passed May 15 00:04:27.097001 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 15 00:04:27.064931 ignition[891]: Ignition finished successfully May 15 00:04:27.119966 ignition[897]: Ignition 2.20.0 May 15 00:04:27.119977 ignition[897]: Stage: disks May 15 00:04:27.120166 ignition[897]: no configs at "/usr/lib/ignition/base.d" May 15 00:04:27.128501 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 15 00:04:27.120176 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 15 00:04:27.135080 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 15 00:04:27.124397 ignition[897]: disks: disks passed May 15 00:04:27.147439 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 15 00:04:27.124463 ignition[897]: Ignition finished successfully May 15 00:04:27.159844 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 00:04:27.171560 systemd[1]: Reached target sysinit.target - System Initialization. May 15 00:04:27.183712 systemd[1]: Reached target basic.target - Basic System. May 15 00:04:27.210920 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
May 15 00:04:27.290336 systemd-fsck[905]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks May 15 00:04:27.302615 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 15 00:04:27.321840 systemd[1]: Mounting sysroot.mount - /sysroot... May 15 00:04:27.377648 kernel: EXT4-fs (sda9): mounted filesystem 737cda88-7069-47ce-b2bc-d891099a68fb r/w with ordered data mode. Quota mode: none. May 15 00:04:27.378492 systemd[1]: Mounted sysroot.mount - /sysroot. May 15 00:04:27.387475 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 15 00:04:27.432725 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 00:04:27.443426 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 15 00:04:27.462956 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 15 00:04:27.499242 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (916) May 15 00:04:27.499266 kernel: BTRFS info (device sda6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01 May 15 00:04:27.499277 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 15 00:04:27.499294 kernel: BTRFS info (device sda6): using free space tree May 15 00:04:27.499303 kernel: BTRFS info (device sda6): auto enabling async discard May 15 00:04:27.470916 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 00:04:27.470957 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 15 00:04:27.507983 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 15 00:04:27.524844 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 15 00:04:27.551895 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 15 00:04:27.842737 systemd-networkd[874]: eth0: Gained IPv6LL May 15 00:04:28.098743 systemd-networkd[874]: enP7428s1: Gained IPv6LL May 15 00:04:28.199777 coreos-metadata[918]: May 15 00:04:28.199 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 15 00:04:28.208043 coreos-metadata[918]: May 15 00:04:28.207 INFO Fetch successful May 15 00:04:28.208043 coreos-metadata[918]: May 15 00:04:28.207 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 May 15 00:04:28.224416 coreos-metadata[918]: May 15 00:04:28.220 INFO Fetch successful May 15 00:04:28.239232 coreos-metadata[918]: May 15 00:04:28.239 INFO wrote hostname ci-4230.1.1-n-c70fe96ece to /sysroot/etc/hostname May 15 00:04:28.248158 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 15 00:04:28.487709 initrd-setup-root[946]: cut: /sysroot/etc/passwd: No such file or directory May 15 00:04:28.535198 initrd-setup-root[953]: cut: /sysroot/etc/group: No such file or directory May 15 00:04:28.543466 initrd-setup-root[960]: cut: /sysroot/etc/shadow: No such file or directory May 15 00:04:28.551464 initrd-setup-root[967]: cut: /sysroot/etc/gshadow: No such file or directory May 15 00:04:29.753952 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 15 00:04:29.770794 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 15 00:04:29.784845 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
May 15 00:04:29.801847 kernel: BTRFS info (device sda6): last unmount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01 May 15 00:04:29.799477 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 15 00:04:29.827698 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 15 00:04:29.841753 ignition[1035]: INFO : Ignition 2.20.0 May 15 00:04:29.847502 ignition[1035]: INFO : Stage: mount May 15 00:04:29.847502 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 00:04:29.847502 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 15 00:04:29.847502 ignition[1035]: INFO : mount: mount passed May 15 00:04:29.847502 ignition[1035]: INFO : Ignition finished successfully May 15 00:04:29.853598 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 15 00:04:29.880830 systemd[1]: Starting ignition-files.service - Ignition (files)... May 15 00:04:29.902379 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 00:04:29.933673 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1046) May 15 00:04:29.933737 kernel: BTRFS info (device sda6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01 May 15 00:04:29.940085 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 15 00:04:29.944382 kernel: BTRFS info (device sda6): using free space tree May 15 00:04:29.951665 kernel: BTRFS info (device sda6): auto enabling async discard May 15 00:04:29.952976 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 15 00:04:29.983606 ignition[1063]: INFO : Ignition 2.20.0 May 15 00:04:29.983606 ignition[1063]: INFO : Stage: files May 15 00:04:29.992055 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 00:04:29.992055 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 15 00:04:29.992055 ignition[1063]: DEBUG : files: compiled without relabeling support, skipping May 15 00:04:29.992055 ignition[1063]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 00:04:29.992055 ignition[1063]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 00:04:30.050176 ignition[1063]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 00:04:30.058098 ignition[1063]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 00:04:30.058098 ignition[1063]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 00:04:30.058098 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 15 00:04:30.058098 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 15 00:04:30.050545 unknown[1063]: wrote ssh authorized keys file for user: core May 15 00:04:30.106322 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 15 00:04:30.227519 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 15 00:04:30.227519 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 15 00:04:30.249619 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 15 00:04:30.751903 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 15 00:04:30.820298 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 15 00:04:30.830135 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 15 00:04:30.830135 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 15 00:04:30.830135 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 15 00:04:30.830135 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 15 00:04:30.830135 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 00:04:30.830135 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 00:04:30.830135 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 00:04:30.830135 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 00:04:30.830135 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 15 00:04:30.830135 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 00:04:30.830135 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 15 00:04:30.830135 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 15 00:04:30.830135 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 15 00:04:30.830135 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 May 15 00:04:31.228109 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 15 00:04:31.443803 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 15 00:04:31.443803 ignition[1063]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 15 00:04:31.468646 ignition[1063]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 00:04:31.468646 ignition[1063]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 00:04:31.468646 ignition[1063]: INFO : files: op(c): 
[finished] processing unit "prepare-helm.service" May 15 00:04:31.468646 ignition[1063]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" May 15 00:04:31.468646 ignition[1063]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" May 15 00:04:31.468646 ignition[1063]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 00:04:31.468646 ignition[1063]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" May 15 00:04:31.468646 ignition[1063]: INFO : files: files passed May 15 00:04:31.468646 ignition[1063]: INFO : Ignition finished successfully May 15 00:04:31.468397 systemd[1]: Finished ignition-files.service - Ignition (files). May 15 00:04:31.510577 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 15 00:04:31.531917 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 15 00:04:31.558291 systemd[1]: ignition-quench.service: Deactivated successfully. May 15 00:04:31.630430 initrd-setup-root-after-ignition[1090]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 00:04:31.630430 initrd-setup-root-after-ignition[1090]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 15 00:04:31.558420 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 15 00:04:31.665955 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 00:04:31.608687 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 00:04:31.625376 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 15 00:04:31.666908 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 15 00:04:31.715527 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 15 00:04:31.715779 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 15 00:04:31.738363 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 15 00:04:31.745342 systemd[1]: Reached target initrd.target - Initrd Default Target. May 15 00:04:31.758479 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 15 00:04:31.775794 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 15 00:04:31.803154 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 00:04:31.821916 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 15 00:04:31.843415 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 15 00:04:31.852468 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 00:04:31.865810 systemd[1]: Stopped target timers.target - Timer Units. May 15 00:04:31.877129 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 15 00:04:31.877320 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 00:04:31.893225 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 15 00:04:31.899432 systemd[1]: Stopped target basic.target - Basic System. May 15 00:04:31.910984 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
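The file writes and the prepare-helm.service preset recorded in the Ignition "files" stage above are the kind of result produced by a provisioning config along the lines of the Butane sketch below. This is a reconstruction from the log only: the variant/version pair, the placeholder SSH key, and the unit body are assumptions, not the instance's actual configuration, and the remaining files seen above (install.sh, nginx.yaml, nfs-pod.yaml, nfs-pvc.yaml, update.conf, cilium.tar.gz) would be declared the same way.

    # Illustrative Butane sketch (rendered to Ignition JSON by the butane tool);
    # values below are assumed from the log, not taken from the real config.
    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-ed25519 AAAA... placeholder-key   # "adding ssh keys to user core"
    storage:
      files:
        - path: /opt/helm-v3.13.2-linux-arm64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz
        - path: /opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw
          contents:
            source: https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw
      links:
        - path: /etc/extensions/kubernetes.raw      # the sysext symlink written in op(a)
          target: /opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw
    systemd:
      units:
        - name: prepare-helm.service                # processed in op(c), preset to enabled in op(e)
          enabled: true
          contents: |
            [Unit]
            Description=Unpack helm to /opt/bin
            [Service]
            Type=oneshot
            ExecStart=/usr/bin/tar -C /opt/bin -xzf /opt/helm-v3.13.2-linux-arm64.tar.gz --strip-components=1 linux-arm64/helm
            [Install]
            WantedBy=multi-user.target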
May 15 00:04:31.921909 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 15 00:04:31.933107 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 15 00:04:31.944921 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 15 00:04:31.956876 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 15 00:04:31.969696 systemd[1]: Stopped target sysinit.target - System Initialization. May 15 00:04:31.980855 systemd[1]: Stopped target local-fs.target - Local File Systems. May 15 00:04:31.992892 systemd[1]: Stopped target swap.target - Swaps. May 15 00:04:32.003114 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 15 00:04:32.003287 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 15 00:04:32.018763 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 15 00:04:32.029463 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 00:04:32.041770 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 15 00:04:32.047287 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 00:04:32.054793 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 00:04:32.054958 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 15 00:04:32.074758 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 00:04:32.074943 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 00:04:32.090095 systemd[1]: ignition-files.service: Deactivated successfully. May 15 00:04:32.090243 systemd[1]: Stopped ignition-files.service - Ignition (files). May 15 00:04:32.101407 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 15 00:04:32.101585 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 15 00:04:32.163608 ignition[1115]: INFO : Ignition 2.20.0 May 15 00:04:32.163608 ignition[1115]: INFO : Stage: umount May 15 00:04:32.163608 ignition[1115]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 00:04:32.163608 ignition[1115]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 15 00:04:32.163608 ignition[1115]: INFO : umount: umount passed May 15 00:04:32.163608 ignition[1115]: INFO : Ignition finished successfully May 15 00:04:32.135792 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 15 00:04:32.151746 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 15 00:04:32.167499 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 15 00:04:32.167683 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 15 00:04:32.184353 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 00:04:32.184480 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 15 00:04:32.200416 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 15 00:04:32.201176 systemd[1]: ignition-mount.service: Deactivated successfully. May 15 00:04:32.201281 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 15 00:04:32.207596 systemd[1]: sysroot-boot.service: Deactivated successfully. May 15 00:04:32.207706 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 15 00:04:32.217956 systemd[1]: ignition-disks.service: Deactivated successfully. 
May 15 00:04:32.218069 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 15 00:04:32.228343 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 00:04:32.228405 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 15 00:04:32.239572 systemd[1]: ignition-fetch.service: Deactivated successfully. May 15 00:04:32.239705 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 15 00:04:32.250373 systemd[1]: Stopped target network.target - Network. May 15 00:04:32.260985 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 15 00:04:32.261048 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 15 00:04:32.273982 systemd[1]: Stopped target paths.target - Path Units. May 15 00:04:32.284309 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 15 00:04:32.294665 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 00:04:32.302542 systemd[1]: Stopped target slices.target - Slice Units. May 15 00:04:32.313003 systemd[1]: Stopped target sockets.target - Socket Units. May 15 00:04:32.323714 systemd[1]: iscsid.socket: Deactivated successfully. May 15 00:04:32.323767 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 15 00:04:32.334170 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 00:04:32.334200 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 00:04:32.344952 systemd[1]: ignition-setup.service: Deactivated successfully. May 15 00:04:32.345008 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 15 00:04:32.355793 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 15 00:04:32.355844 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 15 00:04:32.367198 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 15 00:04:32.367240 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 15 00:04:32.378393 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 15 00:04:32.389470 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 15 00:04:32.402351 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 00:04:32.402466 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 15 00:04:32.420492 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 15 00:04:32.420819 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 00:04:32.420941 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 15 00:04:32.439789 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 15 00:04:32.440057 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 15 00:04:32.440257 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 15 00:04:32.454599 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 15 00:04:32.686092 kernel: hv_netvsc 000d3a07-0632-000d-3a07-0632000d3a07 eth0: Data path switched from VF: enP7428s1 May 15 00:04:32.454699 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 15 00:04:32.489834 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 15 00:04:32.501008 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
May 15 00:04:32.501097 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 00:04:32.512791 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 00:04:32.512849 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 00:04:32.529389 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 15 00:04:32.529455 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 15 00:04:32.535807 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 15 00:04:32.535861 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 00:04:32.552106 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 00:04:32.561617 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 15 00:04:32.561723 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 15 00:04:32.596775 systemd[1]: systemd-udevd.service: Deactivated successfully. May 15 00:04:32.596956 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 00:04:32.609063 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 15 00:04:32.609108 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 15 00:04:32.621087 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 15 00:04:32.621133 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 15 00:04:32.631689 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 15 00:04:32.631748 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 15 00:04:32.649332 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 15 00:04:32.649396 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 15 00:04:32.659543 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 00:04:32.659595 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 00:04:32.703834 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 15 00:04:32.722443 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 15 00:04:32.722515 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 00:04:32.745234 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 15 00:04:32.745290 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 00:04:32.752681 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 15 00:04:32.752735 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 15 00:04:32.765349 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 00:04:32.765401 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:04:32.784034 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 15 00:04:32.784106 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 15 00:04:32.784442 systemd[1]: network-cleanup.service: Deactivated successfully. May 15 00:04:32.784548 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
May 15 00:04:32.796204 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 15 00:04:33.009864 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). May 15 00:04:32.796297 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 15 00:04:32.809913 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 15 00:04:32.846924 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 15 00:04:32.863781 systemd[1]: Switching root. May 15 00:04:33.031421 systemd-journald[218]: Journal stopped May 15 00:04:39.217498 kernel: SELinux: policy capability network_peer_controls=1 May 15 00:04:39.217519 kernel: SELinux: policy capability open_perms=1 May 15 00:04:39.217529 kernel: SELinux: policy capability extended_socket_class=1 May 15 00:04:39.217537 kernel: SELinux: policy capability always_check_network=0 May 15 00:04:39.217546 kernel: SELinux: policy capability cgroup_seclabel=1 May 15 00:04:39.217554 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 15 00:04:39.217562 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 15 00:04:39.217572 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 15 00:04:39.217580 kernel: audit: type=1403 audit(1747267474.649:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 15 00:04:39.217590 systemd[1]: Successfully loaded SELinux policy in 138.544ms. May 15 00:04:39.217601 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.423ms. May 15 00:04:39.217611 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 15 00:04:39.217619 systemd[1]: Detected virtualization microsoft. May 15 00:04:39.217643 systemd[1]: Detected architecture arm64. May 15 00:04:39.217654 systemd[1]: Detected first boot. May 15 00:04:39.217666 systemd[1]: Hostname set to . May 15 00:04:39.217674 systemd[1]: Initializing machine ID from random generator. May 15 00:04:39.217683 zram_generator::config[1158]: No configuration found. May 15 00:04:39.217692 kernel: NET: Registered PF_VSOCK protocol family May 15 00:04:39.217700 systemd[1]: Populated /etc with preset unit settings. May 15 00:04:39.217710 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 15 00:04:39.217719 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 15 00:04:39.217729 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 15 00:04:39.217738 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 15 00:04:39.217747 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 15 00:04:39.217757 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 15 00:04:39.217766 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 15 00:04:39.217775 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 15 00:04:39.217785 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 15 00:04:39.217796 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
May 15 00:04:39.217805 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 15 00:04:39.217814 systemd[1]: Created slice user.slice - User and Session Slice. May 15 00:04:39.217823 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 00:04:39.217832 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 00:04:39.217842 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 15 00:04:39.217851 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 15 00:04:39.217860 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 15 00:04:39.217870 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 00:04:39.217880 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 15 00:04:39.217889 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 00:04:39.217900 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 15 00:04:39.217909 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 15 00:04:39.217918 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 15 00:04:39.217928 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 15 00:04:39.217937 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 00:04:39.217948 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 00:04:39.217963 systemd[1]: Reached target slices.target - Slice Units. May 15 00:04:39.217972 systemd[1]: Reached target swap.target - Swaps. May 15 00:04:39.217981 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 15 00:04:39.217992 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 15 00:04:39.218001 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 15 00:04:39.218014 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 00:04:39.218023 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 00:04:39.218032 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 00:04:39.218042 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 15 00:04:39.218051 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 15 00:04:39.218060 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 15 00:04:39.218069 systemd[1]: Mounting media.mount - External Media Directory... May 15 00:04:39.218080 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 15 00:04:39.218089 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 15 00:04:39.218099 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 15 00:04:39.218108 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 00:04:39.218118 systemd[1]: Reached target machines.target - Containers. May 15 00:04:39.218127 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
May 15 00:04:39.218137 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 00:04:39.218146 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 00:04:39.218157 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 15 00:04:39.218167 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 00:04:39.218177 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 00:04:39.218186 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 00:04:39.218197 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 15 00:04:39.218206 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 00:04:39.218216 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 15 00:04:39.218226 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 15 00:04:39.218236 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 15 00:04:39.218246 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 15 00:04:39.218255 systemd[1]: Stopped systemd-fsck-usr.service. May 15 00:04:39.218265 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 00:04:39.218274 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 00:04:39.218283 kernel: loop: module loaded May 15 00:04:39.218291 kernel: fuse: init (API version 7.39) May 15 00:04:39.218300 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 00:04:39.218309 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 15 00:04:39.218321 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 15 00:04:39.218330 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 15 00:04:39.218339 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 00:04:39.218367 systemd-journald[1255]: Collecting audit messages is disabled. May 15 00:04:39.218389 systemd[1]: verity-setup.service: Deactivated successfully. May 15 00:04:39.218399 systemd-journald[1255]: Journal started May 15 00:04:39.218418 systemd-journald[1255]: Runtime Journal (/run/log/journal/41235c93c47344e1b129773f1551b908) is 8M, max 78.5M, 70.5M free. May 15 00:04:37.872216 systemd[1]: Queued start job for default target multi-user.target. May 15 00:04:37.884501 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 15 00:04:37.884904 systemd[1]: systemd-journald.service: Deactivated successfully. May 15 00:04:37.885238 systemd[1]: systemd-journald.service: Consumed 3.592s CPU time. May 15 00:04:39.224651 systemd[1]: Stopped verity-setup.service. May 15 00:04:39.245060 systemd[1]: Started systemd-journald.service - Journal Service. May 15 00:04:39.245851 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 15 00:04:39.252690 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
May 15 00:04:39.259779 systemd[1]: Mounted media.mount - External Media Directory. May 15 00:04:39.266152 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 15 00:04:39.277712 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 15 00:04:39.279656 kernel: ACPI: bus type drm_connector registered May 15 00:04:39.286386 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 15 00:04:39.293549 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 15 00:04:39.303286 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 00:04:39.313087 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 00:04:39.313256 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 15 00:04:39.323017 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:04:39.323189 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 00:04:39.330983 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 00:04:39.332662 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 00:04:39.340697 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:04:39.341816 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 00:04:39.350184 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 00:04:39.350360 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 15 00:04:39.357354 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:04:39.357511 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 00:04:39.364556 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 00:04:39.372320 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 15 00:04:39.380608 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 15 00:04:39.389799 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 15 00:04:39.398207 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 00:04:39.414997 systemd[1]: Reached target network-pre.target - Preparation for Network. May 15 00:04:39.431740 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 15 00:04:39.439376 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 15 00:04:39.446072 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 00:04:39.446112 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 00:04:39.453489 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 15 00:04:39.462592 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 15 00:04:39.470883 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 15 00:04:39.478037 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 00:04:39.542831 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 15 00:04:39.550898 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
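The modprobe@configfs / dm_mod / drm / efi_pstore / fuse / loop jobs above are all instances of systemd's modprobe@.service template, which loads the kernel module named by the instance specifier; the kernel's "loop: module loaded" and "fuse: init" lines above are its effect. Roughly, the template looks like the abridged sketch below (written from memory as an illustration, not copied from this image's unit file):

    # modprobe@.service (template, abridged sketch); modprobe@fuse.service
    # runs it with %I expanded to "fuse".
    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    ExecStart=-/sbin/modprobe -abq %I   # leading "-": a missing module is not treated as a failure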
May 15 00:04:39.558260 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 00:04:39.559456 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 15 00:04:39.566074 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 00:04:39.567840 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 00:04:39.575839 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 15 00:04:39.593502 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 15 00:04:39.602815 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 15 00:04:39.613290 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 15 00:04:39.621115 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 15 00:04:39.629417 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 15 00:04:39.638364 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 15 00:04:39.650299 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 15 00:04:39.664899 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 15 00:04:39.673344 udevadm[1301]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 15 00:04:39.727090 systemd-journald[1255]: Time spent on flushing to /var/log/journal/41235c93c47344e1b129773f1551b908 is 206.427ms for 925 entries. May 15 00:04:39.727090 systemd-journald[1255]: System Journal (/var/log/journal/41235c93c47344e1b129773f1551b908) is 8M, max 2.6G, 2.6G free. May 15 00:04:41.972518 systemd-journald[1255]: Received client request to flush runtime journal. May 15 00:04:41.972592 kernel: loop0: detected capacity change from 0 to 189592 May 15 00:04:41.972614 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 00:04:41.972651 kernel: loop1: detected capacity change from 0 to 123192 May 15 00:04:40.048204 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 00:04:40.140542 systemd-tmpfiles[1300]: ACLs are not supported, ignoring. May 15 00:04:40.140554 systemd-tmpfiles[1300]: ACLs are not supported, ignoring. May 15 00:04:40.144848 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 00:04:40.161791 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 15 00:04:40.598235 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 15 00:04:40.608826 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 00:04:40.628479 systemd-tmpfiles[1318]: ACLs are not supported, ignoring. May 15 00:04:40.628490 systemd-tmpfiles[1318]: ACLs are not supported, ignoring. May 15 00:04:40.632095 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 00:04:41.974155 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
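The journal flush above is the point where journald hands off from the volatile runtime journal in /run/log/journal (shown capped at 78.5M) to the persistent system journal under /var/log/journal (shown capped at 2.6G); both caps appear to come from journald's built-in size heuristics rather than explicit configuration. If the limits ever need pinning, a drop-in along these lines is the usual mechanism; the values below are illustrative, not what this machine uses:

    # /etc/systemd/journald.conf.d/10-size.conf -- illustrative drop-in only;
    # nothing in this log indicates such a file exists on this machine.
    [Journal]
    Storage=persistent     # keep entries in /var/log/journal after the flush
    RuntimeMaxUse=64M      # cap for the early-boot journal in /run/log/journal
    SystemMaxUse=2G        # cap for the persistent journal flushed to above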
May 15 00:04:42.503644 kernel: loop2: detected capacity change from 0 to 113512 May 15 00:04:43.328908 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 00:04:43.330460 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 15 00:04:44.042754 kernel: loop3: detected capacity change from 0 to 28720 May 15 00:04:44.078735 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 15 00:04:44.091887 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 00:04:44.115230 systemd-udevd[1327]: Using default interface naming scheme 'v255'. May 15 00:04:44.302799 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 00:04:44.322951 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 00:04:44.398379 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 15 00:04:44.433066 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 15 00:04:44.465443 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 15 00:04:44.544656 kernel: mousedev: PS/2 mouse device common for all mice May 15 00:04:44.595659 kernel: hv_vmbus: registering driver hv_balloon May 15 00:04:44.595770 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 May 15 00:04:44.607049 kernel: hv_balloon: Memory hot add disabled on ARM64 May 15 00:04:44.671661 kernel: loop4: detected capacity change from 0 to 189592 May 15 00:04:44.686814 kernel: loop5: detected capacity change from 0 to 123192 May 15 00:04:44.700851 kernel: loop6: detected capacity change from 0 to 113512 May 15 00:04:44.708088 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 00:04:44.715640 kernel: loop7: detected capacity change from 0 to 28720 May 15 00:04:44.719226 (sd-merge)[1378]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. May 15 00:04:44.720029 (sd-merge)[1378]: Merged extensions into '/usr'. May 15 00:04:44.742264 systemd[1]: Reload requested from client PID 1298 ('systemd-sysext') (unit systemd-sysext.service)... May 15 00:04:44.742279 systemd[1]: Reloading... May 15 00:04:44.806120 kernel: hv_vmbus: registering driver hyperv_fb May 15 00:04:44.806216 zram_generator::config[1407]: No configuration found. May 15 00:04:44.806245 kernel: hyperv_fb: Synthvid Version major 3, minor 5 May 15 00:04:44.823983 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 May 15 00:04:44.831283 kernel: Console: switching to colour dummy device 80x25 May 15 00:04:44.839215 kernel: Console: switching to colour frame buffer device 128x48 May 15 00:04:45.054934 systemd-networkd[1343]: lo: Link UP May 15 00:04:45.054942 systemd-networkd[1343]: lo: Gained carrier May 15 00:04:45.056848 systemd-networkd[1343]: Enumeration completed May 15 00:04:45.057165 systemd-networkd[1343]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 00:04:45.057177 systemd-networkd[1343]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
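The "found matching network ... based on potentially unpredictable interface name" message above means eth0 was matched by Flatcar's catch-all /usr/lib/systemd/network/zz-default.network unit, which enables DHCP on the interfaces it matches; the DHCPv4 lease from 168.63.129.16 a few lines below is the result. As a sketch of what such a catch-all looks like (the shipped file's exact contents may differ):

    # Catch-all DHCP network unit in the spirit of zz-default.network;
    # illustrative only, not a verbatim copy of the shipped file.
    [Match]
    Name=*            # a name-glob match is what triggers the "unpredictable interface name" note

    [Network]
    DHCP=yes          # yields the 10.200.20.16/24 lease logged just below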
May 15 00:04:45.111656 kernel: mlx5_core 1d04:00:02.0 enP7428s1: Link up May 15 00:04:45.157267 kernel: hv_netvsc 000d3a07-0632-000d-3a07-0632000d3a07 eth0: Data path switched to VF: enP7428s1 May 15 00:04:45.157975 systemd-networkd[1343]: enP7428s1: Link UP May 15 00:04:45.158068 systemd-networkd[1343]: eth0: Link UP May 15 00:04:45.158071 systemd-networkd[1343]: eth0: Gained carrier May 15 00:04:45.158086 systemd-networkd[1343]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 00:04:45.162963 systemd-networkd[1343]: enP7428s1: Gained carrier May 15 00:04:45.176908 systemd-networkd[1343]: eth0: DHCPv4 address 10.200.20.16/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 15 00:04:45.178701 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (1345) May 15 00:04:45.284080 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:04:45.405912 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. May 15 00:04:45.414145 systemd[1]: Reloading finished in 671 ms. May 15 00:04:45.429882 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 00:04:45.437706 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 15 00:04:45.472905 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 15 00:04:45.490137 systemd[1]: Starting ensure-sysext.service... May 15 00:04:45.497871 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 15 00:04:45.510894 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 15 00:04:45.519388 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 15 00:04:45.529004 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 15 00:04:45.544806 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 00:04:45.562494 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 00:04:45.562965 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:04:45.568615 systemd-tmpfiles[1532]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 00:04:45.568848 systemd-tmpfiles[1532]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 15 00:04:45.569485 systemd-tmpfiles[1532]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 00:04:45.569725 systemd-tmpfiles[1532]: ACLs are not supported, ignoring. May 15 00:04:45.569772 systemd-tmpfiles[1532]: ACLs are not supported, ignoring. May 15 00:04:45.572195 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 15 00:04:45.582123 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 00:04:45.592334 systemd[1]: Reload requested from client PID 1526 ('systemctl') (unit ensure-sysext.service)... May 15 00:04:45.592352 systemd[1]: Reloading... 
May 15 00:04:45.621510 lvm[1527]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 00:04:45.664102 systemd-tmpfiles[1532]: Detected autofs mount point /boot during canonicalization of boot. May 15 00:04:45.664119 systemd-tmpfiles[1532]: Skipping /boot May 15 00:04:45.673655 zram_generator::config[1574]: No configuration found. May 15 00:04:45.679280 systemd-tmpfiles[1532]: Detected autofs mount point /boot during canonicalization of boot. May 15 00:04:45.679299 systemd-tmpfiles[1532]: Skipping /boot May 15 00:04:45.804558 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:04:45.944787 systemd[1]: Reloading finished in 352 ms. May 15 00:04:45.959875 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 15 00:04:45.983201 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 15 00:04:45.994605 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 15 00:04:46.006382 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 00:04:46.017125 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:04:46.035951 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 00:04:46.049966 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 00:04:46.059361 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 15 00:04:46.072072 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 15 00:04:46.086205 lvm[1638]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 00:04:46.093250 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 15 00:04:46.115978 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 00:04:46.125760 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 15 00:04:46.140678 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 15 00:04:46.175706 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 15 00:04:46.192251 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 00:04:46.198103 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 00:04:46.222029 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 00:04:46.237784 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 00:04:46.261043 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 00:04:46.270823 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 00:04:46.271019 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 00:04:46.271174 systemd[1]: Reached target time-set.target - System Time Set. 
May 15 00:04:46.275793 systemd-networkd[1343]: eth0: Gained IPv6LL May 15 00:04:46.280655 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 15 00:04:46.292428 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 15 00:04:46.300048 augenrules[1668]: No rules May 15 00:04:46.303351 systemd[1]: audit-rules.service: Deactivated successfully. May 15 00:04:46.303558 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 00:04:46.311054 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:04:46.311214 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 00:04:46.318611 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 00:04:46.318796 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 00:04:46.326538 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:04:46.326727 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 00:04:46.335762 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:04:46.335915 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 00:04:46.346569 systemd[1]: Finished ensure-sysext.service. May 15 00:04:46.355752 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 00:04:46.355827 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 00:04:46.402826 systemd-resolved[1641]: Positive Trust Anchors: May 15 00:04:46.402845 systemd-resolved[1641]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 00:04:46.402876 systemd-resolved[1641]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 00:04:46.460983 systemd-resolved[1641]: Using system hostname 'ci-4230.1.1-n-c70fe96ece'. May 15 00:04:46.462575 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 00:04:46.470058 systemd[1]: Reached target network.target - Network. May 15 00:04:46.475333 systemd[1]: Reached target network-online.target - Network is Online. May 15 00:04:46.482370 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 00:04:46.658755 systemd-networkd[1343]: enP7428s1: Gained IPv6LL May 15 00:04:46.782518 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 15 00:04:46.790760 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 00:04:50.364646 ldconfig[1293]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 00:04:50.384389 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
May 15 00:04:50.397824 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 15 00:04:50.427142 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 15 00:04:50.434604 systemd[1]: Reached target sysinit.target - System Initialization. May 15 00:04:50.441613 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 15 00:04:50.449622 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 15 00:04:50.458739 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 15 00:04:50.465896 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 15 00:04:50.474322 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 15 00:04:50.482608 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 00:04:50.482666 systemd[1]: Reached target paths.target - Path Units. May 15 00:04:50.488658 systemd[1]: Reached target timers.target - Timer Units. May 15 00:04:50.495335 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 15 00:04:50.503929 systemd[1]: Starting docker.socket - Docker Socket for the API... May 15 00:04:50.512310 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 15 00:04:50.520676 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 15 00:04:50.528475 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 15 00:04:50.544311 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 15 00:04:50.552293 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 15 00:04:50.560872 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 15 00:04:50.567992 systemd[1]: Reached target sockets.target - Socket Units. May 15 00:04:50.574322 systemd[1]: Reached target basic.target - Basic System. May 15 00:04:50.580281 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 15 00:04:50.580312 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 15 00:04:50.589722 systemd[1]: Starting chronyd.service - NTP client/server... May 15 00:04:50.599765 systemd[1]: Starting containerd.service - containerd container runtime... May 15 00:04:50.611798 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 15 00:04:50.623438 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 15 00:04:50.632170 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 15 00:04:50.639723 (chronyd)[1687]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS May 15 00:04:50.640942 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 15 00:04:50.648017 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 15 00:04:50.648127 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). 
May 15 00:04:50.650311 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. May 15 00:04:50.653080 jq[1694]: false May 15 00:04:50.661225 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). May 15 00:04:50.669754 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:04:50.678489 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 15 00:04:50.685479 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 15 00:04:50.694744 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 15 00:04:50.704350 KVP[1696]: KVP starting; pid is:1696 May 15 00:04:50.707397 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 15 00:04:50.717423 KVP[1696]: KVP LIC Version: 3.1 May 15 00:04:50.717675 kernel: hv_utils: KVP IC version 4.0 May 15 00:04:50.718091 chronyd[1705]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) May 15 00:04:50.720848 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 15 00:04:50.738089 systemd[1]: Starting systemd-logind.service - User Login Management... May 15 00:04:50.749399 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 00:04:50.749917 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 15 00:04:50.752996 systemd[1]: Starting update-engine.service - Update Engine... May 15 00:04:50.760743 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 15 00:04:50.771093 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 00:04:50.773254 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 15 00:04:50.773780 jq[1716]: true May 15 00:04:50.777967 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 00:04:50.778162 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 15 00:04:50.787702 chronyd[1705]: Timezone right/UTC failed leap second check, ignoring May 15 00:04:50.787865 chronyd[1705]: Loaded seccomp filter (level 2) May 15 00:04:50.792283 systemd[1]: Started chronyd.service - NTP client/server. 
May 15 00:04:50.808695 extend-filesystems[1695]: Found loop4 May 15 00:04:50.808695 extend-filesystems[1695]: Found loop5 May 15 00:04:50.808695 extend-filesystems[1695]: Found loop6 May 15 00:04:50.808695 extend-filesystems[1695]: Found loop7 May 15 00:04:50.808695 extend-filesystems[1695]: Found sda May 15 00:04:50.808695 extend-filesystems[1695]: Found sda1 May 15 00:04:50.808695 extend-filesystems[1695]: Found sda2 May 15 00:04:50.808695 extend-filesystems[1695]: Found sda3 May 15 00:04:50.808695 extend-filesystems[1695]: Found usr May 15 00:04:50.808695 extend-filesystems[1695]: Found sda4 May 15 00:04:50.808695 extend-filesystems[1695]: Found sda6 May 15 00:04:50.808695 extend-filesystems[1695]: Found sda7 May 15 00:04:50.808695 extend-filesystems[1695]: Found sda9 May 15 00:04:50.808695 extend-filesystems[1695]: Checking size of /dev/sda9 May 15 00:04:50.935658 jq[1720]: true May 15 00:04:50.813553 (ntainerd)[1724]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 15 00:04:50.831696 systemd[1]: motdgen.service: Deactivated successfully. May 15 00:04:50.832845 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 15 00:04:50.852251 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 15 00:04:50.864352 systemd-logind[1707]: New seat seat0. May 15 00:04:50.875668 systemd-logind[1707]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) May 15 00:04:50.887005 systemd[1]: Started systemd-logind.service - User Login Management. May 15 00:04:51.264419 tar[1719]: linux-arm64/helm May 15 00:04:51.272010 update_engine[1711]: I20250515 00:04:51.271544 1711 main.cc:92] Flatcar Update Engine starting May 15 00:04:51.272853 extend-filesystems[1695]: Old size kept for /dev/sda9 May 15 00:04:51.292765 extend-filesystems[1695]: Found sr0 May 15 00:04:51.288408 dbus-daemon[1690]: [system] SELinux support is enabled May 15 00:04:51.280412 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 00:04:51.337325 update_engine[1711]: I20250515 00:04:51.306243 1711 update_check_scheduler.cc:74] Next update check in 11m28s May 15 00:04:51.281968 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 15 00:04:51.327910 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 15 00:04:51.344611 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 00:04:51.344710 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 15 00:04:51.352878 dbus-daemon[1690]: [system] Successfully activated service 'org.freedesktop.systemd1' May 15 00:04:51.354149 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 00:04:51.354173 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 15 00:04:51.363924 systemd[1]: Started update-engine.service - Update Engine. May 15 00:04:51.397716 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (1743) May 15 00:04:51.399222 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
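update_engine and locksmithd, both brought up above, take their release channel and reboot policy from /etc/flatcar/update.conf, the file written during the Ignition files stage earlier in this log. Its contents are not visible here; a typical file looks like the sketch below, with the strategy matching the strategy="reboot" that locksmithd reports further down.

    # /etc/flatcar/update.conf -- illustrative sketch; the real file's contents
    # are not shown in this log.
    GROUP=stable              # release channel followed by update_engine
    REBOOT_STRATEGY=reboot    # read by locksmithd ("strategy=reboot" below)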
May 15 00:04:51.445175 coreos-metadata[1689]: May 15 00:04:51.445 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 15 00:04:51.462649 coreos-metadata[1689]: May 15 00:04:51.461 INFO Fetch successful May 15 00:04:51.462649 coreos-metadata[1689]: May 15 00:04:51.462 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 May 15 00:04:51.469004 coreos-metadata[1689]: May 15 00:04:51.468 INFO Fetch successful May 15 00:04:51.469004 coreos-metadata[1689]: May 15 00:04:51.468 INFO Fetching http://168.63.129.16/machine/d914d880-5e14-4508-990f-c88601c0becd/5043deac%2Db619%2D432e%2Dbccf%2D07a21ed8dbd6.%5Fci%2D4230.1.1%2Dn%2Dc70fe96ece?comp=config&type=sharedConfig&incarnation=1: Attempt #1 May 15 00:04:51.471781 coreos-metadata[1689]: May 15 00:04:51.471 INFO Fetch successful May 15 00:04:51.472000 coreos-metadata[1689]: May 15 00:04:51.471 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 May 15 00:04:51.487138 coreos-metadata[1689]: May 15 00:04:51.487 INFO Fetch successful May 15 00:04:51.535876 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 15 00:04:51.551277 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 15 00:04:51.608786 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:04:51.617021 (kubelet)[1836]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 00:04:51.876307 bash[1764]: Updated "/home/core/.ssh/authorized_keys" May 15 00:04:51.879996 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 15 00:04:51.891031 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 15 00:04:52.060755 locksmithd[1782]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 00:04:52.098560 kubelet[1836]: E0515 00:04:52.098519 1836 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 00:04:52.103000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 00:04:52.103322 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 00:04:52.104230 systemd[1]: kubelet.service: Consumed 661ms CPU time, 233.4M memory peak. May 15 00:04:52.110761 sshd_keygen[1715]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 00:04:52.130662 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 15 00:04:52.143380 systemd[1]: Starting issuegen.service - Generate /run/issue... May 15 00:04:52.162059 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... May 15 00:04:52.169602 systemd[1]: issuegen.service: Deactivated successfully. May 15 00:04:52.171665 systemd[1]: Finished issuegen.service - Generate /run/issue. May 15 00:04:52.194692 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 15 00:04:52.203096 tar[1719]: linux-arm64/LICENSE May 15 00:04:52.203194 tar[1719]: linux-arm64/README.md May 15 00:04:52.213851 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. 
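The kubelet failure above ("failed to load kubelet config file ... /var/lib/kubelet/config.yaml: no such file or directory") is expected at this point on a kubeadm-style node: that file is only generated by "kubeadm init" or "kubeadm join", neither of which has run yet, so the unit exits and waits to be restarted. For orientation, the file it is looking for holds a KubeletConfiguration object; the minimal sketch below is illustrative and does not reproduce what kubeadm would actually generate for this node.

    # /var/lib/kubelet/config.yaml -- minimal illustrative KubeletConfiguration;
    # on this node it is normally written later by kubeadm init/join.
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                  # matches the systemd-managed cgroup hierarchy
    clusterDomain: cluster.local
    clusterDNS:
      - 10.96.0.10                         # placeholder cluster DNS service address
    staticPodPath: /etc/kubernetes/manifests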
May 15 00:04:52.221212 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 15 00:04:52.242600 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 15 00:04:52.252993 systemd[1]: Started getty@tty1.service - Getty on tty1. May 15 00:04:52.260835 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 15 00:04:52.271931 systemd[1]: Reached target getty.target - Login Prompts. May 15 00:04:52.345905 containerd[1724]: time="2025-05-15T00:04:52.345811860Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 15 00:04:52.370635 containerd[1724]: time="2025-05-15T00:04:52.370576100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 15 00:04:52.371971 containerd[1724]: time="2025-05-15T00:04:52.371940380Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 15 00:04:52.372053 containerd[1724]: time="2025-05-15T00:04:52.372040460Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 15 00:04:52.372115 containerd[1724]: time="2025-05-15T00:04:52.372102980Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 15 00:04:52.372327 containerd[1724]: time="2025-05-15T00:04:52.372311580Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 15 00:04:52.372387 containerd[1724]: time="2025-05-15T00:04:52.372375860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 15 00:04:52.372493 containerd[1724]: time="2025-05-15T00:04:52.372477540Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:04:52.373331 containerd[1724]: time="2025-05-15T00:04:52.372532300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 15 00:04:52.373331 containerd[1724]: time="2025-05-15T00:04:52.372755460Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:04:52.373331 containerd[1724]: time="2025-05-15T00:04:52.372788660Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 15 00:04:52.373331 containerd[1724]: time="2025-05-15T00:04:52.372802700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:04:52.373331 containerd[1724]: time="2025-05-15T00:04:52.372810980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 15 00:04:52.373331 containerd[1724]: time="2025-05-15T00:04:52.372882300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 May 15 00:04:52.373331 containerd[1724]: time="2025-05-15T00:04:52.373057820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 15 00:04:52.373331 containerd[1724]: time="2025-05-15T00:04:52.373175220Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:04:52.373331 containerd[1724]: time="2025-05-15T00:04:52.373187100Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 15 00:04:52.373331 containerd[1724]: time="2025-05-15T00:04:52.373266020Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 15 00:04:52.373331 containerd[1724]: time="2025-05-15T00:04:52.373302260Z" level=info msg="metadata content store policy set" policy=shared May 15 00:04:52.432975 containerd[1724]: time="2025-05-15T00:04:52.432886740Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 15 00:04:52.433119 containerd[1724]: time="2025-05-15T00:04:52.433104300Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 15 00:04:52.433182 containerd[1724]: time="2025-05-15T00:04:52.433163100Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 15 00:04:52.433238 containerd[1724]: time="2025-05-15T00:04:52.433226460Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 15 00:04:52.433288 containerd[1724]: time="2025-05-15T00:04:52.433277820Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 15 00:04:52.433503 containerd[1724]: time="2025-05-15T00:04:52.433486220Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 15 00:04:52.433894 containerd[1724]: time="2025-05-15T00:04:52.433865700Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 15 00:04:52.434033 containerd[1724]: time="2025-05-15T00:04:52.434011540Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 15 00:04:52.434058 containerd[1724]: time="2025-05-15T00:04:52.434035420Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 15 00:04:52.434058 containerd[1724]: time="2025-05-15T00:04:52.434050860Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 15 00:04:52.434090 containerd[1724]: time="2025-05-15T00:04:52.434063980Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 15 00:04:52.434090 containerd[1724]: time="2025-05-15T00:04:52.434077020Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 15 00:04:52.434139 containerd[1724]: time="2025-05-15T00:04:52.434089620Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 May 15 00:04:52.434139 containerd[1724]: time="2025-05-15T00:04:52.434104020Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 15 00:04:52.434139 containerd[1724]: time="2025-05-15T00:04:52.434119740Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 15 00:04:52.434139 containerd[1724]: time="2025-05-15T00:04:52.434133820Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 15 00:04:52.434201 containerd[1724]: time="2025-05-15T00:04:52.434145900Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 15 00:04:52.434201 containerd[1724]: time="2025-05-15T00:04:52.434158020Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 15 00:04:52.434201 containerd[1724]: time="2025-05-15T00:04:52.434178940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 15 00:04:52.434201 containerd[1724]: time="2025-05-15T00:04:52.434192540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 15 00:04:52.434268 containerd[1724]: time="2025-05-15T00:04:52.434204260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 15 00:04:52.434268 containerd[1724]: time="2025-05-15T00:04:52.434217300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 15 00:04:52.434268 containerd[1724]: time="2025-05-15T00:04:52.434228660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 15 00:04:52.434268 containerd[1724]: time="2025-05-15T00:04:52.434242340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 15 00:04:52.434268 containerd[1724]: time="2025-05-15T00:04:52.434253740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 15 00:04:52.434268 containerd[1724]: time="2025-05-15T00:04:52.434266500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 15 00:04:52.434367 containerd[1724]: time="2025-05-15T00:04:52.434286060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 15 00:04:52.434367 containerd[1724]: time="2025-05-15T00:04:52.434301260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 15 00:04:52.434367 containerd[1724]: time="2025-05-15T00:04:52.434312420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 15 00:04:52.434367 containerd[1724]: time="2025-05-15T00:04:52.434323900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 15 00:04:52.434367 containerd[1724]: time="2025-05-15T00:04:52.434337180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 15 00:04:52.434367 containerd[1724]: time="2025-05-15T00:04:52.434351380Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 May 15 00:04:52.434479 containerd[1724]: time="2025-05-15T00:04:52.434372260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 15 00:04:52.434479 containerd[1724]: time="2025-05-15T00:04:52.434384780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 15 00:04:52.434479 containerd[1724]: time="2025-05-15T00:04:52.434395580Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 15 00:04:52.434479 containerd[1724]: time="2025-05-15T00:04:52.434452580Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 15 00:04:52.434479 containerd[1724]: time="2025-05-15T00:04:52.434470340Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 15 00:04:52.434568 containerd[1724]: time="2025-05-15T00:04:52.434479900Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 15 00:04:52.434568 containerd[1724]: time="2025-05-15T00:04:52.434491180Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 15 00:04:52.434568 containerd[1724]: time="2025-05-15T00:04:52.434499660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 15 00:04:52.434568 containerd[1724]: time="2025-05-15T00:04:52.434511060Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 15 00:04:52.434568 containerd[1724]: time="2025-05-15T00:04:52.434520220Z" level=info msg="NRI interface is disabled by configuration." May 15 00:04:52.434568 containerd[1724]: time="2025-05-15T00:04:52.434531180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 15 00:04:52.435477 containerd[1724]: time="2025-05-15T00:04:52.435048580Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 15 00:04:52.435477 containerd[1724]: time="2025-05-15T00:04:52.435224980Z" level=info msg="Connect containerd service" May 15 00:04:52.435477 containerd[1724]: time="2025-05-15T00:04:52.435274860Z" level=info msg="using legacy CRI server" May 15 00:04:52.435477 containerd[1724]: time="2025-05-15T00:04:52.435283140Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 15 00:04:52.435958 containerd[1724]: time="2025-05-15T00:04:52.435443500Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 15 00:04:52.436905 containerd[1724]: time="2025-05-15T00:04:52.436867300Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 00:04:52.438640 
containerd[1724]: time="2025-05-15T00:04:52.437043100Z" level=info msg="Start subscribing containerd event" May 15 00:04:52.438640 containerd[1724]: time="2025-05-15T00:04:52.437094220Z" level=info msg="Start recovering state" May 15 00:04:52.438640 containerd[1724]: time="2025-05-15T00:04:52.437161140Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 00:04:52.438640 containerd[1724]: time="2025-05-15T00:04:52.437162060Z" level=info msg="Start event monitor" May 15 00:04:52.438640 containerd[1724]: time="2025-05-15T00:04:52.437195460Z" level=info msg="Start snapshots syncer" May 15 00:04:52.438640 containerd[1724]: time="2025-05-15T00:04:52.437206020Z" level=info msg="Start cni network conf syncer for default" May 15 00:04:52.438640 containerd[1724]: time="2025-05-15T00:04:52.437213420Z" level=info msg="Start streaming server" May 15 00:04:52.438640 containerd[1724]: time="2025-05-15T00:04:52.437198460Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 00:04:52.437422 systemd[1]: Started containerd.service - containerd container runtime. May 15 00:04:52.443800 containerd[1724]: time="2025-05-15T00:04:52.443752700Z" level=info msg="containerd successfully booted in 0.099344s" May 15 00:04:52.447954 systemd[1]: Reached target multi-user.target - Multi-User System. May 15 00:04:52.457715 systemd[1]: Startup finished in 721ms (kernel) + 14.491s (initrd) + 17.944s (userspace) = 33.158s. May 15 00:04:52.886753 login[1879]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying May 15 00:04:52.888101 login[1880]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 15 00:04:52.894066 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 15 00:04:52.903937 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 15 00:04:52.911141 systemd-logind[1707]: New session 2 of user core. May 15 00:04:52.916353 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 15 00:04:52.922911 systemd[1]: Starting user@500.service - User Manager for UID 500... May 15 00:04:52.926339 (systemd)[1891]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 00:04:52.928740 systemd-logind[1707]: New session c1 of user core. May 15 00:04:53.093355 systemd[1891]: Queued start job for default target default.target. May 15 00:04:53.101478 systemd[1891]: Created slice app.slice - User Application Slice. May 15 00:04:53.101649 systemd[1891]: Reached target paths.target - Paths. May 15 00:04:53.101778 systemd[1891]: Reached target timers.target - Timers. May 15 00:04:53.103008 systemd[1891]: Starting dbus.socket - D-Bus User Message Bus Socket... May 15 00:04:53.112779 systemd[1891]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 15 00:04:53.112835 systemd[1891]: Reached target sockets.target - Sockets. May 15 00:04:53.112874 systemd[1891]: Reached target basic.target - Basic System. May 15 00:04:53.112907 systemd[1891]: Reached target default.target - Main User Target. May 15 00:04:53.112930 systemd[1891]: Startup finished in 178ms. May 15 00:04:53.113104 systemd[1]: Started user@500.service - User Manager for UID 500. May 15 00:04:53.125210 systemd[1]: Started session-2.scope - Session 2 of User core. 
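containerd reports serving on /run/containerd/containerd.sock and its ttrpc counterpart just before the system reaches multi-user.target. A small liveness probe for those sockets is sketched below; it needs root, only checks that the sockets accept a connection, and does not speak gRPC or ttrpc:

```python
#!/usr/bin/env python3
"""Quick connectivity check for the two sockets containerd logs as serving."""
import socket

SOCKETS = [
    "/run/containerd/containerd.sock",
    "/run/containerd/containerd.sock.ttrpc",
]

if __name__ == "__main__":
    for path in SOCKETS:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.settimeout(2)
        try:
            s.connect(path)
            print(f"{path}: accepting connections")
        except OSError as exc:
            print(f"{path}: {exc}")
        finally:
            s.close()
```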
May 15 00:04:53.887239 login[1879]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 15 00:04:53.891582 systemd-logind[1707]: New session 1 of user core. May 15 00:04:53.898765 systemd[1]: Started session-1.scope - Session 1 of User core. May 15 00:04:54.469557 waagent[1875]: 2025-05-15T00:04:54.469462Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 May 15 00:04:54.475565 waagent[1875]: 2025-05-15T00:04:54.475488Z INFO Daemon Daemon OS: flatcar 4230.1.1 May 15 00:04:54.480129 waagent[1875]: 2025-05-15T00:04:54.480059Z INFO Daemon Daemon Python: 3.11.11 May 15 00:04:54.484321 waagent[1875]: 2025-05-15T00:04:54.484242Z INFO Daemon Daemon Run daemon May 15 00:04:54.488667 waagent[1875]: 2025-05-15T00:04:54.488601Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.1.1' May 15 00:04:54.497391 waagent[1875]: 2025-05-15T00:04:54.497335Z INFO Daemon Daemon Using waagent for provisioning May 15 00:04:54.502321 waagent[1875]: 2025-05-15T00:04:54.502277Z INFO Daemon Daemon Activate resource disk May 15 00:04:54.507240 waagent[1875]: 2025-05-15T00:04:54.507193Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb May 15 00:04:54.520228 waagent[1875]: 2025-05-15T00:04:54.520142Z INFO Daemon Daemon Found device: None May 15 00:04:54.525081 waagent[1875]: 2025-05-15T00:04:54.525033Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology May 15 00:04:54.533409 waagent[1875]: 2025-05-15T00:04:54.533353Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 May 15 00:04:54.544888 waagent[1875]: 2025-05-15T00:04:54.544833Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 15 00:04:54.550512 waagent[1875]: 2025-05-15T00:04:54.550466Z INFO Daemon Daemon Running default provisioning handler May 15 00:04:54.561598 waagent[1875]: 2025-05-15T00:04:54.561533Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. May 15 00:04:54.575362 waagent[1875]: 2025-05-15T00:04:54.575300Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 15 00:04:54.585194 waagent[1875]: 2025-05-15T00:04:54.585138Z INFO Daemon Daemon cloud-init is enabled: False May 15 00:04:54.590195 waagent[1875]: 2025-05-15T00:04:54.590150Z INFO Daemon Daemon Copying ovf-env.xml May 15 00:04:54.715753 waagent[1875]: 2025-05-15T00:04:54.715655Z INFO Daemon Daemon Successfully mounted dvd May 15 00:04:54.767990 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. May 15 00:04:54.770897 waagent[1875]: 2025-05-15T00:04:54.770820Z INFO Daemon Daemon Detect protocol endpoint May 15 00:04:54.775937 waagent[1875]: 2025-05-15T00:04:54.775878Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 15 00:04:54.781858 waagent[1875]: 2025-05-15T00:04:54.781803Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler May 15 00:04:54.788785 waagent[1875]: 2025-05-15T00:04:54.788731Z INFO Daemon Daemon Test for route to 168.63.129.16 May 15 00:04:54.794314 waagent[1875]: 2025-05-15T00:04:54.794267Z INFO Daemon Daemon Route to 168.63.129.16 exists May 15 00:04:54.799490 waagent[1875]: 2025-05-15T00:04:54.799443Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 May 15 00:04:54.846019 waagent[1875]: 2025-05-15T00:04:54.845971Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 May 15 00:04:54.853231 waagent[1875]: 2025-05-15T00:04:54.853173Z INFO Daemon Daemon Wire protocol version:2012-11-30 May 15 00:04:54.858813 waagent[1875]: 2025-05-15T00:04:54.858750Z INFO Daemon Daemon Server preferred version:2015-04-05 May 15 00:04:55.233698 waagent[1875]: 2025-05-15T00:04:55.233082Z INFO Daemon Daemon Initializing goal state during protocol detection May 15 00:04:55.240233 waagent[1875]: 2025-05-15T00:04:55.240156Z INFO Daemon Daemon Forcing an update of the goal state. May 15 00:04:55.250202 waagent[1875]: 2025-05-15T00:04:55.250145Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] May 15 00:04:55.315924 waagent[1875]: 2025-05-15T00:04:55.315878Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 May 15 00:04:55.323585 waagent[1875]: 2025-05-15T00:04:55.323535Z INFO Daemon May 15 00:04:55.327017 waagent[1875]: 2025-05-15T00:04:55.326970Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 187c3a95-287a-4ec5-b3b8-814f262dbc42 eTag: 6988706643314089344 source: Fabric] May 15 00:04:55.339612 waagent[1875]: 2025-05-15T00:04:55.339561Z INFO Daemon The vmSettings originated via Fabric; will ignore them. May 15 00:04:55.346440 waagent[1875]: 2025-05-15T00:04:55.346388Z INFO Daemon May 15 00:04:55.349409 waagent[1875]: 2025-05-15T00:04:55.349362Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] May 15 00:04:55.361023 waagent[1875]: 2025-05-15T00:04:55.360988Z INFO Daemon Daemon Downloading artifacts profile blob May 15 00:04:55.454579 waagent[1875]: 2025-05-15T00:04:55.454486Z INFO Daemon Downloaded certificate {'thumbprint': '83B4AE56CBF349F5FDD67486BC258FF263348801', 'hasPrivateKey': True} May 15 00:04:55.464999 waagent[1875]: 2025-05-15T00:04:55.464945Z INFO Daemon Downloaded certificate {'thumbprint': '69C1E43D03ED0947740CAACAB16D82BBAEAC9385', 'hasPrivateKey': False} May 15 00:04:55.475331 waagent[1875]: 2025-05-15T00:04:55.475280Z INFO Daemon Fetch goal state completed May 15 00:04:55.490200 waagent[1875]: 2025-05-15T00:04:55.490112Z INFO Daemon Daemon Starting provisioning May 15 00:04:55.495597 waagent[1875]: 2025-05-15T00:04:55.495536Z INFO Daemon Daemon Handle ovf-env.xml. May 15 00:04:55.500604 waagent[1875]: 2025-05-15T00:04:55.500552Z INFO Daemon Daemon Set hostname [ci-4230.1.1-n-c70fe96ece] May 15 00:04:55.523661 waagent[1875]: 2025-05-15T00:04:55.523502Z INFO Daemon Daemon Publish hostname [ci-4230.1.1-n-c70fe96ece] May 15 00:04:55.531476 waagent[1875]: 2025-05-15T00:04:55.531411Z INFO Daemon Daemon Examine /proc/net/route for primary interface May 15 00:04:55.542652 waagent[1875]: 2025-05-15T00:04:55.538452Z INFO Daemon Daemon Primary interface is [eth0] May 15 00:04:55.551061 systemd-networkd[1343]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 00:04:55.551069 systemd-networkd[1343]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
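The waagent lines above record the wire protocol negotiation (the agent settles on version 2012-11-30) before fetching the goal state from the WireServer. A hedged sketch of that fetch follows; it is not waagent's own code, it assumes it is running on an Azure VM, and it sends the x-ms-version header the protocol requires:

```python
#!/usr/bin/env python3
"""Sketch of a WireServer goal-state fetch like the one the agent logs."""
import urllib.request

WIRESERVER = "168.63.129.16"      # endpoint reported in the log
PROTOCOL_VERSION = "2012-11-30"   # wire protocol version the agent settled on

def wire_get(path: str) -> bytes:
    req = urllib.request.Request(
        f"http://{WIRESERVER}{path}",
        headers={"x-ms-version": PROTOCOL_VERSION},  # required by the WireServer
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read()

if __name__ == "__main__":
    # The goal state is an XML document describing the VM's current incarnation
    print(wire_get("/machine/?comp=goalstate").decode()[:400])
```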
May 15 00:04:55.551095 systemd-networkd[1343]: eth0: DHCP lease lost May 15 00:04:55.556669 waagent[1875]: 2025-05-15T00:04:55.552138Z INFO Daemon Daemon Create user account if not exists May 15 00:04:55.558434 waagent[1875]: 2025-05-15T00:04:55.558376Z INFO Daemon Daemon User core already exists, skip useradd May 15 00:04:55.564489 waagent[1875]: 2025-05-15T00:04:55.564439Z INFO Daemon Daemon Configure sudoer May 15 00:04:55.569692 waagent[1875]: 2025-05-15T00:04:55.569608Z INFO Daemon Daemon Configure sshd May 15 00:04:55.587267 waagent[1875]: 2025-05-15T00:04:55.574660Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. May 15 00:04:55.588386 waagent[1875]: 2025-05-15T00:04:55.588254Z INFO Daemon Daemon Deploy ssh public key. May 15 00:04:55.596719 systemd-networkd[1343]: eth0: DHCPv4 address 10.200.20.16/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 15 00:04:56.698646 waagent[1875]: 2025-05-15T00:04:56.698217Z INFO Daemon Daemon Provisioning complete May 15 00:04:56.716269 waagent[1875]: 2025-05-15T00:04:56.716216Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping May 15 00:04:56.722441 waagent[1875]: 2025-05-15T00:04:56.722391Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. May 15 00:04:56.732216 waagent[1875]: 2025-05-15T00:04:56.732171Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent May 15 00:04:56.864265 waagent[1946]: 2025-05-15T00:04:56.863714Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) May 15 00:04:56.864265 waagent[1946]: 2025-05-15T00:04:56.863875Z INFO ExtHandler ExtHandler OS: flatcar 4230.1.1 May 15 00:04:56.864265 waagent[1946]: 2025-05-15T00:04:56.863929Z INFO ExtHandler ExtHandler Python: 3.11.11 May 15 00:04:56.893482 waagent[1946]: 2025-05-15T00:04:56.893398Z INFO ExtHandler ExtHandler Distro: flatcar-4230.1.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; May 15 00:04:56.893851 waagent[1946]: 2025-05-15T00:04:56.893811Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 15 00:04:56.893987 waagent[1946]: 2025-05-15T00:04:56.893955Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 May 15 00:04:56.905388 waagent[1946]: 2025-05-15T00:04:56.905308Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] May 15 00:04:56.914409 waagent[1946]: 2025-05-15T00:04:56.914362Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 May 15 00:04:56.915122 waagent[1946]: 2025-05-15T00:04:56.915080Z INFO ExtHandler May 15 00:04:56.915650 waagent[1946]: 2025-05-15T00:04:56.915263Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 3aec47ce-6988-42d4-8cb9-beaabc643093 eTag: 6988706643314089344 source: Fabric] May 15 00:04:56.915650 waagent[1946]: 2025-05-15T00:04:56.915568Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
May 15 00:04:56.916322 waagent[1946]: 2025-05-15T00:04:56.916281Z INFO ExtHandler May 15 00:04:56.916473 waagent[1946]: 2025-05-15T00:04:56.916441Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] May 15 00:04:56.923818 waagent[1946]: 2025-05-15T00:04:56.923778Z INFO ExtHandler ExtHandler Downloading artifacts profile blob May 15 00:04:57.001750 waagent[1946]: 2025-05-15T00:04:57.001253Z INFO ExtHandler Downloaded certificate {'thumbprint': '83B4AE56CBF349F5FDD67486BC258FF263348801', 'hasPrivateKey': True} May 15 00:04:57.001871 waagent[1946]: 2025-05-15T00:04:57.001818Z INFO ExtHandler Downloaded certificate {'thumbprint': '69C1E43D03ED0947740CAACAB16D82BBAEAC9385', 'hasPrivateKey': False} May 15 00:04:57.002288 waagent[1946]: 2025-05-15T00:04:57.002240Z INFO ExtHandler Fetch goal state completed May 15 00:04:57.018947 waagent[1946]: 2025-05-15T00:04:57.018883Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1946 May 15 00:04:57.019105 waagent[1946]: 2025-05-15T00:04:57.019069Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** May 15 00:04:57.020807 waagent[1946]: 2025-05-15T00:04:57.020761Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.1.1', '', 'Flatcar Container Linux by Kinvolk'] May 15 00:04:57.021195 waagent[1946]: 2025-05-15T00:04:57.021157Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules May 15 00:04:57.059453 waagent[1946]: 2025-05-15T00:04:57.059405Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 15 00:04:57.059697 waagent[1946]: 2025-05-15T00:04:57.059651Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup May 15 00:04:57.065661 waagent[1946]: 2025-05-15T00:04:57.065577Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now May 15 00:04:57.072252 systemd[1]: Reload requested from client PID 1961 ('systemctl') (unit waagent.service)... May 15 00:04:57.072497 systemd[1]: Reloading... May 15 00:04:57.166692 zram_generator::config[2000]: No configuration found. May 15 00:04:57.277770 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:04:57.401615 systemd[1]: Reloading finished in 328 ms. May 15 00:04:57.412176 waagent[1946]: 2025-05-15T00:04:57.411817Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service May 15 00:04:57.418267 systemd[1]: Reload requested from client PID 2054 ('systemctl') (unit waagent.service)... May 15 00:04:57.418281 systemd[1]: Reloading... May 15 00:04:57.501726 zram_generator::config[2093]: No configuration found. May 15 00:04:57.609119 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:04:57.733927 systemd[1]: Reloading finished in 315 ms. 
May 15 00:04:57.749597 waagent[1946]: 2025-05-15T00:04:57.748815Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service May 15 00:04:57.749597 waagent[1946]: 2025-05-15T00:04:57.748992Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully May 15 00:04:58.543203 waagent[1946]: 2025-05-15T00:04:58.541997Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. May 15 00:04:58.543203 waagent[1946]: 2025-05-15T00:04:58.542593Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] May 15 00:04:58.543589 waagent[1946]: 2025-05-15T00:04:58.543419Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 15 00:04:58.543589 waagent[1946]: 2025-05-15T00:04:58.543506Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 May 15 00:04:58.543780 waagent[1946]: 2025-05-15T00:04:58.543728Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. May 15 00:04:58.543908 waagent[1946]: 2025-05-15T00:04:58.543854Z INFO ExtHandler ExtHandler Starting env monitor service. May 15 00:04:58.544121 waagent[1946]: 2025-05-15T00:04:58.544067Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: May 15 00:04:58.544121 waagent[1946]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT May 15 00:04:58.544121 waagent[1946]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 May 15 00:04:58.544121 waagent[1946]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 May 15 00:04:58.544121 waagent[1946]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 May 15 00:04:58.544121 waagent[1946]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 15 00:04:58.544121 waagent[1946]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 15 00:04:58.544692 waagent[1946]: 2025-05-15T00:04:58.544636Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. May 15 00:04:58.545163 waagent[1946]: 2025-05-15T00:04:58.545104Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 15 00:04:58.545378 waagent[1946]: 2025-05-15T00:04:58.545310Z INFO ExtHandler ExtHandler Start Extension Telemetry service. May 15 00:04:58.546055 waagent[1946]: 2025-05-15T00:04:58.545989Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 15 00:04:58.546117 waagent[1946]: 2025-05-15T00:04:58.546081Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 15 00:04:58.546244 waagent[1946]: 2025-05-15T00:04:58.546169Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
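The routing table the MonitorHandler prints is read straight from /proc/net/route, where addresses are little-endian hex (0114C80A is 10.200.20.1, the gateway acquired earlier in this log). A short parser that renders the same table in dotted-quad form:

```python
#!/usr/bin/env python3
"""Render /proc/net/route, the source of the table dumped above, with
human-readable IPv4 addresses."""
import socket
import struct

def hex_to_ip(hexval: str) -> str:
    # /proc/net/route stores IPv4 addresses as little-endian 32-bit hex
    return socket.inet_ntoa(struct.pack("<L", int(hexval, 16)))

def routes():
    with open("/proc/net/route") as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            # columns: Iface, Destination, Gateway, ..., Mask (index 7)
            yield fields[0], hex_to_ip(fields[1]), hex_to_ip(fields[2]), hex_to_ip(fields[7])

if __name__ == "__main__":
    print(f"{'iface':8} {'destination':15} {'gateway':15} {'mask':15}")
    for iface, dst, gw, mask in routes():
        print(f"{iface:8} {dst:15} {gw:15} {mask:15}")
```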
May 15 00:04:58.546347 waagent[1946]: 2025-05-15T00:04:58.546291Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 May 15 00:04:58.546548 waagent[1946]: 2025-05-15T00:04:58.546487Z INFO EnvHandler ExtHandler Configure routes May 15 00:04:58.546832 waagent[1946]: 2025-05-15T00:04:58.546780Z INFO EnvHandler ExtHandler Gateway:None May 15 00:04:58.546964 waagent[1946]: 2025-05-15T00:04:58.546878Z INFO EnvHandler ExtHandler Routes:None May 15 00:04:58.547454 waagent[1946]: 2025-05-15T00:04:58.547122Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread May 15 00:04:58.554898 waagent[1946]: 2025-05-15T00:04:58.554810Z INFO ExtHandler ExtHandler May 15 00:04:58.555493 waagent[1946]: 2025-05-15T00:04:58.555432Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: b47865c9-6b51-4831-8123-a5b8e09f52db correlation 335fbf28-5b7b-4a9f-bce8-920c1fc2759e created: 2025-05-15T00:03:17.901360Z] May 15 00:04:58.556660 waagent[1946]: 2025-05-15T00:04:58.556413Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. May 15 00:04:58.557165 waagent[1946]: 2025-05-15T00:04:58.557107Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] May 15 00:04:58.602459 waagent[1946]: 2025-05-15T00:04:58.602343Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 6F27925A-A0BF-47A9-8D5E-EF2A76D45A23;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] May 15 00:04:58.625393 waagent[1946]: 2025-05-15T00:04:58.624977Z INFO MonitorHandler ExtHandler Network interfaces: May 15 00:04:58.625393 waagent[1946]: Executing ['ip', '-a', '-o', 'link']: May 15 00:04:58.625393 waagent[1946]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 May 15 00:04:58.625393 waagent[1946]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:07:06:32 brd ff:ff:ff:ff:ff:ff May 15 00:04:58.625393 waagent[1946]: 3: enP7428s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:07:06:32 brd ff:ff:ff:ff:ff:ff\ altname enP7428p0s2 May 15 00:04:58.625393 waagent[1946]: Executing ['ip', '-4', '-a', '-o', 'address']: May 15 00:04:58.625393 waagent[1946]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever May 15 00:04:58.625393 waagent[1946]: 2: eth0 inet 10.200.20.16/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever May 15 00:04:58.625393 waagent[1946]: Executing ['ip', '-6', '-a', '-o', 'address']: May 15 00:04:58.625393 waagent[1946]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever May 15 00:04:58.625393 waagent[1946]: 2: eth0 inet6 fe80::20d:3aff:fe07:632/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever May 15 00:04:58.625393 waagent[1946]: 3: enP7428s1 inet6 fe80::20d:3aff:fe07:632/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever May 15 00:04:58.670022 waagent[1946]: 2025-05-15T00:04:58.669960Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: May 15 00:04:58.670022 waagent[1946]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 15 00:04:58.670022 waagent[1946]: pkts bytes target prot opt in out source destination May 15 00:04:58.670022 waagent[1946]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 15 00:04:58.670022 waagent[1946]: pkts bytes target prot opt in out source destination May 15 00:04:58.670022 waagent[1946]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 15 00:04:58.670022 waagent[1946]: pkts bytes target prot opt in out source destination May 15 00:04:58.670022 waagent[1946]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 15 00:04:58.670022 waagent[1946]: 5 457 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 15 00:04:58.670022 waagent[1946]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 15 00:04:58.673094 waagent[1946]: 2025-05-15T00:04:58.673028Z INFO EnvHandler ExtHandler Current Firewall rules: May 15 00:04:58.673094 waagent[1946]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 15 00:04:58.673094 waagent[1946]: pkts bytes target prot opt in out source destination May 15 00:04:58.673094 waagent[1946]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 15 00:04:58.673094 waagent[1946]: pkts bytes target prot opt in out source destination May 15 00:04:58.673094 waagent[1946]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 15 00:04:58.673094 waagent[1946]: pkts bytes target prot opt in out source destination May 15 00:04:58.673094 waagent[1946]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 15 00:04:58.673094 waagent[1946]: 10 1102 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 15 00:04:58.673094 waagent[1946]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 15 00:04:58.673345 waagent[1946]: 2025-05-15T00:04:58.673306Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 May 15 00:05:02.353488 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 15 00:05:02.363827 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:05:02.467780 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:05:02.468508 (kubelet)[2189]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 00:05:02.505619 kubelet[2189]: E0515 00:05:02.505504 2189 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 00:05:02.508287 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 00:05:02.508444 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 00:05:02.508778 systemd[1]: kubelet.service: Consumed 118ms CPU time, 96.6M memory peak. May 15 00:05:05.556838 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 15 00:05:05.558770 systemd[1]: Started sshd@0-10.200.20.16:22-10.200.16.10:34764.service - OpenSSH per-connection server daemon (10.200.16.10:34764). 
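The three firewall rules listed above allow DNS and root-owned TCP traffic to the WireServer and drop other new or invalid connections to it. The sketch below reconstructs equivalent iptables invocations; it is illustrative only, must run as root, and waagent itself may issue different commands or target a different table:

```python
#!/usr/bin/env python3
"""Illustrative reconstruction of the Azure fabric firewall rules shown in
the log (not the commands waagent actually runs)."""
import subprocess

WIRESERVER = "168.63.129.16"

RULES = [
    # ACCEPT tcp dpt:53 to the WireServer
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
    # ACCEPT tcp owned by UID 0 (the agent itself runs as root)
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "-m", "owner",
     "--uid-owner", "0", "-j", "ACCEPT"],
    # DROP new or invalid tcp from anything else
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "-m", "conntrack",
     "--ctstate", "INVALID,NEW", "-j", "DROP"],
]

if __name__ == "__main__":
    for rule in RULES:
        subprocess.run(["iptables", "-w", *rule], check=True)
    # Show counters in the same format as the log excerpt
    subprocess.run(["iptables", "-w", "-L", "OUTPUT", "-nvx"], check=True)
```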
May 15 00:05:06.485926 sshd[2197]: Accepted publickey for core from 10.200.16.10 port 34764 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:05:06.487233 sshd-session[2197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:05:06.491860 systemd-logind[1707]: New session 3 of user core. May 15 00:05:06.497843 systemd[1]: Started session-3.scope - Session 3 of User core. May 15 00:05:06.888894 systemd[1]: Started sshd@1-10.200.20.16:22-10.200.16.10:34770.service - OpenSSH per-connection server daemon (10.200.16.10:34770). May 15 00:05:07.338662 sshd[2202]: Accepted publickey for core from 10.200.16.10 port 34770 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:05:07.339886 sshd-session[2202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:05:07.345011 systemd-logind[1707]: New session 4 of user core. May 15 00:05:07.350820 systemd[1]: Started session-4.scope - Session 4 of User core. May 15 00:05:07.661564 sshd[2204]: Connection closed by 10.200.16.10 port 34770 May 15 00:05:07.661417 sshd-session[2202]: pam_unix(sshd:session): session closed for user core May 15 00:05:07.665659 systemd[1]: sshd@1-10.200.20.16:22-10.200.16.10:34770.service: Deactivated successfully. May 15 00:05:07.667345 systemd[1]: session-4.scope: Deactivated successfully. May 15 00:05:07.668070 systemd-logind[1707]: Session 4 logged out. Waiting for processes to exit. May 15 00:05:07.669236 systemd-logind[1707]: Removed session 4. May 15 00:05:07.751332 systemd[1]: Started sshd@2-10.200.20.16:22-10.200.16.10:34772.service - OpenSSH per-connection server daemon (10.200.16.10:34772). May 15 00:05:08.232405 sshd[2210]: Accepted publickey for core from 10.200.16.10 port 34772 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:05:08.233708 sshd-session[2210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:05:08.239401 systemd-logind[1707]: New session 5 of user core. May 15 00:05:08.241817 systemd[1]: Started session-5.scope - Session 5 of User core. May 15 00:05:08.591020 sshd[2212]: Connection closed by 10.200.16.10 port 34772 May 15 00:05:08.590810 sshd-session[2210]: pam_unix(sshd:session): session closed for user core May 15 00:05:08.594166 systemd[1]: sshd@2-10.200.20.16:22-10.200.16.10:34772.service: Deactivated successfully. May 15 00:05:08.596132 systemd[1]: session-5.scope: Deactivated successfully. May 15 00:05:08.597804 systemd-logind[1707]: Session 5 logged out. Waiting for processes to exit. May 15 00:05:08.598715 systemd-logind[1707]: Removed session 5. May 15 00:05:08.683899 systemd[1]: Started sshd@3-10.200.20.16:22-10.200.16.10:35974.service - OpenSSH per-connection server daemon (10.200.16.10:35974). May 15 00:05:09.169813 sshd[2218]: Accepted publickey for core from 10.200.16.10 port 35974 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:05:09.171049 sshd-session[2218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:05:09.176578 systemd-logind[1707]: New session 6 of user core. May 15 00:05:09.181800 systemd[1]: Started session-6.scope - Session 6 of User core. May 15 00:05:09.524670 sshd[2220]: Connection closed by 10.200.16.10 port 35974 May 15 00:05:09.524312 sshd-session[2218]: pam_unix(sshd:session): session closed for user core May 15 00:05:09.527612 systemd[1]: sshd@3-10.200.20.16:22-10.200.16.10:35974.service: Deactivated successfully. 
May 15 00:05:09.529183 systemd[1]: session-6.scope: Deactivated successfully. May 15 00:05:09.531187 systemd-logind[1707]: Session 6 logged out. Waiting for processes to exit. May 15 00:05:09.532447 systemd-logind[1707]: Removed session 6. May 15 00:05:09.613908 systemd[1]: Started sshd@4-10.200.20.16:22-10.200.16.10:35988.service - OpenSSH per-connection server daemon (10.200.16.10:35988). May 15 00:05:10.065287 sshd[2226]: Accepted publickey for core from 10.200.16.10 port 35988 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:05:10.066507 sshd-session[2226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:05:10.071851 systemd-logind[1707]: New session 7 of user core. May 15 00:05:10.077792 systemd[1]: Started session-7.scope - Session 7 of User core. May 15 00:05:10.474493 sudo[2229]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 15 00:05:10.474779 sudo[2229]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 00:05:10.491451 sudo[2229]: pam_unix(sudo:session): session closed for user root May 15 00:05:10.562934 sshd[2228]: Connection closed by 10.200.16.10 port 35988 May 15 00:05:10.562183 sshd-session[2226]: pam_unix(sshd:session): session closed for user core May 15 00:05:10.565857 systemd[1]: sshd@4-10.200.20.16:22-10.200.16.10:35988.service: Deactivated successfully. May 15 00:05:10.567763 systemd[1]: session-7.scope: Deactivated successfully. May 15 00:05:10.568527 systemd-logind[1707]: Session 7 logged out. Waiting for processes to exit. May 15 00:05:10.569579 systemd-logind[1707]: Removed session 7. May 15 00:05:10.646143 systemd[1]: Started sshd@5-10.200.20.16:22-10.200.16.10:36000.service - OpenSSH per-connection server daemon (10.200.16.10:36000). May 15 00:05:11.095817 sshd[2235]: Accepted publickey for core from 10.200.16.10 port 36000 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:05:11.097175 sshd-session[2235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:05:11.102672 systemd-logind[1707]: New session 8 of user core. May 15 00:05:11.109863 systemd[1]: Started session-8.scope - Session 8 of User core. May 15 00:05:11.349123 sudo[2239]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 15 00:05:11.349374 sudo[2239]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 00:05:11.353198 sudo[2239]: pam_unix(sudo:session): session closed for user root May 15 00:05:11.358187 sudo[2238]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 15 00:05:11.358443 sudo[2238]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 00:05:11.374969 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 00:05:11.398326 augenrules[2261]: No rules May 15 00:05:11.399963 systemd[1]: audit-rules.service: Deactivated successfully. May 15 00:05:11.400198 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 00:05:11.403234 sudo[2238]: pam_unix(sudo:session): session closed for user root May 15 00:05:11.473672 sshd[2237]: Connection closed by 10.200.16.10 port 36000 May 15 00:05:11.474214 sshd-session[2235]: pam_unix(sshd:session): session closed for user core May 15 00:05:11.477275 systemd-logind[1707]: Session 8 logged out. Waiting for processes to exit. 
May 15 00:05:11.478218 systemd[1]: sshd@5-10.200.20.16:22-10.200.16.10:36000.service: Deactivated successfully. May 15 00:05:11.480153 systemd[1]: session-8.scope: Deactivated successfully. May 15 00:05:11.482061 systemd-logind[1707]: Removed session 8. May 15 00:05:11.570883 systemd[1]: Started sshd@6-10.200.20.16:22-10.200.16.10:36008.service - OpenSSH per-connection server daemon (10.200.16.10:36008). May 15 00:05:12.052836 sshd[2270]: Accepted publickey for core from 10.200.16.10 port 36008 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:05:12.054026 sshd-session[2270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:05:12.057896 systemd-logind[1707]: New session 9 of user core. May 15 00:05:12.068790 systemd[1]: Started session-9.scope - Session 9 of User core. May 15 00:05:12.324374 sudo[2273]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 00:05:12.324674 sudo[2273]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 00:05:12.603439 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 15 00:05:12.608821 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:05:12.802842 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:05:12.806315 (kubelet)[2290]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 00:05:12.844107 kubelet[2290]: E0515 00:05:12.844048 2290 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 00:05:12.845783 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 00:05:12.845907 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 00:05:12.846513 systemd[1]: kubelet.service: Consumed 113ms CPU time, 94.6M memory peak. May 15 00:05:14.573527 chronyd[1705]: Selected source PHC0 May 15 00:05:16.667379 (dockerd)[2306]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 15 00:05:16.667732 systemd[1]: Starting docker.service - Docker Application Container Engine... May 15 00:05:17.717121 dockerd[2306]: time="2025-05-15T00:05:17.717066591Z" level=info msg="Starting up" May 15 00:05:18.029120 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport630120057-merged.mount: Deactivated successfully. May 15 00:05:18.135347 dockerd[2306]: time="2025-05-15T00:05:18.135303916Z" level=info msg="Loading containers: start." May 15 00:05:18.404661 kernel: Initializing XFRM netlink socket May 15 00:05:18.542062 systemd-networkd[1343]: docker0: Link UP May 15 00:05:18.590965 dockerd[2306]: time="2025-05-15T00:05:18.590916999Z" level=info msg="Loading containers: done." 
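The kubelet keeps exiting with status 1 because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is written only by kubeadm init or kubeadm join, so the scheduled restarts will keep failing until the node is bootstrapped. A trivial probe for that condition (not kubelet code):

```python
#!/usr/bin/env python3
"""Re-check the condition behind the repeating kubelet failures in this log."""
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

if __name__ == "__main__":
    if KUBELET_CONFIG.is_file():
        print(f"{KUBELET_CONFIG} present ({KUBELET_CONFIG.stat().st_size} bytes); "
              "kubelet should start on its next scheduled restart")
    else:
        print(f"{KUBELET_CONFIG} missing; kubelet will keep exiting with status 1 "
              "until the node is bootstrapped")
```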
May 15 00:05:18.623918 dockerd[2306]: time="2025-05-15T00:05:18.623797601Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 00:05:18.623918 dockerd[2306]: time="2025-05-15T00:05:18.623901521Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 May 15 00:05:18.624123 dockerd[2306]: time="2025-05-15T00:05:18.624028921Z" level=info msg="Daemon has completed initialization" May 15 00:05:18.714359 dockerd[2306]: time="2025-05-15T00:05:18.714023739Z" level=info msg="API listen on /run/docker.sock" May 15 00:05:18.714928 systemd[1]: Started docker.service - Docker Application Container Engine. May 15 00:05:19.796678 containerd[1724]: time="2025-05-15T00:05:19.796616429Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 15 00:05:20.747696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2064417601.mount: Deactivated successfully. May 15 00:05:22.484043 containerd[1724]: time="2025-05-15T00:05:22.483987858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:05:22.489759 containerd[1724]: time="2025-05-15T00:05:22.489232292Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554608" May 15 00:05:22.496038 containerd[1724]: time="2025-05-15T00:05:22.495965604Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:05:22.503143 containerd[1724]: time="2025-05-15T00:05:22.503091836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:05:22.504021 containerd[1724]: time="2025-05-15T00:05:22.503852755Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 2.707175806s" May 15 00:05:22.504021 containerd[1724]: time="2025-05-15T00:05:22.503888275Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\"" May 15 00:05:22.504904 containerd[1724]: time="2025-05-15T00:05:22.504608954Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 15 00:05:22.853498 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 15 00:05:22.862007 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:05:22.946979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 15 00:05:22.952294 (kubelet)[2551]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 00:05:23.001170 kubelet[2551]: E0515 00:05:23.001110 2551 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 00:05:23.003779 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 00:05:23.004051 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 00:05:23.004621 systemd[1]: kubelet.service: Consumed 123ms CPU time, 96.8M memory peak. May 15 00:05:24.400665 containerd[1724]: time="2025-05-15T00:05:24.400354557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:05:24.403561 containerd[1724]: time="2025-05-15T00:05:24.403332432Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458978" May 15 00:05:24.407500 containerd[1724]: time="2025-05-15T00:05:24.407462746Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:05:24.416347 containerd[1724]: time="2025-05-15T00:05:24.416296493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:05:24.417820 containerd[1724]: time="2025-05-15T00:05:24.417286892Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 1.912526698s" May 15 00:05:24.417820 containerd[1724]: time="2025-05-15T00:05:24.417320612Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\"" May 15 00:05:24.417955 containerd[1724]: time="2025-05-15T00:05:24.417930491Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 15 00:05:29.527664 containerd[1724]: time="2025-05-15T00:05:29.527589042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:05:29.533649 containerd[1724]: time="2025-05-15T00:05:29.533561315Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125813" May 15 00:05:29.576613 containerd[1724]: time="2025-05-15T00:05:29.576523103Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:05:29.621495 containerd[1724]: time="2025-05-15T00:05:29.620546250Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:05:29.621495 containerd[1724]: time="2025-05-15T00:05:29.621330649Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 5.203364998s" May 15 00:05:29.621495 containerd[1724]: time="2025-05-15T00:05:29.621365649Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\"" May 15 00:05:29.622253 containerd[1724]: time="2025-05-15T00:05:29.622168168Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 15 00:05:32.722782 kernel: hv_balloon: Max. dynamic memory size: 4096 MB May 15 00:05:33.103611 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 15 00:05:33.114212 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:05:35.510501 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:05:35.514057 (kubelet)[2574]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 00:05:35.546142 kubelet[2574]: E0515 00:05:35.546037 2574 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 00:05:35.548102 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 00:05:35.548248 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 00:05:35.548742 systemd[1]: kubelet.service: Consumed 111ms CPU time, 94.1M memory peak. May 15 00:05:36.318649 update_engine[1711]: I20250515 00:05:36.318573 1711 update_attempter.cc:509] Updating boot flags... May 15 00:05:38.928680 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (2595) May 15 00:05:39.078575 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (2596) May 15 00:05:39.216735 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (2596) May 15 00:05:41.647927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3855765360.mount: Deactivated successfully. 
May 15 00:05:42.421126 containerd[1724]: time="2025-05-15T00:05:42.421067618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:05:42.427360 containerd[1724]: time="2025-05-15T00:05:42.427141053Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871917" May 15 00:05:42.484905 containerd[1724]: time="2025-05-15T00:05:42.484863721Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:05:42.529169 containerd[1724]: time="2025-05-15T00:05:42.529098281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:05:42.530284 containerd[1724]: time="2025-05-15T00:05:42.529957320Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 12.907581432s" May 15 00:05:42.530284 containerd[1724]: time="2025-05-15T00:05:42.529991000Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\"" May 15 00:05:42.530566 containerd[1724]: time="2025-05-15T00:05:42.530436639Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 15 00:05:43.938022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1760601213.mount: Deactivated successfully. 
May 15 00:05:45.299671 containerd[1724]: time="2025-05-15T00:05:45.299579613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:05:45.306825 containerd[1724]: time="2025-05-15T00:05:45.306731526Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" May 15 00:05:45.326781 containerd[1724]: time="2025-05-15T00:05:45.326710508Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:05:45.338665 containerd[1724]: time="2025-05-15T00:05:45.338406857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:05:45.339610 containerd[1724]: time="2025-05-15T00:05:45.339574696Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.809003497s" May 15 00:05:45.339657 containerd[1724]: time="2025-05-15T00:05:45.339612056Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 15 00:05:45.340694 containerd[1724]: time="2025-05-15T00:05:45.340660855Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 15 00:05:45.603510 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. May 15 00:05:45.610825 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:05:45.699931 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:05:45.712931 (kubelet)[2806]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 00:05:45.747506 kubelet[2806]: E0515 00:05:45.747439 2806 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 00:05:45.749978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 00:05:45.750237 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 00:05:45.750656 systemd[1]: kubelet.service: Consumed 118ms CPU time, 96.2M memory peak. May 15 00:05:46.528422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2803236560.mount: Deactivated successfully. 
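[editor's note] The "kubelet.service: Scheduled restart job, restart counter is at N" lines are systemd re-launching the failing unit on a timer. A sketch of the restart stanza that typically drives this loop; the values shown are the common kubeadm-style defaults and are an assumption here, not taken from this host's unit file:

    [Service]
    Restart=always
    RestartSec=10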
May 15 00:05:46.580394 containerd[1724]: time="2025-05-15T00:05:46.580339916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:05:46.584697 containerd[1724]: time="2025-05-15T00:05:46.584643191Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" May 15 00:05:46.592750 containerd[1724]: time="2025-05-15T00:05:46.592692982Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:05:46.599421 containerd[1724]: time="2025-05-15T00:05:46.599365295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:05:46.600401 containerd[1724]: time="2025-05-15T00:05:46.600043534Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.259349919s" May 15 00:05:46.600401 containerd[1724]: time="2025-05-15T00:05:46.600078254Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 15 00:05:46.600592 containerd[1724]: time="2025-05-15T00:05:46.600558573Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 15 00:05:47.353144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1846341769.mount: Deactivated successfully. May 15 00:05:49.687662 containerd[1724]: time="2025-05-15T00:05:49.687417439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:05:49.730302 containerd[1724]: time="2025-05-15T00:05:49.730237946Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465" May 15 00:05:49.776797 containerd[1724]: time="2025-05-15T00:05:49.776716368Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:05:49.785240 containerd[1724]: time="2025-05-15T00:05:49.785164118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:05:49.786609 containerd[1724]: time="2025-05-15T00:05:49.786437476Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.185843783s" May 15 00:05:49.786609 containerd[1724]: time="2025-05-15T00:05:49.786472996Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 15 00:05:54.273254 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
May 15 00:05:54.273409 systemd[1]: kubelet.service: Consumed 118ms CPU time, 96.2M memory peak. May 15 00:05:54.280099 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:05:54.307806 systemd[1]: Reload requested from client PID 2896 ('systemctl') (unit session-9.scope)... May 15 00:05:54.307824 systemd[1]: Reloading... May 15 00:05:54.425665 zram_generator::config[2946]: No configuration found. May 15 00:05:54.532297 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:05:54.662642 systemd[1]: Reloading finished in 354 ms. May 15 00:05:54.755368 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:05:54.759437 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:05:54.761015 systemd[1]: kubelet.service: Deactivated successfully. May 15 00:05:54.761242 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:05:54.761289 systemd[1]: kubelet.service: Consumed 84ms CPU time, 82.3M memory peak. May 15 00:05:54.768102 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:05:54.858177 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:05:54.863008 (kubelet)[3012]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 00:05:54.902368 kubelet[3012]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 00:05:54.902368 kubelet[3012]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 00:05:54.902368 kubelet[3012]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
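[editor's note] Both the "Referenced but unset environment variable ... KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS" message and the flag-deprecation warnings point at the kubeadm-style drop-in that launches the kubelet. A hypothetical sketch of such a drop-in (the stock kubeadm layout; paths and contents are assumptions, not recovered from this Flatcar node). The deprecated flags named above would normally migrate into the file passed via --config:

    [Service]
    Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
    Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
    # KUBELET_KUBEADM_ARGS is written by kubeadm into this env file; KUBELET_EXTRA_ARGS is operator-supplied.
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    EnvironmentFile=-/etc/default/kubelet
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS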
May 15 00:05:54.902801 kubelet[3012]: I0515 00:05:54.902424 3012 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 00:05:55.913655 kubelet[3012]: I0515 00:05:55.912544 3012 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 15 00:05:55.913655 kubelet[3012]: I0515 00:05:55.912573 3012 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 00:05:55.913655 kubelet[3012]: I0515 00:05:55.912818 3012 server.go:929] "Client rotation is on, will bootstrap in background" May 15 00:05:56.094875 kubelet[3012]: E0515 00:05:56.094838 3012 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.16:6443: connect: connection refused" logger="UnhandledError" May 15 00:05:56.096524 kubelet[3012]: I0515 00:05:56.096497 3012 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 00:05:56.102126 kubelet[3012]: E0515 00:05:56.102089 3012 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 15 00:05:56.102126 kubelet[3012]: I0515 00:05:56.102122 3012 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 15 00:05:56.105949 kubelet[3012]: I0515 00:05:56.105922 3012 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 00:05:56.106629 kubelet[3012]: I0515 00:05:56.106607 3012 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 15 00:05:56.106801 kubelet[3012]: I0515 00:05:56.106767 3012 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 00:05:56.106977 kubelet[3012]: I0515 00:05:56.106801 3012 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.1-n-c70fe96ece","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 00:05:56.107064 kubelet[3012]: I0515 00:05:56.106985 3012 topology_manager.go:138] "Creating topology manager with none policy" May 15 00:05:56.107064 kubelet[3012]: I0515 00:05:56.106994 3012 container_manager_linux.go:300] "Creating device plugin manager" May 15 00:05:56.107109 kubelet[3012]: I0515 00:05:56.107103 3012 state_mem.go:36] "Initialized new in-memory state store" May 15 00:05:56.109801 kubelet[3012]: I0515 00:05:56.109070 3012 kubelet.go:408] "Attempting to sync node with API server" May 15 00:05:56.109801 kubelet[3012]: I0515 00:05:56.109105 3012 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 00:05:56.109801 kubelet[3012]: I0515 00:05:56.109131 3012 kubelet.go:314] "Adding apiserver pod source" May 15 00:05:56.109801 kubelet[3012]: I0515 00:05:56.109141 3012 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 00:05:56.110502 kubelet[3012]: W0515 00:05:56.110454 3012 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-c70fe96ece&limit=500&resourceVersion=0": dial tcp 10.200.20.16:6443: connect: connection refused May 15 00:05:56.110563 kubelet[3012]: E0515 00:05:56.110525 3012 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.200.20.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-c70fe96ece&limit=500&resourceVersion=0\": dial tcp 10.200.20.16:6443: connect: connection refused" logger="UnhandledError" May 15 00:05:56.111195 kubelet[3012]: W0515 00:05:56.111153 3012 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.16:6443: connect: connection refused May 15 00:05:56.111306 kubelet[3012]: E0515 00:05:56.111288 3012 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.16:6443: connect: connection refused" logger="UnhandledError" May 15 00:05:56.111460 kubelet[3012]: I0515 00:05:56.111446 3012 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 15 00:05:56.113280 kubelet[3012]: I0515 00:05:56.113258 3012 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 00:05:56.113813 kubelet[3012]: W0515 00:05:56.113798 3012 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 15 00:05:56.115711 kubelet[3012]: I0515 00:05:56.115694 3012 server.go:1269] "Started kubelet" May 15 00:05:56.116572 kubelet[3012]: I0515 00:05:56.116535 3012 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 00:05:56.117417 kubelet[3012]: I0515 00:05:56.117389 3012 server.go:460] "Adding debug handlers to kubelet server" May 15 00:05:56.119669 kubelet[3012]: I0515 00:05:56.119453 3012 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 00:05:56.119882 kubelet[3012]: I0515 00:05:56.119867 3012 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 00:05:56.123438 kubelet[3012]: E0515 00:05:56.122456 3012 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.16:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.16:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.1.1-n-c70fe96ece.183f8a95b9d6e0ab default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.1-n-c70fe96ece,UID:ci-4230.1.1-n-c70fe96ece,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.1-n-c70fe96ece,},FirstTimestamp:2025-05-15 00:05:56.115669163 +0000 UTC m=+1.249287985,LastTimestamp:2025-05-15 00:05:56.115669163 +0000 UTC m=+1.249287985,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.1-n-c70fe96ece,}" May 15 00:05:56.125262 kubelet[3012]: I0515 00:05:56.124385 3012 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 00:05:56.125262 kubelet[3012]: I0515 00:05:56.124537 3012 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 00:05:56.128132 kubelet[3012]: I0515 00:05:56.128108 3012 
volume_manager.go:289] "Starting Kubelet Volume Manager" May 15 00:05:56.128541 kubelet[3012]: E0515 00:05:56.128519 3012 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-c70fe96ece\" not found" May 15 00:05:56.128916 kubelet[3012]: I0515 00:05:56.128901 3012 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 15 00:05:56.129093 kubelet[3012]: I0515 00:05:56.129082 3012 reconciler.go:26] "Reconciler: start to sync state" May 15 00:05:56.130409 kubelet[3012]: I0515 00:05:56.130391 3012 factory.go:221] Registration of the systemd container factory successfully May 15 00:05:56.130657 kubelet[3012]: I0515 00:05:56.130622 3012 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 00:05:56.131663 kubelet[3012]: W0515 00:05:56.131206 3012 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.16:6443: connect: connection refused May 15 00:05:56.131663 kubelet[3012]: E0515 00:05:56.131255 3012 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.16:6443: connect: connection refused" logger="UnhandledError" May 15 00:05:56.133793 kubelet[3012]: E0515 00:05:56.133759 3012 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-c70fe96ece?timeout=10s\": dial tcp 10.200.20.16:6443: connect: connection refused" interval="200ms" May 15 00:05:56.134743 kubelet[3012]: I0515 00:05:56.134724 3012 factory.go:221] Registration of the containerd container factory successfully May 15 00:05:56.147907 kubelet[3012]: E0515 00:05:56.147872 3012 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 00:05:56.153309 kubelet[3012]: I0515 00:05:56.153276 3012 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 00:05:56.153309 kubelet[3012]: I0515 00:05:56.153298 3012 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 00:05:56.153309 kubelet[3012]: I0515 00:05:56.153317 3012 state_mem.go:36] "Initialized new in-memory state store" May 15 00:05:56.230363 kubelet[3012]: E0515 00:05:56.229471 3012 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-c70fe96ece\" not found" May 15 00:05:56.329878 kubelet[3012]: E0515 00:05:56.329838 3012 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-c70fe96ece\" not found" May 15 00:05:56.334451 kubelet[3012]: E0515 00:05:56.334417 3012 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-c70fe96ece?timeout=10s\": dial tcp 10.200.20.16:6443: connect: connection refused" interval="400ms" May 15 00:05:56.430408 kubelet[3012]: E0515 00:05:56.430365 3012 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-c70fe96ece\" not found" May 15 00:05:56.530863 kubelet[3012]: E0515 00:05:56.530714 3012 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-c70fe96ece\" not found" May 15 00:05:56.631382 kubelet[3012]: E0515 00:05:56.631348 3012 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-c70fe96ece\" not found" May 15 00:05:56.728845 kubelet[3012]: I0515 00:05:56.728795 3012 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 00:05:56.744177 kubelet[3012]: I0515 00:05:56.730854 3012 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 00:05:56.744177 kubelet[3012]: I0515 00:05:56.730895 3012 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 00:05:56.744177 kubelet[3012]: I0515 00:05:56.730919 3012 kubelet.go:2321] "Starting kubelet main sync loop" May 15 00:05:56.744177 kubelet[3012]: E0515 00:05:56.730977 3012 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 00:05:56.744177 kubelet[3012]: E0515 00:05:56.731863 3012 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-c70fe96ece\" not found" May 15 00:05:56.744177 kubelet[3012]: W0515 00:05:56.732134 3012 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.16:6443: connect: connection refused May 15 00:05:56.744177 kubelet[3012]: E0515 00:05:56.732166 3012 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.16:6443: connect: connection refused" logger="UnhandledError" May 15 00:05:56.744177 kubelet[3012]: E0515 00:05:56.734967 3012 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-c70fe96ece?timeout=10s\": dial tcp 10.200.20.16:6443: connect: connection refused" interval="800ms" May 15 00:05:56.745055 kubelet[3012]: I0515 00:05:56.745027 3012 policy_none.go:49] "None policy: Start" May 15 00:05:56.746018 kubelet[3012]: I0515 00:05:56.745936 3012 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 00:05:56.746018 kubelet[3012]: I0515 00:05:56.745962 3012 state_mem.go:35] "Initializing new in-memory state store" May 15 00:05:56.774914 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 15 00:05:56.784471 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 15 00:05:56.790010 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 15 00:05:56.798864 kubelet[3012]: I0515 00:05:56.798661 3012 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 00:05:56.798983 kubelet[3012]: I0515 00:05:56.798879 3012 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 00:05:56.798983 kubelet[3012]: I0515 00:05:56.798891 3012 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 00:05:56.799157 kubelet[3012]: I0515 00:05:56.799134 3012 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 00:05:56.801341 kubelet[3012]: E0515 00:05:56.801319 3012 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.1.1-n-c70fe96ece\" not found" May 15 00:05:56.842903 systemd[1]: Created slice kubepods-burstable-pod8680bec50fd195e791fa87863cf6817b.slice - libcontainer container kubepods-burstable-pod8680bec50fd195e791fa87863cf6817b.slice. 
May 15 00:05:56.865741 systemd[1]: Created slice kubepods-burstable-podf0f7c16397792f0734589f0ea481911c.slice - libcontainer container kubepods-burstable-podf0f7c16397792f0734589f0ea481911c.slice. May 15 00:05:56.870956 systemd[1]: Created slice kubepods-burstable-pod89db16eda88809eb29a95bd53a5c84c0.slice - libcontainer container kubepods-burstable-pod89db16eda88809eb29a95bd53a5c84c0.slice. May 15 00:05:56.901378 kubelet[3012]: I0515 00:05:56.901031 3012 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.1-n-c70fe96ece" May 15 00:05:56.901378 kubelet[3012]: E0515 00:05:56.901351 3012 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.16:6443/api/v1/nodes\": dial tcp 10.200.20.16:6443: connect: connection refused" node="ci-4230.1.1-n-c70fe96ece" May 15 00:05:56.931935 kubelet[3012]: I0515 00:05:56.931905 3012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8680bec50fd195e791fa87863cf6817b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.1-n-c70fe96ece\" (UID: \"8680bec50fd195e791fa87863cf6817b\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-c70fe96ece" May 15 00:05:56.932495 kubelet[3012]: I0515 00:05:56.932295 3012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f0f7c16397792f0734589f0ea481911c-ca-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-c70fe96ece\" (UID: \"f0f7c16397792f0734589f0ea481911c\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-c70fe96ece" May 15 00:05:56.932495 kubelet[3012]: I0515 00:05:56.932343 3012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f0f7c16397792f0734589f0ea481911c-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.1-n-c70fe96ece\" (UID: \"f0f7c16397792f0734589f0ea481911c\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-c70fe96ece" May 15 00:05:56.932495 kubelet[3012]: I0515 00:05:56.932366 3012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f0f7c16397792f0734589f0ea481911c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.1-n-c70fe96ece\" (UID: \"f0f7c16397792f0734589f0ea481911c\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-c70fe96ece" May 15 00:05:56.932495 kubelet[3012]: I0515 00:05:56.932391 3012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8680bec50fd195e791fa87863cf6817b-ca-certs\") pod \"kube-apiserver-ci-4230.1.1-n-c70fe96ece\" (UID: \"8680bec50fd195e791fa87863cf6817b\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-c70fe96ece" May 15 00:05:56.932495 kubelet[3012]: I0515 00:05:56.932410 3012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8680bec50fd195e791fa87863cf6817b-k8s-certs\") pod \"kube-apiserver-ci-4230.1.1-n-c70fe96ece\" (UID: \"8680bec50fd195e791fa87863cf6817b\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-c70fe96ece" May 15 00:05:56.932652 kubelet[3012]: I0515 00:05:56.932433 3012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89db16eda88809eb29a95bd53a5c84c0-kubeconfig\") pod \"kube-scheduler-ci-4230.1.1-n-c70fe96ece\" (UID: \"89db16eda88809eb29a95bd53a5c84c0\") " pod="kube-system/kube-scheduler-ci-4230.1.1-n-c70fe96ece" May 15 00:05:56.932652 kubelet[3012]: I0515 00:05:56.932450 3012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f0f7c16397792f0734589f0ea481911c-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-c70fe96ece\" (UID: \"f0f7c16397792f0734589f0ea481911c\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-c70fe96ece" May 15 00:05:56.932652 kubelet[3012]: I0515 00:05:56.932471 3012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f0f7c16397792f0734589f0ea481911c-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.1-n-c70fe96ece\" (UID: \"f0f7c16397792f0734589f0ea481911c\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-c70fe96ece" May 15 00:05:57.047950 kubelet[3012]: E0515 00:05:57.047786 3012 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.16:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.16:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.1.1-n-c70fe96ece.183f8a95b9d6e0ab default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.1-n-c70fe96ece,UID:ci-4230.1.1-n-c70fe96ece,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.1-n-c70fe96ece,},FirstTimestamp:2025-05-15 00:05:56.115669163 +0000 UTC m=+1.249287985,LastTimestamp:2025-05-15 00:05:56.115669163 +0000 UTC m=+1.249287985,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.1-n-c70fe96ece,}" May 15 00:05:57.103843 kubelet[3012]: I0515 00:05:57.103550 3012 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.1-n-c70fe96ece" May 15 00:05:57.104088 kubelet[3012]: E0515 00:05:57.104054 3012 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.16:6443/api/v1/nodes\": dial tcp 10.200.20.16:6443: connect: connection refused" node="ci-4230.1.1-n-c70fe96ece" May 15 00:05:57.163983 containerd[1724]: time="2025-05-15T00:05:57.163938454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.1-n-c70fe96ece,Uid:8680bec50fd195e791fa87863cf6817b,Namespace:kube-system,Attempt:0,}" May 15 00:05:57.171056 containerd[1724]: time="2025-05-15T00:05:57.170951846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.1-n-c70fe96ece,Uid:f0f7c16397792f0734589f0ea481911c,Namespace:kube-system,Attempt:0,}" May 15 00:05:57.173939 containerd[1724]: time="2025-05-15T00:05:57.173812443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.1-n-c70fe96ece,Uid:89db16eda88809eb29a95bd53a5c84c0,Namespace:kube-system,Attempt:0,}" May 15 00:05:57.208771 kubelet[3012]: W0515 00:05:57.208668 3012 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.200.20.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.16:6443: connect: connection refused May 15 00:05:57.208771 kubelet[3012]: E0515 00:05:57.208737 3012 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.16:6443: connect: connection refused" logger="UnhandledError" May 15 00:05:57.432827 kubelet[3012]: W0515 00:05:57.432713 3012 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.16:6443: connect: connection refused May 15 00:05:57.432827 kubelet[3012]: E0515 00:05:57.432778 3012 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.16:6443: connect: connection refused" logger="UnhandledError" May 15 00:05:57.506143 kubelet[3012]: I0515 00:05:57.505772 3012 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.1-n-c70fe96ece" May 15 00:05:57.506286 kubelet[3012]: E0515 00:05:57.506225 3012 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.16:6443/api/v1/nodes\": dial tcp 10.200.20.16:6443: connect: connection refused" node="ci-4230.1.1-n-c70fe96ece" May 15 00:05:57.535852 kubelet[3012]: E0515 00:05:57.535804 3012 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-c70fe96ece?timeout=10s\": dial tcp 10.200.20.16:6443: connect: connection refused" interval="1.6s" May 15 00:05:57.643173 kubelet[3012]: W0515 00:05:57.643077 3012 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-c70fe96ece&limit=500&resourceVersion=0": dial tcp 10.200.20.16:6443: connect: connection refused May 15 00:05:57.643173 kubelet[3012]: E0515 00:05:57.643142 3012 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-c70fe96ece&limit=500&resourceVersion=0\": dial tcp 10.200.20.16:6443: connect: connection refused" logger="UnhandledError" May 15 00:05:57.813437 kubelet[3012]: W0515 00:05:57.813279 3012 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.16:6443: connect: connection refused May 15 00:05:57.813437 kubelet[3012]: E0515 00:05:57.813334 3012 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.16:6443: connect: connection refused" logger="UnhandledError" May 15 00:05:57.856377 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2683106894.mount: Deactivated successfully. May 15 00:05:57.914740 containerd[1724]: time="2025-05-15T00:05:57.914686603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:05:57.949226 containerd[1724]: time="2025-05-15T00:05:57.949162964Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" May 15 00:05:57.966487 containerd[1724]: time="2025-05-15T00:05:57.966446384Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:05:57.976670 containerd[1724]: time="2025-05-15T00:05:57.975995054Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:05:57.986338 containerd[1724]: time="2025-05-15T00:05:57.986178762Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 15 00:05:57.993515 containerd[1724]: time="2025-05-15T00:05:57.993406554Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:05:58.005720 containerd[1724]: time="2025-05-15T00:05:58.005674260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:05:58.006594 containerd[1724]: time="2025-05-15T00:05:58.006560939Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 842.546045ms" May 15 00:05:58.013969 containerd[1724]: time="2025-05-15T00:05:58.013912091Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 15 00:05:58.029737 containerd[1724]: time="2025-05-15T00:05:58.029536633Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 858.512347ms" May 15 00:05:58.075585 containerd[1724]: time="2025-05-15T00:05:58.075408981Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 901.539298ms" May 15 00:05:58.284520 kubelet[3012]: E0515 00:05:58.284436 3012 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create 
certificate signing request: Post \"https://10.200.20.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.16:6443: connect: connection refused" logger="UnhandledError" May 15 00:05:58.308250 kubelet[3012]: I0515 00:05:58.308182 3012 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.1-n-c70fe96ece" May 15 00:05:58.308562 kubelet[3012]: E0515 00:05:58.308518 3012 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.16:6443/api/v1/nodes\": dial tcp 10.200.20.16:6443: connect: connection refused" node="ci-4230.1.1-n-c70fe96ece" May 15 00:05:58.889877 containerd[1724]: time="2025-05-15T00:05:58.889695098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:05:58.889877 containerd[1724]: time="2025-05-15T00:05:58.889774058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:05:58.889877 containerd[1724]: time="2025-05-15T00:05:58.889805497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:05:58.890983 containerd[1724]: time="2025-05-15T00:05:58.890830616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:05:58.890983 containerd[1724]: time="2025-05-15T00:05:58.890916816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:05:58.890983 containerd[1724]: time="2025-05-15T00:05:58.890952656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:05:58.892030 containerd[1724]: time="2025-05-15T00:05:58.891905335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:05:58.894237 containerd[1724]: time="2025-05-15T00:05:58.893461973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:05:58.899650 containerd[1724]: time="2025-05-15T00:05:58.897292249Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:05:58.899650 containerd[1724]: time="2025-05-15T00:05:58.897374049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:05:58.899650 containerd[1724]: time="2025-05-15T00:05:58.897402809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:05:58.901702 containerd[1724]: time="2025-05-15T00:05:58.900300646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:05:58.933860 systemd[1]: Started cri-containerd-def770cf5f91937fa8f461b3adaf0f9ba223815371576a034f6614667b09b454.scope - libcontainer container def770cf5f91937fa8f461b3adaf0f9ba223815371576a034f6614667b09b454. 
May 15 00:05:58.939358 systemd[1]: Started cri-containerd-2a3384256ab7a102417921d6815679b8f15112f1759254f5ca1eda8644467e78.scope - libcontainer container 2a3384256ab7a102417921d6815679b8f15112f1759254f5ca1eda8644467e78. May 15 00:05:58.941044 systemd[1]: Started cri-containerd-99e253af0a096b71fd04637226f6d6906fc89b7ba21ea52bfade6a204c44b276.scope - libcontainer container 99e253af0a096b71fd04637226f6d6906fc89b7ba21ea52bfade6a204c44b276. May 15 00:05:58.986275 containerd[1724]: time="2025-05-15T00:05:58.986234588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.1-n-c70fe96ece,Uid:8680bec50fd195e791fa87863cf6817b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a3384256ab7a102417921d6815679b8f15112f1759254f5ca1eda8644467e78\"" May 15 00:05:58.997150 containerd[1724]: time="2025-05-15T00:05:58.995284298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.1-n-c70fe96ece,Uid:f0f7c16397792f0734589f0ea481911c,Namespace:kube-system,Attempt:0,} returns sandbox id \"99e253af0a096b71fd04637226f6d6906fc89b7ba21ea52bfade6a204c44b276\"" May 15 00:05:58.998110 containerd[1724]: time="2025-05-15T00:05:58.997772775Z" level=info msg="CreateContainer within sandbox \"2a3384256ab7a102417921d6815679b8f15112f1759254f5ca1eda8644467e78\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 00:05:59.003185 containerd[1724]: time="2025-05-15T00:05:59.003028049Z" level=info msg="CreateContainer within sandbox \"99e253af0a096b71fd04637226f6d6906fc89b7ba21ea52bfade6a204c44b276\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 00:05:59.007948 containerd[1724]: time="2025-05-15T00:05:59.007906364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.1-n-c70fe96ece,Uid:89db16eda88809eb29a95bd53a5c84c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"def770cf5f91937fa8f461b3adaf0f9ba223815371576a034f6614667b09b454\"" May 15 00:05:59.010818 containerd[1724]: time="2025-05-15T00:05:59.010779560Z" level=info msg="CreateContainer within sandbox \"def770cf5f91937fa8f461b3adaf0f9ba223815371576a034f6614667b09b454\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 00:05:59.136420 kubelet[3012]: E0515 00:05:59.136372 3012 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-c70fe96ece?timeout=10s\": dial tcp 10.200.20.16:6443: connect: connection refused" interval="3.2s" May 15 00:05:59.142513 containerd[1724]: time="2025-05-15T00:05:59.142380971Z" level=info msg="CreateContainer within sandbox \"99e253af0a096b71fd04637226f6d6906fc89b7ba21ea52bfade6a204c44b276\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c48211f30e8f304a27d964e2b4fce3693ea36790a7d799bd7e52c4022ef15e0b\"" May 15 00:05:59.143551 containerd[1724]: time="2025-05-15T00:05:59.143522810Z" level=info msg="StartContainer for \"c48211f30e8f304a27d964e2b4fce3693ea36790a7d799bd7e52c4022ef15e0b\"" May 15 00:05:59.162715 containerd[1724]: time="2025-05-15T00:05:59.162563548Z" level=info msg="CreateContainer within sandbox \"2a3384256ab7a102417921d6815679b8f15112f1759254f5ca1eda8644467e78\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3bcb7aadf95fa6b7f752f20fcd873fa9111d4df71c45102f16867278f7d2194b\"" May 15 00:05:59.163652 containerd[1724]: time="2025-05-15T00:05:59.163235387Z" 
level=info msg="StartContainer for \"3bcb7aadf95fa6b7f752f20fcd873fa9111d4df71c45102f16867278f7d2194b\"" May 15 00:05:59.170147 containerd[1724]: time="2025-05-15T00:05:59.170014500Z" level=info msg="CreateContainer within sandbox \"def770cf5f91937fa8f461b3adaf0f9ba223815371576a034f6614667b09b454\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"df200f5138a082d53a867114389013ae99d12f91109cea64398e6150182c353d\"" May 15 00:05:59.171017 systemd[1]: Started cri-containerd-c48211f30e8f304a27d964e2b4fce3693ea36790a7d799bd7e52c4022ef15e0b.scope - libcontainer container c48211f30e8f304a27d964e2b4fce3693ea36790a7d799bd7e52c4022ef15e0b. May 15 00:05:59.176642 containerd[1724]: time="2025-05-15T00:05:59.175998933Z" level=info msg="StartContainer for \"df200f5138a082d53a867114389013ae99d12f91109cea64398e6150182c353d\"" May 15 00:05:59.211002 systemd[1]: Started cri-containerd-3bcb7aadf95fa6b7f752f20fcd873fa9111d4df71c45102f16867278f7d2194b.scope - libcontainer container 3bcb7aadf95fa6b7f752f20fcd873fa9111d4df71c45102f16867278f7d2194b. May 15 00:05:59.236499 containerd[1724]: time="2025-05-15T00:05:59.236454944Z" level=info msg="StartContainer for \"c48211f30e8f304a27d964e2b4fce3693ea36790a7d799bd7e52c4022ef15e0b\" returns successfully" May 15 00:05:59.259055 systemd[1]: Started cri-containerd-df200f5138a082d53a867114389013ae99d12f91109cea64398e6150182c353d.scope - libcontainer container df200f5138a082d53a867114389013ae99d12f91109cea64398e6150182c353d. May 15 00:05:59.270031 containerd[1724]: time="2025-05-15T00:05:59.269521387Z" level=info msg="StartContainer for \"3bcb7aadf95fa6b7f752f20fcd873fa9111d4df71c45102f16867278f7d2194b\" returns successfully" May 15 00:05:59.382825 containerd[1724]: time="2025-05-15T00:05:59.382352019Z" level=info msg="StartContainer for \"df200f5138a082d53a867114389013ae99d12f91109cea64398e6150182c353d\" returns successfully" May 15 00:05:59.911694 kubelet[3012]: I0515 00:05:59.911098 3012 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.1-n-c70fe96ece" May 15 00:06:01.915680 kubelet[3012]: I0515 00:06:01.915604 3012 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.1.1-n-c70fe96ece" May 15 00:06:02.114671 kubelet[3012]: I0515 00:06:02.114629 3012 apiserver.go:52] "Watching apiserver" May 15 00:06:02.129562 kubelet[3012]: I0515 00:06:02.129516 3012 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 15 00:06:03.765976 kubelet[3012]: W0515 00:06:03.765815 3012 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 00:06:04.308073 systemd[1]: Reload requested from client PID 3291 ('systemctl') (unit session-9.scope)... May 15 00:06:04.308095 systemd[1]: Reloading... May 15 00:06:04.416693 zram_generator::config[3338]: No configuration found. May 15 00:06:04.530009 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:06:04.675147 systemd[1]: Reloading finished in 366 ms. May 15 00:06:04.707809 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
May 15 00:06:04.708546 kubelet[3012]: I0515 00:06:04.708367 3012 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 00:06:04.724782 systemd[1]: kubelet.service: Deactivated successfully. May 15 00:06:04.725113 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:06:04.725181 systemd[1]: kubelet.service: Consumed 1.272s CPU time, 118.5M memory peak. May 15 00:06:04.729092 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:06:04.837486 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:06:04.848990 (kubelet)[3402]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 00:06:04.887879 kubelet[3402]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 00:06:04.888265 kubelet[3402]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 00:06:04.888310 kubelet[3402]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 00:06:04.888451 kubelet[3402]: I0515 00:06:04.888413 3402 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 00:06:04.896120 kubelet[3402]: I0515 00:06:04.896081 3402 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 15 00:06:04.896120 kubelet[3402]: I0515 00:06:04.896111 3402 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 00:06:04.896546 kubelet[3402]: I0515 00:06:04.896518 3402 server.go:929] "Client rotation is on, will bootstrap in background" May 15 00:06:04.898742 kubelet[3402]: I0515 00:06:04.898716 3402 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 00:06:04.901003 kubelet[3402]: I0515 00:06:04.900978 3402 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 00:06:04.905697 kubelet[3402]: E0515 00:06:04.905649 3402 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 15 00:06:04.905881 kubelet[3402]: I0515 00:06:04.905866 3402 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 15 00:06:04.908831 kubelet[3402]: I0515 00:06:04.908807 3402 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 00:06:04.909067 kubelet[3402]: I0515 00:06:04.909056 3402 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 15 00:06:04.909272 kubelet[3402]: I0515 00:06:04.909241 3402 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 00:06:04.909508 kubelet[3402]: I0515 00:06:04.909336 3402 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.1-n-c70fe96ece","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 00:06:04.909679 kubelet[3402]: I0515 00:06:04.909613 3402 topology_manager.go:138] "Creating topology manager with none policy" May 15 00:06:04.909749 kubelet[3402]: I0515 00:06:04.909740 3402 container_manager_linux.go:300] "Creating device plugin manager" May 15 00:06:04.909829 kubelet[3402]: I0515 00:06:04.909821 3402 state_mem.go:36] "Initialized new in-memory state store" May 15 00:06:04.909990 kubelet[3402]: I0515 00:06:04.909979 3402 kubelet.go:408] "Attempting to sync node with API server" May 15 00:06:04.910058 kubelet[3402]: I0515 00:06:04.910048 3402 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 00:06:04.910138 kubelet[3402]: I0515 00:06:04.910128 3402 kubelet.go:314] "Adding apiserver pod source" May 15 00:06:04.910186 kubelet[3402]: I0515 00:06:04.910178 3402 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 00:06:04.912802 kubelet[3402]: I0515 00:06:04.912775 3402 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 15 00:06:04.913288 kubelet[3402]: I0515 00:06:04.913255 3402 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 00:06:04.913697 kubelet[3402]: I0515 00:06:04.913672 3402 server.go:1269] "Started kubelet" May 15 00:06:04.918500 kubelet[3402]: I0515 00:06:04.918452 3402 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 00:06:04.934720 
kubelet[3402]: I0515 00:06:04.933901 3402 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 00:06:04.936357 kubelet[3402]: I0515 00:06:04.936106 3402 server.go:460] "Adding debug handlers to kubelet server" May 15 00:06:04.938224 kubelet[3402]: I0515 00:06:04.938185 3402 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 00:06:04.938899 kubelet[3402]: I0515 00:06:04.938360 3402 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 00:06:04.941472 kubelet[3402]: I0515 00:06:04.941346 3402 volume_manager.go:289] "Starting Kubelet Volume Manager" May 15 00:06:04.941799 kubelet[3402]: E0515 00:06:04.941662 3402 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-c70fe96ece\" not found" May 15 00:06:04.944588 kubelet[3402]: I0515 00:06:04.941994 3402 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 15 00:06:04.944588 kubelet[3402]: I0515 00:06:04.942140 3402 reconciler.go:26] "Reconciler: start to sync state" May 15 00:06:04.944588 kubelet[3402]: I0515 00:06:04.942440 3402 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 00:06:04.956146 kubelet[3402]: I0515 00:06:04.956092 3402 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 00:06:04.958193 kubelet[3402]: I0515 00:06:04.957326 3402 factory.go:221] Registration of the systemd container factory successfully May 15 00:06:04.958193 kubelet[3402]: I0515 00:06:04.958086 3402 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 00:06:04.958486 kubelet[3402]: E0515 00:06:04.958457 3402 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 00:06:04.958728 kubelet[3402]: I0515 00:06:04.958708 3402 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 00:06:04.958986 kubelet[3402]: I0515 00:06:04.958973 3402 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 00:06:04.959059 kubelet[3402]: I0515 00:06:04.959050 3402 kubelet.go:2321] "Starting kubelet main sync loop" May 15 00:06:04.959407 kubelet[3402]: E0515 00:06:04.959384 3402 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 00:06:04.971919 kubelet[3402]: I0515 00:06:04.971878 3402 factory.go:221] Registration of the containerd container factory successfully May 15 00:06:05.017590 kubelet[3402]: I0515 00:06:05.017560 3402 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 00:06:05.017590 kubelet[3402]: I0515 00:06:05.017579 3402 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 00:06:05.017590 kubelet[3402]: I0515 00:06:05.017601 3402 state_mem.go:36] "Initialized new in-memory state store" May 15 00:06:05.017818 kubelet[3402]: I0515 00:06:05.017803 3402 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 00:06:05.017843 kubelet[3402]: I0515 00:06:05.017815 3402 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 00:06:05.017843 kubelet[3402]: I0515 00:06:05.017833 3402 policy_none.go:49] "None policy: Start" May 15 00:06:05.018536 kubelet[3402]: I0515 00:06:05.018517 3402 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 00:06:05.018576 kubelet[3402]: I0515 00:06:05.018543 3402 state_mem.go:35] "Initializing new in-memory state store" May 15 00:06:05.018861 kubelet[3402]: I0515 00:06:05.018844 3402 state_mem.go:75] "Updated machine memory state" May 15 00:06:05.024168 kubelet[3402]: I0515 00:06:05.023527 3402 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 00:06:05.024168 kubelet[3402]: I0515 00:06:05.023720 3402 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 00:06:05.024168 kubelet[3402]: I0515 00:06:05.023732 3402 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 00:06:05.024168 kubelet[3402]: I0515 00:06:05.023940 3402 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 00:06:05.067933 kubelet[3402]: W0515 00:06:05.067816 3402 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 00:06:05.073096 kubelet[3402]: W0515 00:06:05.073032 3402 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 00:06:05.074104 kubelet[3402]: W0515 00:06:05.074070 3402 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 00:06:05.074207 kubelet[3402]: E0515 00:06:05.074141 3402 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4230.1.1-n-c70fe96ece\" already exists" pod="kube-system/kube-controller-manager-ci-4230.1.1-n-c70fe96ece" May 15 00:06:05.126730 kubelet[3402]: I0515 00:06:05.126700 3402 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.1-n-c70fe96ece" May 15 00:06:05.140533 kubelet[3402]: I0515 00:06:05.140493 3402 kubelet_node_status.go:111] "Node was previously registered" 
node="ci-4230.1.1-n-c70fe96ece" May 15 00:06:05.140735 kubelet[3402]: I0515 00:06:05.140592 3402 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.1.1-n-c70fe96ece" May 15 00:06:05.142636 kubelet[3402]: I0515 00:06:05.142333 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8680bec50fd195e791fa87863cf6817b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.1-n-c70fe96ece\" (UID: \"8680bec50fd195e791fa87863cf6817b\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-c70fe96ece" May 15 00:06:05.142636 kubelet[3402]: I0515 00:06:05.142606 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f0f7c16397792f0734589f0ea481911c-ca-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-c70fe96ece\" (UID: \"f0f7c16397792f0734589f0ea481911c\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-c70fe96ece" May 15 00:06:05.144039 kubelet[3402]: I0515 00:06:05.143919 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f0f7c16397792f0734589f0ea481911c-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-c70fe96ece\" (UID: \"f0f7c16397792f0734589f0ea481911c\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-c70fe96ece" May 15 00:06:05.144250 kubelet[3402]: I0515 00:06:05.143948 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f0f7c16397792f0734589f0ea481911c-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.1-n-c70fe96ece\" (UID: \"f0f7c16397792f0734589f0ea481911c\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-c70fe96ece" May 15 00:06:05.144250 kubelet[3402]: I0515 00:06:05.144119 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f0f7c16397792f0734589f0ea481911c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.1-n-c70fe96ece\" (UID: \"f0f7c16397792f0734589f0ea481911c\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-c70fe96ece" May 15 00:06:05.144250 kubelet[3402]: I0515 00:06:05.144139 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89db16eda88809eb29a95bd53a5c84c0-kubeconfig\") pod \"kube-scheduler-ci-4230.1.1-n-c70fe96ece\" (UID: \"89db16eda88809eb29a95bd53a5c84c0\") " pod="kube-system/kube-scheduler-ci-4230.1.1-n-c70fe96ece" May 15 00:06:05.144250 kubelet[3402]: I0515 00:06:05.144155 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8680bec50fd195e791fa87863cf6817b-ca-certs\") pod \"kube-apiserver-ci-4230.1.1-n-c70fe96ece\" (UID: \"8680bec50fd195e791fa87863cf6817b\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-c70fe96ece" May 15 00:06:05.144250 kubelet[3402]: I0515 00:06:05.144188 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8680bec50fd195e791fa87863cf6817b-k8s-certs\") pod \"kube-apiserver-ci-4230.1.1-n-c70fe96ece\" (UID: \"8680bec50fd195e791fa87863cf6817b\") " 
pod="kube-system/kube-apiserver-ci-4230.1.1-n-c70fe96ece" May 15 00:06:05.144385 kubelet[3402]: I0515 00:06:05.144203 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f0f7c16397792f0734589f0ea481911c-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.1-n-c70fe96ece\" (UID: \"f0f7c16397792f0734589f0ea481911c\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-c70fe96ece" May 15 00:06:05.335500 sudo[3432]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 15 00:06:05.335800 sudo[3432]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 15 00:06:05.792691 sudo[3432]: pam_unix(sudo:session): session closed for user root May 15 00:06:05.919728 kubelet[3402]: I0515 00:06:05.919415 3402 apiserver.go:52] "Watching apiserver" May 15 00:06:05.943192 kubelet[3402]: I0515 00:06:05.943117 3402 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 15 00:06:06.008820 kubelet[3402]: W0515 00:06:06.008783 3402 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 00:06:06.008964 kubelet[3402]: E0515 00:06:06.008860 3402 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4230.1.1-n-c70fe96ece\" already exists" pod="kube-system/kube-controller-manager-ci-4230.1.1-n-c70fe96ece" May 15 00:06:06.010009 kubelet[3402]: W0515 00:06:06.009979 3402 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 00:06:06.010718 kubelet[3402]: E0515 00:06:06.010688 3402 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.1.1-n-c70fe96ece\" already exists" pod="kube-system/kube-apiserver-ci-4230.1.1-n-c70fe96ece" May 15 00:06:06.037102 kubelet[3402]: I0515 00:06:06.037038 3402 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.1.1-n-c70fe96ece" podStartSLOduration=3.037018812 podStartE2EDuration="3.037018812s" podCreationTimestamp="2025-05-15 00:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:06:06.020879587 +0000 UTC m=+1.168426930" watchObservedRunningTime="2025-05-15 00:06:06.037018812 +0000 UTC m=+1.184566195" May 15 00:06:06.061331 kubelet[3402]: I0515 00:06:06.059163 3402 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.1.1-n-c70fe96ece" podStartSLOduration=1.059141112 podStartE2EDuration="1.059141112s" podCreationTimestamp="2025-05-15 00:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:06:06.038003812 +0000 UTC m=+1.185551195" watchObservedRunningTime="2025-05-15 00:06:06.059141112 +0000 UTC m=+1.206688495" May 15 00:06:06.061331 kubelet[3402]: I0515 00:06:06.060203 3402 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.1.1-n-c70fe96ece" podStartSLOduration=1.060155911 podStartE2EDuration="1.060155911s" podCreationTimestamp="2025-05-15 00:06:05 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:06:06.059879512 +0000 UTC m=+1.207426895" watchObservedRunningTime="2025-05-15 00:06:06.060155911 +0000 UTC m=+1.207703294" May 15 00:06:08.334546 sudo[2273]: pam_unix(sudo:session): session closed for user root May 15 00:06:08.417246 sshd[2272]: Connection closed by 10.200.16.10 port 36008 May 15 00:06:08.416689 sshd-session[2270]: pam_unix(sshd:session): session closed for user core May 15 00:06:08.420213 systemd[1]: sshd@6-10.200.20.16:22-10.200.16.10:36008.service: Deactivated successfully. May 15 00:06:08.422445 systemd[1]: session-9.scope: Deactivated successfully. May 15 00:06:08.422759 systemd[1]: session-9.scope: Consumed 6.867s CPU time, 257.9M memory peak. May 15 00:06:08.424066 systemd-logind[1707]: Session 9 logged out. Waiting for processes to exit. May 15 00:06:08.425225 systemd-logind[1707]: Removed session 9. May 15 00:06:09.289249 kubelet[3402]: I0515 00:06:09.289173 3402 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 00:06:09.290029 containerd[1724]: time="2025-05-15T00:06:09.289990952Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 15 00:06:09.290688 kubelet[3402]: I0515 00:06:09.290446 3402 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 00:06:09.973182 kubelet[3402]: I0515 00:06:09.972776 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf45fc6a-9056-4812-a0e3-4994193ba0fb-xtables-lock\") pod \"kube-proxy-bq7hk\" (UID: \"cf45fc6a-9056-4812-a0e3-4994193ba0fb\") " pod="kube-system/kube-proxy-bq7hk" May 15 00:06:09.973182 kubelet[3402]: I0515 00:06:09.972825 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br5sc\" (UniqueName: \"kubernetes.io/projected/cf45fc6a-9056-4812-a0e3-4994193ba0fb-kube-api-access-br5sc\") pod \"kube-proxy-bq7hk\" (UID: \"cf45fc6a-9056-4812-a0e3-4994193ba0fb\") " pod="kube-system/kube-proxy-bq7hk" May 15 00:06:09.973182 kubelet[3402]: I0515 00:06:09.972848 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cf45fc6a-9056-4812-a0e3-4994193ba0fb-kube-proxy\") pod \"kube-proxy-bq7hk\" (UID: \"cf45fc6a-9056-4812-a0e3-4994193ba0fb\") " pod="kube-system/kube-proxy-bq7hk" May 15 00:06:09.973182 kubelet[3402]: I0515 00:06:09.972864 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf45fc6a-9056-4812-a0e3-4994193ba0fb-lib-modules\") pod \"kube-proxy-bq7hk\" (UID: \"cf45fc6a-9056-4812-a0e3-4994193ba0fb\") " pod="kube-system/kube-proxy-bq7hk" May 15 00:06:09.978850 systemd[1]: Created slice kubepods-besteffort-podcf45fc6a_9056_4812_a0e3_4994193ba0fb.slice - libcontainer container kubepods-besteffort-podcf45fc6a_9056_4812_a0e3_4994193ba0fb.slice. May 15 00:06:09.998160 systemd[1]: Created slice kubepods-burstable-podae46380a_8549_47ab_9e02_1d5d09e76a7e.slice - libcontainer container kubepods-burstable-podae46380a_8549_47ab_9e02_1d5d09e76a7e.slice. 
May 15 00:06:10.075432 kubelet[3402]: I0515 00:06:10.074332 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-etc-cni-netd\") pod \"cilium-tr6p9\" (UID: \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\") " pod="kube-system/cilium-tr6p9" May 15 00:06:10.075571 kubelet[3402]: I0515 00:06:10.075453 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-host-proc-sys-net\") pod \"cilium-tr6p9\" (UID: \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\") " pod="kube-system/cilium-tr6p9" May 15 00:06:10.075571 kubelet[3402]: I0515 00:06:10.075512 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ae46380a-8549-47ab-9e02-1d5d09e76a7e-hubble-tls\") pod \"cilium-tr6p9\" (UID: \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\") " pod="kube-system/cilium-tr6p9" May 15 00:06:10.075571 kubelet[3402]: I0515 00:06:10.075531 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-cilium-cgroup\") pod \"cilium-tr6p9\" (UID: \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\") " pod="kube-system/cilium-tr6p9" May 15 00:06:10.075571 kubelet[3402]: I0515 00:06:10.075548 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2njt\" (UniqueName: \"kubernetes.io/projected/ae46380a-8549-47ab-9e02-1d5d09e76a7e-kube-api-access-s2njt\") pod \"cilium-tr6p9\" (UID: \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\") " pod="kube-system/cilium-tr6p9" May 15 00:06:10.075693 kubelet[3402]: I0515 00:06:10.075617 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-lib-modules\") pod \"cilium-tr6p9\" (UID: \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\") " pod="kube-system/cilium-tr6p9" May 15 00:06:10.075718 kubelet[3402]: I0515 00:06:10.075702 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae46380a-8549-47ab-9e02-1d5d09e76a7e-cilium-config-path\") pod \"cilium-tr6p9\" (UID: \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\") " pod="kube-system/cilium-tr6p9" May 15 00:06:10.075741 kubelet[3402]: I0515 00:06:10.075726 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-xtables-lock\") pod \"cilium-tr6p9\" (UID: \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\") " pod="kube-system/cilium-tr6p9" May 15 00:06:10.075763 kubelet[3402]: I0515 00:06:10.075742 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ae46380a-8549-47ab-9e02-1d5d09e76a7e-clustermesh-secrets\") pod \"cilium-tr6p9\" (UID: \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\") " pod="kube-system/cilium-tr6p9" May 15 00:06:10.075786 kubelet[3402]: I0515 00:06:10.075762 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-cilium-run\") pod \"cilium-tr6p9\" (UID: \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\") " pod="kube-system/cilium-tr6p9" May 15 00:06:10.075786 kubelet[3402]: I0515 00:06:10.075777 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-bpf-maps\") pod \"cilium-tr6p9\" (UID: \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\") " pod="kube-system/cilium-tr6p9" May 15 00:06:10.075830 kubelet[3402]: I0515 00:06:10.075793 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-cni-path\") pod \"cilium-tr6p9\" (UID: \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\") " pod="kube-system/cilium-tr6p9" May 15 00:06:10.075830 kubelet[3402]: I0515 00:06:10.075807 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-hostproc\") pod \"cilium-tr6p9\" (UID: \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\") " pod="kube-system/cilium-tr6p9" May 15 00:06:10.075830 kubelet[3402]: I0515 00:06:10.075821 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-host-proc-sys-kernel\") pod \"cilium-tr6p9\" (UID: \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\") " pod="kube-system/cilium-tr6p9" May 15 00:06:10.295036 containerd[1724]: time="2025-05-15T00:06:10.294887579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bq7hk,Uid:cf45fc6a-9056-4812-a0e3-4994193ba0fb,Namespace:kube-system,Attempt:0,}" May 15 00:06:10.306197 containerd[1724]: time="2025-05-15T00:06:10.306142887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tr6p9,Uid:ae46380a-8549-47ab-9e02-1d5d09e76a7e,Namespace:kube-system,Attempt:0,}" May 15 00:06:10.311796 systemd[1]: Created slice kubepods-besteffort-pod9a2aa114_4ba0_448e_bc97_0b488e083b36.slice - libcontainer container kubepods-besteffort-pod9a2aa114_4ba0_448e_bc97_0b488e083b36.slice. May 15 00:06:10.378876 kubelet[3402]: I0515 00:06:10.378798 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwkjm\" (UniqueName: \"kubernetes.io/projected/9a2aa114-4ba0-448e-bc97-0b488e083b36-kube-api-access-rwkjm\") pod \"cilium-operator-5d85765b45-sd4lt\" (UID: \"9a2aa114-4ba0-448e-bc97-0b488e083b36\") " pod="kube-system/cilium-operator-5d85765b45-sd4lt" May 15 00:06:10.378876 kubelet[3402]: I0515 00:06:10.378854 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a2aa114-4ba0-448e-bc97-0b488e083b36-cilium-config-path\") pod \"cilium-operator-5d85765b45-sd4lt\" (UID: \"9a2aa114-4ba0-448e-bc97-0b488e083b36\") " pod="kube-system/cilium-operator-5d85765b45-sd4lt" May 15 00:06:10.384316 containerd[1724]: time="2025-05-15T00:06:10.384159561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:06:10.384316 containerd[1724]: time="2025-05-15T00:06:10.384221921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:06:10.384573 containerd[1724]: time="2025-05-15T00:06:10.384521561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:06:10.386201 containerd[1724]: time="2025-05-15T00:06:10.385944159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:06:10.398457 containerd[1724]: time="2025-05-15T00:06:10.398293505Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:06:10.398457 containerd[1724]: time="2025-05-15T00:06:10.398359305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:06:10.398457 containerd[1724]: time="2025-05-15T00:06:10.398370745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:06:10.398798 containerd[1724]: time="2025-05-15T00:06:10.398471785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:06:10.406846 systemd[1]: Started cri-containerd-3e051b6edf58b0cf1d99b9c4bde41cf25920b2d2a20742181dfcdd2e44bd80e0.scope - libcontainer container 3e051b6edf58b0cf1d99b9c4bde41cf25920b2d2a20742181dfcdd2e44bd80e0. May 15 00:06:10.423855 systemd[1]: Started cri-containerd-f42b7bec983efac972e8093f684621c486a1fc9a335d30f89d25f62f43277fff.scope - libcontainer container f42b7bec983efac972e8093f684621c486a1fc9a335d30f89d25f62f43277fff. 
May 15 00:06:10.448541 containerd[1724]: time="2025-05-15T00:06:10.448401970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bq7hk,Uid:cf45fc6a-9056-4812-a0e3-4994193ba0fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e051b6edf58b0cf1d99b9c4bde41cf25920b2d2a20742181dfcdd2e44bd80e0\"" May 15 00:06:10.452280 containerd[1724]: time="2025-05-15T00:06:10.452218966Z" level=info msg="CreateContainer within sandbox \"3e051b6edf58b0cf1d99b9c4bde41cf25920b2d2a20742181dfcdd2e44bd80e0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 00:06:10.460279 containerd[1724]: time="2025-05-15T00:06:10.460075637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tr6p9,Uid:ae46380a-8549-47ab-9e02-1d5d09e76a7e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f42b7bec983efac972e8093f684621c486a1fc9a335d30f89d25f62f43277fff\"" May 15 00:06:10.462603 containerd[1724]: time="2025-05-15T00:06:10.462423035Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 15 00:06:10.530336 containerd[1724]: time="2025-05-15T00:06:10.530220720Z" level=info msg="CreateContainer within sandbox \"3e051b6edf58b0cf1d99b9c4bde41cf25920b2d2a20742181dfcdd2e44bd80e0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c1196f503e6e4c93feb7da0d2798f76ba7033efe317bb7011a5ab0cef8e7172d\"" May 15 00:06:10.531982 containerd[1724]: time="2025-05-15T00:06:10.531792558Z" level=info msg="StartContainer for \"c1196f503e6e4c93feb7da0d2798f76ba7033efe317bb7011a5ab0cef8e7172d\"" May 15 00:06:10.555872 systemd[1]: Started cri-containerd-c1196f503e6e4c93feb7da0d2798f76ba7033efe317bb7011a5ab0cef8e7172d.scope - libcontainer container c1196f503e6e4c93feb7da0d2798f76ba7033efe317bb7011a5ab0cef8e7172d. May 15 00:06:10.588866 containerd[1724]: time="2025-05-15T00:06:10.588815855Z" level=info msg="StartContainer for \"c1196f503e6e4c93feb7da0d2798f76ba7033efe317bb7011a5ab0cef8e7172d\" returns successfully" May 15 00:06:10.614684 containerd[1724]: time="2025-05-15T00:06:10.614604907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-sd4lt,Uid:9a2aa114-4ba0-448e-bc97-0b488e083b36,Namespace:kube-system,Attempt:0,}" May 15 00:06:10.679514 containerd[1724]: time="2025-05-15T00:06:10.679358368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:06:10.679667 containerd[1724]: time="2025-05-15T00:06:10.679608848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:06:10.679708 containerd[1724]: time="2025-05-15T00:06:10.679687128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:06:10.680094 containerd[1724]: time="2025-05-15T00:06:10.679942008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:06:10.703203 systemd[1]: Started cri-containerd-95adc4660aface91dd56e7440fdc733412af2d9cae9c41438cb272f9c56c02a5.scope - libcontainer container 95adc4660aface91dd56e7440fdc733412af2d9cae9c41438cb272f9c56c02a5. 
May 15 00:06:10.745091 containerd[1724]: time="2025-05-15T00:06:10.745045178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-sd4lt,Uid:9a2aa114-4ba0-448e-bc97-0b488e083b36,Namespace:kube-system,Attempt:0,} returns sandbox id \"95adc4660aface91dd56e7440fdc733412af2d9cae9c41438cb272f9c56c02a5\"" May 15 00:06:11.024482 kubelet[3402]: I0515 00:06:11.024408 3402 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bq7hk" podStartSLOduration=2.024293609 podStartE2EDuration="2.024293609s" podCreationTimestamp="2025-05-15 00:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:06:11.024151409 +0000 UTC m=+6.171698792" watchObservedRunningTime="2025-05-15 00:06:11.024293609 +0000 UTC m=+6.171840992" May 15 00:06:16.404748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount695212835.mount: Deactivated successfully. May 15 00:06:18.792658 containerd[1724]: time="2025-05-15T00:06:18.792161830Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:06:18.803654 containerd[1724]: time="2025-05-15T00:06:18.803492579Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 15 00:06:18.812977 containerd[1724]: time="2025-05-15T00:06:18.812925410Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:06:18.815333 containerd[1724]: time="2025-05-15T00:06:18.815289248Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.352823573s" May 15 00:06:18.815333 containerd[1724]: time="2025-05-15T00:06:18.815332008Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 15 00:06:18.816650 containerd[1724]: time="2025-05-15T00:06:18.816367207Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 15 00:06:18.818151 containerd[1724]: time="2025-05-15T00:06:18.817854805Z" level=info msg="CreateContainer within sandbox \"f42b7bec983efac972e8093f684621c486a1fc9a335d30f89d25f62f43277fff\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 00:06:18.896016 containerd[1724]: time="2025-05-15T00:06:18.895926089Z" level=info msg="CreateContainer within sandbox \"f42b7bec983efac972e8093f684621c486a1fc9a335d30f89d25f62f43277fff\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"68dec67aa88d955227993d6ded37703ed6b4a29e1fd00ed4199f3c9f9adb0fca\"" May 15 00:06:18.896929 containerd[1724]: time="2025-05-15T00:06:18.896901488Z" level=info msg="StartContainer for 
\"68dec67aa88d955227993d6ded37703ed6b4a29e1fd00ed4199f3c9f9adb0fca\"" May 15 00:06:18.926819 systemd[1]: Started cri-containerd-68dec67aa88d955227993d6ded37703ed6b4a29e1fd00ed4199f3c9f9adb0fca.scope - libcontainer container 68dec67aa88d955227993d6ded37703ed6b4a29e1fd00ed4199f3c9f9adb0fca. May 15 00:06:18.955130 containerd[1724]: time="2025-05-15T00:06:18.955075951Z" level=info msg="StartContainer for \"68dec67aa88d955227993d6ded37703ed6b4a29e1fd00ed4199f3c9f9adb0fca\" returns successfully" May 15 00:06:18.958602 systemd[1]: cri-containerd-68dec67aa88d955227993d6ded37703ed6b4a29e1fd00ed4199f3c9f9adb0fca.scope: Deactivated successfully. May 15 00:06:19.867713 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68dec67aa88d955227993d6ded37703ed6b4a29e1fd00ed4199f3c9f9adb0fca-rootfs.mount: Deactivated successfully. May 15 00:06:20.164784 containerd[1724]: time="2025-05-15T00:06:20.164646245Z" level=info msg="shim disconnected" id=68dec67aa88d955227993d6ded37703ed6b4a29e1fd00ed4199f3c9f9adb0fca namespace=k8s.io May 15 00:06:20.164784 containerd[1724]: time="2025-05-15T00:06:20.164709445Z" level=warning msg="cleaning up after shim disconnected" id=68dec67aa88d955227993d6ded37703ed6b4a29e1fd00ed4199f3c9f9adb0fca namespace=k8s.io May 15 00:06:20.164784 containerd[1724]: time="2025-05-15T00:06:20.164718565Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:06:21.034107 containerd[1724]: time="2025-05-15T00:06:21.033981594Z" level=info msg="CreateContainer within sandbox \"f42b7bec983efac972e8093f684621c486a1fc9a335d30f89d25f62f43277fff\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 00:06:21.108982 containerd[1724]: time="2025-05-15T00:06:21.108876360Z" level=info msg="CreateContainer within sandbox \"f42b7bec983efac972e8093f684621c486a1fc9a335d30f89d25f62f43277fff\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"281316e6ccc852229b34c35d90212aa715d39e3a9e9893a99b8039652f2a4a8b\"" May 15 00:06:21.110507 containerd[1724]: time="2025-05-15T00:06:21.110035319Z" level=info msg="StartContainer for \"281316e6ccc852229b34c35d90212aa715d39e3a9e9893a99b8039652f2a4a8b\"" May 15 00:06:21.139211 systemd[1]: run-containerd-runc-k8s.io-281316e6ccc852229b34c35d90212aa715d39e3a9e9893a99b8039652f2a4a8b-runc.FzweV1.mount: Deactivated successfully. May 15 00:06:21.148776 systemd[1]: Started cri-containerd-281316e6ccc852229b34c35d90212aa715d39e3a9e9893a99b8039652f2a4a8b.scope - libcontainer container 281316e6ccc852229b34c35d90212aa715d39e3a9e9893a99b8039652f2a4a8b. May 15 00:06:21.178969 containerd[1724]: time="2025-05-15T00:06:21.178600852Z" level=info msg="StartContainer for \"281316e6ccc852229b34c35d90212aa715d39e3a9e9893a99b8039652f2a4a8b\" returns successfully" May 15 00:06:21.189472 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 00:06:21.189842 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 00:06:21.190961 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 15 00:06:21.194999 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 00:06:21.195270 systemd[1]: cri-containerd-281316e6ccc852229b34c35d90212aa715d39e3a9e9893a99b8039652f2a4a8b.scope: Deactivated successfully. May 15 00:06:21.223454 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 15 00:06:21.297351 containerd[1724]: time="2025-05-15T00:06:21.297197776Z" level=info msg="shim disconnected" id=281316e6ccc852229b34c35d90212aa715d39e3a9e9893a99b8039652f2a4a8b namespace=k8s.io May 15 00:06:21.297351 containerd[1724]: time="2025-05-15T00:06:21.297270256Z" level=warning msg="cleaning up after shim disconnected" id=281316e6ccc852229b34c35d90212aa715d39e3a9e9893a99b8039652f2a4a8b namespace=k8s.io May 15 00:06:21.297351 containerd[1724]: time="2025-05-15T00:06:21.297279416Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:06:21.973270 containerd[1724]: time="2025-05-15T00:06:21.973219193Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:06:21.977688 containerd[1724]: time="2025-05-15T00:06:21.977644509Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 15 00:06:21.983254 containerd[1724]: time="2025-05-15T00:06:21.983194343Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:06:21.984658 containerd[1724]: time="2025-05-15T00:06:21.984519422Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.168112655s" May 15 00:06:21.984658 containerd[1724]: time="2025-05-15T00:06:21.984555742Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 15 00:06:21.987143 containerd[1724]: time="2025-05-15T00:06:21.987030260Z" level=info msg="CreateContainer within sandbox \"95adc4660aface91dd56e7440fdc733412af2d9cae9c41438cb272f9c56c02a5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 15 00:06:22.037554 containerd[1724]: time="2025-05-15T00:06:22.037319850Z" level=info msg="CreateContainer within sandbox \"95adc4660aface91dd56e7440fdc733412af2d9cae9c41438cb272f9c56c02a5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3e9c793d4c47922ebbc863b5c4a89b0c51228161fc97e98ba56bb1c9fb61efbf\"" May 15 00:06:22.039178 containerd[1724]: time="2025-05-15T00:06:22.039026489Z" level=info msg="StartContainer for \"3e9c793d4c47922ebbc863b5c4a89b0c51228161fc97e98ba56bb1c9fb61efbf\"" May 15 00:06:22.040348 containerd[1724]: time="2025-05-15T00:06:22.040050008Z" level=info msg="CreateContainer within sandbox \"f42b7bec983efac972e8093f684621c486a1fc9a335d30f89d25f62f43277fff\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 00:06:22.083789 systemd[1]: Started cri-containerd-3e9c793d4c47922ebbc863b5c4a89b0c51228161fc97e98ba56bb1c9fb61efbf.scope - libcontainer container 3e9c793d4c47922ebbc863b5c4a89b0c51228161fc97e98ba56bb1c9fb61efbf. 
May 15 00:06:22.091034 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-281316e6ccc852229b34c35d90212aa715d39e3a9e9893a99b8039652f2a4a8b-rootfs.mount: Deactivated successfully. May 15 00:06:22.131487 containerd[1724]: time="2025-05-15T00:06:22.131365438Z" level=info msg="CreateContainer within sandbox \"f42b7bec983efac972e8093f684621c486a1fc9a335d30f89d25f62f43277fff\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"82fe81264f78cb242eed31866510f26239f8ad8a7f7ea2f992344afee4de76e6\"" May 15 00:06:22.133723 containerd[1724]: time="2025-05-15T00:06:22.132994397Z" level=info msg="StartContainer for \"82fe81264f78cb242eed31866510f26239f8ad8a7f7ea2f992344afee4de76e6\"" May 15 00:06:22.173613 containerd[1724]: time="2025-05-15T00:06:22.173555397Z" level=info msg="StartContainer for \"3e9c793d4c47922ebbc863b5c4a89b0c51228161fc97e98ba56bb1c9fb61efbf\" returns successfully" May 15 00:06:22.190476 systemd[1]: Started cri-containerd-82fe81264f78cb242eed31866510f26239f8ad8a7f7ea2f992344afee4de76e6.scope - libcontainer container 82fe81264f78cb242eed31866510f26239f8ad8a7f7ea2f992344afee4de76e6. May 15 00:06:22.224795 systemd[1]: cri-containerd-82fe81264f78cb242eed31866510f26239f8ad8a7f7ea2f992344afee4de76e6.scope: Deactivated successfully. May 15 00:06:22.229384 containerd[1724]: time="2025-05-15T00:06:22.229151702Z" level=info msg="StartContainer for \"82fe81264f78cb242eed31866510f26239f8ad8a7f7ea2f992344afee4de76e6\" returns successfully" May 15 00:06:22.643158 containerd[1724]: time="2025-05-15T00:06:22.643090097Z" level=info msg="shim disconnected" id=82fe81264f78cb242eed31866510f26239f8ad8a7f7ea2f992344afee4de76e6 namespace=k8s.io May 15 00:06:22.643158 containerd[1724]: time="2025-05-15T00:06:22.643146337Z" level=warning msg="cleaning up after shim disconnected" id=82fe81264f78cb242eed31866510f26239f8ad8a7f7ea2f992344afee4de76e6 namespace=k8s.io May 15 00:06:22.643158 containerd[1724]: time="2025-05-15T00:06:22.643155617Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:06:23.045836 containerd[1724]: time="2025-05-15T00:06:23.045579902Z" level=info msg="CreateContainer within sandbox \"f42b7bec983efac972e8093f684621c486a1fc9a335d30f89d25f62f43277fff\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 00:06:23.085882 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82fe81264f78cb242eed31866510f26239f8ad8a7f7ea2f992344afee4de76e6-rootfs.mount: Deactivated successfully. May 15 00:06:23.106897 containerd[1724]: time="2025-05-15T00:06:23.106855602Z" level=info msg="CreateContainer within sandbox \"f42b7bec983efac972e8093f684621c486a1fc9a335d30f89d25f62f43277fff\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2b91044f471432e002c0e56d55b11a7529ef1f4cd9142b36f4886fa1c8b914ed\"" May 15 00:06:23.107485 containerd[1724]: time="2025-05-15T00:06:23.107399602Z" level=info msg="StartContainer for \"2b91044f471432e002c0e56d55b11a7529ef1f4cd9142b36f4886fa1c8b914ed\"" May 15 00:06:23.130469 systemd[1]: run-containerd-runc-k8s.io-2b91044f471432e002c0e56d55b11a7529ef1f4cd9142b36f4886fa1c8b914ed-runc.eO5Xsa.mount: Deactivated successfully. May 15 00:06:23.141866 systemd[1]: Started cri-containerd-2b91044f471432e002c0e56d55b11a7529ef1f4cd9142b36f4886fa1c8b914ed.scope - libcontainer container 2b91044f471432e002c0e56d55b11a7529ef1f4cd9142b36f4886fa1c8b914ed. 
May 15 00:06:23.162128 systemd[1]: cri-containerd-2b91044f471432e002c0e56d55b11a7529ef1f4cd9142b36f4886fa1c8b914ed.scope: Deactivated successfully. May 15 00:06:23.167885 containerd[1724]: time="2025-05-15T00:06:23.167834902Z" level=info msg="StartContainer for \"2b91044f471432e002c0e56d55b11a7529ef1f4cd9142b36f4886fa1c8b914ed\" returns successfully" May 15 00:06:23.182714 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b91044f471432e002c0e56d55b11a7529ef1f4cd9142b36f4886fa1c8b914ed-rootfs.mount: Deactivated successfully. May 15 00:06:23.201115 containerd[1724]: time="2025-05-15T00:06:23.201027270Z" level=info msg="shim disconnected" id=2b91044f471432e002c0e56d55b11a7529ef1f4cd9142b36f4886fa1c8b914ed namespace=k8s.io May 15 00:06:23.201277 containerd[1724]: time="2025-05-15T00:06:23.201109150Z" level=warning msg="cleaning up after shim disconnected" id=2b91044f471432e002c0e56d55b11a7529ef1f4cd9142b36f4886fa1c8b914ed namespace=k8s.io May 15 00:06:23.201277 containerd[1724]: time="2025-05-15T00:06:23.201131310Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:06:24.049598 containerd[1724]: time="2025-05-15T00:06:24.049549278Z" level=info msg="CreateContainer within sandbox \"f42b7bec983efac972e8093f684621c486a1fc9a335d30f89d25f62f43277fff\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 00:06:24.074364 kubelet[3402]: I0515 00:06:24.071891 3402 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-sd4lt" podStartSLOduration=2.832884413 podStartE2EDuration="14.071865457s" podCreationTimestamp="2025-05-15 00:06:10 +0000 UTC" firstStartedPulling="2025-05-15 00:06:10.746597737 +0000 UTC m=+5.894145120" lastFinishedPulling="2025-05-15 00:06:21.985578781 +0000 UTC m=+17.133126164" observedRunningTime="2025-05-15 00:06:23.080614628 +0000 UTC m=+18.228162011" watchObservedRunningTime="2025-05-15 00:06:24.071865457 +0000 UTC m=+19.219412840" May 15 00:06:24.115776 containerd[1724]: time="2025-05-15T00:06:24.115722054Z" level=info msg="CreateContainer within sandbox \"f42b7bec983efac972e8093f684621c486a1fc9a335d30f89d25f62f43277fff\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9f06633de92b249f2b4f97d5a7406f137a221e8f77463421026643f8347f8886\"" May 15 00:06:24.117553 containerd[1724]: time="2025-05-15T00:06:24.116656253Z" level=info msg="StartContainer for \"9f06633de92b249f2b4f97d5a7406f137a221e8f77463421026643f8347f8886\"" May 15 00:06:24.141203 systemd[1]: run-containerd-runc-k8s.io-9f06633de92b249f2b4f97d5a7406f137a221e8f77463421026643f8347f8886-runc.1u6cu0.mount: Deactivated successfully. May 15 00:06:24.150843 systemd[1]: Started cri-containerd-9f06633de92b249f2b4f97d5a7406f137a221e8f77463421026643f8347f8886.scope - libcontainer container 9f06633de92b249f2b4f97d5a7406f137a221e8f77463421026643f8347f8886. May 15 00:06:24.180657 containerd[1724]: time="2025-05-15T00:06:24.180321750Z" level=info msg="StartContainer for \"9f06633de92b249f2b4f97d5a7406f137a221e8f77463421026643f8347f8886\" returns successfully" May 15 00:06:24.243119 kubelet[3402]: I0515 00:06:24.243080 3402 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 15 00:06:24.292264 systemd[1]: Created slice kubepods-burstable-podcb031119_3b5b_4ce1_b20c_bd38d73e2d37.slice - libcontainer container kubepods-burstable-podcb031119_3b5b_4ce1_b20c_bd38d73e2d37.slice. 
May 15 00:06:24.302025 systemd[1]: Created slice kubepods-burstable-podf1114956_1f64_4c0d_8a9d_6711bcc8b22b.slice - libcontainer container kubepods-burstable-podf1114956_1f64_4c0d_8a9d_6711bcc8b22b.slice. May 15 00:06:24.366998 kubelet[3402]: I0515 00:06:24.366813 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb031119-3b5b-4ce1-b20c-bd38d73e2d37-config-volume\") pod \"coredns-6f6b679f8f-cn7sb\" (UID: \"cb031119-3b5b-4ce1-b20c-bd38d73e2d37\") " pod="kube-system/coredns-6f6b679f8f-cn7sb" May 15 00:06:24.366998 kubelet[3402]: I0515 00:06:24.366852 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1114956-1f64-4c0d-8a9d-6711bcc8b22b-config-volume\") pod \"coredns-6f6b679f8f-wtmhf\" (UID: \"f1114956-1f64-4c0d-8a9d-6711bcc8b22b\") " pod="kube-system/coredns-6f6b679f8f-wtmhf" May 15 00:06:24.366998 kubelet[3402]: I0515 00:06:24.366871 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5kvl\" (UniqueName: \"kubernetes.io/projected/cb031119-3b5b-4ce1-b20c-bd38d73e2d37-kube-api-access-h5kvl\") pod \"coredns-6f6b679f8f-cn7sb\" (UID: \"cb031119-3b5b-4ce1-b20c-bd38d73e2d37\") " pod="kube-system/coredns-6f6b679f8f-cn7sb" May 15 00:06:24.366998 kubelet[3402]: I0515 00:06:24.366903 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhq8b\" (UniqueName: \"kubernetes.io/projected/f1114956-1f64-4c0d-8a9d-6711bcc8b22b-kube-api-access-qhq8b\") pod \"coredns-6f6b679f8f-wtmhf\" (UID: \"f1114956-1f64-4c0d-8a9d-6711bcc8b22b\") " pod="kube-system/coredns-6f6b679f8f-wtmhf" May 15 00:06:24.600308 containerd[1724]: time="2025-05-15T00:06:24.600204059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-cn7sb,Uid:cb031119-3b5b-4ce1-b20c-bd38d73e2d37,Namespace:kube-system,Attempt:0,}" May 15 00:06:24.605880 containerd[1724]: time="2025-05-15T00:06:24.605838853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wtmhf,Uid:f1114956-1f64-4c0d-8a9d-6711bcc8b22b,Namespace:kube-system,Attempt:0,}" May 15 00:06:26.394593 systemd-networkd[1343]: cilium_host: Link UP May 15 00:06:26.397637 systemd-networkd[1343]: cilium_net: Link UP May 15 00:06:26.397817 systemd-networkd[1343]: cilium_net: Gained carrier May 15 00:06:26.397925 systemd-networkd[1343]: cilium_host: Gained carrier May 15 00:06:26.614283 systemd-networkd[1343]: cilium_vxlan: Link UP May 15 00:06:26.614423 systemd-networkd[1343]: cilium_vxlan: Gained carrier May 15 00:06:26.794759 systemd-networkd[1343]: cilium_net: Gained IPv6LL May 15 00:06:26.946784 systemd-networkd[1343]: cilium_host: Gained IPv6LL May 15 00:06:26.981828 kernel: NET: Registered PF_ALG protocol family May 15 00:06:27.782703 systemd-networkd[1343]: lxc_health: Link UP May 15 00:06:27.789170 systemd-networkd[1343]: lxc_health: Gained carrier May 15 00:06:28.250897 systemd-networkd[1343]: lxc44e9a905fc3d: Link UP May 15 00:06:28.256766 kernel: eth0: renamed from tmpbc552 May 15 00:06:28.261369 systemd-networkd[1343]: lxc44e9a905fc3d: Gained carrier May 15 00:06:28.262553 systemd-networkd[1343]: lxc2306757f8210: Link UP May 15 00:06:28.282746 kernel: eth0: renamed from tmp3a128 May 15 00:06:28.291853 systemd-networkd[1343]: lxc2306757f8210: Gained carrier May 15 00:06:28.346027 kubelet[3402]: I0515 
00:06:28.345794 3402 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tr6p9" podStartSLOduration=10.991101968 podStartE2EDuration="19.345768179s" podCreationTimestamp="2025-05-15 00:06:09 +0000 UTC" firstStartedPulling="2025-05-15 00:06:10.461555236 +0000 UTC m=+5.609102619" lastFinishedPulling="2025-05-15 00:06:18.816221447 +0000 UTC m=+13.963768830" observedRunningTime="2025-05-15 00:06:25.077998151 +0000 UTC m=+20.225545534" watchObservedRunningTime="2025-05-15 00:06:28.345768179 +0000 UTC m=+23.493315522" May 15 00:06:28.546779 systemd-networkd[1343]: cilium_vxlan: Gained IPv6LL May 15 00:06:29.383391 kubelet[3402]: I0515 00:06:29.383295 3402 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 00:06:29.698830 systemd-networkd[1343]: lxc2306757f8210: Gained IPv6LL May 15 00:06:29.763803 systemd-networkd[1343]: lxc_health: Gained IPv6LL May 15 00:06:29.892549 systemd-networkd[1343]: lxc44e9a905fc3d: Gained IPv6LL May 15 00:06:31.798829 containerd[1724]: time="2025-05-15T00:06:31.798690976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:06:31.798829 containerd[1724]: time="2025-05-15T00:06:31.798762656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:06:31.798829 containerd[1724]: time="2025-05-15T00:06:31.798774096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:06:31.800334 containerd[1724]: time="2025-05-15T00:06:31.798888176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:06:31.815464 containerd[1724]: time="2025-05-15T00:06:31.815358439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:06:31.815711 containerd[1724]: time="2025-05-15T00:06:31.815678559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:06:31.816877 containerd[1724]: time="2025-05-15T00:06:31.816679358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:06:31.816877 containerd[1724]: time="2025-05-15T00:06:31.816781238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:06:31.853765 systemd[1]: Started cri-containerd-3a128b49e9c67d1c987c34d79e624459656bafc51d664f81aec707af43244420.scope - libcontainer container 3a128b49e9c67d1c987c34d79e624459656bafc51d664f81aec707af43244420. May 15 00:06:31.856345 systemd[1]: Started cri-containerd-bc552d2f0caca4070c459581f2e385c24f54f8fcf23af6f2b2995f50be98be51.scope - libcontainer container bc552d2f0caca4070c459581f2e385c24f54f8fcf23af6f2b2995f50be98be51. 
May 15 00:06:31.911866 containerd[1724]: time="2025-05-15T00:06:31.911774064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wtmhf,Uid:f1114956-1f64-4c0d-8a9d-6711bcc8b22b,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a128b49e9c67d1c987c34d79e624459656bafc51d664f81aec707af43244420\"" May 15 00:06:31.916410 containerd[1724]: time="2025-05-15T00:06:31.916167460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-cn7sb,Uid:cb031119-3b5b-4ce1-b20c-bd38d73e2d37,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc552d2f0caca4070c459581f2e385c24f54f8fcf23af6f2b2995f50be98be51\"" May 15 00:06:31.917708 containerd[1724]: time="2025-05-15T00:06:31.917605499Z" level=info msg="CreateContainer within sandbox \"3a128b49e9c67d1c987c34d79e624459656bafc51d664f81aec707af43244420\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 00:06:31.920925 containerd[1724]: time="2025-05-15T00:06:31.920831575Z" level=info msg="CreateContainer within sandbox \"bc552d2f0caca4070c459581f2e385c24f54f8fcf23af6f2b2995f50be98be51\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 00:06:31.998979 containerd[1724]: time="2025-05-15T00:06:31.998941458Z" level=info msg="CreateContainer within sandbox \"3a128b49e9c67d1c987c34d79e624459656bafc51d664f81aec707af43244420\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aebee14b428830eb6dcf42ef5696526985f450bc89e07e2cc922572ced96f5fe\"" May 15 00:06:31.999927 containerd[1724]: time="2025-05-15T00:06:31.999620458Z" level=info msg="StartContainer for \"aebee14b428830eb6dcf42ef5696526985f450bc89e07e2cc922572ced96f5fe\"" May 15 00:06:32.018095 containerd[1724]: time="2025-05-15T00:06:32.017938120Z" level=info msg="CreateContainer within sandbox \"bc552d2f0caca4070c459581f2e385c24f54f8fcf23af6f2b2995f50be98be51\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2ace1de4d55526b58b5a7edbaaa212de58884234a92e1b956b90c0d8167fc615\"" May 15 00:06:32.019909 containerd[1724]: time="2025-05-15T00:06:32.018770439Z" level=info msg="StartContainer for \"2ace1de4d55526b58b5a7edbaaa212de58884234a92e1b956b90c0d8167fc615\"" May 15 00:06:32.025050 systemd[1]: Started cri-containerd-aebee14b428830eb6dcf42ef5696526985f450bc89e07e2cc922572ced96f5fe.scope - libcontainer container aebee14b428830eb6dcf42ef5696526985f450bc89e07e2cc922572ced96f5fe. May 15 00:06:32.041795 systemd[1]: Started cri-containerd-2ace1de4d55526b58b5a7edbaaa212de58884234a92e1b956b90c0d8167fc615.scope - libcontainer container 2ace1de4d55526b58b5a7edbaaa212de58884234a92e1b956b90c0d8167fc615. May 15 00:06:32.070709 containerd[1724]: time="2025-05-15T00:06:32.069980828Z" level=info msg="StartContainer for \"aebee14b428830eb6dcf42ef5696526985f450bc89e07e2cc922572ced96f5fe\" returns successfully" May 15 00:06:32.086661 containerd[1724]: time="2025-05-15T00:06:32.086598412Z" level=info msg="StartContainer for \"2ace1de4d55526b58b5a7edbaaa212de58884234a92e1b956b90c0d8167fc615\" returns successfully" May 15 00:06:32.803811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3611185771.mount: Deactivated successfully. 
May 15 00:06:33.103012 kubelet[3402]: I0515 00:06:33.102797 3402 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-wtmhf" podStartSLOduration=23.10277161 podStartE2EDuration="23.10277161s" podCreationTimestamp="2025-05-15 00:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:06:33.101150772 +0000 UTC m=+28.248698155" watchObservedRunningTime="2025-05-15 00:06:33.10277161 +0000 UTC m=+28.250318993" May 15 00:06:33.118064 kubelet[3402]: I0515 00:06:33.117991 3402 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-cn7sb" podStartSLOduration=23.117976835 podStartE2EDuration="23.117976835s" podCreationTimestamp="2025-05-15 00:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:06:33.114963838 +0000 UTC m=+28.262511221" watchObservedRunningTime="2025-05-15 00:06:33.117976835 +0000 UTC m=+28.265524178" May 15 00:07:42.027905 systemd[1]: Started sshd@7-10.200.20.16:22-10.200.16.10:50782.service - OpenSSH per-connection server daemon (10.200.16.10:50782). May 15 00:07:42.478494 sshd[4787]: Accepted publickey for core from 10.200.16.10 port 50782 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:07:42.480703 sshd-session[4787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:07:42.486584 systemd-logind[1707]: New session 10 of user core. May 15 00:07:42.493797 systemd[1]: Started session-10.scope - Session 10 of User core. May 15 00:07:42.918082 sshd[4789]: Connection closed by 10.200.16.10 port 50782 May 15 00:07:42.917994 sshd-session[4787]: pam_unix(sshd:session): session closed for user core May 15 00:07:42.921192 systemd[1]: sshd@7-10.200.20.16:22-10.200.16.10:50782.service: Deactivated successfully. May 15 00:07:42.924018 systemd[1]: session-10.scope: Deactivated successfully. May 15 00:07:42.924814 systemd-logind[1707]: Session 10 logged out. Waiting for processes to exit. May 15 00:07:42.925763 systemd-logind[1707]: Removed session 10. May 15 00:07:47.999296 systemd[1]: Started sshd@8-10.200.20.16:22-10.200.16.10:50792.service - OpenSSH per-connection server daemon (10.200.16.10:50792). May 15 00:07:48.454203 sshd[4802]: Accepted publickey for core from 10.200.16.10 port 50792 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:07:48.455419 sshd-session[4802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:07:48.460676 systemd-logind[1707]: New session 11 of user core. May 15 00:07:48.467759 systemd[1]: Started session-11.scope - Session 11 of User core. May 15 00:07:48.837120 sshd[4804]: Connection closed by 10.200.16.10 port 50792 May 15 00:07:48.837609 sshd-session[4802]: pam_unix(sshd:session): session closed for user core May 15 00:07:48.841007 systemd[1]: sshd@8-10.200.20.16:22-10.200.16.10:50792.service: Deactivated successfully. May 15 00:07:48.842481 systemd[1]: session-11.scope: Deactivated successfully. May 15 00:07:48.843231 systemd-logind[1707]: Session 11 logged out. Waiting for processes to exit. May 15 00:07:48.844265 systemd-logind[1707]: Removed session 11. May 15 00:07:53.919367 systemd[1]: Started sshd@9-10.200.20.16:22-10.200.16.10:33366.service - OpenSSH per-connection server daemon (10.200.16.10:33366). 
May 15 00:07:54.373401 sshd[4816]: Accepted publickey for core from 10.200.16.10 port 33366 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:07:54.374650 sshd-session[4816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:07:54.378647 systemd-logind[1707]: New session 12 of user core. May 15 00:07:54.386831 systemd[1]: Started session-12.scope - Session 12 of User core. May 15 00:07:54.774336 sshd[4818]: Connection closed by 10.200.16.10 port 33366 May 15 00:07:54.775025 sshd-session[4816]: pam_unix(sshd:session): session closed for user core May 15 00:07:54.778254 systemd[1]: sshd@9-10.200.20.16:22-10.200.16.10:33366.service: Deactivated successfully. May 15 00:07:54.779907 systemd[1]: session-12.scope: Deactivated successfully. May 15 00:07:54.780729 systemd-logind[1707]: Session 12 logged out. Waiting for processes to exit. May 15 00:07:54.781814 systemd-logind[1707]: Removed session 12. May 15 00:07:59.857176 systemd[1]: Started sshd@10-10.200.20.16:22-10.200.16.10:46458.service - OpenSSH per-connection server daemon (10.200.16.10:46458). May 15 00:08:00.308065 sshd[4831]: Accepted publickey for core from 10.200.16.10 port 46458 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:08:00.309318 sshd-session[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:08:00.313504 systemd-logind[1707]: New session 13 of user core. May 15 00:08:00.323842 systemd[1]: Started session-13.scope - Session 13 of User core. May 15 00:08:00.693304 sshd[4833]: Connection closed by 10.200.16.10 port 46458 May 15 00:08:00.695252 sshd-session[4831]: pam_unix(sshd:session): session closed for user core May 15 00:08:00.698739 systemd[1]: sshd@10-10.200.20.16:22-10.200.16.10:46458.service: Deactivated successfully. May 15 00:08:00.701259 systemd[1]: session-13.scope: Deactivated successfully. May 15 00:08:00.702086 systemd-logind[1707]: Session 13 logged out. Waiting for processes to exit. May 15 00:08:00.702952 systemd-logind[1707]: Removed session 13. May 15 00:08:05.785850 systemd[1]: Started sshd@11-10.200.20.16:22-10.200.16.10:46472.service - OpenSSH per-connection server daemon (10.200.16.10:46472). May 15 00:08:06.239821 sshd[4849]: Accepted publickey for core from 10.200.16.10 port 46472 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:08:06.241226 sshd-session[4849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:08:06.245466 systemd-logind[1707]: New session 14 of user core. May 15 00:08:06.250768 systemd[1]: Started session-14.scope - Session 14 of User core. May 15 00:08:06.639197 sshd[4851]: Connection closed by 10.200.16.10 port 46472 May 15 00:08:06.639758 sshd-session[4849]: pam_unix(sshd:session): session closed for user core May 15 00:08:06.643102 systemd[1]: sshd@11-10.200.20.16:22-10.200.16.10:46472.service: Deactivated successfully. May 15 00:08:06.644605 systemd[1]: session-14.scope: Deactivated successfully. May 15 00:08:06.645961 systemd-logind[1707]: Session 14 logged out. Waiting for processes to exit. May 15 00:08:06.646781 systemd-logind[1707]: Removed session 14. May 15 00:08:11.735909 systemd[1]: Started sshd@12-10.200.20.16:22-10.200.16.10:32936.service - OpenSSH per-connection server daemon (10.200.16.10:32936). 
May 15 00:08:12.218920 sshd[4866]: Accepted publickey for core from 10.200.16.10 port 32936 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:08:12.220164 sshd-session[4866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:08:12.224582 systemd-logind[1707]: New session 15 of user core. May 15 00:08:12.230782 systemd[1]: Started session-15.scope - Session 15 of User core. May 15 00:08:12.629693 sshd[4868]: Connection closed by 10.200.16.10 port 32936 May 15 00:08:12.630269 sshd-session[4866]: pam_unix(sshd:session): session closed for user core May 15 00:08:12.634212 systemd[1]: sshd@12-10.200.20.16:22-10.200.16.10:32936.service: Deactivated successfully. May 15 00:08:12.634239 systemd-logind[1707]: Session 15 logged out. Waiting for processes to exit. May 15 00:08:12.636247 systemd[1]: session-15.scope: Deactivated successfully. May 15 00:08:12.637821 systemd-logind[1707]: Removed session 15. May 15 00:08:17.719157 systemd[1]: Started sshd@13-10.200.20.16:22-10.200.16.10:32944.service - OpenSSH per-connection server daemon (10.200.16.10:32944). May 15 00:08:18.171447 sshd[4881]: Accepted publickey for core from 10.200.16.10 port 32944 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:08:18.172844 sshd-session[4881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:08:18.176889 systemd-logind[1707]: New session 16 of user core. May 15 00:08:18.180785 systemd[1]: Started session-16.scope - Session 16 of User core. May 15 00:08:18.560164 sshd[4883]: Connection closed by 10.200.16.10 port 32944 May 15 00:08:18.559438 sshd-session[4881]: pam_unix(sshd:session): session closed for user core May 15 00:08:18.563214 systemd[1]: sshd@13-10.200.20.16:22-10.200.16.10:32944.service: Deactivated successfully. May 15 00:08:18.565047 systemd[1]: session-16.scope: Deactivated successfully. May 15 00:08:18.566138 systemd-logind[1707]: Session 16 logged out. Waiting for processes to exit. May 15 00:08:18.567083 systemd-logind[1707]: Removed session 16. May 15 00:08:18.641524 systemd[1]: Started sshd@14-10.200.20.16:22-10.200.16.10:50908.service - OpenSSH per-connection server daemon (10.200.16.10:50908). May 15 00:08:19.098551 sshd[4896]: Accepted publickey for core from 10.200.16.10 port 50908 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:08:19.099882 sshd-session[4896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:08:19.103918 systemd-logind[1707]: New session 17 of user core. May 15 00:08:19.108768 systemd[1]: Started session-17.scope - Session 17 of User core. May 15 00:08:19.524357 sshd[4898]: Connection closed by 10.200.16.10 port 50908 May 15 00:08:19.525241 sshd-session[4896]: pam_unix(sshd:session): session closed for user core May 15 00:08:19.528749 systemd-logind[1707]: Session 17 logged out. Waiting for processes to exit. May 15 00:08:19.529306 systemd[1]: sshd@14-10.200.20.16:22-10.200.16.10:50908.service: Deactivated successfully. May 15 00:08:19.532401 systemd[1]: session-17.scope: Deactivated successfully. May 15 00:08:19.533476 systemd-logind[1707]: Removed session 17. May 15 00:08:19.611714 systemd[1]: Started sshd@15-10.200.20.16:22-10.200.16.10:50912.service - OpenSSH per-connection server daemon (10.200.16.10:50912). 
May 15 00:08:20.098078 sshd[4908]: Accepted publickey for core from 10.200.16.10 port 50912 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:08:20.099478 sshd-session[4908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:08:20.103800 systemd-logind[1707]: New session 18 of user core. May 15 00:08:20.108813 systemd[1]: Started session-18.scope - Session 18 of User core. May 15 00:08:20.511662 sshd[4910]: Connection closed by 10.200.16.10 port 50912 May 15 00:08:20.511538 sshd-session[4908]: pam_unix(sshd:session): session closed for user core May 15 00:08:20.515145 systemd-logind[1707]: Session 18 logged out. Waiting for processes to exit. May 15 00:08:20.516278 systemd[1]: sshd@15-10.200.20.16:22-10.200.16.10:50912.service: Deactivated successfully. May 15 00:08:20.519395 systemd[1]: session-18.scope: Deactivated successfully. May 15 00:08:20.520802 systemd-logind[1707]: Removed session 18. May 15 00:08:25.598991 systemd[1]: Started sshd@16-10.200.20.16:22-10.200.16.10:50914.service - OpenSSH per-connection server daemon (10.200.16.10:50914). May 15 00:08:26.045871 sshd[4922]: Accepted publickey for core from 10.200.16.10 port 50914 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:08:26.047119 sshd-session[4922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:08:26.051203 systemd-logind[1707]: New session 19 of user core. May 15 00:08:26.059779 systemd[1]: Started session-19.scope - Session 19 of User core. May 15 00:08:26.442657 sshd[4924]: Connection closed by 10.200.16.10 port 50914 May 15 00:08:26.443124 sshd-session[4922]: pam_unix(sshd:session): session closed for user core May 15 00:08:26.447482 systemd[1]: sshd@16-10.200.20.16:22-10.200.16.10:50914.service: Deactivated successfully. May 15 00:08:26.450180 systemd[1]: session-19.scope: Deactivated successfully. May 15 00:08:26.451457 systemd-logind[1707]: Session 19 logged out. Waiting for processes to exit. May 15 00:08:26.452492 systemd-logind[1707]: Removed session 19. May 15 00:08:31.535169 systemd[1]: Started sshd@17-10.200.20.16:22-10.200.16.10:49064.service - OpenSSH per-connection server daemon (10.200.16.10:49064). May 15 00:08:31.983556 sshd[4937]: Accepted publickey for core from 10.200.16.10 port 49064 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:08:31.984992 sshd-session[4937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:08:31.989350 systemd-logind[1707]: New session 20 of user core. May 15 00:08:31.998799 systemd[1]: Started session-20.scope - Session 20 of User core. May 15 00:08:32.385762 sshd[4939]: Connection closed by 10.200.16.10 port 49064 May 15 00:08:32.386293 sshd-session[4937]: pam_unix(sshd:session): session closed for user core May 15 00:08:32.390009 systemd[1]: sshd@17-10.200.20.16:22-10.200.16.10:49064.service: Deactivated successfully. May 15 00:08:32.391963 systemd[1]: session-20.scope: Deactivated successfully. May 15 00:08:32.392897 systemd-logind[1707]: Session 20 logged out. Waiting for processes to exit. May 15 00:08:32.394685 systemd-logind[1707]: Removed session 20. May 15 00:08:37.473608 systemd[1]: Started sshd@18-10.200.20.16:22-10.200.16.10:49066.service - OpenSSH per-connection server daemon (10.200.16.10:49066). 
May 15 00:08:37.957548 sshd[4950]: Accepted publickey for core from 10.200.16.10 port 49066 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:08:37.958871 sshd-session[4950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:08:37.963868 systemd-logind[1707]: New session 21 of user core. May 15 00:08:37.971830 systemd[1]: Started session-21.scope - Session 21 of User core. May 15 00:08:38.369547 sshd[4952]: Connection closed by 10.200.16.10 port 49066 May 15 00:08:38.370155 sshd-session[4950]: pam_unix(sshd:session): session closed for user core May 15 00:08:38.373974 systemd[1]: sshd@18-10.200.20.16:22-10.200.16.10:49066.service: Deactivated successfully. May 15 00:08:38.377021 systemd[1]: session-21.scope: Deactivated successfully. May 15 00:08:38.378847 systemd-logind[1707]: Session 21 logged out. Waiting for processes to exit. May 15 00:08:38.379843 systemd-logind[1707]: Removed session 21. May 15 00:08:43.467201 systemd[1]: Started sshd@19-10.200.20.16:22-10.200.16.10:41606.service - OpenSSH per-connection server daemon (10.200.16.10:41606). May 15 00:08:43.949954 sshd[4966]: Accepted publickey for core from 10.200.16.10 port 41606 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:08:43.951231 sshd-session[4966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:08:43.955274 systemd-logind[1707]: New session 22 of user core. May 15 00:08:43.961825 systemd[1]: Started session-22.scope - Session 22 of User core. May 15 00:08:44.364662 sshd[4968]: Connection closed by 10.200.16.10 port 41606 May 15 00:08:44.365204 sshd-session[4966]: pam_unix(sshd:session): session closed for user core May 15 00:08:44.368570 systemd[1]: sshd@19-10.200.20.16:22-10.200.16.10:41606.service: Deactivated successfully. May 15 00:08:44.370432 systemd[1]: session-22.scope: Deactivated successfully. May 15 00:08:44.371346 systemd-logind[1707]: Session 22 logged out. Waiting for processes to exit. May 15 00:08:44.372474 systemd-logind[1707]: Removed session 22. May 15 00:08:49.454773 systemd[1]: Started sshd@20-10.200.20.16:22-10.200.16.10:56462.service - OpenSSH per-connection server daemon (10.200.16.10:56462). May 15 00:08:49.945608 sshd[4981]: Accepted publickey for core from 10.200.16.10 port 56462 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:08:49.946881 sshd-session[4981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:08:49.951169 systemd-logind[1707]: New session 23 of user core. May 15 00:08:49.960783 systemd[1]: Started session-23.scope - Session 23 of User core. May 15 00:08:50.362745 sshd[4983]: Connection closed by 10.200.16.10 port 56462 May 15 00:08:50.363317 sshd-session[4981]: pam_unix(sshd:session): session closed for user core May 15 00:08:50.366901 systemd[1]: sshd@20-10.200.20.16:22-10.200.16.10:56462.service: Deactivated successfully. May 15 00:08:50.368761 systemd[1]: session-23.scope: Deactivated successfully. May 15 00:08:50.369533 systemd-logind[1707]: Session 23 logged out. Waiting for processes to exit. May 15 00:08:50.370891 systemd-logind[1707]: Removed session 23. May 15 00:08:55.455021 systemd[1]: Started sshd@21-10.200.20.16:22-10.200.16.10:56478.service - OpenSSH per-connection server daemon (10.200.16.10:56478). 
May 15 00:08:55.934076 sshd[4995]: Accepted publickey for core from 10.200.16.10 port 56478 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:08:55.935313 sshd-session[4995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:08:55.939428 systemd-logind[1707]: New session 24 of user core. May 15 00:08:55.949871 systemd[1]: Started session-24.scope - Session 24 of User core. May 15 00:08:56.376832 sshd[4997]: Connection closed by 10.200.16.10 port 56478 May 15 00:08:56.377452 sshd-session[4995]: pam_unix(sshd:session): session closed for user core May 15 00:08:56.381090 systemd-logind[1707]: Session 24 logged out. Waiting for processes to exit. May 15 00:08:56.382051 systemd[1]: sshd@21-10.200.20.16:22-10.200.16.10:56478.service: Deactivated successfully. May 15 00:08:56.384094 systemd[1]: session-24.scope: Deactivated successfully. May 15 00:08:56.385226 systemd-logind[1707]: Removed session 24. May 15 00:09:01.468874 systemd[1]: Started sshd@22-10.200.20.16:22-10.200.16.10:39314.service - OpenSSH per-connection server daemon (10.200.16.10:39314). May 15 00:09:01.918013 sshd[5008]: Accepted publickey for core from 10.200.16.10 port 39314 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:09:01.919391 sshd-session[5008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:09:01.924710 systemd-logind[1707]: New session 25 of user core. May 15 00:09:01.934802 systemd[1]: Started session-25.scope - Session 25 of User core. May 15 00:09:02.306071 sshd[5010]: Connection closed by 10.200.16.10 port 39314 May 15 00:09:02.305909 sshd-session[5008]: pam_unix(sshd:session): session closed for user core May 15 00:09:02.310023 systemd[1]: sshd@22-10.200.20.16:22-10.200.16.10:39314.service: Deactivated successfully. May 15 00:09:02.312480 systemd[1]: session-25.scope: Deactivated successfully. May 15 00:09:02.313607 systemd-logind[1707]: Session 25 logged out. Waiting for processes to exit. May 15 00:09:02.314532 systemd-logind[1707]: Removed session 25. May 15 00:09:07.401918 systemd[1]: Started sshd@23-10.200.20.16:22-10.200.16.10:39316.service - OpenSSH per-connection server daemon (10.200.16.10:39316). May 15 00:09:07.891331 sshd[5023]: Accepted publickey for core from 10.200.16.10 port 39316 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:09:07.893260 sshd-session[5023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:09:07.898767 systemd-logind[1707]: New session 26 of user core. May 15 00:09:07.906871 systemd[1]: Started session-26.scope - Session 26 of User core. May 15 00:09:08.309731 sshd[5025]: Connection closed by 10.200.16.10 port 39316 May 15 00:09:08.310303 sshd-session[5023]: pam_unix(sshd:session): session closed for user core May 15 00:09:08.314159 systemd[1]: sshd@23-10.200.20.16:22-10.200.16.10:39316.service: Deactivated successfully. May 15 00:09:08.317292 systemd[1]: session-26.scope: Deactivated successfully. May 15 00:09:08.318588 systemd-logind[1707]: Session 26 logged out. Waiting for processes to exit. May 15 00:09:08.320132 systemd-logind[1707]: Removed session 26. May 15 00:09:13.399077 systemd[1]: Started sshd@24-10.200.20.16:22-10.200.16.10:41836.service - OpenSSH per-connection server daemon (10.200.16.10:41836). 
May 15 00:09:13.857593 sshd[5039]: Accepted publickey for core from 10.200.16.10 port 41836 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:09:13.858970 sshd-session[5039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:09:13.863416 systemd-logind[1707]: New session 27 of user core. May 15 00:09:13.874950 systemd[1]: Started session-27.scope - Session 27 of User core. May 15 00:09:14.245059 sshd[5042]: Connection closed by 10.200.16.10 port 41836 May 15 00:09:14.246854 sshd-session[5039]: pam_unix(sshd:session): session closed for user core May 15 00:09:14.249859 systemd[1]: sshd@24-10.200.20.16:22-10.200.16.10:41836.service: Deactivated successfully. May 15 00:09:14.251782 systemd[1]: session-27.scope: Deactivated successfully. May 15 00:09:14.253585 systemd-logind[1707]: Session 27 logged out. Waiting for processes to exit. May 15 00:09:14.254636 systemd-logind[1707]: Removed session 27. May 15 00:09:14.333646 systemd[1]: Started sshd@25-10.200.20.16:22-10.200.16.10:41842.service - OpenSSH per-connection server daemon (10.200.16.10:41842). May 15 00:09:14.826416 sshd[5055]: Accepted publickey for core from 10.200.16.10 port 41842 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:09:14.828258 sshd-session[5055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:09:14.835307 systemd-logind[1707]: New session 28 of user core. May 15 00:09:14.843872 systemd[1]: Started session-28.scope - Session 28 of User core. May 15 00:09:15.298672 sshd[5057]: Connection closed by 10.200.16.10 port 41842 May 15 00:09:15.299227 sshd-session[5055]: pam_unix(sshd:session): session closed for user core May 15 00:09:15.302155 systemd[1]: sshd@25-10.200.20.16:22-10.200.16.10:41842.service: Deactivated successfully. May 15 00:09:15.304114 systemd[1]: session-28.scope: Deactivated successfully. May 15 00:09:15.307865 systemd-logind[1707]: Session 28 logged out. Waiting for processes to exit. May 15 00:09:15.310074 systemd-logind[1707]: Removed session 28. May 15 00:09:15.392258 systemd[1]: Started sshd@26-10.200.20.16:22-10.200.16.10:41854.service - OpenSSH per-connection server daemon (10.200.16.10:41854). May 15 00:09:15.873225 sshd[5067]: Accepted publickey for core from 10.200.16.10 port 41854 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:09:15.874529 sshd-session[5067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:09:15.878815 systemd-logind[1707]: New session 29 of user core. May 15 00:09:15.885827 systemd[1]: Started session-29.scope - Session 29 of User core. May 15 00:09:17.729410 sshd[5069]: Connection closed by 10.200.16.10 port 41854 May 15 00:09:17.730062 sshd-session[5067]: pam_unix(sshd:session): session closed for user core May 15 00:09:17.732939 systemd-logind[1707]: Session 29 logged out. Waiting for processes to exit. May 15 00:09:17.733016 systemd[1]: session-29.scope: Deactivated successfully. May 15 00:09:17.735047 systemd[1]: sshd@26-10.200.20.16:22-10.200.16.10:41854.service: Deactivated successfully. May 15 00:09:17.738918 systemd-logind[1707]: Removed session 29. May 15 00:09:17.816845 systemd[1]: Started sshd@27-10.200.20.16:22-10.200.16.10:41868.service - OpenSSH per-connection server daemon (10.200.16.10:41868). 
May 15 00:09:18.307788 sshd[5086]: Accepted publickey for core from 10.200.16.10 port 41868 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:09:18.309343 sshd-session[5086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:09:18.315293 systemd-logind[1707]: New session 30 of user core. May 15 00:09:18.322820 systemd[1]: Started session-30.scope - Session 30 of User core. May 15 00:09:18.850687 sshd[5088]: Connection closed by 10.200.16.10 port 41868 May 15 00:09:18.851360 sshd-session[5086]: pam_unix(sshd:session): session closed for user core May 15 00:09:18.854833 systemd-logind[1707]: Session 30 logged out. Waiting for processes to exit. May 15 00:09:18.854988 systemd[1]: sshd@27-10.200.20.16:22-10.200.16.10:41868.service: Deactivated successfully. May 15 00:09:18.858177 systemd[1]: session-30.scope: Deactivated successfully. May 15 00:09:18.860434 systemd-logind[1707]: Removed session 30. May 15 00:09:18.944026 systemd[1]: Started sshd@28-10.200.20.16:22-10.200.16.10:33612.service - OpenSSH per-connection server daemon (10.200.16.10:33612). May 15 00:09:19.423805 sshd[5098]: Accepted publickey for core from 10.200.16.10 port 33612 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:09:19.425399 sshd-session[5098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:09:19.430792 systemd-logind[1707]: New session 31 of user core. May 15 00:09:19.435999 systemd[1]: Started session-31.scope - Session 31 of User core. May 15 00:09:19.842653 sshd[5100]: Connection closed by 10.200.16.10 port 33612 May 15 00:09:19.843140 sshd-session[5098]: pam_unix(sshd:session): session closed for user core May 15 00:09:19.846485 systemd[1]: sshd@28-10.200.20.16:22-10.200.16.10:33612.service: Deactivated successfully. May 15 00:09:19.848513 systemd[1]: session-31.scope: Deactivated successfully. May 15 00:09:19.849481 systemd-logind[1707]: Session 31 logged out. Waiting for processes to exit. May 15 00:09:19.850441 systemd-logind[1707]: Removed session 31. May 15 00:09:24.935868 systemd[1]: Started sshd@29-10.200.20.16:22-10.200.16.10:33616.service - OpenSSH per-connection server daemon (10.200.16.10:33616). May 15 00:09:25.417416 sshd[5111]: Accepted publickey for core from 10.200.16.10 port 33616 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:09:25.418723 sshd-session[5111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:09:25.422894 systemd-logind[1707]: New session 32 of user core. May 15 00:09:25.430818 systemd[1]: Started session-32.scope - Session 32 of User core. May 15 00:09:25.826734 sshd[5113]: Connection closed by 10.200.16.10 port 33616 May 15 00:09:25.827519 sshd-session[5111]: pam_unix(sshd:session): session closed for user core May 15 00:09:25.831178 systemd[1]: sshd@29-10.200.20.16:22-10.200.16.10:33616.service: Deactivated successfully. May 15 00:09:25.834110 systemd[1]: session-32.scope: Deactivated successfully. May 15 00:09:25.835014 systemd-logind[1707]: Session 32 logged out. Waiting for processes to exit. May 15 00:09:25.836189 systemd-logind[1707]: Removed session 32. May 15 00:09:30.916418 systemd[1]: Started sshd@30-10.200.20.16:22-10.200.16.10:41612.service - OpenSSH per-connection server daemon (10.200.16.10:41612). 
May 15 00:09:31.399304 sshd[5127]: Accepted publickey for core from 10.200.16.10 port 41612 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:09:31.400769 sshd-session[5127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:09:31.404876 systemd-logind[1707]: New session 33 of user core. May 15 00:09:31.411826 systemd[1]: Started session-33.scope - Session 33 of User core. May 15 00:09:31.820250 sshd[5129]: Connection closed by 10.200.16.10 port 41612 May 15 00:09:31.820792 sshd-session[5127]: pam_unix(sshd:session): session closed for user core May 15 00:09:31.826487 systemd[1]: sshd@30-10.200.20.16:22-10.200.16.10:41612.service: Deactivated successfully. May 15 00:09:31.829285 systemd[1]: session-33.scope: Deactivated successfully. May 15 00:09:31.831232 systemd-logind[1707]: Session 33 logged out. Waiting for processes to exit. May 15 00:09:31.832486 systemd-logind[1707]: Removed session 33. May 15 00:09:36.905446 systemd[1]: Started sshd@31-10.200.20.16:22-10.200.16.10:41620.service - OpenSSH per-connection server daemon (10.200.16.10:41620). May 15 00:09:37.387190 sshd[5142]: Accepted publickey for core from 10.200.16.10 port 41620 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:09:37.388462 sshd-session[5142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:09:37.392567 systemd-logind[1707]: New session 34 of user core. May 15 00:09:37.400775 systemd[1]: Started session-34.scope - Session 34 of User core. May 15 00:09:37.805830 sshd[5144]: Connection closed by 10.200.16.10 port 41620 May 15 00:09:37.807061 sshd-session[5142]: pam_unix(sshd:session): session closed for user core May 15 00:09:37.812394 systemd[1]: sshd@31-10.200.20.16:22-10.200.16.10:41620.service: Deactivated successfully. May 15 00:09:37.815072 systemd[1]: session-34.scope: Deactivated successfully. May 15 00:09:37.816309 systemd-logind[1707]: Session 34 logged out. Waiting for processes to exit. May 15 00:09:37.817524 systemd-logind[1707]: Removed session 34. May 15 00:09:42.899936 systemd[1]: Started sshd@32-10.200.20.16:22-10.200.16.10:56302.service - OpenSSH per-connection server daemon (10.200.16.10:56302). May 15 00:09:43.347803 sshd[5158]: Accepted publickey for core from 10.200.16.10 port 56302 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:09:43.349741 sshd-session[5158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:09:43.354054 systemd-logind[1707]: New session 35 of user core. May 15 00:09:43.363792 systemd[1]: Started session-35.scope - Session 35 of User core. May 15 00:09:43.747105 sshd[5160]: Connection closed by 10.200.16.10 port 56302 May 15 00:09:43.747737 sshd-session[5158]: pam_unix(sshd:session): session closed for user core May 15 00:09:43.750973 systemd[1]: sshd@32-10.200.20.16:22-10.200.16.10:56302.service: Deactivated successfully. May 15 00:09:43.753304 systemd[1]: session-35.scope: Deactivated successfully. May 15 00:09:43.754449 systemd-logind[1707]: Session 35 logged out. Waiting for processes to exit. May 15 00:09:43.755431 systemd-logind[1707]: Removed session 35. May 15 00:09:48.840934 systemd[1]: Started sshd@33-10.200.20.16:22-10.200.16.10:50292.service - OpenSSH per-connection server daemon (10.200.16.10:50292). 
May 15 00:09:49.322382 sshd[5172]: Accepted publickey for core from 10.200.16.10 port 50292 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:09:49.323724 sshd-session[5172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:09:49.327920 systemd-logind[1707]: New session 36 of user core. May 15 00:09:49.334767 systemd[1]: Started session-36.scope - Session 36 of User core. May 15 00:09:49.731960 sshd[5174]: Connection closed by 10.200.16.10 port 50292 May 15 00:09:49.732875 sshd-session[5172]: pam_unix(sshd:session): session closed for user core May 15 00:09:49.736293 systemd-logind[1707]: Session 36 logged out. Waiting for processes to exit. May 15 00:09:49.737302 systemd[1]: sshd@33-10.200.20.16:22-10.200.16.10:50292.service: Deactivated successfully. May 15 00:09:49.739959 systemd[1]: session-36.scope: Deactivated successfully. May 15 00:09:49.741325 systemd-logind[1707]: Removed session 36. May 15 00:09:49.820350 systemd[1]: Started sshd@34-10.200.20.16:22-10.200.16.10:50302.service - OpenSSH per-connection server daemon (10.200.16.10:50302). May 15 00:09:50.311000 sshd[5185]: Accepted publickey for core from 10.200.16.10 port 50302 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:09:50.312289 sshd-session[5185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:09:50.316529 systemd-logind[1707]: New session 37 of user core. May 15 00:09:50.321776 systemd[1]: Started session-37.scope - Session 37 of User core. May 15 00:09:52.323514 systemd[1]: run-containerd-runc-k8s.io-9f06633de92b249f2b4f97d5a7406f137a221e8f77463421026643f8347f8886-runc.rUFve5.mount: Deactivated successfully. May 15 00:09:52.326504 containerd[1724]: time="2025-05-15T00:09:52.326006237Z" level=info msg="StopContainer for \"3e9c793d4c47922ebbc863b5c4a89b0c51228161fc97e98ba56bb1c9fb61efbf\" with timeout 30 (s)" May 15 00:09:52.328203 containerd[1724]: time="2025-05-15T00:09:52.326739197Z" level=info msg="Stop container \"3e9c793d4c47922ebbc863b5c4a89b0c51228161fc97e98ba56bb1c9fb61efbf\" with signal terminated" May 15 00:09:52.338252 containerd[1724]: time="2025-05-15T00:09:52.338074184Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 00:09:52.339888 systemd[1]: cri-containerd-3e9c793d4c47922ebbc863b5c4a89b0c51228161fc97e98ba56bb1c9fb61efbf.scope: Deactivated successfully. May 15 00:09:52.349645 containerd[1724]: time="2025-05-15T00:09:52.349488772Z" level=info msg="StopContainer for \"9f06633de92b249f2b4f97d5a7406f137a221e8f77463421026643f8347f8886\" with timeout 2 (s)" May 15 00:09:52.349960 containerd[1724]: time="2025-05-15T00:09:52.349891931Z" level=info msg="Stop container \"9f06633de92b249f2b4f97d5a7406f137a221e8f77463421026643f8347f8886\" with signal terminated" May 15 00:09:52.360590 systemd-networkd[1343]: lxc_health: Link DOWN May 15 00:09:52.360596 systemd-networkd[1343]: lxc_health: Lost carrier May 15 00:09:52.373598 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e9c793d4c47922ebbc863b5c4a89b0c51228161fc97e98ba56bb1c9fb61efbf-rootfs.mount: Deactivated successfully. May 15 00:09:52.380594 systemd[1]: cri-containerd-9f06633de92b249f2b4f97d5a7406f137a221e8f77463421026643f8347f8886.scope: Deactivated successfully. 
May 15 00:09:52.380912 systemd[1]: cri-containerd-9f06633de92b249f2b4f97d5a7406f137a221e8f77463421026643f8347f8886.scope: Consumed 6.235s CPU time, 122.9M memory peak, 128K read from disk, 12.9M written to disk. May 15 00:09:52.400085 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f06633de92b249f2b4f97d5a7406f137a221e8f77463421026643f8347f8886-rootfs.mount: Deactivated successfully. May 15 00:09:52.431579 containerd[1724]: time="2025-05-15T00:09:52.431366203Z" level=info msg="shim disconnected" id=3e9c793d4c47922ebbc863b5c4a89b0c51228161fc97e98ba56bb1c9fb61efbf namespace=k8s.io May 15 00:09:52.431579 containerd[1724]: time="2025-05-15T00:09:52.431528003Z" level=warning msg="cleaning up after shim disconnected" id=3e9c793d4c47922ebbc863b5c4a89b0c51228161fc97e98ba56bb1c9fb61efbf namespace=k8s.io May 15 00:09:52.431579 containerd[1724]: time="2025-05-15T00:09:52.431539283Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:09:52.432057 containerd[1724]: time="2025-05-15T00:09:52.431484043Z" level=info msg="shim disconnected" id=9f06633de92b249f2b4f97d5a7406f137a221e8f77463421026643f8347f8886 namespace=k8s.io May 15 00:09:52.432057 containerd[1724]: time="2025-05-15T00:09:52.431683362Z" level=warning msg="cleaning up after shim disconnected" id=9f06633de92b249f2b4f97d5a7406f137a221e8f77463421026643f8347f8886 namespace=k8s.io May 15 00:09:52.432057 containerd[1724]: time="2025-05-15T00:09:52.431699722Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:09:52.456554 containerd[1724]: time="2025-05-15T00:09:52.456477736Z" level=info msg="StopContainer for \"3e9c793d4c47922ebbc863b5c4a89b0c51228161fc97e98ba56bb1c9fb61efbf\" returns successfully" May 15 00:09:52.457198 containerd[1724]: time="2025-05-15T00:09:52.457166335Z" level=info msg="StopPodSandbox for \"95adc4660aface91dd56e7440fdc733412af2d9cae9c41438cb272f9c56c02a5\"" May 15 00:09:52.457259 containerd[1724]: time="2025-05-15T00:09:52.457210895Z" level=info msg="Container to stop \"3e9c793d4c47922ebbc863b5c4a89b0c51228161fc97e98ba56bb1c9fb61efbf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 00:09:52.459444 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-95adc4660aface91dd56e7440fdc733412af2d9cae9c41438cb272f9c56c02a5-shm.mount: Deactivated successfully. 
May 15 00:09:52.460561 containerd[1724]: time="2025-05-15T00:09:52.460473291Z" level=info msg="StopContainer for \"9f06633de92b249f2b4f97d5a7406f137a221e8f77463421026643f8347f8886\" returns successfully" May 15 00:09:52.461665 containerd[1724]: time="2025-05-15T00:09:52.461598010Z" level=info msg="StopPodSandbox for \"f42b7bec983efac972e8093f684621c486a1fc9a335d30f89d25f62f43277fff\"" May 15 00:09:52.461748 containerd[1724]: time="2025-05-15T00:09:52.461689450Z" level=info msg="Container to stop \"82fe81264f78cb242eed31866510f26239f8ad8a7f7ea2f992344afee4de76e6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 00:09:52.461748 containerd[1724]: time="2025-05-15T00:09:52.461703530Z" level=info msg="Container to stop \"9f06633de92b249f2b4f97d5a7406f137a221e8f77463421026643f8347f8886\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 00:09:52.461748 containerd[1724]: time="2025-05-15T00:09:52.461712770Z" level=info msg="Container to stop \"68dec67aa88d955227993d6ded37703ed6b4a29e1fd00ed4199f3c9f9adb0fca\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 00:09:52.461748 containerd[1724]: time="2025-05-15T00:09:52.461720850Z" level=info msg="Container to stop \"281316e6ccc852229b34c35d90212aa715d39e3a9e9893a99b8039652f2a4a8b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 00:09:52.461748 containerd[1724]: time="2025-05-15T00:09:52.461729890Z" level=info msg="Container to stop \"2b91044f471432e002c0e56d55b11a7529ef1f4cd9142b36f4886fa1c8b914ed\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 00:09:52.465894 systemd[1]: cri-containerd-95adc4660aface91dd56e7440fdc733412af2d9cae9c41438cb272f9c56c02a5.scope: Deactivated successfully. May 15 00:09:52.472490 systemd[1]: cri-containerd-f42b7bec983efac972e8093f684621c486a1fc9a335d30f89d25f62f43277fff.scope: Deactivated successfully. 
May 15 00:09:52.519801 containerd[1724]: time="2025-05-15T00:09:52.519739587Z" level=info msg="shim disconnected" id=f42b7bec983efac972e8093f684621c486a1fc9a335d30f89d25f62f43277fff namespace=k8s.io May 15 00:09:52.520346 containerd[1724]: time="2025-05-15T00:09:52.520283706Z" level=warning msg="cleaning up after shim disconnected" id=f42b7bec983efac972e8093f684621c486a1fc9a335d30f89d25f62f43277fff namespace=k8s.io May 15 00:09:52.520346 containerd[1724]: time="2025-05-15T00:09:52.520302786Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:09:52.520635 containerd[1724]: time="2025-05-15T00:09:52.520120786Z" level=info msg="shim disconnected" id=95adc4660aface91dd56e7440fdc733412af2d9cae9c41438cb272f9c56c02a5 namespace=k8s.io May 15 00:09:52.520635 containerd[1724]: time="2025-05-15T00:09:52.520542786Z" level=warning msg="cleaning up after shim disconnected" id=95adc4660aface91dd56e7440fdc733412af2d9cae9c41438cb272f9c56c02a5 namespace=k8s.io May 15 00:09:52.520635 containerd[1724]: time="2025-05-15T00:09:52.520549866Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:09:52.534041 containerd[1724]: time="2025-05-15T00:09:52.533973851Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:09:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 15 00:09:52.535311 containerd[1724]: time="2025-05-15T00:09:52.534836410Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:09:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 15 00:09:52.535311 containerd[1724]: time="2025-05-15T00:09:52.535059650Z" level=info msg="TearDown network for sandbox \"95adc4660aface91dd56e7440fdc733412af2d9cae9c41438cb272f9c56c02a5\" successfully" May 15 00:09:52.535311 containerd[1724]: time="2025-05-15T00:09:52.535080290Z" level=info msg="StopPodSandbox for \"95adc4660aface91dd56e7440fdc733412af2d9cae9c41438cb272f9c56c02a5\" returns successfully" May 15 00:09:52.536012 containerd[1724]: time="2025-05-15T00:09:52.535853209Z" level=info msg="TearDown network for sandbox \"f42b7bec983efac972e8093f684621c486a1fc9a335d30f89d25f62f43277fff\" successfully" May 15 00:09:52.536012 containerd[1724]: time="2025-05-15T00:09:52.535875489Z" level=info msg="StopPodSandbox for \"f42b7bec983efac972e8093f684621c486a1fc9a335d30f89d25f62f43277fff\" returns successfully" May 15 00:09:52.641993 kubelet[3402]: I0515 00:09:52.641767 3402 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a2aa114-4ba0-448e-bc97-0b488e083b36-cilium-config-path\") pod \"9a2aa114-4ba0-448e-bc97-0b488e083b36\" (UID: \"9a2aa114-4ba0-448e-bc97-0b488e083b36\") " May 15 00:09:52.641993 kubelet[3402]: I0515 00:09:52.641818 3402 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwkjm\" (UniqueName: \"kubernetes.io/projected/9a2aa114-4ba0-448e-bc97-0b488e083b36-kube-api-access-rwkjm\") pod \"9a2aa114-4ba0-448e-bc97-0b488e083b36\" (UID: \"9a2aa114-4ba0-448e-bc97-0b488e083b36\") " May 15 00:09:52.644303 kubelet[3402]: I0515 00:09:52.644209 3402 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a2aa114-4ba0-448e-bc97-0b488e083b36-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod 
"9a2aa114-4ba0-448e-bc97-0b488e083b36" (UID: "9a2aa114-4ba0-448e-bc97-0b488e083b36"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 00:09:52.644491 kubelet[3402]: I0515 00:09:52.644463 3402 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a2aa114-4ba0-448e-bc97-0b488e083b36-kube-api-access-rwkjm" (OuterVolumeSpecName: "kube-api-access-rwkjm") pod "9a2aa114-4ba0-448e-bc97-0b488e083b36" (UID: "9a2aa114-4ba0-448e-bc97-0b488e083b36"). InnerVolumeSpecName "kube-api-access-rwkjm". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 00:09:52.743173 kubelet[3402]: I0515 00:09:52.742799 3402 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-lib-modules\") pod \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\" (UID: \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\") " May 15 00:09:52.743173 kubelet[3402]: I0515 00:09:52.742862 3402 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ae46380a-8549-47ab-9e02-1d5d09e76a7e" (UID: "ae46380a-8549-47ab-9e02-1d5d09e76a7e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:09:52.743173 kubelet[3402]: I0515 00:09:52.742867 3402 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ae46380a-8549-47ab-9e02-1d5d09e76a7e-clustermesh-secrets\") pod \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\" (UID: \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\") " May 15 00:09:52.743173 kubelet[3402]: I0515 00:09:52.742907 3402 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-cilium-run\") pod \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\" (UID: \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\") " May 15 00:09:52.743173 kubelet[3402]: I0515 00:09:52.742929 3402 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2njt\" (UniqueName: \"kubernetes.io/projected/ae46380a-8549-47ab-9e02-1d5d09e76a7e-kube-api-access-s2njt\") pod \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\" (UID: \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\") " May 15 00:09:52.743173 kubelet[3402]: I0515 00:09:52.742946 3402 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-host-proc-sys-kernel\") pod \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\" (UID: \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\") " May 15 00:09:52.743433 kubelet[3402]: I0515 00:09:52.742963 3402 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-cilium-cgroup\") pod \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\" (UID: \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\") " May 15 00:09:52.743433 kubelet[3402]: I0515 00:09:52.742977 3402 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-xtables-lock\") pod \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\" (UID: \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\") " May 15 
00:09:52.743433 kubelet[3402]: I0515 00:09:52.742993 3402 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-etc-cni-netd\") pod \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\" (UID: \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\") " May 15 00:09:52.743433 kubelet[3402]: I0515 00:09:52.743008 3402 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-host-proc-sys-net\") pod \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\" (UID: \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\") " May 15 00:09:52.743433 kubelet[3402]: I0515 00:09:52.743022 3402 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-bpf-maps\") pod \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\" (UID: \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\") " May 15 00:09:52.743433 kubelet[3402]: I0515 00:09:52.743038 3402 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-hostproc\") pod \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\" (UID: \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\") " May 15 00:09:52.743557 kubelet[3402]: I0515 00:09:52.743056 3402 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae46380a-8549-47ab-9e02-1d5d09e76a7e-cilium-config-path\") pod \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\" (UID: \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\") " May 15 00:09:52.743557 kubelet[3402]: I0515 00:09:52.743072 3402 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ae46380a-8549-47ab-9e02-1d5d09e76a7e-hubble-tls\") pod \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\" (UID: \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\") " May 15 00:09:52.743557 kubelet[3402]: I0515 00:09:52.743087 3402 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-cni-path\") pod \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\" (UID: \"ae46380a-8549-47ab-9e02-1d5d09e76a7e\") " May 15 00:09:52.743557 kubelet[3402]: I0515 00:09:52.743119 3402 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-lib-modules\") on node \"ci-4230.1.1-n-c70fe96ece\" DevicePath \"\"" May 15 00:09:52.743557 kubelet[3402]: I0515 00:09:52.743129 3402 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a2aa114-4ba0-448e-bc97-0b488e083b36-cilium-config-path\") on node \"ci-4230.1.1-n-c70fe96ece\" DevicePath \"\"" May 15 00:09:52.743557 kubelet[3402]: I0515 00:09:52.743139 3402 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rwkjm\" (UniqueName: \"kubernetes.io/projected/9a2aa114-4ba0-448e-bc97-0b488e083b36-kube-api-access-rwkjm\") on node \"ci-4230.1.1-n-c70fe96ece\" DevicePath \"\"" May 15 00:09:52.743723 kubelet[3402]: I0515 00:09:52.743165 3402 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-cni-path" (OuterVolumeSpecName: "cni-path") pod 
"ae46380a-8549-47ab-9e02-1d5d09e76a7e" (UID: "ae46380a-8549-47ab-9e02-1d5d09e76a7e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:09:52.743723 kubelet[3402]: I0515 00:09:52.743179 3402 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ae46380a-8549-47ab-9e02-1d5d09e76a7e" (UID: "ae46380a-8549-47ab-9e02-1d5d09e76a7e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:09:52.744884 kubelet[3402]: I0515 00:09:52.743932 3402 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ae46380a-8549-47ab-9e02-1d5d09e76a7e" (UID: "ae46380a-8549-47ab-9e02-1d5d09e76a7e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:09:52.745033 kubelet[3402]: I0515 00:09:52.745010 3402 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ae46380a-8549-47ab-9e02-1d5d09e76a7e" (UID: "ae46380a-8549-47ab-9e02-1d5d09e76a7e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:09:52.745117 kubelet[3402]: I0515 00:09:52.745105 3402 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ae46380a-8549-47ab-9e02-1d5d09e76a7e" (UID: "ae46380a-8549-47ab-9e02-1d5d09e76a7e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:09:52.745190 kubelet[3402]: I0515 00:09:52.745176 3402 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ae46380a-8549-47ab-9e02-1d5d09e76a7e" (UID: "ae46380a-8549-47ab-9e02-1d5d09e76a7e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:09:52.745273 kubelet[3402]: I0515 00:09:52.745259 3402 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ae46380a-8549-47ab-9e02-1d5d09e76a7e" (UID: "ae46380a-8549-47ab-9e02-1d5d09e76a7e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:09:52.745346 kubelet[3402]: I0515 00:09:52.745334 3402 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ae46380a-8549-47ab-9e02-1d5d09e76a7e" (UID: "ae46380a-8549-47ab-9e02-1d5d09e76a7e"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:09:52.747178 kubelet[3402]: I0515 00:09:52.747133 3402 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae46380a-8549-47ab-9e02-1d5d09e76a7e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ae46380a-8549-47ab-9e02-1d5d09e76a7e" (UID: "ae46380a-8549-47ab-9e02-1d5d09e76a7e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 00:09:52.747259 kubelet[3402]: I0515 00:09:52.747194 3402 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-hostproc" (OuterVolumeSpecName: "hostproc") pod "ae46380a-8549-47ab-9e02-1d5d09e76a7e" (UID: "ae46380a-8549-47ab-9e02-1d5d09e76a7e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:09:52.747309 kubelet[3402]: I0515 00:09:52.747284 3402 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae46380a-8549-47ab-9e02-1d5d09e76a7e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ae46380a-8549-47ab-9e02-1d5d09e76a7e" (UID: "ae46380a-8549-47ab-9e02-1d5d09e76a7e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 15 00:09:52.747544 kubelet[3402]: I0515 00:09:52.747513 3402 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae46380a-8549-47ab-9e02-1d5d09e76a7e-kube-api-access-s2njt" (OuterVolumeSpecName: "kube-api-access-s2njt") pod "ae46380a-8549-47ab-9e02-1d5d09e76a7e" (UID: "ae46380a-8549-47ab-9e02-1d5d09e76a7e"). InnerVolumeSpecName "kube-api-access-s2njt". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 00:09:52.748095 kubelet[3402]: I0515 00:09:52.748066 3402 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae46380a-8549-47ab-9e02-1d5d09e76a7e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ae46380a-8549-47ab-9e02-1d5d09e76a7e" (UID: "ae46380a-8549-47ab-9e02-1d5d09e76a7e"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 00:09:52.843465 kubelet[3402]: I0515 00:09:52.843406 3402 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae46380a-8549-47ab-9e02-1d5d09e76a7e-cilium-config-path\") on node \"ci-4230.1.1-n-c70fe96ece\" DevicePath \"\"" May 15 00:09:52.843465 kubelet[3402]: I0515 00:09:52.843461 3402 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ae46380a-8549-47ab-9e02-1d5d09e76a7e-hubble-tls\") on node \"ci-4230.1.1-n-c70fe96ece\" DevicePath \"\"" May 15 00:09:52.843652 kubelet[3402]: I0515 00:09:52.843479 3402 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-cni-path\") on node \"ci-4230.1.1-n-c70fe96ece\" DevicePath \"\"" May 15 00:09:52.843652 kubelet[3402]: I0515 00:09:52.843494 3402 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-cilium-run\") on node \"ci-4230.1.1-n-c70fe96ece\" DevicePath \"\"" May 15 00:09:52.843652 kubelet[3402]: I0515 00:09:52.843503 3402 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ae46380a-8549-47ab-9e02-1d5d09e76a7e-clustermesh-secrets\") on node \"ci-4230.1.1-n-c70fe96ece\" DevicePath \"\"" May 15 00:09:52.843652 kubelet[3402]: I0515 00:09:52.843511 3402 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-host-proc-sys-kernel\") on node \"ci-4230.1.1-n-c70fe96ece\" DevicePath \"\"" May 15 00:09:52.843652 kubelet[3402]: I0515 00:09:52.843520 3402 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-s2njt\" (UniqueName: \"kubernetes.io/projected/ae46380a-8549-47ab-9e02-1d5d09e76a7e-kube-api-access-s2njt\") on node \"ci-4230.1.1-n-c70fe96ece\" DevicePath \"\"" May 15 00:09:52.843652 kubelet[3402]: I0515 00:09:52.843528 3402 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-cilium-cgroup\") on node \"ci-4230.1.1-n-c70fe96ece\" DevicePath \"\"" May 15 00:09:52.843652 kubelet[3402]: I0515 00:09:52.843535 3402 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-xtables-lock\") on node \"ci-4230.1.1-n-c70fe96ece\" DevicePath \"\"" May 15 00:09:52.843652 kubelet[3402]: I0515 00:09:52.843543 3402 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-etc-cni-netd\") on node \"ci-4230.1.1-n-c70fe96ece\" DevicePath \"\"" May 15 00:09:52.843818 kubelet[3402]: I0515 00:09:52.843550 3402 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-host-proc-sys-net\") on node \"ci-4230.1.1-n-c70fe96ece\" DevicePath \"\"" May 15 00:09:52.843818 kubelet[3402]: I0515 00:09:52.843558 3402 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-bpf-maps\") on node \"ci-4230.1.1-n-c70fe96ece\" DevicePath \"\"" May 15 00:09:52.843818 kubelet[3402]: I0515 00:09:52.843566 3402 
reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ae46380a-8549-47ab-9e02-1d5d09e76a7e-hostproc\") on node \"ci-4230.1.1-n-c70fe96ece\" DevicePath \"\"" May 15 00:09:52.966940 systemd[1]: Removed slice kubepods-besteffort-pod9a2aa114_4ba0_448e_bc97_0b488e083b36.slice - libcontainer container kubepods-besteffort-pod9a2aa114_4ba0_448e_bc97_0b488e083b36.slice. May 15 00:09:52.969230 systemd[1]: Removed slice kubepods-burstable-podae46380a_8549_47ab_9e02_1d5d09e76a7e.slice - libcontainer container kubepods-burstable-podae46380a_8549_47ab_9e02_1d5d09e76a7e.slice. May 15 00:09:52.969500 systemd[1]: kubepods-burstable-podae46380a_8549_47ab_9e02_1d5d09e76a7e.slice: Consumed 6.302s CPU time, 123.4M memory peak, 128K read from disk, 12.9M written to disk. May 15 00:09:53.316002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95adc4660aface91dd56e7440fdc733412af2d9cae9c41438cb272f9c56c02a5-rootfs.mount: Deactivated successfully. May 15 00:09:53.316105 systemd[1]: var-lib-kubelet-pods-9a2aa114\x2d4ba0\x2d448e\x2dbc97\x2d0b488e083b36-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drwkjm.mount: Deactivated successfully. May 15 00:09:53.316161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f42b7bec983efac972e8093f684621c486a1fc9a335d30f89d25f62f43277fff-rootfs.mount: Deactivated successfully. May 15 00:09:53.316211 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f42b7bec983efac972e8093f684621c486a1fc9a335d30f89d25f62f43277fff-shm.mount: Deactivated successfully. May 15 00:09:53.316262 systemd[1]: var-lib-kubelet-pods-ae46380a\x2d8549\x2d47ab\x2d9e02\x2d1d5d09e76a7e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds2njt.mount: Deactivated successfully. May 15 00:09:53.316320 systemd[1]: var-lib-kubelet-pods-ae46380a\x2d8549\x2d47ab\x2d9e02\x2d1d5d09e76a7e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 15 00:09:53.316374 systemd[1]: var-lib-kubelet-pods-ae46380a\x2d8549\x2d47ab\x2d9e02\x2d1d5d09e76a7e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 15 00:09:53.424231 kubelet[3402]: I0515 00:09:53.423672 3402 scope.go:117] "RemoveContainer" containerID="3e9c793d4c47922ebbc863b5c4a89b0c51228161fc97e98ba56bb1c9fb61efbf" May 15 00:09:53.427295 containerd[1724]: time="2025-05-15T00:09:53.426922320Z" level=info msg="RemoveContainer for \"3e9c793d4c47922ebbc863b5c4a89b0c51228161fc97e98ba56bb1c9fb61efbf\"" May 15 00:09:53.441689 containerd[1724]: time="2025-05-15T00:09:53.441521625Z" level=info msg="RemoveContainer for \"3e9c793d4c47922ebbc863b5c4a89b0c51228161fc97e98ba56bb1c9fb61efbf\" returns successfully" May 15 00:09:53.442466 kubelet[3402]: I0515 00:09:53.442355 3402 scope.go:117] "RemoveContainer" containerID="3e9c793d4c47922ebbc863b5c4a89b0c51228161fc97e98ba56bb1c9fb61efbf" May 15 00:09:53.442684 containerd[1724]: time="2025-05-15T00:09:53.442639103Z" level=error msg="ContainerStatus for \"3e9c793d4c47922ebbc863b5c4a89b0c51228161fc97e98ba56bb1c9fb61efbf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3e9c793d4c47922ebbc863b5c4a89b0c51228161fc97e98ba56bb1c9fb61efbf\": not found" May 15 00:09:53.443334 kubelet[3402]: E0515 00:09:53.442780 3402 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3e9c793d4c47922ebbc863b5c4a89b0c51228161fc97e98ba56bb1c9fb61efbf\": not found" containerID="3e9c793d4c47922ebbc863b5c4a89b0c51228161fc97e98ba56bb1c9fb61efbf" May 15 00:09:53.443334 kubelet[3402]: I0515 00:09:53.442808 3402 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3e9c793d4c47922ebbc863b5c4a89b0c51228161fc97e98ba56bb1c9fb61efbf"} err="failed to get container status \"3e9c793d4c47922ebbc863b5c4a89b0c51228161fc97e98ba56bb1c9fb61efbf\": rpc error: code = NotFound desc = an error occurred when try to find container \"3e9c793d4c47922ebbc863b5c4a89b0c51228161fc97e98ba56bb1c9fb61efbf\": not found" May 15 00:09:53.443334 kubelet[3402]: I0515 00:09:53.442892 3402 scope.go:117] "RemoveContainer" containerID="9f06633de92b249f2b4f97d5a7406f137a221e8f77463421026643f8347f8886" May 15 00:09:53.444382 containerd[1724]: time="2025-05-15T00:09:53.444358422Z" level=info msg="RemoveContainer for \"9f06633de92b249f2b4f97d5a7406f137a221e8f77463421026643f8347f8886\"" May 15 00:09:53.461099 containerd[1724]: time="2025-05-15T00:09:53.460992683Z" level=info msg="RemoveContainer for \"9f06633de92b249f2b4f97d5a7406f137a221e8f77463421026643f8347f8886\" returns successfully" May 15 00:09:53.461344 kubelet[3402]: I0515 00:09:53.461305 3402 scope.go:117] "RemoveContainer" containerID="2b91044f471432e002c0e56d55b11a7529ef1f4cd9142b36f4886fa1c8b914ed" May 15 00:09:53.465288 containerd[1724]: time="2025-05-15T00:09:53.465100079Z" level=info msg="RemoveContainer for \"2b91044f471432e002c0e56d55b11a7529ef1f4cd9142b36f4886fa1c8b914ed\"" May 15 00:09:53.479783 containerd[1724]: time="2025-05-15T00:09:53.479740703Z" level=info msg="RemoveContainer for \"2b91044f471432e002c0e56d55b11a7529ef1f4cd9142b36f4886fa1c8b914ed\" returns successfully" May 15 00:09:53.479998 kubelet[3402]: I0515 00:09:53.479973 3402 scope.go:117] "RemoveContainer" containerID="82fe81264f78cb242eed31866510f26239f8ad8a7f7ea2f992344afee4de76e6" May 15 00:09:53.481117 containerd[1724]: time="2025-05-15T00:09:53.481029102Z" level=info msg="RemoveContainer for \"82fe81264f78cb242eed31866510f26239f8ad8a7f7ea2f992344afee4de76e6\"" May 15 00:09:53.491384 containerd[1724]: time="2025-05-15T00:09:53.491344250Z" level=info 
msg="RemoveContainer for \"82fe81264f78cb242eed31866510f26239f8ad8a7f7ea2f992344afee4de76e6\" returns successfully" May 15 00:09:53.491764 kubelet[3402]: I0515 00:09:53.491658 3402 scope.go:117] "RemoveContainer" containerID="281316e6ccc852229b34c35d90212aa715d39e3a9e9893a99b8039652f2a4a8b" May 15 00:09:53.492963 containerd[1724]: time="2025-05-15T00:09:53.492758569Z" level=info msg="RemoveContainer for \"281316e6ccc852229b34c35d90212aa715d39e3a9e9893a99b8039652f2a4a8b\"" May 15 00:09:53.506248 containerd[1724]: time="2025-05-15T00:09:53.506169554Z" level=info msg="RemoveContainer for \"281316e6ccc852229b34c35d90212aa715d39e3a9e9893a99b8039652f2a4a8b\" returns successfully" May 15 00:09:53.506533 kubelet[3402]: I0515 00:09:53.506512 3402 scope.go:117] "RemoveContainer" containerID="68dec67aa88d955227993d6ded37703ed6b4a29e1fd00ed4199f3c9f9adb0fca" May 15 00:09:53.508474 containerd[1724]: time="2025-05-15T00:09:53.508191312Z" level=info msg="RemoveContainer for \"68dec67aa88d955227993d6ded37703ed6b4a29e1fd00ed4199f3c9f9adb0fca\"" May 15 00:09:53.526785 containerd[1724]: time="2025-05-15T00:09:53.526713812Z" level=info msg="RemoveContainer for \"68dec67aa88d955227993d6ded37703ed6b4a29e1fd00ed4199f3c9f9adb0fca\" returns successfully" May 15 00:09:53.527029 kubelet[3402]: I0515 00:09:53.526998 3402 scope.go:117] "RemoveContainer" containerID="9f06633de92b249f2b4f97d5a7406f137a221e8f77463421026643f8347f8886" May 15 00:09:53.527904 kubelet[3402]: E0515 00:09:53.527810 3402 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9f06633de92b249f2b4f97d5a7406f137a221e8f77463421026643f8347f8886\": not found" containerID="9f06633de92b249f2b4f97d5a7406f137a221e8f77463421026643f8347f8886" May 15 00:09:53.527904 kubelet[3402]: I0515 00:09:53.527855 3402 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9f06633de92b249f2b4f97d5a7406f137a221e8f77463421026643f8347f8886"} err="failed to get container status \"9f06633de92b249f2b4f97d5a7406f137a221e8f77463421026643f8347f8886\": rpc error: code = NotFound desc = an error occurred when try to find container \"9f06633de92b249f2b4f97d5a7406f137a221e8f77463421026643f8347f8886\": not found" May 15 00:09:53.527904 kubelet[3402]: I0515 00:09:53.527878 3402 scope.go:117] "RemoveContainer" containerID="2b91044f471432e002c0e56d55b11a7529ef1f4cd9142b36f4886fa1c8b914ed" May 15 00:09:53.528234 containerd[1724]: time="2025-05-15T00:09:53.527587291Z" level=error msg="ContainerStatus for \"9f06633de92b249f2b4f97d5a7406f137a221e8f77463421026643f8347f8886\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9f06633de92b249f2b4f97d5a7406f137a221e8f77463421026643f8347f8886\": not found" May 15 00:09:53.528234 containerd[1724]: time="2025-05-15T00:09:53.528086130Z" level=error msg="ContainerStatus for \"2b91044f471432e002c0e56d55b11a7529ef1f4cd9142b36f4886fa1c8b914ed\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2b91044f471432e002c0e56d55b11a7529ef1f4cd9142b36f4886fa1c8b914ed\": not found" May 15 00:09:53.528617 kubelet[3402]: E0515 00:09:53.528193 3402 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2b91044f471432e002c0e56d55b11a7529ef1f4cd9142b36f4886fa1c8b914ed\": not found" containerID="2b91044f471432e002c0e56d55b11a7529ef1f4cd9142b36f4886fa1c8b914ed" 
May 15 00:09:53.528617 kubelet[3402]: I0515 00:09:53.528215 3402 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2b91044f471432e002c0e56d55b11a7529ef1f4cd9142b36f4886fa1c8b914ed"} err="failed to get container status \"2b91044f471432e002c0e56d55b11a7529ef1f4cd9142b36f4886fa1c8b914ed\": rpc error: code = NotFound desc = an error occurred when try to find container \"2b91044f471432e002c0e56d55b11a7529ef1f4cd9142b36f4886fa1c8b914ed\": not found" May 15 00:09:53.528617 kubelet[3402]: I0515 00:09:53.528232 3402 scope.go:117] "RemoveContainer" containerID="82fe81264f78cb242eed31866510f26239f8ad8a7f7ea2f992344afee4de76e6" May 15 00:09:53.528617 kubelet[3402]: E0515 00:09:53.528493 3402 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"82fe81264f78cb242eed31866510f26239f8ad8a7f7ea2f992344afee4de76e6\": not found" containerID="82fe81264f78cb242eed31866510f26239f8ad8a7f7ea2f992344afee4de76e6" May 15 00:09:53.528617 kubelet[3402]: I0515 00:09:53.528514 3402 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"82fe81264f78cb242eed31866510f26239f8ad8a7f7ea2f992344afee4de76e6"} err="failed to get container status \"82fe81264f78cb242eed31866510f26239f8ad8a7f7ea2f992344afee4de76e6\": rpc error: code = NotFound desc = an error occurred when try to find container \"82fe81264f78cb242eed31866510f26239f8ad8a7f7ea2f992344afee4de76e6\": not found" May 15 00:09:53.528617 kubelet[3402]: I0515 00:09:53.528541 3402 scope.go:117] "RemoveContainer" containerID="281316e6ccc852229b34c35d90212aa715d39e3a9e9893a99b8039652f2a4a8b" May 15 00:09:53.528796 containerd[1724]: time="2025-05-15T00:09:53.528375650Z" level=error msg="ContainerStatus for \"82fe81264f78cb242eed31866510f26239f8ad8a7f7ea2f992344afee4de76e6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"82fe81264f78cb242eed31866510f26239f8ad8a7f7ea2f992344afee4de76e6\": not found" May 15 00:09:53.529415 containerd[1724]: time="2025-05-15T00:09:53.528947090Z" level=error msg="ContainerStatus for \"281316e6ccc852229b34c35d90212aa715d39e3a9e9893a99b8039652f2a4a8b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"281316e6ccc852229b34c35d90212aa715d39e3a9e9893a99b8039652f2a4a8b\": not found" May 15 00:09:53.529551 kubelet[3402]: E0515 00:09:53.529379 3402 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"281316e6ccc852229b34c35d90212aa715d39e3a9e9893a99b8039652f2a4a8b\": not found" containerID="281316e6ccc852229b34c35d90212aa715d39e3a9e9893a99b8039652f2a4a8b" May 15 00:09:53.529551 kubelet[3402]: I0515 00:09:53.529412 3402 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"281316e6ccc852229b34c35d90212aa715d39e3a9e9893a99b8039652f2a4a8b"} err="failed to get container status \"281316e6ccc852229b34c35d90212aa715d39e3a9e9893a99b8039652f2a4a8b\": rpc error: code = NotFound desc = an error occurred when try to find container \"281316e6ccc852229b34c35d90212aa715d39e3a9e9893a99b8039652f2a4a8b\": not found" May 15 00:09:53.529551 kubelet[3402]: I0515 00:09:53.529430 3402 scope.go:117] "RemoveContainer" containerID="68dec67aa88d955227993d6ded37703ed6b4a29e1fd00ed4199f3c9f9adb0fca" May 15 00:09:53.530324 containerd[1724]: 
time="2025-05-15T00:09:53.530233848Z" level=error msg="ContainerStatus for \"68dec67aa88d955227993d6ded37703ed6b4a29e1fd00ed4199f3c9f9adb0fca\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"68dec67aa88d955227993d6ded37703ed6b4a29e1fd00ed4199f3c9f9adb0fca\": not found" May 15 00:09:53.530483 kubelet[3402]: E0515 00:09:53.530384 3402 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"68dec67aa88d955227993d6ded37703ed6b4a29e1fd00ed4199f3c9f9adb0fca\": not found" containerID="68dec67aa88d955227993d6ded37703ed6b4a29e1fd00ed4199f3c9f9adb0fca" May 15 00:09:53.530483 kubelet[3402]: I0515 00:09:53.530441 3402 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"68dec67aa88d955227993d6ded37703ed6b4a29e1fd00ed4199f3c9f9adb0fca"} err="failed to get container status \"68dec67aa88d955227993d6ded37703ed6b4a29e1fd00ed4199f3c9f9adb0fca\": rpc error: code = NotFound desc = an error occurred when try to find container \"68dec67aa88d955227993d6ded37703ed6b4a29e1fd00ed4199f3c9f9adb0fca\": not found" May 15 00:09:54.332582 sshd[5187]: Connection closed by 10.200.16.10 port 50302 May 15 00:09:54.333270 sshd-session[5185]: pam_unix(sshd:session): session closed for user core May 15 00:09:54.336571 systemd-logind[1707]: Session 37 logged out. Waiting for processes to exit. May 15 00:09:54.337551 systemd[1]: sshd@34-10.200.20.16:22-10.200.16.10:50302.service: Deactivated successfully. May 15 00:09:54.339506 systemd[1]: session-37.scope: Deactivated successfully. May 15 00:09:54.339740 systemd[1]: session-37.scope: Consumed 1.086s CPU time, 23.5M memory peak. May 15 00:09:54.340762 systemd-logind[1707]: Removed session 37. May 15 00:09:54.420621 systemd[1]: Started sshd@35-10.200.20.16:22-10.200.16.10:50308.service - OpenSSH per-connection server daemon (10.200.16.10:50308). May 15 00:09:54.913073 sshd[5343]: Accepted publickey for core from 10.200.16.10 port 50308 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:09:54.914434 sshd-session[5343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:09:54.919660 systemd-logind[1707]: New session 38 of user core. May 15 00:09:54.926818 systemd[1]: Started session-38.scope - Session 38 of User core. 
May 15 00:09:54.962774 kubelet[3402]: I0515 00:09:54.962703 3402 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a2aa114-4ba0-448e-bc97-0b488e083b36" path="/var/lib/kubelet/pods/9a2aa114-4ba0-448e-bc97-0b488e083b36/volumes" May 15 00:09:54.963202 kubelet[3402]: I0515 00:09:54.963177 3402 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae46380a-8549-47ab-9e02-1d5d09e76a7e" path="/var/lib/kubelet/pods/ae46380a-8549-47ab-9e02-1d5d09e76a7e/volumes" May 15 00:09:55.077443 kubelet[3402]: E0515 00:09:55.077391 3402 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 00:09:56.948911 kubelet[3402]: E0515 00:09:56.948858 3402 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ae46380a-8549-47ab-9e02-1d5d09e76a7e" containerName="mount-cgroup" May 15 00:09:56.948911 kubelet[3402]: E0515 00:09:56.948894 3402 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ae46380a-8549-47ab-9e02-1d5d09e76a7e" containerName="apply-sysctl-overwrites" May 15 00:09:56.948911 kubelet[3402]: E0515 00:09:56.948902 3402 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a2aa114-4ba0-448e-bc97-0b488e083b36" containerName="cilium-operator" May 15 00:09:56.948911 kubelet[3402]: E0515 00:09:56.948908 3402 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ae46380a-8549-47ab-9e02-1d5d09e76a7e" containerName="mount-bpf-fs" May 15 00:09:56.948911 kubelet[3402]: E0515 00:09:56.948914 3402 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ae46380a-8549-47ab-9e02-1d5d09e76a7e" containerName="cilium-agent" May 15 00:09:56.948911 kubelet[3402]: E0515 00:09:56.948921 3402 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ae46380a-8549-47ab-9e02-1d5d09e76a7e" containerName="clean-cilium-state" May 15 00:09:56.949362 kubelet[3402]: I0515 00:09:56.948943 3402 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a2aa114-4ba0-448e-bc97-0b488e083b36" containerName="cilium-operator" May 15 00:09:56.949362 kubelet[3402]: I0515 00:09:56.948949 3402 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae46380a-8549-47ab-9e02-1d5d09e76a7e" containerName="cilium-agent" May 15 00:09:56.958523 systemd[1]: Created slice kubepods-burstable-podd6307c04_c5cd_4ae6_a9f9_454889357801.slice - libcontainer container kubepods-burstable-podd6307c04_c5cd_4ae6_a9f9_454889357801.slice. 
May 15 00:09:56.973070 kubelet[3402]: I0515 00:09:56.972279 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6307c04-c5cd-4ae6-a9f9-454889357801-etc-cni-netd\") pod \"cilium-gzw56\" (UID: \"d6307c04-c5cd-4ae6-a9f9-454889357801\") " pod="kube-system/cilium-gzw56" May 15 00:09:56.973070 kubelet[3402]: I0515 00:09:56.972325 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d6307c04-c5cd-4ae6-a9f9-454889357801-host-proc-sys-net\") pod \"cilium-gzw56\" (UID: \"d6307c04-c5cd-4ae6-a9f9-454889357801\") " pod="kube-system/cilium-gzw56" May 15 00:09:56.973070 kubelet[3402]: I0515 00:09:56.972359 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d6307c04-c5cd-4ae6-a9f9-454889357801-hostproc\") pod \"cilium-gzw56\" (UID: \"d6307c04-c5cd-4ae6-a9f9-454889357801\") " pod="kube-system/cilium-gzw56" May 15 00:09:56.973070 kubelet[3402]: I0515 00:09:56.972381 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6307c04-c5cd-4ae6-a9f9-454889357801-lib-modules\") pod \"cilium-gzw56\" (UID: \"d6307c04-c5cd-4ae6-a9f9-454889357801\") " pod="kube-system/cilium-gzw56" May 15 00:09:56.973070 kubelet[3402]: I0515 00:09:56.972403 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d6307c04-c5cd-4ae6-a9f9-454889357801-hubble-tls\") pod \"cilium-gzw56\" (UID: \"d6307c04-c5cd-4ae6-a9f9-454889357801\") " pod="kube-system/cilium-gzw56" May 15 00:09:56.973070 kubelet[3402]: I0515 00:09:56.972421 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-458kx\" (UniqueName: \"kubernetes.io/projected/d6307c04-c5cd-4ae6-a9f9-454889357801-kube-api-access-458kx\") pod \"cilium-gzw56\" (UID: \"d6307c04-c5cd-4ae6-a9f9-454889357801\") " pod="kube-system/cilium-gzw56" May 15 00:09:56.973318 kubelet[3402]: I0515 00:09:56.972454 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d6307c04-c5cd-4ae6-a9f9-454889357801-cilium-cgroup\") pod \"cilium-gzw56\" (UID: \"d6307c04-c5cd-4ae6-a9f9-454889357801\") " pod="kube-system/cilium-gzw56" May 15 00:09:56.973318 kubelet[3402]: I0515 00:09:56.972470 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d6307c04-c5cd-4ae6-a9f9-454889357801-cni-path\") pod \"cilium-gzw56\" (UID: \"d6307c04-c5cd-4ae6-a9f9-454889357801\") " pod="kube-system/cilium-gzw56" May 15 00:09:56.973318 kubelet[3402]: I0515 00:09:56.972488 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6307c04-c5cd-4ae6-a9f9-454889357801-xtables-lock\") pod \"cilium-gzw56\" (UID: \"d6307c04-c5cd-4ae6-a9f9-454889357801\") " pod="kube-system/cilium-gzw56" May 15 00:09:56.973318 kubelet[3402]: I0515 00:09:56.972516 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/d6307c04-c5cd-4ae6-a9f9-454889357801-cilium-ipsec-secrets\") pod \"cilium-gzw56\" (UID: \"d6307c04-c5cd-4ae6-a9f9-454889357801\") " pod="kube-system/cilium-gzw56" May 15 00:09:56.973318 kubelet[3402]: I0515 00:09:56.972539 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d6307c04-c5cd-4ae6-a9f9-454889357801-bpf-maps\") pod \"cilium-gzw56\" (UID: \"d6307c04-c5cd-4ae6-a9f9-454889357801\") " pod="kube-system/cilium-gzw56" May 15 00:09:56.973318 kubelet[3402]: I0515 00:09:56.972896 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d6307c04-c5cd-4ae6-a9f9-454889357801-cilium-run\") pod \"cilium-gzw56\" (UID: \"d6307c04-c5cd-4ae6-a9f9-454889357801\") " pod="kube-system/cilium-gzw56" May 15 00:09:56.973445 kubelet[3402]: I0515 00:09:56.973262 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d6307c04-c5cd-4ae6-a9f9-454889357801-host-proc-sys-kernel\") pod \"cilium-gzw56\" (UID: \"d6307c04-c5cd-4ae6-a9f9-454889357801\") " pod="kube-system/cilium-gzw56" May 15 00:09:56.973445 kubelet[3402]: I0515 00:09:56.973420 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d6307c04-c5cd-4ae6-a9f9-454889357801-cilium-config-path\") pod \"cilium-gzw56\" (UID: \"d6307c04-c5cd-4ae6-a9f9-454889357801\") " pod="kube-system/cilium-gzw56" May 15 00:09:56.975665 kubelet[3402]: I0515 00:09:56.973682 3402 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d6307c04-c5cd-4ae6-a9f9-454889357801-clustermesh-secrets\") pod \"cilium-gzw56\" (UID: \"d6307c04-c5cd-4ae6-a9f9-454889357801\") " pod="kube-system/cilium-gzw56" May 15 00:09:56.986884 sshd[5345]: Connection closed by 10.200.16.10 port 50308 May 15 00:09:56.987693 sshd-session[5343]: pam_unix(sshd:session): session closed for user core May 15 00:09:56.994360 systemd-logind[1707]: Session 38 logged out. Waiting for processes to exit. May 15 00:09:56.995507 systemd[1]: sshd@35-10.200.20.16:22-10.200.16.10:50308.service: Deactivated successfully. May 15 00:09:57.000302 systemd[1]: session-38.scope: Deactivated successfully. May 15 00:09:57.003722 systemd[1]: session-38.scope: Consumed 1.649s CPU time, 25.8M memory peak. May 15 00:09:57.005380 systemd-logind[1707]: Removed session 38. May 15 00:09:57.084450 systemd[1]: Started sshd@36-10.200.20.16:22-10.200.16.10:50318.service - OpenSSH per-connection server daemon (10.200.16.10:50318). May 15 00:09:57.277831 containerd[1724]: time="2025-05-15T00:09:57.277178641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gzw56,Uid:d6307c04-c5cd-4ae6-a9f9-454889357801,Namespace:kube-system,Attempt:0,}" May 15 00:09:57.321753 containerd[1724]: time="2025-05-15T00:09:57.321652715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:09:57.322237 containerd[1724]: time="2025-05-15T00:09:57.322151315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:09:57.322237 containerd[1724]: time="2025-05-15T00:09:57.322214795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:09:57.322551 containerd[1724]: time="2025-05-15T00:09:57.322495514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:09:57.340802 systemd[1]: Started cri-containerd-2b6a7f3426530806d28aedbd95f0ef05c6b8f071c9521bb2a17ad0f90b330bae.scope - libcontainer container 2b6a7f3426530806d28aedbd95f0ef05c6b8f071c9521bb2a17ad0f90b330bae. May 15 00:09:57.361821 containerd[1724]: time="2025-05-15T00:09:57.361736114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gzw56,Uid:d6307c04-c5cd-4ae6-a9f9-454889357801,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b6a7f3426530806d28aedbd95f0ef05c6b8f071c9521bb2a17ad0f90b330bae\"" May 15 00:09:57.364533 containerd[1724]: time="2025-05-15T00:09:57.364437392Z" level=info msg="CreateContainer within sandbox \"2b6a7f3426530806d28aedbd95f0ef05c6b8f071c9521bb2a17ad0f90b330bae\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 00:09:57.435874 containerd[1724]: time="2025-05-15T00:09:57.435771598Z" level=info msg="CreateContainer within sandbox \"2b6a7f3426530806d28aedbd95f0ef05c6b8f071c9521bb2a17ad0f90b330bae\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"45f8769b859fb8c4242eafdbeff851b9ffd3719f15670fa8af9a4de6bd969bd3\"" May 15 00:09:57.436647 containerd[1724]: time="2025-05-15T00:09:57.436508358Z" level=info msg="StartContainer for \"45f8769b859fb8c4242eafdbeff851b9ffd3719f15670fa8af9a4de6bd969bd3\"" May 15 00:09:57.465816 systemd[1]: Started cri-containerd-45f8769b859fb8c4242eafdbeff851b9ffd3719f15670fa8af9a4de6bd969bd3.scope - libcontainer container 45f8769b859fb8c4242eafdbeff851b9ffd3719f15670fa8af9a4de6bd969bd3. May 15 00:09:57.498528 containerd[1724]: time="2025-05-15T00:09:57.498476574Z" level=info msg="StartContainer for \"45f8769b859fb8c4242eafdbeff851b9ffd3719f15670fa8af9a4de6bd969bd3\" returns successfully" May 15 00:09:57.499318 systemd[1]: cri-containerd-45f8769b859fb8c4242eafdbeff851b9ffd3719f15670fa8af9a4de6bd969bd3.scope: Deactivated successfully. May 15 00:09:57.567720 containerd[1724]: time="2025-05-15T00:09:57.567559184Z" level=info msg="shim disconnected" id=45f8769b859fb8c4242eafdbeff851b9ffd3719f15670fa8af9a4de6bd969bd3 namespace=k8s.io May 15 00:09:57.568287 containerd[1724]: time="2025-05-15T00:09:57.568105583Z" level=warning msg="cleaning up after shim disconnected" id=45f8769b859fb8c4242eafdbeff851b9ffd3719f15670fa8af9a4de6bd969bd3 namespace=k8s.io May 15 00:09:57.568287 containerd[1724]: time="2025-05-15T00:09:57.568128903Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:09:57.590389 sshd[5357]: Accepted publickey for core from 10.200.16.10 port 50318 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:09:57.591805 sshd-session[5357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:09:57.597048 systemd-logind[1707]: New session 39 of user core. May 15 00:09:57.606815 systemd[1]: Started session-39.scope - Session 39 of User core. 
May 15 00:09:57.942439 sshd[5467]: Connection closed by 10.200.16.10 port 50318 May 15 00:09:57.941790 sshd-session[5357]: pam_unix(sshd:session): session closed for user core May 15 00:09:57.945343 systemd[1]: sshd@36-10.200.20.16:22-10.200.16.10:50318.service: Deactivated successfully. May 15 00:09:57.947158 systemd[1]: session-39.scope: Deactivated successfully. May 15 00:09:57.947901 systemd-logind[1707]: Session 39 logged out. Waiting for processes to exit. May 15 00:09:57.949092 systemd-logind[1707]: Removed session 39. May 15 00:09:58.033941 systemd[1]: Started sshd@37-10.200.20.16:22-10.200.16.10:50324.service - OpenSSH per-connection server daemon (10.200.16.10:50324). May 15 00:09:58.449644 containerd[1724]: time="2025-05-15T00:09:58.449351441Z" level=info msg="CreateContainer within sandbox \"2b6a7f3426530806d28aedbd95f0ef05c6b8f071c9521bb2a17ad0f90b330bae\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 00:09:58.489175 sshd[5474]: Accepted publickey for core from 10.200.16.10 port 50324 ssh2: RSA SHA256:7ZishQ9HWvAtdE+Xy1E7/rrbLiv2T8OuOKfxsc80/d0 May 15 00:09:58.492234 sshd-session[5474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:09:58.496741 systemd-logind[1707]: New session 40 of user core. May 15 00:09:58.503829 systemd[1]: Started session-40.scope - Session 40 of User core. May 15 00:09:58.515998 containerd[1724]: time="2025-05-15T00:09:58.515911493Z" level=info msg="CreateContainer within sandbox \"2b6a7f3426530806d28aedbd95f0ef05c6b8f071c9521bb2a17ad0f90b330bae\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b18bf618d85be5069d3a7a02f6be5d360a29de4841025385970d983c309cb4de\"" May 15 00:09:58.516831 containerd[1724]: time="2025-05-15T00:09:58.516748732Z" level=info msg="StartContainer for \"b18bf618d85be5069d3a7a02f6be5d360a29de4841025385970d983c309cb4de\"" May 15 00:09:58.549821 systemd[1]: Started cri-containerd-b18bf618d85be5069d3a7a02f6be5d360a29de4841025385970d983c309cb4de.scope - libcontainer container b18bf618d85be5069d3a7a02f6be5d360a29de4841025385970d983c309cb4de. May 15 00:09:58.580436 systemd[1]: cri-containerd-b18bf618d85be5069d3a7a02f6be5d360a29de4841025385970d983c309cb4de.scope: Deactivated successfully. 
May 15 00:09:58.581792 containerd[1724]: time="2025-05-15T00:09:58.580873546Z" level=info msg="StartContainer for \"b18bf618d85be5069d3a7a02f6be5d360a29de4841025385970d983c309cb4de\" returns successfully" May 15 00:09:58.619594 containerd[1724]: time="2025-05-15T00:09:58.619505267Z" level=info msg="shim disconnected" id=b18bf618d85be5069d3a7a02f6be5d360a29de4841025385970d983c309cb4de namespace=k8s.io May 15 00:09:58.619594 containerd[1724]: time="2025-05-15T00:09:58.619585746Z" level=warning msg="cleaning up after shim disconnected" id=b18bf618d85be5069d3a7a02f6be5d360a29de4841025385970d983c309cb4de namespace=k8s.io May 15 00:09:58.619813 containerd[1724]: time="2025-05-15T00:09:58.619604906Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:09:58.975374 kubelet[3402]: I0515 00:09:58.975324 3402 setters.go:600] "Node became not ready" node="ci-4230.1.1-n-c70fe96ece" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-15T00:09:58Z","lastTransitionTime":"2025-05-15T00:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 15 00:09:59.087270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b18bf618d85be5069d3a7a02f6be5d360a29de4841025385970d983c309cb4de-rootfs.mount: Deactivated successfully. May 15 00:09:59.453191 containerd[1724]: time="2025-05-15T00:09:59.453150693Z" level=info msg="CreateContainer within sandbox \"2b6a7f3426530806d28aedbd95f0ef05c6b8f071c9521bb2a17ad0f90b330bae\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 00:09:59.499509 containerd[1724]: time="2025-05-15T00:09:59.499416726Z" level=info msg="CreateContainer within sandbox \"2b6a7f3426530806d28aedbd95f0ef05c6b8f071c9521bb2a17ad0f90b330bae\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4f185bd0ccb319966f04fbd428b2ff84d79cb2f74ef1f94c91690cfd61aa59a3\"" May 15 00:09:59.500483 containerd[1724]: time="2025-05-15T00:09:59.500178685Z" level=info msg="StartContainer for \"4f185bd0ccb319966f04fbd428b2ff84d79cb2f74ef1f94c91690cfd61aa59a3\"" May 15 00:09:59.530806 systemd[1]: Started cri-containerd-4f185bd0ccb319966f04fbd428b2ff84d79cb2f74ef1f94c91690cfd61aa59a3.scope - libcontainer container 4f185bd0ccb319966f04fbd428b2ff84d79cb2f74ef1f94c91690cfd61aa59a3. May 15 00:09:59.558113 systemd[1]: cri-containerd-4f185bd0ccb319966f04fbd428b2ff84d79cb2f74ef1f94c91690cfd61aa59a3.scope: Deactivated successfully. 
May 15 00:09:59.562423 containerd[1724]: time="2025-05-15T00:09:59.562302021Z" level=info msg="StartContainer for \"4f185bd0ccb319966f04fbd428b2ff84d79cb2f74ef1f94c91690cfd61aa59a3\" returns successfully" May 15 00:09:59.609698 containerd[1724]: time="2025-05-15T00:09:59.609613373Z" level=info msg="shim disconnected" id=4f185bd0ccb319966f04fbd428b2ff84d79cb2f74ef1f94c91690cfd61aa59a3 namespace=k8s.io May 15 00:09:59.609698 containerd[1724]: time="2025-05-15T00:09:59.609691773Z" level=warning msg="cleaning up after shim disconnected" id=4f185bd0ccb319966f04fbd428b2ff84d79cb2f74ef1f94c91690cfd61aa59a3 namespace=k8s.io May 15 00:09:59.609698 containerd[1724]: time="2025-05-15T00:09:59.609700373Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:10:00.078975 kubelet[3402]: E0515 00:10:00.078912 3402 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 00:10:00.086472 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f185bd0ccb319966f04fbd428b2ff84d79cb2f74ef1f94c91690cfd61aa59a3-rootfs.mount: Deactivated successfully. May 15 00:10:00.455740 containerd[1724]: time="2025-05-15T00:10:00.455525587Z" level=info msg="CreateContainer within sandbox \"2b6a7f3426530806d28aedbd95f0ef05c6b8f071c9521bb2a17ad0f90b330bae\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 00:10:00.516753 containerd[1724]: time="2025-05-15T00:10:00.516710084Z" level=info msg="CreateContainer within sandbox \"2b6a7f3426530806d28aedbd95f0ef05c6b8f071c9521bb2a17ad0f90b330bae\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"879daf4f120d222bfbf68e9a9ac184ee19125a5f46045ce0a6a491b5bba61a8a\"" May 15 00:10:00.517538 containerd[1724]: time="2025-05-15T00:10:00.517470283Z" level=info msg="StartContainer for \"879daf4f120d222bfbf68e9a9ac184ee19125a5f46045ce0a6a491b5bba61a8a\"" May 15 00:10:00.547816 systemd[1]: Started cri-containerd-879daf4f120d222bfbf68e9a9ac184ee19125a5f46045ce0a6a491b5bba61a8a.scope - libcontainer container 879daf4f120d222bfbf68e9a9ac184ee19125a5f46045ce0a6a491b5bba61a8a. May 15 00:10:00.568128 systemd[1]: cri-containerd-879daf4f120d222bfbf68e9a9ac184ee19125a5f46045ce0a6a491b5bba61a8a.scope: Deactivated successfully. May 15 00:10:00.574524 containerd[1724]: time="2025-05-15T00:10:00.574478585Z" level=info msg="StartContainer for \"879daf4f120d222bfbf68e9a9ac184ee19125a5f46045ce0a6a491b5bba61a8a\" returns successfully" May 15 00:10:00.605818 containerd[1724]: time="2025-05-15T00:10:00.605671833Z" level=info msg="shim disconnected" id=879daf4f120d222bfbf68e9a9ac184ee19125a5f46045ce0a6a491b5bba61a8a namespace=k8s.io May 15 00:10:00.605818 containerd[1724]: time="2025-05-15T00:10:00.605733433Z" level=warning msg="cleaning up after shim disconnected" id=879daf4f120d222bfbf68e9a9ac184ee19125a5f46045ce0a6a491b5bba61a8a namespace=k8s.io May 15 00:10:00.605818 containerd[1724]: time="2025-05-15T00:10:00.605741353Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:10:01.086718 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-879daf4f120d222bfbf68e9a9ac184ee19125a5f46045ce0a6a491b5bba61a8a-rootfs.mount: Deactivated successfully. 
May 15 00:10:01.458926 containerd[1724]: time="2025-05-15T00:10:01.458777360Z" level=info msg="CreateContainer within sandbox \"2b6a7f3426530806d28aedbd95f0ef05c6b8f071c9521bb2a17ad0f90b330bae\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 00:10:01.540007 containerd[1724]: time="2025-05-15T00:10:01.539921797Z" level=info msg="CreateContainer within sandbox \"2b6a7f3426530806d28aedbd95f0ef05c6b8f071c9521bb2a17ad0f90b330bae\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a52126a8254f00dbda007a99b55fd989b50ba13ed8ed0e25e4f79c29a53245a5\"" May 15 00:10:01.541593 containerd[1724]: time="2025-05-15T00:10:01.541364275Z" level=info msg="StartContainer for \"a52126a8254f00dbda007a99b55fd989b50ba13ed8ed0e25e4f79c29a53245a5\"" May 15 00:10:01.573803 systemd[1]: Started cri-containerd-a52126a8254f00dbda007a99b55fd989b50ba13ed8ed0e25e4f79c29a53245a5.scope - libcontainer container a52126a8254f00dbda007a99b55fd989b50ba13ed8ed0e25e4f79c29a53245a5. May 15 00:10:01.605556 containerd[1724]: time="2025-05-15T00:10:01.605495449Z" level=info msg="StartContainer for \"a52126a8254f00dbda007a99b55fd989b50ba13ed8ed0e25e4f79c29a53245a5\" returns successfully" May 15 00:10:02.099801 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) May 15 00:10:02.479671 kubelet[3402]: I0515 00:10:02.479592 3402 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gzw56" podStartSLOduration=6.479577394 podStartE2EDuration="6.479577394s" podCreationTimestamp="2025-05-15 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:10:02.479317795 +0000 UTC m=+237.626865178" watchObservedRunningTime="2025-05-15 00:10:02.479577394 +0000 UTC m=+237.627124777" May 15 00:10:02.907082 systemd[1]: run-containerd-runc-k8s.io-a52126a8254f00dbda007a99b55fd989b50ba13ed8ed0e25e4f79c29a53245a5-runc.eP2k5r.mount: Deactivated successfully. 
May 15 00:10:02.951105 kubelet[3402]: E0515 00:10:02.951050 3402 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:42632->127.0.0.1:40933: write tcp 127.0.0.1:42632->127.0.0.1:40933: write: broken pipe May 15 00:10:04.804828 systemd-networkd[1343]: lxc_health: Link UP May 15 00:10:04.809500 systemd-networkd[1343]: lxc_health: Gained carrier May 15 00:10:04.953526 containerd[1724]: time="2025-05-15T00:10:04.953471872Z" level=info msg="StopPodSandbox for \"f42b7bec983efac972e8093f684621c486a1fc9a335d30f89d25f62f43277fff\"" May 15 00:10:04.953881 containerd[1724]: time="2025-05-15T00:10:04.953566592Z" level=info msg="TearDown network for sandbox \"f42b7bec983efac972e8093f684621c486a1fc9a335d30f89d25f62f43277fff\" successfully" May 15 00:10:04.953881 containerd[1724]: time="2025-05-15T00:10:04.953576992Z" level=info msg="StopPodSandbox for \"f42b7bec983efac972e8093f684621c486a1fc9a335d30f89d25f62f43277fff\" returns successfully" May 15 00:10:04.956620 containerd[1724]: time="2025-05-15T00:10:04.955068390Z" level=info msg="RemovePodSandbox for \"f42b7bec983efac972e8093f684621c486a1fc9a335d30f89d25f62f43277fff\"" May 15 00:10:04.956620 containerd[1724]: time="2025-05-15T00:10:04.955103870Z" level=info msg="Forcibly stopping sandbox \"f42b7bec983efac972e8093f684621c486a1fc9a335d30f89d25f62f43277fff\"" May 15 00:10:04.956620 containerd[1724]: time="2025-05-15T00:10:04.955175510Z" level=info msg="TearDown network for sandbox \"f42b7bec983efac972e8093f684621c486a1fc9a335d30f89d25f62f43277fff\" successfully" May 15 00:10:04.968424 containerd[1724]: time="2025-05-15T00:10:04.968371497Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f42b7bec983efac972e8093f684621c486a1fc9a335d30f89d25f62f43277fff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 15 00:10:04.968683 containerd[1724]: time="2025-05-15T00:10:04.968663257Z" level=info msg="RemovePodSandbox \"f42b7bec983efac972e8093f684621c486a1fc9a335d30f89d25f62f43277fff\" returns successfully" May 15 00:10:04.969268 containerd[1724]: time="2025-05-15T00:10:04.969238936Z" level=info msg="StopPodSandbox for \"95adc4660aface91dd56e7440fdc733412af2d9cae9c41438cb272f9c56c02a5\"" May 15 00:10:04.969457 containerd[1724]: time="2025-05-15T00:10:04.969429376Z" level=info msg="TearDown network for sandbox \"95adc4660aface91dd56e7440fdc733412af2d9cae9c41438cb272f9c56c02a5\" successfully" May 15 00:10:04.969605 containerd[1724]: time="2025-05-15T00:10:04.969526296Z" level=info msg="StopPodSandbox for \"95adc4660aface91dd56e7440fdc733412af2d9cae9c41438cb272f9c56c02a5\" returns successfully" May 15 00:10:04.969958 containerd[1724]: time="2025-05-15T00:10:04.969917495Z" level=info msg="RemovePodSandbox for \"95adc4660aface91dd56e7440fdc733412af2d9cae9c41438cb272f9c56c02a5\"" May 15 00:10:04.969958 containerd[1724]: time="2025-05-15T00:10:04.969949335Z" level=info msg="Forcibly stopping sandbox \"95adc4660aface91dd56e7440fdc733412af2d9cae9c41438cb272f9c56c02a5\"" May 15 00:10:04.970386 containerd[1724]: time="2025-05-15T00:10:04.969993135Z" level=info msg="TearDown network for sandbox \"95adc4660aface91dd56e7440fdc733412af2d9cae9c41438cb272f9c56c02a5\" successfully" May 15 00:10:04.985760 containerd[1724]: time="2025-05-15T00:10:04.985685279Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"95adc4660aface91dd56e7440fdc733412af2d9cae9c41438cb272f9c56c02a5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 00:10:04.985760 containerd[1724]: time="2025-05-15T00:10:04.985756519Z" level=info msg="RemovePodSandbox \"95adc4660aface91dd56e7440fdc733412af2d9cae9c41438cb272f9c56c02a5\" returns successfully" May 15 00:10:05.061567 systemd[1]: run-containerd-runc-k8s.io-a52126a8254f00dbda007a99b55fd989b50ba13ed8ed0e25e4f79c29a53245a5-runc.BzlAmw.mount: Deactivated successfully. May 15 00:10:06.018771 systemd-networkd[1343]: lxc_health: Gained IPv6LL May 15 00:10:11.588559 sshd[5476]: Connection closed by 10.200.16.10 port 50324 May 15 00:10:11.589177 sshd-session[5474]: pam_unix(sshd:session): session closed for user core May 15 00:10:11.592535 systemd[1]: sshd@37-10.200.20.16:22-10.200.16.10:50324.service: Deactivated successfully. May 15 00:10:11.594136 systemd[1]: session-40.scope: Deactivated successfully. May 15 00:10:11.594926 systemd-logind[1707]: Session 40 logged out. Waiting for processes to exit. May 15 00:10:11.595983 systemd-logind[1707]: Removed session 40.