Jun 20 18:40:50.367948 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jun 20 18:40:50.367969 kernel: Linux version 6.6.94-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Fri Jun 20 17:15:00 -00 2025
Jun 20 18:40:50.367977 kernel: KASLR enabled
Jun 20 18:40:50.367983 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jun 20 18:40:50.367990 kernel: printk: bootconsole [pl11] enabled
Jun 20 18:40:50.367995 kernel: efi: EFI v2.7 by EDK II
Jun 20 18:40:50.368002 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20f698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Jun 20 18:40:50.368008 kernel: random: crng init done
Jun 20 18:40:50.368014 kernel: secureboot: Secure boot disabled
Jun 20 18:40:50.368020 kernel: ACPI: Early table checksum verification disabled
Jun 20 18:40:50.368026 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jun 20 18:40:50.368031 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:40:50.368037 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:40:50.368045 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jun 20 18:40:50.368052 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:40:50.368058 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:40:50.368064 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:40:50.368072 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:40:50.368078 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:40:50.368084 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:40:50.368090 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jun 20 18:40:50.370117 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:40:50.370143 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jun 20 18:40:50.370150 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jun 20 18:40:50.370156 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jun 20 18:40:50.370162 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jun 20 18:40:50.370168 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jun 20 18:40:50.370175 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jun 20 18:40:50.370186 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jun 20 18:40:50.370192 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jun 20 18:40:50.370198 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jun 20 18:40:50.370204 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jun 20 18:40:50.370211 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jun 20 18:40:50.370217 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jun 20 18:40:50.370223 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jun 20 18:40:50.370229 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Jun 20 18:40:50.370235 kernel: Zone ranges:
Jun 20 18:40:50.370241 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jun 20 18:40:50.370247 kernel: DMA32 empty
Jun 20 18:40:50.370253 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jun 20 18:40:50.370264 kernel: Movable zone start for each node
Jun 20 18:40:50.370270 kernel: Early memory node ranges
Jun 20 18:40:50.370277 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jun 20 18:40:50.370283 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Jun 20 18:40:50.370290 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Jun 20 18:40:50.370297 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Jun 20 18:40:50.370304 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jun 20 18:40:50.370310 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jun 20 18:40:50.370317 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jun 20 18:40:50.370323 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jun 20 18:40:50.370329 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jun 20 18:40:50.370336 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jun 20 18:40:50.370343 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jun 20 18:40:50.370349 kernel: psci: probing for conduit method from ACPI.
Jun 20 18:40:50.370356 kernel: psci: PSCIv1.1 detected in firmware.
Jun 20 18:40:50.370362 kernel: psci: Using standard PSCI v0.2 function IDs
Jun 20 18:40:50.370368 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jun 20 18:40:50.370376 kernel: psci: SMC Calling Convention v1.4
Jun 20 18:40:50.370383 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jun 20 18:40:50.370389 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jun 20 18:40:50.370396 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jun 20 18:40:50.370402 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jun 20 18:40:50.370409 kernel: pcpu-alloc: [0] 0 [0] 1
Jun 20 18:40:50.370415 kernel: Detected PIPT I-cache on CPU0
Jun 20 18:40:50.370422 kernel: CPU features: detected: GIC system register CPU interface
Jun 20 18:40:50.370428 kernel: CPU features: detected: Hardware dirty bit management
Jun 20 18:40:50.370434 kernel: CPU features: detected: Spectre-BHB
Jun 20 18:40:50.370441 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jun 20 18:40:50.370449 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jun 20 18:40:50.370455 kernel: CPU features: detected: ARM erratum 1418040
Jun 20 18:40:50.370462 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jun 20 18:40:50.370468 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jun 20 18:40:50.370474 kernel: alternatives: applying boot alternatives
Jun 20 18:40:50.370482 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=8a081d870e25287d755f6d580d3ffafd8d53f08173c09683922f11f1a622a40e
Jun 20 18:40:50.370489 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 20 18:40:50.370496 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jun 20 18:40:50.370503 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jun 20 18:40:50.370509 kernel: Fallback order for Node 0: 0
Jun 20 18:40:50.370515 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jun 20 18:40:50.370523 kernel: Policy zone: Normal
Jun 20 18:40:50.370529 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 20 18:40:50.370536 kernel: software IO TLB: area num 2.
Jun 20 18:40:50.370542 kernel: software IO TLB: mapped [mem 0x0000000036540000-0x000000003a540000] (64MB)
Jun 20 18:40:50.370549 kernel: Memory: 3983592K/4194160K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38336K init, 897K bss, 210568K reserved, 0K cma-reserved)
Jun 20 18:40:50.370556 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jun 20 18:40:50.370562 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 20 18:40:50.370569 kernel: rcu: RCU event tracing is enabled.
Jun 20 18:40:50.370576 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jun 20 18:40:50.370582 kernel: Trampoline variant of Tasks RCU enabled.
Jun 20 18:40:50.370589 kernel: Tracing variant of Tasks RCU enabled.
Jun 20 18:40:50.370597 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 20 18:40:50.370603 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jun 20 18:40:50.370609 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jun 20 18:40:50.370616 kernel: GICv3: 960 SPIs implemented
Jun 20 18:40:50.370622 kernel: GICv3: 0 Extended SPIs implemented
Jun 20 18:40:50.370628 kernel: Root IRQ handler: gic_handle_irq
Jun 20 18:40:50.370635 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jun 20 18:40:50.370641 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jun 20 18:40:50.370647 kernel: ITS: No ITS available, not enabling LPIs
Jun 20 18:40:50.370654 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 20 18:40:50.370661 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jun 20 18:40:50.370667 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jun 20 18:40:50.370676 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jun 20 18:40:50.370682 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jun 20 18:40:50.370689 kernel: Console: colour dummy device 80x25
Jun 20 18:40:50.370695 kernel: printk: console [tty1] enabled
Jun 20 18:40:50.370702 kernel: ACPI: Core revision 20230628
Jun 20 18:40:50.370709 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jun 20 18:40:50.370716 kernel: pid_max: default: 32768 minimum: 301
Jun 20 18:40:50.370722 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jun 20 18:40:50.370729 kernel: landlock: Up and running.
Jun 20 18:40:50.370737 kernel: SELinux: Initializing.
Jun 20 18:40:50.370744 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 20 18:40:50.370751 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 20 18:40:50.370758 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 18:40:50.370764 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 18:40:50.370771 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Jun 20 18:40:50.370778 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Jun 20 18:40:50.370792 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jun 20 18:40:50.370799 kernel: rcu: Hierarchical SRCU implementation.
Jun 20 18:40:50.370806 kernel: rcu: Max phase no-delay instances is 400.
Jun 20 18:40:50.370813 kernel: Remapping and enabling EFI services.
Jun 20 18:40:50.370820 kernel: smp: Bringing up secondary CPUs ...
Jun 20 18:40:50.370828 kernel: Detected PIPT I-cache on CPU1
Jun 20 18:40:50.370835 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jun 20 18:40:50.370843 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jun 20 18:40:50.370850 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jun 20 18:40:50.370857 kernel: smp: Brought up 1 node, 2 CPUs
Jun 20 18:40:50.370865 kernel: SMP: Total of 2 processors activated.
Jun 20 18:40:50.370872 kernel: CPU features: detected: 32-bit EL0 Support
Jun 20 18:40:50.370879 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jun 20 18:40:50.370886 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jun 20 18:40:50.370894 kernel: CPU features: detected: CRC32 instructions
Jun 20 18:40:50.370901 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jun 20 18:40:50.370908 kernel: CPU features: detected: LSE atomic instructions
Jun 20 18:40:50.370915 kernel: CPU features: detected: Privileged Access Never
Jun 20 18:40:50.370922 kernel: CPU: All CPU(s) started at EL1
Jun 20 18:40:50.370930 kernel: alternatives: applying system-wide alternatives
Jun 20 18:40:50.370937 kernel: devtmpfs: initialized
Jun 20 18:40:50.370945 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 20 18:40:50.370952 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jun 20 18:40:50.370959 kernel: pinctrl core: initialized pinctrl subsystem
Jun 20 18:40:50.370966 kernel: SMBIOS 3.1.0 present.
Jun 20 18:40:50.370973 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jun 20 18:40:50.370980 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 20 18:40:50.370987 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jun 20 18:40:50.370996 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jun 20 18:40:50.371003 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jun 20 18:40:50.371010 kernel: audit: initializing netlink subsys (disabled)
Jun 20 18:40:50.371018 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jun 20 18:40:50.371025 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 20 18:40:50.371032 kernel: cpuidle: using governor menu
Jun 20 18:40:50.371039 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jun 20 18:40:50.371046 kernel: ASID allocator initialised with 32768 entries
Jun 20 18:40:50.371053 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 20 18:40:50.371061 kernel: Serial: AMBA PL011 UART driver
Jun 20 18:40:50.371068 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jun 20 18:40:50.371075 kernel: Modules: 0 pages in range for non-PLT usage
Jun 20 18:40:50.371082 kernel: Modules: 509264 pages in range for PLT usage
Jun 20 18:40:50.371089 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jun 20 18:40:50.371096 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jun 20 18:40:50.371114 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jun 20 18:40:50.371121 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jun 20 18:40:50.371128 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 20 18:40:50.371137 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jun 20 18:40:50.371145 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jun 20 18:40:50.371152 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jun 20 18:40:50.371159 kernel: ACPI: Added _OSI(Module Device)
Jun 20 18:40:50.371166 kernel: ACPI: Added _OSI(Processor Device)
Jun 20 18:40:50.371173 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 20 18:40:50.371180 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 20 18:40:50.371187 kernel: ACPI: Interpreter enabled
Jun 20 18:40:50.371194 kernel: ACPI: Using GIC for interrupt routing
Jun 20 18:40:50.371203 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jun 20 18:40:50.371210 kernel: printk: console [ttyAMA0] enabled
Jun 20 18:40:50.371216 kernel: printk: bootconsole [pl11] disabled
Jun 20 18:40:50.371223 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jun 20 18:40:50.371230 kernel: iommu: Default domain type: Translated
Jun 20 18:40:50.371237 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jun 20 18:40:50.371244 kernel: efivars: Registered efivars operations
Jun 20 18:40:50.371251 kernel: vgaarb: loaded
Jun 20 18:40:50.371258 kernel: clocksource: Switched to clocksource arch_sys_counter
Jun 20 18:40:50.371267 kernel: VFS: Disk quotas dquot_6.6.0
Jun 20 18:40:50.371274 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 20 18:40:50.371281 kernel: pnp: PnP ACPI init
Jun 20 18:40:50.371288 kernel: pnp: PnP ACPI: found 0 devices
Jun 20 18:40:50.371295 kernel: NET: Registered PF_INET protocol family
Jun 20 18:40:50.371302 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun 20 18:40:50.371309 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jun 20 18:40:50.371316 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 20 18:40:50.371323 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jun 20 18:40:50.371332 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jun 20 18:40:50.371339 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jun 20 18:40:50.371346 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 20 18:40:50.371353 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 20 18:40:50.371360 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 20 18:40:50.371367 kernel: PCI: CLS 0 bytes, default 64
Jun 20 18:40:50.371374 kernel: kvm [1]: HYP mode not available
Jun 20 18:40:50.371381 kernel: Initialise system trusted keyrings
Jun 20 18:40:50.371388 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jun 20 18:40:50.371396 kernel: Key type asymmetric registered
Jun 20 18:40:50.371403 kernel: Asymmetric key parser 'x509' registered
Jun 20 18:40:50.371410 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jun 20 18:40:50.371417 kernel: io scheduler mq-deadline registered
Jun 20 18:40:50.371424 kernel: io scheduler kyber registered
Jun 20 18:40:50.371431 kernel: io scheduler bfq registered
Jun 20 18:40:50.371438 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 20 18:40:50.371445 kernel: thunder_xcv, ver 1.0
Jun 20 18:40:50.371452 kernel: thunder_bgx, ver 1.0
Jun 20 18:40:50.371460 kernel: nicpf, ver 1.0
Jun 20 18:40:50.371467 kernel: nicvf, ver 1.0
Jun 20 18:40:50.371619 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jun 20 18:40:50.371688 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-06-20T18:40:49 UTC (1750444849)
Jun 20 18:40:50.371698 kernel: efifb: probing for efifb
Jun 20 18:40:50.371705 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jun 20 18:40:50.371712 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jun 20 18:40:50.371719 kernel: efifb: scrolling: redraw
Jun 20 18:40:50.371728 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jun 20 18:40:50.371735 kernel: Console: switching to colour frame buffer device 128x48
Jun 20 18:40:50.371742 kernel: fb0: EFI VGA frame buffer device
Jun 20 18:40:50.371749 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jun 20 18:40:50.371756 kernel: hid: raw HID events driver (C) Jiri Kosina
Jun 20 18:40:50.371763 kernel: No ACPI PMU IRQ for CPU0
Jun 20 18:40:50.371770 kernel: No ACPI PMU IRQ for CPU1
Jun 20 18:40:50.371777 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Jun 20 18:40:50.371784 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jun 20 18:40:50.371792 kernel: watchdog: Hard watchdog permanently disabled
Jun 20 18:40:50.371799 kernel: NET: Registered PF_INET6 protocol family
Jun 20 18:40:50.371806 kernel: Segment Routing with IPv6
Jun 20 18:40:50.371813 kernel: In-situ OAM (IOAM) with IPv6
Jun 20 18:40:50.371820 kernel: NET: Registered PF_PACKET protocol family
Jun 20 18:40:50.371827 kernel: Key type dns_resolver registered
Jun 20 18:40:50.371834 kernel: registered taskstats version 1
Jun 20 18:40:50.371841 kernel: Loading compiled-in X.509 certificates
Jun 20 18:40:50.371848 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.94-flatcar: 8506faa781fda315da94c2790de0e5c860361c93'
Jun 20 18:40:50.371857 kernel: Key type .fscrypt registered
Jun 20 18:40:50.371863 kernel: Key type fscrypt-provisioning registered
Jun 20 18:40:50.371870 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 20 18:40:50.371877 kernel: ima: Allocated hash algorithm: sha1
Jun 20 18:40:50.371884 kernel: ima: No architecture policies found
Jun 20 18:40:50.371891 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jun 20 18:40:50.371898 kernel: clk: Disabling unused clocks
Jun 20 18:40:50.371905 kernel: Freeing unused kernel memory: 38336K
Jun 20 18:40:50.371912 kernel: Run /init as init process
Jun 20 18:40:50.371921 kernel: with arguments:
Jun 20 18:40:50.371928 kernel: /init
Jun 20 18:40:50.371934 kernel: with environment:
Jun 20 18:40:50.371941 kernel: HOME=/
Jun 20 18:40:50.371948 kernel: TERM=linux
Jun 20 18:40:50.371955 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 20 18:40:50.371964 systemd[1]: Successfully made /usr/ read-only.
Jun 20 18:40:50.371974 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 20 18:40:50.371983 systemd[1]: Detected virtualization microsoft.
Jun 20 18:40:50.371990 systemd[1]: Detected architecture arm64.
Jun 20 18:40:50.371998 systemd[1]: Running in initrd.
Jun 20 18:40:50.372005 systemd[1]: No hostname configured, using default hostname.
Jun 20 18:40:50.372013 systemd[1]: Hostname set to .
Jun 20 18:40:50.372020 systemd[1]: Initializing machine ID from random generator.
Jun 20 18:40:50.372028 systemd[1]: Queued start job for default target initrd.target.
Jun 20 18:40:50.372035 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 18:40:50.372045 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 18:40:50.372053 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jun 20 18:40:50.372061 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 20 18:40:50.372068 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jun 20 18:40:50.372077 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jun 20 18:40:50.372085 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jun 20 18:40:50.372094 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jun 20 18:40:50.374158 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 18:40:50.374169 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 20 18:40:50.374177 systemd[1]: Reached target paths.target - Path Units.
Jun 20 18:40:50.374185 systemd[1]: Reached target slices.target - Slice Units.
Jun 20 18:40:50.374193 systemd[1]: Reached target swap.target - Swaps.
Jun 20 18:40:50.374200 systemd[1]: Reached target timers.target - Timer Units.
Jun 20 18:40:50.374208 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jun 20 18:40:50.374216 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 20 18:40:50.374230 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 20 18:40:50.374238 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jun 20 18:40:50.374246 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 18:40:50.374253 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 18:40:50.374261 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 18:40:50.374269 systemd[1]: Reached target sockets.target - Socket Units.
Jun 20 18:40:50.374276 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jun 20 18:40:50.374284 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 20 18:40:50.374294 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jun 20 18:40:50.374302 systemd[1]: Starting systemd-fsck-usr.service...
Jun 20 18:40:50.374309 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 20 18:40:50.374344 systemd-journald[218]: Collecting audit messages is disabled.
Jun 20 18:40:50.374365 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 20 18:40:50.374374 systemd-journald[218]: Journal started
Jun 20 18:40:50.374392 systemd-journald[218]: Runtime Journal (/run/log/journal/cb92e1c0c54d4ec8a47c17212a06c5e6) is 8M, max 78.5M, 70.5M free.
Jun 20 18:40:50.383114 systemd-modules-load[220]: Inserted module 'overlay'
Jun 20 18:40:50.393392 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 18:40:50.413116 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jun 20 18:40:50.431963 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 18:40:50.432016 kernel: Bridge firewalling registered
Jun 20 18:40:50.431305 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jun 20 18:40:50.438194 systemd-modules-load[220]: Inserted module 'br_netfilter'
Jun 20 18:40:50.440355 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 18:40:50.459983 systemd[1]: Finished systemd-fsck-usr.service.
Jun 20 18:40:50.477104 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 18:40:50.486115 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 18:40:50.506372 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 20 18:40:50.524301 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 18:40:50.535271 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 20 18:40:50.563402 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 20 18:40:50.581004 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 18:40:50.597126 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 18:40:50.605537 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 20 18:40:50.621560 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 18:40:50.650374 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jun 20 18:40:50.665164 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 20 18:40:50.685509 dracut-cmdline[254]: dracut-dracut-053
Jun 20 18:40:50.699955 dracut-cmdline[254]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=8a081d870e25287d755f6d580d3ffafd8d53f08173c09683922f11f1a622a40e
Jun 20 18:40:50.687734 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 20 18:40:50.741843 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 18:40:50.778578 systemd-resolved[258]: Positive Trust Anchors:
Jun 20 18:40:50.778593 systemd-resolved[258]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 20 18:40:50.778624 systemd-resolved[258]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jun 20 18:40:50.785481 systemd-resolved[258]: Defaulting to hostname 'linux'.
Jun 20 18:40:50.786385 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 20 18:40:50.793393 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 20 18:40:50.870122 kernel: SCSI subsystem initialized
Jun 20 18:40:50.878113 kernel: Loading iSCSI transport class v2.0-870.
Jun 20 18:40:50.890136 kernel: iscsi: registered transport (tcp)
Jun 20 18:40:50.906743 kernel: iscsi: registered transport (qla4xxx)
Jun 20 18:40:50.906803 kernel: QLogic iSCSI HBA Driver
Jun 20 18:40:50.945225 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jun 20 18:40:50.965265 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jun 20 18:40:51.001711 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jun 20 18:40:51.001772 kernel: device-mapper: uevent: version 1.0.3
Jun 20 18:40:51.008318 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jun 20 18:40:51.058131 kernel: raid6: neonx8 gen() 15758 MB/s
Jun 20 18:40:51.078113 kernel: raid6: neonx4 gen() 15830 MB/s
Jun 20 18:40:51.098110 kernel: raid6: neonx2 gen() 13246 MB/s
Jun 20 18:40:51.119141 kernel: raid6: neonx1 gen() 10533 MB/s
Jun 20 18:40:51.139126 kernel: raid6: int64x8 gen() 6788 MB/s
Jun 20 18:40:51.159111 kernel: raid6: int64x4 gen() 7359 MB/s
Jun 20 18:40:51.180115 kernel: raid6: int64x2 gen() 6104 MB/s
Jun 20 18:40:51.203819 kernel: raid6: int64x1 gen() 5061 MB/s
Jun 20 18:40:51.203834 kernel: raid6: using algorithm neonx4 gen() 15830 MB/s
Jun 20 18:40:51.227729 kernel: raid6: .... xor() 12403 MB/s, rmw enabled
Jun 20 18:40:51.227740 kernel: raid6: using neon recovery algorithm
Jun 20 18:40:51.237116 kernel: xor: measuring software checksum speed
Jun 20 18:40:51.244137 kernel: 8regs : 19608 MB/sec
Jun 20 18:40:51.244159 kernel: 32regs : 21670 MB/sec
Jun 20 18:40:51.247821 kernel: arm64_neon : 27823 MB/sec
Jun 20 18:40:51.252296 kernel: xor: using function: arm64_neon (27823 MB/sec)
Jun 20 18:40:51.302118 kernel: Btrfs loaded, zoned=no, fsverity=no
Jun 20 18:40:51.313741 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jun 20 18:40:51.331240 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 18:40:51.355949 systemd-udevd[440]: Using default interface naming scheme 'v255'.
Jun 20 18:40:51.362466 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 18:40:51.382260 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jun 20 18:40:51.395118 dracut-pre-trigger[452]: rd.md=0: removing MD RAID activation
Jun 20 18:40:51.421865 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 20 18:40:51.437405 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 20 18:40:51.480168 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 18:40:51.501324 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jun 20 18:40:51.526782 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jun 20 18:40:51.542835 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 20 18:40:51.558151 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 18:40:51.573727 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 20 18:40:51.592352 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jun 20 18:40:51.612760 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 20 18:40:51.628906 kernel: hv_vmbus: Vmbus version:5.3
Jun 20 18:40:51.612917 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 18:40:51.638034 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 20 18:40:51.685248 kernel: hv_vmbus: registering driver hyperv_keyboard
Jun 20 18:40:51.685274 kernel: pps_core: LinuxPPS API ver. 1 registered
Jun 20 18:40:51.685284 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jun 20 18:40:51.685293 kernel: hv_vmbus: registering driver hv_netvsc
Jun 20 18:40:51.685302 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jun 20 18:40:51.675467 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 18:40:51.713303 kernel: hv_vmbus: registering driver hv_storvsc
Jun 20 18:40:51.675706 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 18:40:51.706913 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:40:51.743793 kernel: hv_vmbus: registering driver hid_hyperv Jun 20 18:40:51.743816 kernel: PTP clock support registered Jun 20 18:40:51.743827 kernel: scsi host1: storvsc_host_t Jun 20 18:40:51.743976 kernel: scsi host0: storvsc_host_t Jun 20 18:40:51.743998 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jun 20 18:40:51.740519 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:40:51.785205 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jun 20 18:40:51.785250 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jun 20 18:40:51.777378 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 20 18:40:51.797281 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:40:51.808784 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jun 20 18:40:51.823135 kernel: hv_netvsc 000d3a6d-ba6a-000d-3a6d-ba6a000d3a6d eth0: VF slot 1 added Jun 20 18:40:51.826655 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 18:40:51.843301 kernel: hv_utils: Registering HyperV Utility Driver Jun 20 18:40:51.843324 kernel: hv_vmbus: registering driver hv_utils Jun 20 18:40:51.826931 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:40:51.868513 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jun 20 18:40:51.868682 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 20 18:40:51.863862 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jun 20 18:40:51.439838 kernel: hv_utils: Heartbeat IC version 3.0 Jun 20 18:40:51.446374 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jun 20 18:40:51.446515 kernel: hv_utils: TimeSync IC version 4.0 Jun 20 18:40:51.446527 kernel: hv_utils: Shutdown IC version 3.2 Jun 20 18:40:51.446535 kernel: hv_vmbus: registering driver hv_pci Jun 20 18:40:51.446542 systemd-journald[218]: Time jumped backwards, rotating. Jun 20 18:40:51.446579 kernel: hv_pci e29d49d2-c393-48ba-864f-b6a73cf0b52c: PCI VMBus probing: Using version 0x10004 Jun 20 18:40:51.436792 systemd-resolved[258]: Clock change detected. Flushing caches. Jun 20 18:40:51.585334 kernel: hv_pci e29d49d2-c393-48ba-864f-b6a73cf0b52c: PCI host bridge to bus c393:00 Jun 20 18:40:51.585529 kernel: pci_bus c393:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jun 20 18:40:51.566279 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:40:51.600830 kernel: pci_bus c393:00: No busn resource found for root bus, will use [bus 00-ff] Jun 20 18:40:51.602727 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:40:51.621400 kernel: pci c393:00:02.0: [15b3:1018] type 00 class 0x020000 Jun 20 18:40:51.628766 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jun 20 18:40:51.637497 kernel: pci c393:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jun 20 18:40:51.637541 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jun 20 18:40:51.638955 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jun 20 18:40:51.680280 kernel: pci c393:00:02.0: enabling Extended Tags Jun 20 18:40:51.680331 kernel: sd 0:0:0:0: [sda] Write Protect is off Jun 20 18:40:51.680535 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jun 20 18:40:51.680634 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jun 20 18:40:51.680728 kernel: pci c393:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at c393:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jun 20 18:40:51.680760 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 18:40:51.695829 kernel: pci_bus c393:00: busn_res: [bus 00-ff] end is updated to 00 Jun 20 18:40:51.696025 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jun 20 18:40:51.707201 kernel: pci c393:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jun 20 18:40:51.725043 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 18:40:51.770755 kernel: mlx5_core c393:00:02.0: enabling device (0000 -> 0002) Jun 20 18:40:51.777768 kernel: mlx5_core c393:00:02.0: firmware version: 16.31.2424 Jun 20 18:40:52.059427 kernel: hv_netvsc 000d3a6d-ba6a-000d-3a6d-ba6a000d3a6d eth0: VF registering: eth1 Jun 20 18:40:52.059606 kernel: mlx5_core c393:00:02.0 eth1: joined to eth0 Jun 20 18:40:52.072825 kernel: mlx5_core c393:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jun 20 18:40:52.082764 kernel: mlx5_core c393:00:02.0 enP50067s1: renamed from eth1 Jun 20 18:40:52.163755 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jun 20 18:40:52.259811 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. 
Jun 20 18:40:52.289741 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (498) Jun 20 18:40:52.289786 kernel: BTRFS: device fsid c1b254aa-fc5c-4606-9f4d-9a81b9ab3a0f devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (488) Jun 20 18:40:52.293336 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jun 20 18:40:52.325548 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jun 20 18:40:52.333230 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jun 20 18:40:52.363004 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 20 18:40:52.389768 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 18:40:53.411776 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 18:40:53.412819 disk-uuid[608]: The operation has completed successfully. Jun 20 18:40:53.468166 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 20 18:40:53.468253 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 20 18:40:53.521923 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 20 18:40:53.535740 sh[694]: Success Jun 20 18:40:53.567872 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jun 20 18:40:53.750805 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 20 18:40:53.762886 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 20 18:40:53.781937 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jun 20 18:40:53.808935 kernel: BTRFS info (device dm-0): first mount of filesystem c1b254aa-fc5c-4606-9f4d-9a81b9ab3a0f Jun 20 18:40:53.808983 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jun 20 18:40:53.815819 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 20 18:40:53.820969 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 20 18:40:53.825229 kernel: BTRFS info (device dm-0): using free space tree Jun 20 18:40:54.124804 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 20 18:40:54.130759 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 20 18:40:54.146980 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 20 18:40:54.159921 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 20 18:40:54.198777 kernel: BTRFS info (device sda6): first mount of filesystem 068a5250-b7b4-4dc6-8e6c-a1610cec1941 Jun 20 18:40:54.199952 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 20 18:40:54.199968 kernel: BTRFS info (device sda6): using free space tree Jun 20 18:40:54.223786 kernel: BTRFS info (device sda6): auto enabling async discard Jun 20 18:40:54.234915 kernel: BTRFS info (device sda6): last unmount of filesystem 068a5250-b7b4-4dc6-8e6c-a1610cec1941 Jun 20 18:40:54.240209 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 20 18:40:54.255608 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 20 18:40:54.286820 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 18:40:54.304881 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jun 20 18:40:54.349958 systemd-networkd[875]: lo: Link UP Jun 20 18:40:54.349972 systemd-networkd[875]: lo: Gained carrier Jun 20 18:40:54.352601 systemd-networkd[875]: Enumeration completed Jun 20 18:40:54.353607 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 18:40:54.360173 systemd[1]: Reached target network.target - Network. Jun 20 18:40:54.364348 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:40:54.364352 systemd-networkd[875]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 18:40:54.449777 kernel: mlx5_core c393:00:02.0 enP50067s1: Link up Jun 20 18:40:54.536779 kernel: hv_netvsc 000d3a6d-ba6a-000d-3a6d-ba6a000d3a6d eth0: Data path switched to VF: enP50067s1 Jun 20 18:40:54.537334 systemd-networkd[875]: enP50067s1: Link UP Jun 20 18:40:54.537566 systemd-networkd[875]: eth0: Link UP Jun 20 18:40:54.537980 systemd-networkd[875]: eth0: Gained carrier Jun 20 18:40:54.537990 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:40:54.563352 systemd-networkd[875]: enP50067s1: Gained carrier Jun 20 18:40:54.573788 systemd-networkd[875]: eth0: DHCPv4 address 10.200.20.15/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jun 20 18:40:54.942711 ignition[841]: Ignition 2.20.0 Jun 20 18:40:54.942722 ignition[841]: Stage: fetch-offline Jun 20 18:40:54.946066 ignition[841]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:40:54.950478 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jun 20 18:40:54.946082 ignition[841]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 18:40:54.946194 ignition[841]: parsed url from cmdline: "" Jun 20 18:40:54.946197 ignition[841]: no config URL provided Jun 20 18:40:54.946202 ignition[841]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 18:40:54.946210 ignition[841]: no config at "/usr/lib/ignition/user.ign" Jun 20 18:40:54.980049 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 20 18:40:54.946215 ignition[841]: failed to fetch config: resource requires networking Jun 20 18:40:54.946403 ignition[841]: Ignition finished successfully Jun 20 18:40:55.013210 ignition[884]: Ignition 2.20.0 Jun 20 18:40:55.013217 ignition[884]: Stage: fetch Jun 20 18:40:55.013953 ignition[884]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:40:55.013966 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 18:40:55.014078 ignition[884]: parsed url from cmdline: "" Jun 20 18:40:55.014082 ignition[884]: no config URL provided Jun 20 18:40:55.014087 ignition[884]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 18:40:55.014095 ignition[884]: no config at "/usr/lib/ignition/user.ign" Jun 20 18:40:55.014124 ignition[884]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jun 20 18:40:55.098788 ignition[884]: GET result: OK Jun 20 18:40:55.098902 ignition[884]: config has been read from IMDS userdata Jun 20 18:40:55.098949 ignition[884]: parsing config with SHA512: 8d317f3a4bf8e0db6fc52a03441cb7ca5f20703c8b90b6c7fccf9758ad98b60a74fc17fbf5904bd6e30db4031b0d52872b651c1f6a2804d05cfc56de51e0111c Jun 20 18:40:55.104138 unknown[884]: fetched base config from "system" Jun 20 18:40:55.104692 ignition[884]: fetch: fetch complete Jun 20 18:40:55.104145 unknown[884]: fetched base config from "system" Jun 20 18:40:55.104697 ignition[884]: fetch: fetch passed Jun 20 18:40:55.104151 unknown[884]: 
fetched user config from "azure" Jun 20 18:40:55.104768 ignition[884]: Ignition finished successfully Jun 20 18:40:55.109205 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 20 18:40:55.127995 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 20 18:40:55.160903 ignition[891]: Ignition 2.20.0 Jun 20 18:40:55.160913 ignition[891]: Stage: kargs Jun 20 18:40:55.165814 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 20 18:40:55.161084 ignition[891]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:40:55.161093 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 18:40:55.186969 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 20 18:40:55.161993 ignition[891]: kargs: kargs passed Jun 20 18:40:55.162039 ignition[891]: Ignition finished successfully Jun 20 18:40:55.210153 ignition[897]: Ignition 2.20.0 Jun 20 18:40:55.210159 ignition[897]: Stage: disks Jun 20 18:40:55.212528 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 20 18:40:55.210329 ignition[897]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:40:55.221434 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 20 18:40:55.210338 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 18:40:55.230292 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 20 18:40:55.211266 ignition[897]: disks: disks passed Jun 20 18:40:55.242890 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 18:40:55.211308 ignition[897]: Ignition finished successfully Jun 20 18:40:55.253518 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 18:40:55.265619 systemd[1]: Reached target basic.target - Basic System. Jun 20 18:40:55.293009 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jun 20 18:40:55.358917 systemd-fsck[905]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jun 20 18:40:55.366184 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 20 18:40:55.383929 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 20 18:40:55.439772 kernel: EXT4-fs (sda9): mounted filesystem f172a629-efc5-4850-a631-f3c62b46134c r/w with ordered data mode. Quota mode: none. Jun 20 18:40:55.440775 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 20 18:40:55.446544 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 20 18:40:55.486815 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 18:40:55.498277 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 20 18:40:55.513806 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 20 18:40:55.541785 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (916) Jun 20 18:40:55.541820 kernel: BTRFS info (device sda6): first mount of filesystem 068a5250-b7b4-4dc6-8e6c-a1610cec1941 Jun 20 18:40:55.528319 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 20 18:40:55.573418 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 20 18:40:55.573442 kernel: BTRFS info (device sda6): using free space tree Jun 20 18:40:55.528369 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 18:40:55.549863 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 20 18:40:55.595766 kernel: BTRFS info (device sda6): auto enabling async discard Jun 20 18:40:55.599987 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 20 18:40:55.614714 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 20 18:40:55.934199 coreos-metadata[918]: Jun 20 18:40:55.934 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 20 18:40:55.944435 coreos-metadata[918]: Jun 20 18:40:55.944 INFO Fetch successful Jun 20 18:40:55.949916 coreos-metadata[918]: Jun 20 18:40:55.949 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jun 20 18:40:55.963682 coreos-metadata[918]: Jun 20 18:40:55.963 INFO Fetch successful Jun 20 18:40:55.975936 coreos-metadata[918]: Jun 20 18:40:55.975 INFO wrote hostname ci-4230.2.0-a-c483281568 to /sysroot/etc/hostname Jun 20 18:40:55.985216 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 20 18:40:56.000881 systemd-networkd[875]: enP50067s1: Gained IPv6LL Jun 20 18:40:56.095380 initrd-setup-root[947]: cut: /sysroot/etc/passwd: No such file or directory Jun 20 18:40:56.258025 initrd-setup-root[954]: cut: /sysroot/etc/group: No such file or directory Jun 20 18:40:56.267826 initrd-setup-root[961]: cut: /sysroot/etc/shadow: No such file or directory Jun 20 18:40:56.276928 initrd-setup-root[968]: cut: /sysroot/etc/gshadow: No such file or directory Jun 20 18:40:56.447873 systemd-networkd[875]: eth0: Gained IPv6LL Jun 20 18:40:56.895971 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 20 18:40:56.910968 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 20 18:40:56.925994 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 20 18:40:56.943052 kernel: BTRFS info (device sda6): last unmount of filesystem 068a5250-b7b4-4dc6-8e6c-a1610cec1941 Jun 20 18:40:56.943294 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 20 18:40:56.962648 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jun 20 18:40:56.973734 ignition[1041]: INFO : Ignition 2.20.0 Jun 20 18:40:56.973734 ignition[1041]: INFO : Stage: mount Jun 20 18:40:56.973734 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 18:40:56.973734 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 18:40:56.973734 ignition[1041]: INFO : mount: mount passed Jun 20 18:40:56.973734 ignition[1041]: INFO : Ignition finished successfully Jun 20 18:40:56.975543 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 20 18:40:56.994681 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 20 18:40:57.013017 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 18:40:57.062443 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1052) Jun 20 18:40:57.062526 kernel: BTRFS info (device sda6): first mount of filesystem 068a5250-b7b4-4dc6-8e6c-a1610cec1941 Jun 20 18:40:57.073724 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 20 18:40:57.073774 kernel: BTRFS info (device sda6): using free space tree Jun 20 18:40:57.080778 kernel: BTRFS info (device sda6): auto enabling async discard Jun 20 18:40:57.082290 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 20 18:40:57.107786 ignition[1070]: INFO : Ignition 2.20.0 Jun 20 18:40:57.107786 ignition[1070]: INFO : Stage: files Jun 20 18:40:57.116485 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 18:40:57.116485 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 18:40:57.116485 ignition[1070]: DEBUG : files: compiled without relabeling support, skipping Jun 20 18:40:57.136467 ignition[1070]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 20 18:40:57.136467 ignition[1070]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 20 18:40:57.178977 ignition[1070]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 20 18:40:57.187055 ignition[1070]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 20 18:40:57.187055 ignition[1070]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 20 18:40:57.179373 unknown[1070]: wrote ssh authorized keys file for user: core Jun 20 18:40:57.207770 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jun 20 18:40:57.207770 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jun 20 18:40:57.249120 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 20 18:40:57.357928 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jun 20 18:40:57.357928 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 20 18:40:57.357928 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jun 20 18:40:57.724779 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 20 18:40:57.795521 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 20 18:40:57.795521 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 20 18:40:57.816149 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 20 18:40:57.816149 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 20 18:40:57.816149 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 20 18:40:57.816149 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 18:40:57.816149 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 18:40:57.816149 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 18:40:57.816149 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 18:40:57.816149 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 18:40:57.816149 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 18:40:57.816149 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: 
op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jun 20 18:40:57.816149 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jun 20 18:40:57.816149 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jun 20 18:40:57.816149 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jun 20 18:40:58.568709 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 20 18:40:58.781128 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jun 20 18:40:58.781128 ignition[1070]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jun 20 18:40:58.852722 ignition[1070]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 18:40:58.863868 ignition[1070]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 18:40:58.863868 ignition[1070]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jun 20 18:40:58.863868 ignition[1070]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jun 20 18:40:58.863868 ignition[1070]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jun 20 18:40:58.863868 ignition[1070]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 20 18:40:58.863868 
ignition[1070]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 20 18:40:58.863868 ignition[1070]: INFO : files: files passed Jun 20 18:40:58.863868 ignition[1070]: INFO : Ignition finished successfully Jun 20 18:40:58.874742 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 20 18:40:58.915013 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 20 18:40:58.933948 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 20 18:40:58.978521 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 18:40:58.978521 initrd-setup-root-after-ignition[1097]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 20 18:40:58.954380 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 20 18:40:59.014669 initrd-setup-root-after-ignition[1101]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 18:40:58.954464 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 20 18:40:58.979732 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 18:40:58.994372 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 20 18:40:59.029962 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 20 18:40:59.065298 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 20 18:40:59.065395 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 20 18:40:59.072846 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 20 18:40:59.079069 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Jun 20 18:40:59.091010 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 20 18:40:59.109953 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 20 18:40:59.139464 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 18:40:59.163901 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 20 18:40:59.183108 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 20 18:40:59.185306 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 20 18:40:59.196672 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 20 18:40:59.208720 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 18:40:59.221715 systemd[1]: Stopped target timers.target - Timer Units. Jun 20 18:40:59.233087 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 20 18:40:59.233162 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 18:40:59.249524 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 20 18:40:59.255536 systemd[1]: Stopped target basic.target - Basic System. Jun 20 18:40:59.267460 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 20 18:40:59.279448 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 18:40:59.291201 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 20 18:40:59.303142 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 20 18:40:59.315281 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 18:40:59.328058 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 20 18:40:59.339309 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Jun 20 18:40:59.351699 systemd[1]: Stopped target swap.target - Swaps. Jun 20 18:40:59.361927 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 20 18:40:59.362010 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 20 18:40:59.377186 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 20 18:40:59.383569 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 18:40:59.396087 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 20 18:40:59.401806 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 18:40:59.409490 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 20 18:40:59.409561 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 20 18:40:59.428763 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 20 18:40:59.428816 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 18:40:59.436359 systemd[1]: ignition-files.service: Deactivated successfully. Jun 20 18:40:59.436404 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 20 18:40:59.447668 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 20 18:40:59.520724 ignition[1123]: INFO : Ignition 2.20.0 Jun 20 18:40:59.520724 ignition[1123]: INFO : Stage: umount Jun 20 18:40:59.520724 ignition[1123]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 18:40:59.520724 ignition[1123]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 18:40:59.520724 ignition[1123]: INFO : umount: umount passed Jun 20 18:40:59.520724 ignition[1123]: INFO : Ignition finished successfully Jun 20 18:40:59.447732 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 20 18:40:59.489895 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jun 20 18:40:59.506614 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 20 18:40:59.506700 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 18:40:59.534872 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 20 18:40:59.546809 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 20 18:40:59.546884 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 18:40:59.553709 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 20 18:40:59.553772 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 18:40:59.572812 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 20 18:40:59.572900 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 20 18:40:59.586259 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 20 18:40:59.586371 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 20 18:40:59.597365 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 20 18:40:59.597435 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 20 18:40:59.612945 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 20 18:40:59.612993 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 20 18:40:59.625341 systemd[1]: Stopped target network.target - Network. Jun 20 18:40:59.635592 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 20 18:40:59.635663 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 18:40:59.648064 systemd[1]: Stopped target paths.target - Path Units. Jun 20 18:40:59.653466 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 20 18:40:59.659606 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 18:40:59.673004 systemd[1]: Stopped target slices.target - Slice Units. 
Jun 20 18:40:59.684287 systemd[1]: Stopped target sockets.target - Socket Units. Jun 20 18:40:59.695338 systemd[1]: iscsid.socket: Deactivated successfully. Jun 20 18:40:59.695418 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 18:40:59.708044 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 20 18:40:59.708090 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 18:40:59.719707 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 20 18:40:59.719772 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 20 18:40:59.731466 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 20 18:40:59.731506 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 20 18:40:59.743233 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 20 18:40:59.754548 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 20 18:40:59.774509 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 20 18:40:59.775090 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 20 18:40:59.775181 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 20 18:40:59.792850 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jun 20 18:40:59.793077 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 20 18:40:59.793300 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 20 18:40:59.811606 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jun 20 18:40:59.812509 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 20 18:40:59.812568 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Jun 20 18:41:00.053868 kernel: hv_netvsc 000d3a6d-ba6a-000d-3a6d-ba6a000d3a6d eth0: Data path switched from VF: enP50067s1 Jun 20 18:40:59.845909 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 20 18:40:59.856253 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 20 18:40:59.856336 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 18:40:59.869472 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 18:40:59.869536 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:40:59.887262 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 20 18:40:59.887325 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 20 18:40:59.894125 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 20 18:40:59.894176 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 18:40:59.906985 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 18:40:59.920249 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 20 18:40:59.920326 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 20 18:40:59.944357 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 20 18:40:59.944495 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 18:40:59.956200 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 20 18:40:59.956280 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 20 18:40:59.968810 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 20 18:40:59.968860 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 18:40:59.981011 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Jun 20 18:40:59.981081 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 20 18:41:00.000523 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 20 18:41:00.000585 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 20 18:41:00.018358 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 20 18:41:00.018435 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 18:41:00.074004 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 20 18:41:00.089175 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 20 18:41:00.089250 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 18:41:00.110800 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 18:41:00.111006 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:41:00.130143 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 20 18:41:00.130209 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 20 18:41:00.130601 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 20 18:41:00.131444 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 20 18:41:00.205915 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 20 18:41:00.206064 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 20 18:41:00.594818 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 20 18:41:00.594933 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 20 18:41:00.607258 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 20 18:41:00.618344 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Jun 20 18:41:00.618425 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 20 18:41:00.643999 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 20 18:41:00.669547 systemd[1]: Switching root. Jun 20 18:41:00.724307 systemd-journald[218]: Journal stopped Jun 20 18:41:05.596766 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). Jun 20 18:41:05.596790 kernel: SELinux: policy capability network_peer_controls=1 Jun 20 18:41:05.596801 kernel: SELinux: policy capability open_perms=1 Jun 20 18:41:05.596811 kernel: SELinux: policy capability extended_socket_class=1 Jun 20 18:41:05.596818 kernel: SELinux: policy capability always_check_network=0 Jun 20 18:41:05.596826 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 20 18:41:05.596834 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 20 18:41:05.596842 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 20 18:41:05.596849 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 20 18:41:05.596857 kernel: audit: type=1403 audit(1750444862.025:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 20 18:41:05.596867 systemd[1]: Successfully loaded SELinux policy in 154.605ms. Jun 20 18:41:05.596876 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.700ms. Jun 20 18:41:05.596886 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 18:41:05.596895 systemd[1]: Detected virtualization microsoft. Jun 20 18:41:05.596904 systemd[1]: Detected architecture arm64. Jun 20 18:41:05.596914 systemd[1]: Detected first boot. Jun 20 18:41:05.596923 systemd[1]: Hostname set to . 
Jun 20 18:41:05.596931 systemd[1]: Initializing machine ID from random generator. Jun 20 18:41:05.596940 zram_generator::config[1166]: No configuration found. Jun 20 18:41:05.596949 kernel: NET: Registered PF_VSOCK protocol family Jun 20 18:41:05.596959 systemd[1]: Populated /etc with preset unit settings. Jun 20 18:41:05.596970 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jun 20 18:41:05.596979 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 20 18:41:05.596987 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 20 18:41:05.596996 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 20 18:41:05.597005 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 20 18:41:05.597014 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 20 18:41:05.597023 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 20 18:41:05.597031 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 20 18:41:05.597042 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 20 18:41:05.597050 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 20 18:41:05.597059 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 20 18:41:05.597068 systemd[1]: Created slice user.slice - User and Session Slice. Jun 20 18:41:05.597077 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 18:41:05.597086 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 18:41:05.597095 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Jun 20 18:41:05.597103 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 20 18:41:05.597114 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 20 18:41:05.597123 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 18:41:05.597132 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jun 20 18:41:05.597143 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 18:41:05.597153 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 20 18:41:05.597162 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 20 18:41:05.597171 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 20 18:41:05.597181 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 20 18:41:05.597191 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 18:41:05.597200 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 18:41:05.597208 systemd[1]: Reached target slices.target - Slice Units. Jun 20 18:41:05.597217 systemd[1]: Reached target swap.target - Swaps. Jun 20 18:41:05.597226 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 20 18:41:05.597235 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 20 18:41:05.597244 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jun 20 18:41:05.597255 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 18:41:05.597264 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 18:41:05.597274 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jun 20 18:41:05.597283 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 20 18:41:05.597293 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 20 18:41:05.597304 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 20 18:41:05.597313 systemd[1]: Mounting media.mount - External Media Directory... Jun 20 18:41:05.597322 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 20 18:41:05.597331 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 20 18:41:05.597340 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 20 18:41:05.597351 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 20 18:41:05.597360 systemd[1]: Reached target machines.target - Containers. Jun 20 18:41:05.597369 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 20 18:41:05.597378 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 18:41:05.597389 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 18:41:05.597398 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 20 18:41:05.597408 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 18:41:05.597417 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 18:41:05.597426 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 18:41:05.597435 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 20 18:41:05.597444 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jun 20 18:41:05.597454 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 20 18:41:05.597465 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 20 18:41:05.597474 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 20 18:41:05.597483 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 20 18:41:05.597492 systemd[1]: Stopped systemd-fsck-usr.service. Jun 20 18:41:05.597502 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 18:41:05.597510 kernel: fuse: init (API version 7.39) Jun 20 18:41:05.597518 kernel: loop: module loaded Jun 20 18:41:05.597527 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 18:41:05.597538 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 18:41:05.597548 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 20 18:41:05.597558 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 20 18:41:05.597567 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jun 20 18:41:05.597576 kernel: ACPI: bus type drm_connector registered Jun 20 18:41:05.597600 systemd-journald[1263]: Collecting audit messages is disabled. Jun 20 18:41:05.597621 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 18:41:05.597631 systemd-journald[1263]: Journal started Jun 20 18:41:05.597650 systemd-journald[1263]: Runtime Journal (/run/log/journal/2d7fe9af29e74743b79b7ae1d7f31739) is 8M, max 78.5M, 70.5M free. 
Jun 20 18:41:04.642354 systemd[1]: Queued start job for default target multi-user.target. Jun 20 18:41:04.646667 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jun 20 18:41:04.647147 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 20 18:41:04.647514 systemd[1]: systemd-journald.service: Consumed 3.407s CPU time. Jun 20 18:41:05.612767 systemd[1]: verity-setup.service: Deactivated successfully. Jun 20 18:41:05.612829 systemd[1]: Stopped verity-setup.service. Jun 20 18:41:05.634282 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 18:41:05.635097 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 20 18:41:05.641057 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 20 18:41:05.648225 systemd[1]: Mounted media.mount - External Media Directory. Jun 20 18:41:05.654341 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 20 18:41:05.660912 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 20 18:41:05.667613 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 20 18:41:05.673616 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 20 18:41:05.682425 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 18:41:05.691242 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 20 18:41:05.691409 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 20 18:41:05.698612 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 18:41:05.698785 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 18:41:05.706004 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 18:41:05.706146 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 18:41:05.712928 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jun 20 18:41:05.713075 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 18:41:05.720797 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 20 18:41:05.720935 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 20 18:41:05.727644 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 18:41:05.727797 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 18:41:05.734483 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 18:41:05.741399 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 18:41:05.749496 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 20 18:41:05.759779 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jun 20 18:41:05.767145 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 18:41:05.782832 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 18:41:05.793828 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 20 18:41:05.801392 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 20 18:41:05.808568 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 20 18:41:05.808610 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 18:41:05.815775 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jun 20 18:41:05.828909 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 20 18:41:05.836778 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Jun 20 18:41:05.842555 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 18:41:05.845932 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 20 18:41:05.853976 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 20 18:41:05.864324 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 18:41:05.865520 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 20 18:41:05.871877 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 18:41:05.874949 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 18:41:05.883977 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 20 18:41:05.891983 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 20 18:41:05.901922 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 20 18:41:05.911939 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 20 18:41:05.920430 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 20 18:41:05.935770 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 20 18:41:05.946480 udevadm[1309]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jun 20 18:41:05.971573 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 20 18:41:05.978771 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Jun 20 18:41:05.994048 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jun 20 18:41:06.002779 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:41:06.003576 systemd-journald[1263]: Time spent on flushing to /var/log/journal/2d7fe9af29e74743b79b7ae1d7f31739 is 44.431ms for 923 entries. Jun 20 18:41:06.003576 systemd-journald[1263]: System Journal (/var/log/journal/2d7fe9af29e74743b79b7ae1d7f31739) is 11.8M, max 2.6G, 2.6G free. Jun 20 18:41:06.195244 systemd-journald[1263]: Received client request to flush runtime journal. Jun 20 18:41:06.195301 kernel: loop0: detected capacity change from 0 to 123192 Jun 20 18:41:06.195334 systemd-journald[1263]: /var/log/journal/2d7fe9af29e74743b79b7ae1d7f31739/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Jun 20 18:41:06.195359 systemd-journald[1263]: Rotating system journal. Jun 20 18:41:06.185879 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 20 18:41:06.187523 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jun 20 18:41:06.196772 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 20 18:41:06.280557 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 20 18:41:06.298878 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 18:41:06.425379 systemd-tmpfiles[1324]: ACLs are not supported, ignoring. Jun 20 18:41:06.425400 systemd-tmpfiles[1324]: ACLs are not supported, ignoring. Jun 20 18:41:06.429814 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jun 20 18:41:06.450769 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 20 18:41:06.510044 kernel: loop1: detected capacity change from 0 to 113512 Jun 20 18:41:06.878836 kernel: loop2: detected capacity change from 0 to 211168 Jun 20 18:41:06.921788 kernel: loop3: detected capacity change from 0 to 28720 Jun 20 18:41:07.336148 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 20 18:41:07.343716 kernel: loop4: detected capacity change from 0 to 123192 Jun 20 18:41:07.349858 kernel: loop5: detected capacity change from 0 to 113512 Jun 20 18:41:07.359316 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 18:41:07.360770 kernel: loop6: detected capacity change from 0 to 211168 Jun 20 18:41:07.374942 kernel: loop7: detected capacity change from 0 to 28720 Jun 20 18:41:07.378597 (sd-merge)[1332]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jun 20 18:41:07.379323 (sd-merge)[1332]: Merged extensions into '/usr'. Jun 20 18:41:07.383855 systemd[1]: Reload requested from client PID 1306 ('systemd-sysext') (unit systemd-sysext.service)... Jun 20 18:41:07.383871 systemd[1]: Reloading... Jun 20 18:41:07.392423 systemd-udevd[1334]: Using default interface naming scheme 'v255'. Jun 20 18:41:07.446784 zram_generator::config[1362]: No configuration found. Jun 20 18:41:07.567523 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:41:07.637256 systemd[1]: Reloading finished in 253 ms. Jun 20 18:41:07.653464 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 20 18:41:07.671906 systemd[1]: Starting ensure-sysext.service... Jun 20 18:41:07.677651 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jun 20 18:41:07.709710 systemd[1]: Reload requested from client PID 1417 ('systemctl') (unit ensure-sysext.service)... Jun 20 18:41:07.709726 systemd[1]: Reloading... Jun 20 18:41:07.710082 systemd-tmpfiles[1418]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 20 18:41:07.710308 systemd-tmpfiles[1418]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 20 18:41:07.711034 systemd-tmpfiles[1418]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 20 18:41:07.711281 systemd-tmpfiles[1418]: ACLs are not supported, ignoring. Jun 20 18:41:07.711326 systemd-tmpfiles[1418]: ACLs are not supported, ignoring. Jun 20 18:41:07.740097 systemd-tmpfiles[1418]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 18:41:07.741018 systemd-tmpfiles[1418]: Skipping /boot Jun 20 18:41:07.753045 systemd-tmpfiles[1418]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 18:41:07.753149 systemd-tmpfiles[1418]: Skipping /boot Jun 20 18:41:07.822773 zram_generator::config[1469]: No configuration found. Jun 20 18:41:07.918775 kernel: mousedev: PS/2 mouse device common for all mice Jun 20 18:41:07.964800 kernel: hv_vmbus: registering driver hyperv_fb Jun 20 18:41:07.981090 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jun 20 18:41:07.981168 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jun 20 18:41:07.987984 kernel: Console: switching to colour dummy device 80x25 Jun 20 18:41:08.001564 kernel: hv_vmbus: registering driver hv_balloon Jun 20 18:41:08.001661 kernel: Console: switching to colour frame buffer device 128x48 Jun 20 18:41:07.998962 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jun 20 18:41:08.028109 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jun 20 18:41:08.028428 kernel: hv_balloon: Memory hot add disabled on ARM64 Jun 20 18:41:08.114819 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jun 20 18:41:08.114921 systemd[1]: Reloading finished in 404 ms. Jun 20 18:41:08.121818 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1426) Jun 20 18:41:08.128502 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 18:41:08.157229 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 18:41:08.200958 systemd[1]: Finished ensure-sysext.service. Jun 20 18:41:08.240709 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jun 20 18:41:08.267957 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 18:41:08.274780 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 20 18:41:08.282475 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 18:41:08.285954 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 18:41:08.293990 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 18:41:08.302284 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 18:41:08.310605 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 18:41:08.319441 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 18:41:08.326630 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jun 20 18:41:08.334505 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 18:41:08.335674 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 20 18:41:08.347935 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 18:41:08.361973 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 18:41:08.368780 systemd[1]: Reached target time-set.target - System Time Set. Jun 20 18:41:08.383935 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 20 18:41:08.393493 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:41:08.404015 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 20 18:41:08.411994 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 18:41:08.412198 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 18:41:08.422107 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 18:41:08.422289 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 18:41:08.429003 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 18:41:08.429161 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 18:41:08.436758 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 18:41:08.436914 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 18:41:08.451959 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
Jun 20 18:41:08.461035 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 20 18:41:08.461109 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 20 18:41:08.467927 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jun 20 18:41:08.475080 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jun 20 18:41:08.484549 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jun 20 18:41:08.519693 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jun 20 18:41:08.526576 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jun 20 18:41:08.555764 lvm[1636]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jun 20 18:41:08.578864 augenrules[1655]: No rules
Jun 20 18:41:08.581691 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 20 18:41:08.581919 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jun 20 18:41:08.595299 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jun 20 18:41:08.603417 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 20 18:41:08.617044 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jun 20 18:41:08.627654 lvm[1662]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jun 20 18:41:08.655678 systemd-resolved[1619]: Positive Trust Anchors:
Jun 20 18:41:08.655695 systemd-resolved[1619]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 20 18:41:08.655725 systemd-resolved[1619]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jun 20 18:41:08.657248 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jun 20 18:41:08.657993 systemd-networkd[1618]: lo: Link UP
Jun 20 18:41:08.658242 systemd-networkd[1618]: lo: Gained carrier
Jun 20 18:41:08.660306 systemd-networkd[1618]: Enumeration completed
Jun 20 18:41:08.660693 systemd-networkd[1618]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 18:41:08.660778 systemd-networkd[1618]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 20 18:41:08.665584 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 20 18:41:08.675794 systemd-resolved[1619]: Using system hostname 'ci-4230.2.0-a-c483281568'.
Jun 20 18:41:08.678070 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jun 20 18:41:08.696204 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jun 20 18:41:08.725506 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 18:41:08.740802 kernel: mlx5_core c393:00:02.0 enP50067s1: Link up
Jun 20 18:41:08.783770 kernel: hv_netvsc 000d3a6d-ba6a-000d-3a6d-ba6a000d3a6d eth0: Data path switched to VF: enP50067s1
Jun 20 18:41:08.785790 systemd-networkd[1618]: enP50067s1: Link UP
Jun 20 18:41:08.785895 systemd-networkd[1618]: eth0: Link UP
Jun 20 18:41:08.785899 systemd-networkd[1618]: eth0: Gained carrier
Jun 20 18:41:08.785914 systemd-networkd[1618]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 18:41:08.786539 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 20 18:41:08.793553 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jun 20 18:41:08.801798 systemd[1]: Reached target network.target - Network.
Jun 20 18:41:08.802454 systemd-networkd[1618]: enP50067s1: Gained carrier
Jun 20 18:41:08.807125 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 20 18:41:08.817784 systemd-networkd[1618]: eth0: DHCPv4 address 10.200.20.15/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jun 20 18:41:08.958502 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jun 20 18:41:08.966213 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jun 20 18:41:10.207943 systemd-networkd[1618]: enP50067s1: Gained IPv6LL
Jun 20 18:41:10.655909 systemd-networkd[1618]: eth0: Gained IPv6LL
Jun 20 18:41:10.658671 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jun 20 18:41:10.666274 systemd[1]: Reached target network-online.target - Network is Online.
Jun 20 18:41:10.764774 ldconfig[1301]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jun 20 18:41:10.776582 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jun 20 18:41:10.788963 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jun 20 18:41:10.796281 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jun 20 18:41:10.803335 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 20 18:41:10.809261 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jun 20 18:41:10.816202 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jun 20 18:41:10.823229 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jun 20 18:41:10.829179 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jun 20 18:41:10.836130 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jun 20 18:41:10.843031 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jun 20 18:41:10.843063 systemd[1]: Reached target paths.target - Path Units.
Jun 20 18:41:10.848150 systemd[1]: Reached target timers.target - Timer Units.
Jun 20 18:41:10.853770 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jun 20 18:41:10.861401 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jun 20 18:41:10.869093 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jun 20 18:41:10.876869 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jun 20 18:41:10.884362 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jun 20 18:41:10.902430 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jun 20 18:41:10.908960 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jun 20 18:41:10.916626 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jun 20 18:41:10.922944 systemd[1]: Reached target sockets.target - Socket Units.
Jun 20 18:41:10.928517 systemd[1]: Reached target basic.target - Basic System.
Jun 20 18:41:10.933937 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jun 20 18:41:10.933966 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jun 20 18:41:10.942833 systemd[1]: Starting chronyd.service - NTP client/server...
Jun 20 18:41:10.951881 systemd[1]: Starting containerd.service - containerd container runtime...
Jun 20 18:41:10.962902 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jun 20 18:41:10.973950 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jun 20 18:41:10.981540 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jun 20 18:41:10.988617 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jun 20 18:41:10.995308 (chronyd)[1678]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jun 20 18:41:10.996281 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jun 20 18:41:10.996318 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Jun 20 18:41:10.997883 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jun 20 18:41:11.005375 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jun 20 18:41:11.007881 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 18:41:11.008388 jq[1685]: false
Jun 20 18:41:11.016965 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jun 20 18:41:11.020739 KVP[1687]: KVP starting; pid is:1687
Jun 20 18:41:11.025919 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jun 20 18:41:11.028803 kernel: hv_utils: KVP IC version 4.0
Jun 20 18:41:11.028946 KVP[1687]: KVP LIC Version: 3.1
Jun 20 18:41:11.031046 chronyd[1693]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jun 20 18:41:11.036874 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jun 20 18:41:11.046405 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jun 20 18:41:11.063483 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jun 20 18:41:11.072125 systemd[1]: Starting systemd-logind.service - User Login Management...
Jun 20 18:41:11.078927 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jun 20 18:41:11.079498 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jun 20 18:41:11.081947 systemd[1]: Starting update-engine.service - Update Engine...
Jun 20 18:41:11.088363 chronyd[1693]: Timezone right/UTC failed leap second check, ignoring
Jun 20 18:41:11.088593 chronyd[1693]: Loaded seccomp filter (level 2)
Jun 20 18:41:11.090877 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jun 20 18:41:11.100468 systemd[1]: Started chronyd.service - NTP client/server.
Jun 20 18:41:11.104318 jq[1703]: true
Jun 20 18:41:11.109185 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jun 20 18:41:11.112800 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jun 20 18:41:11.116174 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jun 20 18:41:11.116710 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jun 20 18:41:11.137296 extend-filesystems[1686]: Found loop4
Jun 20 18:41:11.137296 extend-filesystems[1686]: Found loop5
Jun 20 18:41:11.137296 extend-filesystems[1686]: Found loop6
Jun 20 18:41:11.137296 extend-filesystems[1686]: Found loop7
Jun 20 18:41:11.137296 extend-filesystems[1686]: Found sda
Jun 20 18:41:11.137296 extend-filesystems[1686]: Found sda1
Jun 20 18:41:11.137296 extend-filesystems[1686]: Found sda2
Jun 20 18:41:11.137296 extend-filesystems[1686]: Found sda3
Jun 20 18:41:11.137296 extend-filesystems[1686]: Found usr
Jun 20 18:41:11.172763 (ntainerd)[1720]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jun 20 18:41:11.319984 update_engine[1702]: I20250620 18:41:11.213711 1702 main.cc:92] Flatcar Update Engine starting
Jun 20 18:41:11.319984 update_engine[1702]: I20250620 18:41:11.221969 1702 update_check_scheduler.cc:74] Next update check in 6m28s
Jun 20 18:41:11.320147 extend-filesystems[1686]: Found sda4
Jun 20 18:41:11.320147 extend-filesystems[1686]: Found sda6
Jun 20 18:41:11.320147 extend-filesystems[1686]: Found sda7
Jun 20 18:41:11.320147 extend-filesystems[1686]: Found sda9
Jun 20 18:41:11.320147 extend-filesystems[1686]: Checking size of /dev/sda9
Jun 20 18:41:11.320147 extend-filesystems[1686]: Old size kept for /dev/sda9
Jun 20 18:41:11.320147 extend-filesystems[1686]: Found sr0
Jun 20 18:41:11.189308 dbus-daemon[1681]: [system] SELinux support is enabled
Jun 20 18:41:11.174791 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jun 20 18:41:11.383284 coreos-metadata[1680]: Jun 20 18:41:11.326 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jun 20 18:41:11.383284 coreos-metadata[1680]: Jun 20 18:41:11.340 INFO Fetch successful
Jun 20 18:41:11.383284 coreos-metadata[1680]: Jun 20 18:41:11.341 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jun 20 18:41:11.383284 coreos-metadata[1680]: Jun 20 18:41:11.346 INFO Fetch successful
Jun 20 18:41:11.383284 coreos-metadata[1680]: Jun 20 18:41:11.347 INFO Fetching http://168.63.129.16/machine/19bf0df3-9ace-4727-80a5-255ffaa01255/eee9f4fb%2D8893%2D4377%2D9517%2D42867823d107.%5Fci%2D4230.2.0%2Da%2Dc483281568?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jun 20 18:41:11.285327 dbus-daemon[1681]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jun 20 18:41:11.383577 tar[1710]: linux-arm64/LICENSE
Jun 20 18:41:11.383577 tar[1710]: linux-arm64/helm
Jun 20 18:41:11.190983 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jun 20 18:41:11.209191 systemd[1]: motdgen.service: Deactivated successfully.
Jun 20 18:41:11.385175 jq[1712]: true
Jun 20 18:41:11.209373 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jun 20 18:41:11.233041 systemd-logind[1701]: New seat seat0.
Jun 20 18:41:11.233880 systemd-logind[1701]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jun 20 18:41:11.390090 coreos-metadata[1680]: Jun 20 18:41:11.388 INFO Fetch successful
Jun 20 18:41:11.390090 coreos-metadata[1680]: Jun 20 18:41:11.389 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jun 20 18:41:11.251074 systemd[1]: Started systemd-logind.service - User Login Management.
Jun 20 18:41:11.261222 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jun 20 18:41:11.261419 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jun 20 18:41:11.283077 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jun 20 18:41:11.283115 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jun 20 18:41:11.305921 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jun 20 18:41:11.305945 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jun 20 18:41:11.324561 systemd[1]: Started update-engine.service - Update Engine.
Jun 20 18:41:11.357216 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jun 20 18:41:11.394268 bash[1754]: Updated "/home/core/.ssh/authorized_keys"
Jun 20 18:41:11.394728 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jun 20 18:41:11.409257 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jun 20 18:41:11.409975 coreos-metadata[1680]: Jun 20 18:41:11.409 INFO Fetch successful
Jun 20 18:41:11.482816 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1448)
Jun 20 18:41:11.543147 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jun 20 18:41:11.552072 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jun 20 18:41:11.600037 locksmithd[1758]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jun 20 18:41:11.811499 containerd[1720]: time="2025-06-20T18:41:11.811417940Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jun 20 18:41:11.889808 containerd[1720]: time="2025-06-20T18:41:11.888114700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jun 20 18:41:11.891802 containerd[1720]: time="2025-06-20T18:41:11.891770260Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.94-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jun 20 18:41:11.894229 containerd[1720]: time="2025-06-20T18:41:11.893036500Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jun 20 18:41:11.894229 containerd[1720]: time="2025-06-20T18:41:11.893062260Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jun 20 18:41:11.894229 containerd[1720]: time="2025-06-20T18:41:11.893219300Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jun 20 18:41:11.894229 containerd[1720]: time="2025-06-20T18:41:11.893238860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jun 20 18:41:11.894229 containerd[1720]: time="2025-06-20T18:41:11.893298940Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jun 20 18:41:11.894229 containerd[1720]: time="2025-06-20T18:41:11.893310140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jun 20 18:41:11.894229 containerd[1720]: time="2025-06-20T18:41:11.893509020Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 20 18:41:11.894229 containerd[1720]: time="2025-06-20T18:41:11.893523380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jun 20 18:41:11.894229 containerd[1720]: time="2025-06-20T18:41:11.893536660Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jun 20 18:41:11.894229 containerd[1720]: time="2025-06-20T18:41:11.893545220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jun 20 18:41:11.894229 containerd[1720]: time="2025-06-20T18:41:11.893619380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jun 20 18:41:11.894229 containerd[1720]: time="2025-06-20T18:41:11.893856340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jun 20 18:41:11.894482 containerd[1720]: time="2025-06-20T18:41:11.893979820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 20 18:41:11.894482 containerd[1720]: time="2025-06-20T18:41:11.893992180Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jun 20 18:41:11.894482 containerd[1720]: time="2025-06-20T18:41:11.894069060Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jun 20 18:41:11.894482 containerd[1720]: time="2025-06-20T18:41:11.894108100Z" level=info msg="metadata content store policy set" policy=shared
Jun 20 18:41:11.912911 containerd[1720]: time="2025-06-20T18:41:11.912876060Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jun 20 18:41:11.913070 containerd[1720]: time="2025-06-20T18:41:11.913056460Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jun 20 18:41:11.913815 containerd[1720]: time="2025-06-20T18:41:11.913801620Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jun 20 18:41:11.914777 containerd[1720]: time="2025-06-20T18:41:11.913869820Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jun 20 18:41:11.914777 containerd[1720]: time="2025-06-20T18:41:11.913891620Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jun 20 18:41:11.914777 containerd[1720]: time="2025-06-20T18:41:11.914048100Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jun 20 18:41:11.914777 containerd[1720]: time="2025-06-20T18:41:11.914289780Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jun 20 18:41:11.914777 containerd[1720]: time="2025-06-20T18:41:11.914388580Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jun 20 18:41:11.914777 containerd[1720]: time="2025-06-20T18:41:11.914426180Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jun 20 18:41:11.914777 containerd[1720]: time="2025-06-20T18:41:11.914442420Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jun 20 18:41:11.914777 containerd[1720]: time="2025-06-20T18:41:11.914456620Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jun 20 18:41:11.914777 containerd[1720]: time="2025-06-20T18:41:11.914487220Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jun 20 18:41:11.914777 containerd[1720]: time="2025-06-20T18:41:11.914512660Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jun 20 18:41:11.914777 containerd[1720]: time="2025-06-20T18:41:11.914526780Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jun 20 18:41:11.914777 containerd[1720]: time="2025-06-20T18:41:11.914540860Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jun 20 18:41:11.914777 containerd[1720]: time="2025-06-20T18:41:11.914558020Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jun 20 18:41:11.914777 containerd[1720]: time="2025-06-20T18:41:11.914571820Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jun 20 18:41:11.915048 containerd[1720]: time="2025-06-20T18:41:11.914583220Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jun 20 18:41:11.915048 containerd[1720]: time="2025-06-20T18:41:11.914606380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jun 20 18:41:11.915048 containerd[1720]: time="2025-06-20T18:41:11.914630940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jun 20 18:41:11.915048 containerd[1720]: time="2025-06-20T18:41:11.914643100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jun 20 18:41:11.915048 containerd[1720]: time="2025-06-20T18:41:11.914655260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jun 20 18:41:11.915048 containerd[1720]: time="2025-06-20T18:41:11.914666380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jun 20 18:41:11.915048 containerd[1720]: time="2025-06-20T18:41:11.914678060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jun 20 18:41:11.915048 containerd[1720]: time="2025-06-20T18:41:11.914689820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jun 20 18:41:11.915048 containerd[1720]: time="2025-06-20T18:41:11.914703900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jun 20 18:41:11.915048 containerd[1720]: time="2025-06-20T18:41:11.914722820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jun 20 18:41:11.915048 containerd[1720]: time="2025-06-20T18:41:11.914737740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jun 20 18:41:11.917058 containerd[1720]: time="2025-06-20T18:41:11.916785020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jun 20 18:41:11.917058 containerd[1720]: time="2025-06-20T18:41:11.916810780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jun 20 18:41:11.917058 containerd[1720]: time="2025-06-20T18:41:11.916823820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jun 20 18:41:11.917058 containerd[1720]: time="2025-06-20T18:41:11.916838940Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jun 20 18:41:11.917058 containerd[1720]: time="2025-06-20T18:41:11.916862420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jun 20 18:41:11.917058 containerd[1720]: time="2025-06-20T18:41:11.916877660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jun 20 18:41:11.917058 containerd[1720]: time="2025-06-20T18:41:11.916889740Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jun 20 18:41:11.917058 containerd[1720]: time="2025-06-20T18:41:11.916953500Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jun 20 18:41:11.917058 containerd[1720]: time="2025-06-20T18:41:11.916974380Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jun 20 18:41:11.917058 containerd[1720]: time="2025-06-20T18:41:11.916984460Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jun 20 18:41:11.917768 containerd[1720]: time="2025-06-20T18:41:11.916996700Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jun 20 18:41:11.917768 containerd[1720]: time="2025-06-20T18:41:11.917319380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jun 20 18:41:11.917768 containerd[1720]: time="2025-06-20T18:41:11.917335460Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jun 20 18:41:11.917768 containerd[1720]: time="2025-06-20T18:41:11.917345580Z" level=info msg="NRI interface is disabled by configuration."
Jun 20 18:41:11.917768 containerd[1720]: time="2025-06-20T18:41:11.917355420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jun 20 18:41:11.917897 containerd[1720]: time="2025-06-20T18:41:11.917630140Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jun 20 18:41:11.917897 containerd[1720]: time="2025-06-20T18:41:11.917674940Z" level=info msg="Connect containerd service"
Jun 20 18:41:11.917897 containerd[1720]: time="2025-06-20T18:41:11.917724180Z" level=info msg="using legacy CRI server"
Jun 20 18:41:11.917897 containerd[1720]: time="2025-06-20T18:41:11.917731100Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jun 20 18:41:11.920165 containerd[1720]: time="2025-06-20T18:41:11.919866340Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jun 20 18:41:11.920611 containerd[1720]: time="2025-06-20T18:41:11.920584060Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 20 18:41:11.920811 containerd[1720]: time="2025-06-20T18:41:11.920783220Z" level=info msg="Start subscribing containerd event"
Jun 20 18:41:11.920905 containerd[1720]: time="2025-06-20T18:41:11.920891820Z" level=info msg="Start recovering state"
Jun 20 18:41:11.921016 containerd[1720]: time="2025-06-20T18:41:11.921003020Z" level=info msg="Start event monitor"
Jun 20 18:41:11.921067 containerd[1720]: time="2025-06-20T18:41:11.921056860Z" level=info msg="Start snapshots syncer"
Jun 20 18:41:11.921112 containerd[1720]: time="2025-06-20T18:41:11.921102180Z" level=info msg="Start cni network conf syncer for default"
Jun 20 18:41:11.921158 containerd[1720]: time="2025-06-20T18:41:11.921147780Z" level=info msg="Start streaming server"
Jun 20 18:41:11.922939 containerd[1720]: time="2025-06-20T18:41:11.922914380Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jun 20 18:41:11.923094 containerd[1720]: time="2025-06-20T18:41:11.923052540Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jun 20 18:41:11.923182 containerd[1720]: time="2025-06-20T18:41:11.923170660Z" level=info msg="containerd successfully booted in 0.113073s"
Jun 20 18:41:11.923267 systemd[1]: Started containerd.service - containerd container runtime.
Jun 20 18:41:12.199551 tar[1710]: linux-arm64/README.md
Jun 20 18:41:12.216282 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jun 20 18:41:12.252893 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 18:41:12.263321 (kubelet)[1832]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:41:12.588826 kubelet[1832]: E0620 18:41:12.588253 1832 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:41:12.590791 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:41:12.591041 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:41:12.591546 systemd[1]: kubelet.service: Consumed 717ms CPU time, 258M memory peak. Jun 20 18:41:12.755973 sshd_keygen[1711]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 20 18:41:12.773028 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 20 18:41:12.786157 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 20 18:41:12.792855 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jun 20 18:41:12.799980 systemd[1]: issuegen.service: Deactivated successfully. Jun 20 18:41:12.800191 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 20 18:41:12.812859 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 20 18:41:12.819936 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jun 20 18:41:12.831663 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 20 18:41:12.843031 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 20 18:41:12.851064 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jun 20 18:41:12.858030 systemd[1]: Reached target getty.target - Login Prompts. 
Jun 20 18:41:12.863495 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 20 18:41:12.869617 systemd[1]: Startup finished in 690ms (kernel) + 12.572s (initrd) + 10.997s (userspace) = 24.260s. Jun 20 18:41:13.173691 login[1861]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jun 20 18:41:13.175944 login[1862]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:41:13.562964 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 20 18:41:13.573142 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 20 18:41:13.580972 systemd-logind[1701]: New session 1 of user core. Jun 20 18:41:13.587773 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 20 18:41:13.596001 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 20 18:41:13.598372 (systemd)[1869]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 20 18:41:13.600591 systemd-logind[1701]: New session c1 of user core. Jun 20 18:41:13.777483 systemd[1869]: Queued start job for default target default.target. Jun 20 18:41:13.787589 systemd[1869]: Created slice app.slice - User Application Slice. Jun 20 18:41:13.787935 systemd[1869]: Reached target paths.target - Paths. Jun 20 18:41:13.788058 systemd[1869]: Reached target timers.target - Timers. Jun 20 18:41:13.789593 systemd[1869]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 20 18:41:13.799544 systemd[1869]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 20 18:41:13.799612 systemd[1869]: Reached target sockets.target - Sockets. Jun 20 18:41:13.800237 systemd[1869]: Reached target basic.target - Basic System. Jun 20 18:41:13.800316 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 20 18:41:13.801312 systemd[1869]: Reached target default.target - Main User Target. 
Jun 20 18:41:13.801346 systemd[1869]: Startup finished in 194ms. Jun 20 18:41:13.807928 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 20 18:41:14.174061 login[1861]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:41:14.178444 systemd-logind[1701]: New session 2 of user core. Jun 20 18:41:14.184899 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 20 18:41:14.499486 waagent[1858]: 2025-06-20T18:41:14.499344Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jun 20 18:41:14.505237 waagent[1858]: 2025-06-20T18:41:14.505176Z INFO Daemon Daemon OS: flatcar 4230.2.0 Jun 20 18:41:14.509845 waagent[1858]: 2025-06-20T18:41:14.509800Z INFO Daemon Daemon Python: 3.11.11 Jun 20 18:41:14.514201 waagent[1858]: 2025-06-20T18:41:14.514147Z INFO Daemon Daemon Run daemon Jun 20 18:41:14.518231 waagent[1858]: 2025-06-20T18:41:14.518186Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.2.0' Jun 20 18:41:14.527251 waagent[1858]: 2025-06-20T18:41:14.527204Z INFO Daemon Daemon Using waagent for provisioning Jun 20 18:41:14.532558 waagent[1858]: 2025-06-20T18:41:14.532519Z INFO Daemon Daemon Activate resource disk Jun 20 18:41:14.537259 waagent[1858]: 2025-06-20T18:41:14.537211Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jun 20 18:41:14.550078 waagent[1858]: 2025-06-20T18:41:14.550022Z INFO Daemon Daemon Found device: None Jun 20 18:41:14.554697 waagent[1858]: 2025-06-20T18:41:14.554654Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jun 20 18:41:14.563178 waagent[1858]: 2025-06-20T18:41:14.563132Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jun 20 18:41:14.574867 waagent[1858]: 2025-06-20T18:41:14.574824Z INFO Daemon Daemon Clean protocol and 
wireserver endpoint Jun 20 18:41:14.580513 waagent[1858]: 2025-06-20T18:41:14.580474Z INFO Daemon Daemon Running default provisioning handler Jun 20 18:41:14.592025 waagent[1858]: 2025-06-20T18:41:14.591958Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jun 20 18:41:14.605729 waagent[1858]: 2025-06-20T18:41:14.605666Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jun 20 18:41:14.615444 waagent[1858]: 2025-06-20T18:41:14.615390Z INFO Daemon Daemon cloud-init is enabled: False Jun 20 18:41:14.620637 waagent[1858]: 2025-06-20T18:41:14.620592Z INFO Daemon Daemon Copying ovf-env.xml Jun 20 18:41:14.738160 waagent[1858]: 2025-06-20T18:41:14.737979Z INFO Daemon Daemon Successfully mounted dvd Jun 20 18:41:14.764930 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jun 20 18:41:14.766786 waagent[1858]: 2025-06-20T18:41:14.766502Z INFO Daemon Daemon Detect protocol endpoint Jun 20 18:41:14.771584 waagent[1858]: 2025-06-20T18:41:14.771530Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 20 18:41:14.777572 waagent[1858]: 2025-06-20T18:41:14.777527Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jun 20 18:41:14.784168 waagent[1858]: 2025-06-20T18:41:14.784126Z INFO Daemon Daemon Test for route to 168.63.129.16 Jun 20 18:41:14.789521 waagent[1858]: 2025-06-20T18:41:14.789479Z INFO Daemon Daemon Route to 168.63.129.16 exists Jun 20 18:41:14.794747 waagent[1858]: 2025-06-20T18:41:14.794707Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jun 20 18:41:14.823872 waagent[1858]: 2025-06-20T18:41:14.823825Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jun 20 18:41:14.830859 waagent[1858]: 2025-06-20T18:41:14.830831Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jun 20 18:41:14.836232 waagent[1858]: 2025-06-20T18:41:14.836190Z INFO Daemon Daemon Server preferred version:2015-04-05 Jun 20 18:41:15.375789 waagent[1858]: 2025-06-20T18:41:15.375108Z INFO Daemon Daemon Initializing goal state during protocol detection Jun 20 18:41:15.381890 waagent[1858]: 2025-06-20T18:41:15.381826Z INFO Daemon Daemon Forcing an update of the goal state. Jun 20 18:41:15.391467 waagent[1858]: 2025-06-20T18:41:15.391415Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 20 18:41:15.412545 waagent[1858]: 2025-06-20T18:41:15.412501Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jun 20 18:41:15.418555 waagent[1858]: 2025-06-20T18:41:15.418511Z INFO Daemon Jun 20 18:41:15.421506 waagent[1858]: 2025-06-20T18:41:15.421466Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: a05529c6-ef01-46ec-868c-3ed2b42f5984 eTag: 8266811496642642321 source: Fabric] Jun 20 18:41:15.433330 waagent[1858]: 2025-06-20T18:41:15.433285Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Jun 20 18:41:15.440716 waagent[1858]: 2025-06-20T18:41:15.440673Z INFO Daemon Jun 20 18:41:15.443605 waagent[1858]: 2025-06-20T18:41:15.443567Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jun 20 18:41:15.454839 waagent[1858]: 2025-06-20T18:41:15.454802Z INFO Daemon Daemon Downloading artifacts profile blob Jun 20 18:41:15.602230 waagent[1858]: 2025-06-20T18:41:15.602143Z INFO Daemon Downloaded certificate {'thumbprint': 'C648B62CCD8F70E5F6E2594B2843371304032044', 'hasPrivateKey': True} Jun 20 18:41:15.612525 waagent[1858]: 2025-06-20T18:41:15.612469Z INFO Daemon Downloaded certificate {'thumbprint': 'AAF04598724C40C30D6C0353D60E56A138C01C9D', 'hasPrivateKey': False} Jun 20 18:41:15.622519 waagent[1858]: 2025-06-20T18:41:15.622471Z INFO Daemon Fetch goal state completed Jun 20 18:41:15.665222 waagent[1858]: 2025-06-20T18:41:15.665117Z INFO Daemon Daemon Starting provisioning Jun 20 18:41:15.670139 waagent[1858]: 2025-06-20T18:41:15.670080Z INFO Daemon Daemon Handle ovf-env.xml. Jun 20 18:41:15.674931 waagent[1858]: 2025-06-20T18:41:15.674887Z INFO Daemon Daemon Set hostname [ci-4230.2.0-a-c483281568] Jun 20 18:41:15.695143 waagent[1858]: 2025-06-20T18:41:15.695081Z INFO Daemon Daemon Publish hostname [ci-4230.2.0-a-c483281568] Jun 20 18:41:15.701392 waagent[1858]: 2025-06-20T18:41:15.701343Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jun 20 18:41:15.707844 waagent[1858]: 2025-06-20T18:41:15.707804Z INFO Daemon Daemon Primary interface is [eth0] Jun 20 18:41:15.733248 systemd-networkd[1618]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:41:15.733861 systemd-networkd[1618]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jun 20 18:41:15.733913 systemd-networkd[1618]: eth0: DHCP lease lost Jun 20 18:41:15.734419 waagent[1858]: 2025-06-20T18:41:15.734345Z INFO Daemon Daemon Create user account if not exists Jun 20 18:41:15.740665 waagent[1858]: 2025-06-20T18:41:15.740614Z INFO Daemon Daemon User core already exists, skip useradd Jun 20 18:41:15.746063 waagent[1858]: 2025-06-20T18:41:15.746002Z INFO Daemon Daemon Configure sudoer Jun 20 18:41:15.750447 waagent[1858]: 2025-06-20T18:41:15.750398Z INFO Daemon Daemon Configure sshd Jun 20 18:41:15.754782 waagent[1858]: 2025-06-20T18:41:15.754713Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jun 20 18:41:15.766438 waagent[1858]: 2025-06-20T18:41:15.766375Z INFO Daemon Daemon Deploy ssh public key. Jun 20 18:41:15.779833 systemd-networkd[1618]: eth0: DHCPv4 address 10.200.20.15/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jun 20 18:41:16.860560 waagent[1858]: 2025-06-20T18:41:16.860506Z INFO Daemon Daemon Provisioning complete Jun 20 18:41:16.878620 waagent[1858]: 2025-06-20T18:41:16.878568Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jun 20 18:41:16.885008 waagent[1858]: 2025-06-20T18:41:16.884961Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jun 20 18:41:16.895198 waagent[1858]: 2025-06-20T18:41:16.895151Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jun 20 18:41:17.022579 waagent[1924]: 2025-06-20T18:41:17.022448Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jun 20 18:41:17.886587 waagent[1924]: 2025-06-20T18:41:17.885981Z INFO ExtHandler ExtHandler OS: flatcar 4230.2.0 Jun 20 18:41:17.886587 waagent[1924]: 2025-06-20T18:41:17.886141Z INFO ExtHandler ExtHandler Python: 3.11.11 Jun 20 18:41:18.020781 waagent[1924]: 2025-06-20T18:41:18.020677Z INFO ExtHandler ExtHandler Distro: flatcar-4230.2.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jun 20 18:41:18.021105 waagent[1924]: 2025-06-20T18:41:18.021067Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 18:41:18.021243 waagent[1924]: 2025-06-20T18:41:18.021208Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 18:41:18.029654 waagent[1924]: 2025-06-20T18:41:18.029596Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 20 18:41:18.036773 waagent[1924]: 2025-06-20T18:41:18.035601Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jun 20 18:41:18.036773 waagent[1924]: 2025-06-20T18:41:18.036086Z INFO ExtHandler Jun 20 18:41:18.036773 waagent[1924]: 2025-06-20T18:41:18.036160Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 1174ada6-c662-449f-8073-b87ce2332bf9 eTag: 8266811496642642321 source: Fabric] Jun 20 18:41:18.036773 waagent[1924]: 2025-06-20T18:41:18.036421Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jun 20 18:41:18.037033 waagent[1924]: 2025-06-20T18:41:18.036979Z INFO ExtHandler Jun 20 18:41:18.037093 waagent[1924]: 2025-06-20T18:41:18.037064Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jun 20 18:41:18.041019 waagent[1924]: 2025-06-20T18:41:18.040986Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jun 20 18:41:18.114236 waagent[1924]: 2025-06-20T18:41:18.114145Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C648B62CCD8F70E5F6E2594B2843371304032044', 'hasPrivateKey': True} Jun 20 18:41:18.114634 waagent[1924]: 2025-06-20T18:41:18.114590Z INFO ExtHandler Downloaded certificate {'thumbprint': 'AAF04598724C40C30D6C0353D60E56A138C01C9D', 'hasPrivateKey': False} Jun 20 18:41:18.115065 waagent[1924]: 2025-06-20T18:41:18.115021Z INFO ExtHandler Fetch goal state completed Jun 20 18:41:18.131027 waagent[1924]: 2025-06-20T18:41:18.130956Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1924 Jun 20 18:41:18.131194 waagent[1924]: 2025-06-20T18:41:18.131159Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jun 20 18:41:18.132942 waagent[1924]: 2025-06-20T18:41:18.132893Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.2.0', '', 'Flatcar Container Linux by Kinvolk'] Jun 20 18:41:18.133322 waagent[1924]: 2025-06-20T18:41:18.133285Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jun 20 18:41:18.176051 waagent[1924]: 2025-06-20T18:41:18.175954Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jun 20 18:41:18.176193 waagent[1924]: 2025-06-20T18:41:18.176153Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jun 20 18:41:18.182302 waagent[1924]: 2025-06-20T18:41:18.182252Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not 
enabled. Adding it now Jun 20 18:41:18.188526 systemd[1]: Reload requested from client PID 1942 ('systemctl') (unit waagent.service)... Jun 20 18:41:18.188808 systemd[1]: Reloading... Jun 20 18:41:18.281780 zram_generator::config[1977]: No configuration found. Jun 20 18:41:18.383921 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:41:18.481195 systemd[1]: Reloading finished in 292 ms. Jun 20 18:41:18.500769 waagent[1924]: 2025-06-20T18:41:18.498826Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jun 20 18:41:18.505805 systemd[1]: Reload requested from client PID 2035 ('systemctl') (unit waagent.service)... Jun 20 18:41:18.505921 systemd[1]: Reloading... Jun 20 18:41:18.580902 zram_generator::config[2072]: No configuration found. Jun 20 18:41:18.694625 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:41:18.792786 systemd[1]: Reloading finished in 286 ms. Jun 20 18:41:18.806783 waagent[1924]: 2025-06-20T18:41:18.805482Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jun 20 18:41:18.806783 waagent[1924]: 2025-06-20T18:41:18.805658Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jun 20 18:41:19.062128 waagent[1924]: 2025-06-20T18:41:19.061992Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jun 20 18:41:19.062663 waagent[1924]: 2025-06-20T18:41:19.062588Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jun 20 18:41:19.063493 waagent[1924]: 2025-06-20T18:41:19.063403Z INFO ExtHandler ExtHandler Starting env monitor service. Jun 20 18:41:19.064046 waagent[1924]: 2025-06-20T18:41:19.063828Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jun 20 18:41:19.064046 waagent[1924]: 2025-06-20T18:41:19.063996Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 18:41:19.064312 waagent[1924]: 2025-06-20T18:41:19.064266Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 18:41:19.065197 waagent[1924]: 2025-06-20T18:41:19.064415Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 18:41:19.065197 waagent[1924]: 2025-06-20T18:41:19.064563Z INFO EnvHandler ExtHandler Configure routes Jun 20 18:41:19.065197 waagent[1924]: 2025-06-20T18:41:19.064623Z INFO EnvHandler ExtHandler Gateway:None Jun 20 18:41:19.065197 waagent[1924]: 2025-06-20T18:41:19.064667Z INFO EnvHandler ExtHandler Routes:None Jun 20 18:41:19.065481 waagent[1924]: 2025-06-20T18:41:19.065426Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 18:41:19.065832 waagent[1924]: 2025-06-20T18:41:19.065781Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Jun 20 18:41:19.066012 waagent[1924]: 2025-06-20T18:41:19.065967Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jun 20 18:41:19.066284 waagent[1924]: 2025-06-20T18:41:19.066241Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jun 20 18:41:19.066284 waagent[1924]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jun 20 18:41:19.066284 waagent[1924]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jun 20 18:41:19.066284 waagent[1924]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jun 20 18:41:19.066284 waagent[1924]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jun 20 18:41:19.066284 waagent[1924]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 20 18:41:19.066284 waagent[1924]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 20 18:41:19.066872 waagent[1924]: 2025-06-20T18:41:19.066808Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jun 20 18:41:19.067442 waagent[1924]: 2025-06-20T18:41:19.067357Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jun 20 18:41:19.068323 waagent[1924]: 2025-06-20T18:41:19.067303Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jun 20 18:41:19.068323 waagent[1924]: 2025-06-20T18:41:19.067954Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jun 20 18:41:19.077239 waagent[1924]: 2025-06-20T18:41:19.077180Z INFO ExtHandler ExtHandler Jun 20 18:41:19.077326 waagent[1924]: 2025-06-20T18:41:19.077291Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: f5ef736a-5a65-4be8-8607-28a4e7c9e3cb correlation f0158330-4c13-491c-a4ce-0dd7d265340d created: 2025-06-20T18:40:09.011856Z] Jun 20 18:41:19.077697 waagent[1924]: 2025-06-20T18:41:19.077654Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jun 20 18:41:19.078273 waagent[1924]: 2025-06-20T18:41:19.078236Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jun 20 18:41:19.108249 waagent[1924]: 2025-06-20T18:41:19.107850Z INFO MonitorHandler ExtHandler Network interfaces: Jun 20 18:41:19.108249 waagent[1924]: Executing ['ip', '-a', '-o', 'link']: Jun 20 18:41:19.108249 waagent[1924]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jun 20 18:41:19.108249 waagent[1924]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:6d:ba:6a brd ff:ff:ff:ff:ff:ff Jun 20 18:41:19.108249 waagent[1924]: 3: enP50067s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:6d:ba:6a brd ff:ff:ff:ff:ff:ff\ altname enP50067p0s2 Jun 20 18:41:19.108249 waagent[1924]: Executing ['ip', '-4', '-a', '-o', 'address']: Jun 20 18:41:19.108249 waagent[1924]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jun 20 18:41:19.108249 waagent[1924]: 2: eth0 
inet 10.200.20.15/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jun 20 18:41:19.108249 waagent[1924]: Executing ['ip', '-6', '-a', '-o', 'address']: Jun 20 18:41:19.108249 waagent[1924]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jun 20 18:41:19.108249 waagent[1924]: 2: eth0 inet6 fe80::20d:3aff:fe6d:ba6a/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 20 18:41:19.108249 waagent[1924]: 3: enP50067s1 inet6 fe80::20d:3aff:fe6d:ba6a/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 20 18:41:19.138583 waagent[1924]: 2025-06-20T18:41:19.138502Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: BE502110-0FD2-490A-9C6E-F22C46B36A9B;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jun 20 18:41:19.166681 waagent[1924]: 2025-06-20T18:41:19.166605Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jun 20 18:41:19.166681 waagent[1924]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:41:19.166681 waagent[1924]: pkts bytes target prot opt in out source destination Jun 20 18:41:19.166681 waagent[1924]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:41:19.166681 waagent[1924]: pkts bytes target prot opt in out source destination Jun 20 18:41:19.166681 waagent[1924]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:41:19.166681 waagent[1924]: pkts bytes target prot opt in out source destination Jun 20 18:41:19.166681 waagent[1924]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 20 18:41:19.166681 waagent[1924]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 20 18:41:19.166681 waagent[1924]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 20 18:41:19.169662 waagent[1924]: 2025-06-20T18:41:19.169600Z INFO EnvHandler ExtHandler Current Firewall rules: Jun 20 18:41:19.169662 waagent[1924]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:41:19.169662 waagent[1924]: pkts bytes target prot opt in out source destination Jun 20 18:41:19.169662 waagent[1924]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:41:19.169662 waagent[1924]: pkts bytes target prot opt in out source destination Jun 20 18:41:19.169662 waagent[1924]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:41:19.169662 waagent[1924]: pkts bytes target prot opt in out source destination Jun 20 18:41:19.169662 waagent[1924]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 20 18:41:19.169662 waagent[1924]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 20 18:41:19.169662 waagent[1924]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 20 18:41:19.169934 waagent[1924]: 2025-06-20T18:41:19.169895Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jun 20 18:41:22.841887 systemd[1]: kubelet.service: 
Scheduled restart job, restart counter is at 1. Jun 20 18:41:22.851985 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:41:22.958403 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:41:22.962439 (kubelet)[2167]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:41:23.613871 kubelet[2167]: E0620 18:41:23.613813 2167 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:41:23.616523 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:41:23.616649 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:41:23.617107 systemd[1]: kubelet.service: Consumed 123ms CPU time, 105.4M memory peak. Jun 20 18:41:25.627496 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 20 18:41:25.635054 systemd[1]: Started sshd@0-10.200.20.15:22-10.200.16.10:53732.service - OpenSSH per-connection server daemon (10.200.16.10:53732). Jun 20 18:41:26.188094 sshd[2175]: Accepted publickey for core from 10.200.16.10 port 53732 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:41:26.189262 sshd-session[2175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:41:26.193708 systemd-logind[1701]: New session 3 of user core. Jun 20 18:41:26.203896 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 20 18:41:26.600921 systemd[1]: Started sshd@1-10.200.20.15:22-10.200.16.10:53740.service - OpenSSH per-connection server daemon (10.200.16.10:53740). 
Jun 20 18:41:27.057713 sshd[2180]: Accepted publickey for core from 10.200.16.10 port 53740 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:41:27.058893 sshd-session[2180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:41:27.062824 systemd-logind[1701]: New session 4 of user core. Jun 20 18:41:27.069904 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 20 18:41:27.391941 sshd[2182]: Connection closed by 10.200.16.10 port 53740 Jun 20 18:41:27.391706 sshd-session[2180]: pam_unix(sshd:session): session closed for user core Jun 20 18:41:27.395499 systemd-logind[1701]: Session 4 logged out. Waiting for processes to exit. Jun 20 18:41:27.396091 systemd[1]: sshd@1-10.200.20.15:22-10.200.16.10:53740.service: Deactivated successfully. Jun 20 18:41:27.398180 systemd[1]: session-4.scope: Deactivated successfully. Jun 20 18:41:27.399053 systemd-logind[1701]: Removed session 4. Jun 20 18:41:27.486267 systemd[1]: Started sshd@2-10.200.20.15:22-10.200.16.10:53746.service - OpenSSH per-connection server daemon (10.200.16.10:53746). Jun 20 18:41:28.015672 sshd[2188]: Accepted publickey for core from 10.200.16.10 port 53746 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:41:28.016936 sshd-session[2188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:41:28.020820 systemd-logind[1701]: New session 5 of user core. Jun 20 18:41:28.027881 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 20 18:41:28.388813 sshd[2190]: Connection closed by 10.200.16.10 port 53746 Jun 20 18:41:28.388585 sshd-session[2188]: pam_unix(sshd:session): session closed for user core Jun 20 18:41:28.392395 systemd[1]: sshd@2-10.200.20.15:22-10.200.16.10:53746.service: Deactivated successfully. Jun 20 18:41:28.394016 systemd[1]: session-5.scope: Deactivated successfully. Jun 20 18:41:28.394709 systemd-logind[1701]: Session 5 logged out. 
Waiting for processes to exit. Jun 20 18:41:28.395566 systemd-logind[1701]: Removed session 5. Jun 20 18:41:28.482082 systemd[1]: Started sshd@3-10.200.20.15:22-10.200.16.10:45440.service - OpenSSH per-connection server daemon (10.200.16.10:45440). Jun 20 18:41:28.968924 sshd[2196]: Accepted publickey for core from 10.200.16.10 port 45440 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:41:28.970186 sshd-session[2196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:41:28.975681 systemd-logind[1701]: New session 6 of user core. Jun 20 18:41:28.981077 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 20 18:41:29.318609 sshd[2198]: Connection closed by 10.200.16.10 port 45440 Jun 20 18:41:29.318260 sshd-session[2196]: pam_unix(sshd:session): session closed for user core Jun 20 18:41:29.322170 systemd[1]: sshd@3-10.200.20.15:22-10.200.16.10:45440.service: Deactivated successfully. Jun 20 18:41:29.323678 systemd[1]: session-6.scope: Deactivated successfully. Jun 20 18:41:29.324520 systemd-logind[1701]: Session 6 logged out. Waiting for processes to exit. Jun 20 18:41:29.325422 systemd-logind[1701]: Removed session 6. Jun 20 18:41:29.415253 systemd[1]: Started sshd@4-10.200.20.15:22-10.200.16.10:45452.service - OpenSSH per-connection server daemon (10.200.16.10:45452). Jun 20 18:41:29.958739 sshd[2204]: Accepted publickey for core from 10.200.16.10 port 45452 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:41:29.959974 sshd-session[2204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:41:29.963919 systemd-logind[1701]: New session 7 of user core. Jun 20 18:41:29.970890 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jun 20 18:41:30.366690 sudo[2207]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 20 18:41:30.366984 sudo[2207]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:41:30.392500 sudo[2207]: pam_unix(sudo:session): session closed for user root Jun 20 18:41:30.475536 sshd[2206]: Connection closed by 10.200.16.10 port 45452 Jun 20 18:41:30.476334 sshd-session[2204]: pam_unix(sshd:session): session closed for user core Jun 20 18:41:30.479977 systemd[1]: sshd@4-10.200.20.15:22-10.200.16.10:45452.service: Deactivated successfully. Jun 20 18:41:30.481453 systemd[1]: session-7.scope: Deactivated successfully. Jun 20 18:41:30.483250 systemd-logind[1701]: Session 7 logged out. Waiting for processes to exit. Jun 20 18:41:30.484327 systemd-logind[1701]: Removed session 7. Jun 20 18:41:30.558213 systemd[1]: Started sshd@5-10.200.20.15:22-10.200.16.10:45466.service - OpenSSH per-connection server daemon (10.200.16.10:45466). Jun 20 18:41:31.020471 sshd[2213]: Accepted publickey for core from 10.200.16.10 port 45466 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:41:31.021734 sshd-session[2213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:41:31.026597 systemd-logind[1701]: New session 8 of user core. Jun 20 18:41:31.032894 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jun 20 18:41:31.280222 sudo[2217]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jun 20 18:41:31.281037 sudo[2217]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 18:41:31.284200 sudo[2217]: pam_unix(sudo:session): session closed for user root
Jun 20 18:41:31.288616 sudo[2216]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jun 20 18:41:31.289126 sudo[2216]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 18:41:31.307013 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jun 20 18:41:31.327646 augenrules[2239]: No rules
Jun 20 18:41:31.328900 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 20 18:41:31.329095 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jun 20 18:41:31.330990 sudo[2216]: pam_unix(sudo:session): session closed for user root
Jun 20 18:41:31.408524 sshd[2215]: Connection closed by 10.200.16.10 port 45466
Jun 20 18:41:31.408431 sshd-session[2213]: pam_unix(sshd:session): session closed for user core
Jun 20 18:41:31.411089 systemd-logind[1701]: Session 8 logged out. Waiting for processes to exit.
Jun 20 18:41:31.411328 systemd[1]: sshd@5-10.200.20.15:22-10.200.16.10:45466.service: Deactivated successfully.
Jun 20 18:41:31.412908 systemd[1]: session-8.scope: Deactivated successfully.
Jun 20 18:41:31.414301 systemd-logind[1701]: Removed session 8.
Jun 20 18:41:31.497989 systemd[1]: Started sshd@6-10.200.20.15:22-10.200.16.10:45470.service - OpenSSH per-connection server daemon (10.200.16.10:45470).
Jun 20 18:41:31.951500 sshd[2248]: Accepted publickey for core from 10.200.16.10 port 45470 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0
Jun 20 18:41:31.952710 sshd-session[2248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:41:31.958088 systemd-logind[1701]: New session 9 of user core.
Jun 20 18:41:31.964901 systemd[1]: Started session-9.scope - Session 9 of User core.
Jun 20 18:41:32.207979 sudo[2251]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jun 20 18:41:32.208234 sudo[2251]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 18:41:33.464169 (dockerd)[2268]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jun 20 18:41:33.464614 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jun 20 18:41:33.867108 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jun 20 18:41:33.876941 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 18:41:34.890731 chronyd[1693]: Selected source PHC0
Jun 20 18:41:35.015807 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 18:41:35.019990 (kubelet)[2281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 18:41:35.061147 kubelet[2281]: E0620 18:41:35.061076 2281 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 18:41:35.063063 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 18:41:35.063188 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 18:41:35.063602 systemd[1]: kubelet.service: Consumed 123ms CPU time, 106.1M memory peak.
Jun 20 18:41:35.213635 dockerd[2268]: time="2025-06-20T18:41:35.213515626Z" level=info msg="Starting up"
Jun 20 18:41:35.671713 dockerd[2268]: time="2025-06-20T18:41:35.671464478Z" level=info msg="Loading containers: start."
Jun 20 18:41:35.827780 kernel: Initializing XFRM netlink socket
Jun 20 18:41:35.908137 systemd-networkd[1618]: docker0: Link UP
Jun 20 18:41:35.947897 dockerd[2268]: time="2025-06-20T18:41:35.947785659Z" level=info msg="Loading containers: done."
Jun 20 18:41:35.959912 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2066328072-merged.mount: Deactivated successfully.
Jun 20 18:41:35.974743 dockerd[2268]: time="2025-06-20T18:41:35.974323324Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jun 20 18:41:35.974743 dockerd[2268]: time="2025-06-20T18:41:35.974431645Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Jun 20 18:41:35.974743 dockerd[2268]: time="2025-06-20T18:41:35.974552245Z" level=info msg="Daemon has completed initialization"
Jun 20 18:41:36.027521 systemd[1]: Started docker.service - Docker Application Container Engine.
Jun 20 18:41:36.028373 dockerd[2268]: time="2025-06-20T18:41:36.027871136Z" level=info msg="API listen on /run/docker.sock"
Jun 20 18:41:36.698579 containerd[1720]: time="2025-06-20T18:41:36.698303778Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\""
Jun 20 18:41:37.615244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3094440534.mount: Deactivated successfully.
Jun 20 18:41:39.038740 containerd[1720]: time="2025-06-20T18:41:39.037729445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:41:39.041817 containerd[1720]: time="2025-06-20T18:41:39.041772809Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351716"
Jun 20 18:41:39.047446 containerd[1720]: time="2025-06-20T18:41:39.047394294Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:41:39.052313 containerd[1720]: time="2025-06-20T18:41:39.052258779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:41:39.053512 containerd[1720]: time="2025-06-20T18:41:39.053349980Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 2.355008042s"
Jun 20 18:41:39.053512 containerd[1720]: time="2025-06-20T18:41:39.053386700Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\""
Jun 20 18:41:39.054870 containerd[1720]: time="2025-06-20T18:41:39.054762182Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\""
Jun 20 18:41:40.545109 containerd[1720]: time="2025-06-20T18:41:40.545051627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:41:40.548891 containerd[1720]: time="2025-06-20T18:41:40.548844670Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537623"
Jun 20 18:41:40.551823 containerd[1720]: time="2025-06-20T18:41:40.551767273Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:41:40.558008 containerd[1720]: time="2025-06-20T18:41:40.557963639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:41:40.559654 containerd[1720]: time="2025-06-20T18:41:40.558940400Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.503962578s"
Jun 20 18:41:40.559654 containerd[1720]: time="2025-06-20T18:41:40.558973680Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\""
Jun 20 18:41:40.559654 containerd[1720]: time="2025-06-20T18:41:40.559490721Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\""
Jun 20 18:41:42.242790 containerd[1720]: time="2025-06-20T18:41:42.242625353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:41:42.246059 containerd[1720]: time="2025-06-20T18:41:42.246011956Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293515"
Jun 20 18:41:42.250733 containerd[1720]: time="2025-06-20T18:41:42.250701441Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:41:42.256659 containerd[1720]: time="2025-06-20T18:41:42.256577087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:41:42.258312 containerd[1720]: time="2025-06-20T18:41:42.258013888Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.698493087s"
Jun 20 18:41:42.258312 containerd[1720]: time="2025-06-20T18:41:42.258046568Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\""
Jun 20 18:41:42.261071 containerd[1720]: time="2025-06-20T18:41:42.261041451Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\""
Jun 20 18:41:43.933114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount368803361.mount: Deactivated successfully.
Jun 20 18:41:44.294289 containerd[1720]: time="2025-06-20T18:41:44.293956623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:41:44.296868 containerd[1720]: time="2025-06-20T18:41:44.296820429Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199472"
Jun 20 18:41:44.301110 containerd[1720]: time="2025-06-20T18:41:44.301054879Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:41:44.306673 containerd[1720]: time="2025-06-20T18:41:44.306598452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:41:44.307445 containerd[1720]: time="2025-06-20T18:41:44.307320654Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 2.046246043s"
Jun 20 18:41:44.307445 containerd[1720]: time="2025-06-20T18:41:44.307355654Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\""
Jun 20 18:41:44.308008 containerd[1720]: time="2025-06-20T18:41:44.307826815Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jun 20 18:41:44.990440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4276397759.mount: Deactivated successfully.
Jun 20 18:41:45.314499 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jun 20 18:41:45.322005 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 18:41:45.432789 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 18:41:45.437322 (kubelet)[2558]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 18:41:46.336448 kubelet[2558]: E0620 18:41:46.042248 2558 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 18:41:46.044480 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 18:41:46.044613 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 18:41:46.044912 systemd[1]: kubelet.service: Consumed 131ms CPU time, 105.2M memory peak.
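The kubelet entries above show a crash loop: each start fails with the same missing-config error, and systemd's restart counter ticks up (2, then 3). A minimal sketch of pulling those counter values out of journal lines like these (the helper name and regex are mine, not part of systemd or kubelet):

```python
import re

def kubelet_restart_counts(lines):
    """Collect restart-counter values from systemd journal lines of the form
    '... systemd[1]: kubelet.service: Scheduled restart job, restart counter is at N.'"""
    counts = []
    for line in lines:
        m = re.search(
            r"kubelet\.service: Scheduled restart job, restart counter is at (\d+)\.",
            line,
        )
        if m:
            counts.append(int(m.group(1)))
    return counts

# Sample lines taken verbatim from the log above.
log = [
    "Jun 20 18:41:33.867108 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.",
    "Jun 20 18:41:45.314499 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.",
    "Jun 20 18:41:56.192421 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.",
]
print(kubelet_restart_counts(log))  # → [2, 3, 4]
```

A strictly increasing counter with identical error lines between starts is the signature of a unit that will keep failing until its precondition (here, /var/lib/kubelet/config.yaml) is created.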
Jun 20 18:41:48.771211 containerd[1720]: time="2025-06-20T18:41:48.771152905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:41:48.774458 containerd[1720]: time="2025-06-20T18:41:48.774407628Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Jun 20 18:41:48.780666 containerd[1720]: time="2025-06-20T18:41:48.780630235Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:41:48.787231 containerd[1720]: time="2025-06-20T18:41:48.787173963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:41:48.788555 containerd[1720]: time="2025-06-20T18:41:48.788322124Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 4.480460869s"
Jun 20 18:41:48.788555 containerd[1720]: time="2025-06-20T18:41:48.788359724Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Jun 20 18:41:48.789059 containerd[1720]: time="2025-06-20T18:41:48.788869925Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jun 20 18:41:49.484517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount529630333.mount: Deactivated successfully.
Jun 20 18:41:49.511800 containerd[1720]: time="2025-06-20T18:41:49.511304710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:41:49.514307 containerd[1720]: time="2025-06-20T18:41:49.514087433Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Jun 20 18:41:49.518711 containerd[1720]: time="2025-06-20T18:41:49.518660278Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:41:49.524190 containerd[1720]: time="2025-06-20T18:41:49.524140444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:41:49.524878 containerd[1720]: time="2025-06-20T18:41:49.524837285Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 735.93824ms"
Jun 20 18:41:49.524878 containerd[1720]: time="2025-06-20T18:41:49.524873085Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jun 20 18:41:49.525540 containerd[1720]: time="2025-06-20T18:41:49.525446526Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jun 20 18:41:50.210631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1435425895.mount: Deactivated successfully.
Jun 20 18:41:55.146679 containerd[1720]: time="2025-06-20T18:41:55.146617321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:41:55.150823 containerd[1720]: time="2025-06-20T18:41:55.150767488Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334599"
Jun 20 18:41:55.157267 containerd[1720]: time="2025-06-20T18:41:55.156007097Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:41:55.168082 containerd[1720]: time="2025-06-20T18:41:55.167971276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:41:55.168906 containerd[1720]: time="2025-06-20T18:41:55.168870718Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 5.643380872s"
Jun 20 18:41:55.168906 containerd[1720]: time="2025-06-20T18:41:55.168905198Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Jun 20 18:41:56.151763 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Jun 20 18:41:56.192421 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jun 20 18:41:56.199353 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 18:41:56.294915 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
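Each containerd "Pulled image" entry above logs both the image size in bytes and the wall-clock pull duration (e.g. etcd: size "70026017" in 5.643380872s), so an effective pull rate can be recovered from the message text. A minimal sketch; the helper name and regex are mine, not containerd's:

```python
import re

def pull_rate_mib_s(msg):
    """Hypothetical helper: parse 'size "<bytes>" in <duration>' out of a
    containerd 'Pulled image' message and return the rate in MiB/s.
    Tolerates the escaped quotes (\\") that appear in journal output."""
    m = re.search(r'size \\?"(\d+)\\?" in ([\d.]+)(ms|s)\b', msg)
    if not m:
        return None
    size_bytes = int(m.group(1))
    value, unit = float(m.group(2)), m.group(3)
    seconds = value / 1000.0 if unit == "ms" else value
    return size_bytes / seconds / (1024 * 1024)

# Figures taken from the etcd and pause pull entries above.
etcd = 'Pulled image "registry.k8s.io/etcd:3.5.21-0" ... size "70026017" in 5.643380872s'
print(round(pull_rate_mib_s(etcd), 1))  # ≈ 11.8 MiB/s
```

For tiny images such as pause (267933 bytes in 735.93824ms) the rate is dominated by registry round-trips rather than bandwidth, which is why it comes out far lower than the large pulls.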
Jun 20 18:41:56.297714 (kubelet)[2693]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 18:41:56.330240 kubelet[2693]: E0620 18:41:56.330199 2693 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 18:41:56.333069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 18:41:56.333204 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 18:41:56.333478 systemd[1]: kubelet.service: Consumed 110ms CPU time, 106.8M memory peak.
Jun 20 18:41:56.629401 update_engine[1702]: I20250620 18:41:56.628782 1702 update_attempter.cc:509] Updating boot flags...
Jun 20 18:41:57.362837 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2715)
Jun 20 18:41:59.427039 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 18:41:59.427183 systemd[1]: kubelet.service: Consumed 110ms CPU time, 106.8M memory peak.
Jun 20 18:41:59.437977 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 18:41:59.467111 systemd[1]: Reload requested from client PID 2770 ('systemctl') (unit session-9.scope)...
Jun 20 18:41:59.467129 systemd[1]: Reloading...
Jun 20 18:41:59.593805 zram_generator::config[2820]: No configuration found.
Jun 20 18:41:59.683769 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 18:41:59.784938 systemd[1]: Reloading finished in 317 ms.
Jun 20 18:41:59.825327 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 18:41:59.834290 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 18:41:59.835099 systemd[1]: kubelet.service: Deactivated successfully.
Jun 20 18:41:59.835302 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 18:41:59.835342 systemd[1]: kubelet.service: Consumed 86ms CPU time, 94.9M memory peak.
Jun 20 18:41:59.837271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 18:42:01.284826 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 18:42:01.288480 (kubelet)[2887]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jun 20 18:42:01.320256 kubelet[2887]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 20 18:42:01.320256 kubelet[2887]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jun 20 18:42:01.320256 kubelet[2887]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
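The deprecation warnings above say that --container-runtime-endpoint and --volume-plugin-dir should move into the file passed via --config. A hedged sketch of what that KubeletConfiguration fragment could look like; the field names are real KubeletConfiguration (kubelet.config.k8s.io/v1beta1) fields, but the endpoint value is an assumption for a containerd setup, and the plugin directory is taken from the Flexvolume path logged further down:

```yaml
# Sketch only: values are illustrative, not taken from this node's actual config.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces --container-runtime-endpoint (assumed containerd socket path)
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
# replaces --volume-plugin-dir (path matches the Flexvolume directory in the log)
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
```

Note that --pod-infra-container-image has no config-file replacement here: per the warning, the image garbage collector obtains the sandbox image from the CRI starting in 1.35.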
Jun 20 18:42:01.320621 kubelet[2887]: I0620 18:42:01.320304 2887 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jun 20 18:42:01.660120 kubelet[2887]: I0620 18:42:01.660007 2887 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jun 20 18:42:01.660120 kubelet[2887]: I0620 18:42:01.660037 2887 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 20 18:42:01.660524 kubelet[2887]: I0620 18:42:01.660496 2887 server.go:956] "Client rotation is on, will bootstrap in background"
Jun 20 18:42:01.675388 kubelet[2887]: E0620 18:42:01.675345 2887 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jun 20 18:42:01.676033 kubelet[2887]: I0620 18:42:01.675911 2887 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jun 20 18:42:01.684221 kubelet[2887]: E0620 18:42:01.684184 2887 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jun 20 18:42:01.684221 kubelet[2887]: I0620 18:42:01.684223 2887 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jun 20 18:42:01.687041 kubelet[2887]: I0620 18:42:01.687022 2887 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jun 20 18:42:01.688140 kubelet[2887]: I0620 18:42:01.688106 2887 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jun 20 18:42:01.688290 kubelet[2887]: I0620 18:42:01.688142 2887 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.0-a-c483281568","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jun 20 18:42:01.688379 kubelet[2887]: I0620 18:42:01.688301 2887 topology_manager.go:138] "Creating topology manager with none policy"
Jun 20 18:42:01.688379 kubelet[2887]: I0620 18:42:01.688326 2887 container_manager_linux.go:303] "Creating device plugin manager"
Jun 20 18:42:01.688466 kubelet[2887]: I0620 18:42:01.688449 2887 state_mem.go:36] "Initialized new in-memory state store"
Jun 20 18:42:01.691315 kubelet[2887]: I0620 18:42:01.691296 2887 kubelet.go:480] "Attempting to sync node with API server"
Jun 20 18:42:01.691349 kubelet[2887]: I0620 18:42:01.691320 2887 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jun 20 18:42:01.691382 kubelet[2887]: I0620 18:42:01.691351 2887 kubelet.go:386] "Adding apiserver pod source"
Jun 20 18:42:01.692872 kubelet[2887]: I0620 18:42:01.692485 2887 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jun 20 18:42:01.696361 kubelet[2887]: E0620 18:42:01.696336 2887 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.0-a-c483281568&limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jun 20 18:42:01.696544 kubelet[2887]: I0620 18:42:01.696529 2887 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jun 20 18:42:01.697154 kubelet[2887]: I0620 18:42:01.697137 2887 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jun 20 18:42:01.697267 kubelet[2887]: W0620 18:42:01.697256 2887 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jun 20 18:42:01.699958 kubelet[2887]: I0620 18:42:01.699943 2887 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jun 20 18:42:01.700073 kubelet[2887]: I0620 18:42:01.700063 2887 server.go:1289] "Started kubelet"
Jun 20 18:42:01.701169 kubelet[2887]: E0620 18:42:01.701071 2887 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jun 20 18:42:01.701237 kubelet[2887]: I0620 18:42:01.701177 2887 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jun 20 18:42:01.702732 kubelet[2887]: I0620 18:42:01.701941 2887 server.go:317] "Adding debug handlers to kubelet server"
Jun 20 18:42:01.702732 kubelet[2887]: I0620 18:42:01.702251 2887 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jun 20 18:42:01.702732 kubelet[2887]: I0620 18:42:01.702550 2887 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jun 20 18:42:01.703521 kubelet[2887]: E0620 18:42:01.702656 2887 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.15:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.15:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.0-a-c483281568.184ad461ccd1265b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.0-a-c483281568,UID:ci-4230.2.0-a-c483281568,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.0-a-c483281568,},FirstTimestamp:2025-06-20 18:42:01.700034139 +0000 UTC m=+0.408469946,LastTimestamp:2025-06-20 18:42:01.700034139 +0000 UTC m=+0.408469946,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.0-a-c483281568,}"
Jun 20 18:42:01.706122 kubelet[2887]: E0620 18:42:01.706102 2887 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jun 20 18:42:01.706544 kubelet[2887]: I0620 18:42:01.706523 2887 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jun 20 18:42:01.706581 kubelet[2887]: I0620 18:42:01.706567 2887 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jun 20 18:42:01.709351 kubelet[2887]: I0620 18:42:01.708847 2887 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jun 20 18:42:01.709351 kubelet[2887]: I0620 18:42:01.708986 2887 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jun 20 18:42:01.709351 kubelet[2887]: I0620 18:42:01.709051 2887 reconciler.go:26] "Reconciler: start to sync state"
Jun 20 18:42:01.709942 kubelet[2887]: E0620 18:42:01.709456 2887 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jun 20 18:42:01.710199 kubelet[2887]: E0620 18:42:01.710146 2887 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.0-a-c483281568\" not found"
Jun 20 18:42:01.710402 kubelet[2887]: E0620 18:42:01.710365 2887 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.0-a-c483281568?timeout=10s\": dial tcp 10.200.20.15:6443: connect: connection refused" interval="200ms"
Jun 20 18:42:01.710770 kubelet[2887]: I0620 18:42:01.710736 2887 factory.go:223] Registration of the containerd container factory successfully
Jun 20 18:42:01.710770 kubelet[2887]: I0620 18:42:01.710766 2887 factory.go:223] Registration of the systemd container factory successfully
Jun 20 18:42:01.710867 kubelet[2887]: I0620 18:42:01.710846 2887 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jun 20 18:42:01.739350 kubelet[2887]: I0620 18:42:01.739088 2887 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jun 20 18:42:01.739350 kubelet[2887]: I0620 18:42:01.739107 2887 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jun 20 18:42:01.739350 kubelet[2887]: I0620 18:42:01.739122 2887 state_mem.go:36] "Initialized new in-memory state store"
Jun 20 18:42:01.740167 kubelet[2887]: I0620 18:42:01.740060 2887 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jun 20 18:42:01.741779 kubelet[2887]: I0620 18:42:01.741724 2887 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jun 20 18:42:01.741779 kubelet[2887]: I0620 18:42:01.741758 2887 status_manager.go:230] "Starting to sync pod status with apiserver"
Jun 20 18:42:01.741779 kubelet[2887]: I0620 18:42:01.741784 2887 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jun 20 18:42:01.741890 kubelet[2887]: I0620 18:42:01.741791 2887 kubelet.go:2436] "Starting kubelet main sync loop" Jun 20 18:42:01.741890 kubelet[2887]: E0620 18:42:01.741828 2887 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 18:42:01.743685 kubelet[2887]: E0620 18:42:01.743154 2887 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jun 20 18:42:01.747022 kubelet[2887]: I0620 18:42:01.746997 2887 policy_none.go:49] "None policy: Start" Jun 20 18:42:01.747022 kubelet[2887]: I0620 18:42:01.747020 2887 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 18:42:01.747103 kubelet[2887]: I0620 18:42:01.747030 2887 state_mem.go:35] "Initializing new in-memory state store" Jun 20 18:42:01.756479 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 20 18:42:01.766412 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 20 18:42:01.769571 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jun 20 18:42:01.780833 kubelet[2887]: E0620 18:42:01.780398 2887 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jun 20 18:42:01.780833 kubelet[2887]: I0620 18:42:01.780588 2887 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 18:42:01.780833 kubelet[2887]: I0620 18:42:01.780599 2887 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 18:42:01.781435 kubelet[2887]: I0620 18:42:01.781400 2887 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 18:42:01.782231 kubelet[2887]: E0620 18:42:01.782190 2887 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 20 18:42:01.782299 kubelet[2887]: E0620 18:42:01.782240 2887 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.2.0-a-c483281568\" not found" Jun 20 18:42:01.854773 systemd[1]: Created slice kubepods-burstable-pod195b5c6e828227afefae212154245bae.slice - libcontainer container kubepods-burstable-pod195b5c6e828227afefae212154245bae.slice. Jun 20 18:42:01.861452 kubelet[2887]: E0620 18:42:01.861411 2887 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.0-a-c483281568\" not found" node="ci-4230.2.0-a-c483281568" Jun 20 18:42:01.866371 systemd[1]: Created slice kubepods-burstable-pod1f55c1fba06a046757587f9853441fed.slice - libcontainer container kubepods-burstable-pod1f55c1fba06a046757587f9853441fed.slice. 
Jun 20 18:42:01.868163 kubelet[2887]: E0620 18:42:01.868132 2887 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.0-a-c483281568\" not found" node="ci-4230.2.0-a-c483281568" Jun 20 18:42:01.870215 systemd[1]: Created slice kubepods-burstable-pod5cdac2d57d044135e3d601a451a36750.slice - libcontainer container kubepods-burstable-pod5cdac2d57d044135e3d601a451a36750.slice. Jun 20 18:42:01.871868 kubelet[2887]: E0620 18:42:01.871840 2887 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.0-a-c483281568\" not found" node="ci-4230.2.0-a-c483281568" Jun 20 18:42:01.882554 kubelet[2887]: I0620 18:42:01.882528 2887 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.0-a-c483281568" Jun 20 18:42:01.883019 kubelet[2887]: E0620 18:42:01.882977 2887 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.15:6443/api/v1/nodes\": dial tcp 10.200.20.15:6443: connect: connection refused" node="ci-4230.2.0-a-c483281568" Jun 20 18:42:01.910459 kubelet[2887]: I0620 18:42:01.910380 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/195b5c6e828227afefae212154245bae-ca-certs\") pod \"kube-apiserver-ci-4230.2.0-a-c483281568\" (UID: \"195b5c6e828227afefae212154245bae\") " pod="kube-system/kube-apiserver-ci-4230.2.0-a-c483281568" Jun 20 18:42:01.910459 kubelet[2887]: I0620 18:42:01.910410 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/195b5c6e828227afefae212154245bae-k8s-certs\") pod \"kube-apiserver-ci-4230.2.0-a-c483281568\" (UID: \"195b5c6e828227afefae212154245bae\") " pod="kube-system/kube-apiserver-ci-4230.2.0-a-c483281568" Jun 20 18:42:01.910459 kubelet[2887]: I0620 18:42:01.910427 
2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/195b5c6e828227afefae212154245bae-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.0-a-c483281568\" (UID: \"195b5c6e828227afefae212154245bae\") " pod="kube-system/kube-apiserver-ci-4230.2.0-a-c483281568" Jun 20 18:42:01.910459 kubelet[2887]: I0620 18:42:01.910443 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1f55c1fba06a046757587f9853441fed-ca-certs\") pod \"kube-controller-manager-ci-4230.2.0-a-c483281568\" (UID: \"1f55c1fba06a046757587f9853441fed\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-c483281568" Jun 20 18:42:01.911188 kubelet[2887]: E0620 18:42:01.910939 2887 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.0-a-c483281568?timeout=10s\": dial tcp 10.200.20.15:6443: connect: connection refused" interval="400ms" Jun 20 18:42:01.911527 kubelet[2887]: I0620 18:42:01.911478 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1f55c1fba06a046757587f9853441fed-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.0-a-c483281568\" (UID: \"1f55c1fba06a046757587f9853441fed\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-c483281568" Jun 20 18:42:01.911796 kubelet[2887]: I0620 18:42:01.911689 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1f55c1fba06a046757587f9853441fed-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.0-a-c483281568\" (UID: \"1f55c1fba06a046757587f9853441fed\") " 
pod="kube-system/kube-controller-manager-ci-4230.2.0-a-c483281568" Jun 20 18:42:01.911796 kubelet[2887]: I0620 18:42:01.911711 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1f55c1fba06a046757587f9853441fed-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.0-a-c483281568\" (UID: \"1f55c1fba06a046757587f9853441fed\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-c483281568" Jun 20 18:42:01.911796 kubelet[2887]: I0620 18:42:01.911728 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5cdac2d57d044135e3d601a451a36750-kubeconfig\") pod \"kube-scheduler-ci-4230.2.0-a-c483281568\" (UID: \"5cdac2d57d044135e3d601a451a36750\") " pod="kube-system/kube-scheduler-ci-4230.2.0-a-c483281568" Jun 20 18:42:01.911796 kubelet[2887]: I0620 18:42:01.911763 2887 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1f55c1fba06a046757587f9853441fed-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.0-a-c483281568\" (UID: \"1f55c1fba06a046757587f9853441fed\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-c483281568" Jun 20 18:42:02.084923 kubelet[2887]: I0620 18:42:02.084782 2887 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.0-a-c483281568" Jun 20 18:42:02.085217 kubelet[2887]: E0620 18:42:02.085185 2887 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.15:6443/api/v1/nodes\": dial tcp 10.200.20.15:6443: connect: connection refused" node="ci-4230.2.0-a-c483281568" Jun 20 18:42:02.162722 containerd[1720]: time="2025-06-20T18:42:02.162601374Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.0-a-c483281568,Uid:195b5c6e828227afefae212154245bae,Namespace:kube-system,Attempt:0,}" Jun 20 18:42:02.169264 containerd[1720]: time="2025-06-20T18:42:02.169044900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.0-a-c483281568,Uid:1f55c1fba06a046757587f9853441fed,Namespace:kube-system,Attempt:0,}" Jun 20 18:42:02.172889 containerd[1720]: time="2025-06-20T18:42:02.172859264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.0-a-c483281568,Uid:5cdac2d57d044135e3d601a451a36750,Namespace:kube-system,Attempt:0,}" Jun 20 18:42:02.311570 kubelet[2887]: E0620 18:42:02.311528 2887 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.0-a-c483281568?timeout=10s\": dial tcp 10.200.20.15:6443: connect: connection refused" interval="800ms" Jun 20 18:42:02.486801 kubelet[2887]: I0620 18:42:02.486770 2887 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.0-a-c483281568" Jun 20 18:42:02.487271 kubelet[2887]: E0620 18:42:02.487177 2887 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.15:6443/api/v1/nodes\": dial tcp 10.200.20.15:6443: connect: connection refused" node="ci-4230.2.0-a-c483281568" Jun 20 18:42:02.513121 kubelet[2887]: E0620 18:42:02.513077 2887 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.0-a-c483281568&limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jun 20 18:42:02.637419 kubelet[2887]: E0620 18:42:02.637375 2887 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://10.200.20.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jun 20 18:42:02.850396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1681334723.mount: Deactivated successfully. Jun 20 18:42:02.888781 containerd[1720]: time="2025-06-20T18:42:02.887959498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:42:02.904372 containerd[1720]: time="2025-06-20T18:42:02.904311193Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jun 20 18:42:02.909008 containerd[1720]: time="2025-06-20T18:42:02.908220557Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:42:02.913195 containerd[1720]: time="2025-06-20T18:42:02.913152201Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:42:02.921097 containerd[1720]: time="2025-06-20T18:42:02.920923809Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 20 18:42:02.926069 containerd[1720]: time="2025-06-20T18:42:02.925359333Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:42:02.928220 containerd[1720]: time="2025-06-20T18:42:02.928175455Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active 
requests=0, bytes read=0" Jun 20 18:42:02.933789 containerd[1720]: time="2025-06-20T18:42:02.933683541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:42:02.936207 containerd[1720]: time="2025-06-20T18:42:02.935720383Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 762.729519ms" Jun 20 18:42:02.938783 containerd[1720]: time="2025-06-20T18:42:02.936937904Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 773.848849ms" Jun 20 18:42:02.958815 containerd[1720]: time="2025-06-20T18:42:02.958760924Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 789.643183ms" Jun 20 18:42:03.112158 kubelet[2887]: E0620 18:42:03.112031 2887 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.0-a-c483281568?timeout=10s\": dial tcp 10.200.20.15:6443: connect: connection refused" interval="1.6s" Jun 20 18:42:03.195992 kubelet[2887]: E0620 18:42:03.195946 
2887 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jun 20 18:42:03.243250 kubelet[2887]: E0620 18:42:03.243204 2887 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jun 20 18:42:03.288994 kubelet[2887]: I0620 18:42:03.288965 2887 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.0-a-c483281568" Jun 20 18:42:03.289462 kubelet[2887]: E0620 18:42:03.289427 2887 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.15:6443/api/v1/nodes\": dial tcp 10.200.20.15:6443: connect: connection refused" node="ci-4230.2.0-a-c483281568" Jun 20 18:42:03.447385 containerd[1720]: time="2025-06-20T18:42:03.447216064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:42:03.447991 containerd[1720]: time="2025-06-20T18:42:03.447367664Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:42:03.447991 containerd[1720]: time="2025-06-20T18:42:03.447787905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:42:03.451064 containerd[1720]: time="2025-06-20T18:42:03.450571347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:42:03.451064 containerd[1720]: time="2025-06-20T18:42:03.450621107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:42:03.451064 containerd[1720]: time="2025-06-20T18:42:03.450636907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:42:03.451064 containerd[1720]: time="2025-06-20T18:42:03.450702628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:42:03.451786 containerd[1720]: time="2025-06-20T18:42:03.449783067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:42:03.457011 containerd[1720]: time="2025-06-20T18:42:03.455963953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:42:03.457011 containerd[1720]: time="2025-06-20T18:42:03.456027713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:42:03.457011 containerd[1720]: time="2025-06-20T18:42:03.456038793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:42:03.457011 containerd[1720]: time="2025-06-20T18:42:03.456116353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:42:03.473984 systemd[1]: Started cri-containerd-a9e8757ce6d0ca1984aafe866fcf87cd51d7ff1790776af6594943daa5309cbe.scope - libcontainer container a9e8757ce6d0ca1984aafe866fcf87cd51d7ff1790776af6594943daa5309cbe. 
Jun 20 18:42:03.478699 systemd[1]: Started cri-containerd-71b6ce1b006ffaef6adc695ddce4ae89386891d8490c2e9a19e45cf5e8a2d42b.scope - libcontainer container 71b6ce1b006ffaef6adc695ddce4ae89386891d8490c2e9a19e45cf5e8a2d42b. Jun 20 18:42:03.480961 systemd[1]: Started cri-containerd-d2f36a3a5e280a4a919ee2f63c2f91e2ac815a56ed9ded5bed242b84ff977b35.scope - libcontainer container d2f36a3a5e280a4a919ee2f63c2f91e2ac815a56ed9ded5bed242b84ff977b35. Jun 20 18:42:03.523435 containerd[1720]: time="2025-06-20T18:42:03.523399816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.0-a-c483281568,Uid:195b5c6e828227afefae212154245bae,Namespace:kube-system,Attempt:0,} returns sandbox id \"71b6ce1b006ffaef6adc695ddce4ae89386891d8490c2e9a19e45cf5e8a2d42b\"" Jun 20 18:42:03.533195 containerd[1720]: time="2025-06-20T18:42:03.533059225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.0-a-c483281568,Uid:5cdac2d57d044135e3d601a451a36750,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9e8757ce6d0ca1984aafe866fcf87cd51d7ff1790776af6594943daa5309cbe\"" Jun 20 18:42:03.535694 containerd[1720]: time="2025-06-20T18:42:03.535640588Z" level=info msg="CreateContainer within sandbox \"71b6ce1b006ffaef6adc695ddce4ae89386891d8490c2e9a19e45cf5e8a2d42b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 18:42:03.538309 containerd[1720]: time="2025-06-20T18:42:03.538131830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.0-a-c483281568,Uid:1f55c1fba06a046757587f9853441fed,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2f36a3a5e280a4a919ee2f63c2f91e2ac815a56ed9ded5bed242b84ff977b35\"" Jun 20 18:42:03.542905 containerd[1720]: time="2025-06-20T18:42:03.542782034Z" level=info msg="CreateContainer within sandbox \"a9e8757ce6d0ca1984aafe866fcf87cd51d7ff1790776af6594943daa5309cbe\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 
18:42:03.548254 containerd[1720]: time="2025-06-20T18:42:03.548226399Z" level=info msg="CreateContainer within sandbox \"d2f36a3a5e280a4a919ee2f63c2f91e2ac815a56ed9ded5bed242b84ff977b35\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 18:42:03.628511 containerd[1720]: time="2025-06-20T18:42:03.628454235Z" level=info msg="CreateContainer within sandbox \"71b6ce1b006ffaef6adc695ddce4ae89386891d8490c2e9a19e45cf5e8a2d42b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"40368e08c4dd14d570b632d0873a648431c17029f447ded9fcc0bb7b200bd14a\"" Jun 20 18:42:03.630272 containerd[1720]: time="2025-06-20T18:42:03.629127636Z" level=info msg="StartContainer for \"40368e08c4dd14d570b632d0873a648431c17029f447ded9fcc0bb7b200bd14a\"" Jun 20 18:42:03.645669 containerd[1720]: time="2025-06-20T18:42:03.645554811Z" level=info msg="CreateContainer within sandbox \"d2f36a3a5e280a4a919ee2f63c2f91e2ac815a56ed9ded5bed242b84ff977b35\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2c89390bc4c13bce019111f4d3e107fe2d39bf48e7cc0ed9993e5642cb260d24\"" Jun 20 18:42:03.647380 containerd[1720]: time="2025-06-20T18:42:03.646183852Z" level=info msg="StartContainer for \"2c89390bc4c13bce019111f4d3e107fe2d39bf48e7cc0ed9993e5642cb260d24\"" Jun 20 18:42:03.650946 containerd[1720]: time="2025-06-20T18:42:03.650904376Z" level=info msg="CreateContainer within sandbox \"a9e8757ce6d0ca1984aafe866fcf87cd51d7ff1790776af6594943daa5309cbe\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"20597ddc8e9843c582498636da84452d1c922f92016cc704d43e4296303febdd\"" Jun 20 18:42:03.651596 containerd[1720]: time="2025-06-20T18:42:03.651578857Z" level=info msg="StartContainer for \"20597ddc8e9843c582498636da84452d1c922f92016cc704d43e4296303febdd\"" Jun 20 18:42:03.653013 systemd[1]: Started cri-containerd-40368e08c4dd14d570b632d0873a648431c17029f447ded9fcc0bb7b200bd14a.scope - libcontainer 
container 40368e08c4dd14d570b632d0873a648431c17029f447ded9fcc0bb7b200bd14a. Jun 20 18:42:03.681020 systemd[1]: Started cri-containerd-2c89390bc4c13bce019111f4d3e107fe2d39bf48e7cc0ed9993e5642cb260d24.scope - libcontainer container 2c89390bc4c13bce019111f4d3e107fe2d39bf48e7cc0ed9993e5642cb260d24. Jun 20 18:42:03.690984 systemd[1]: Started cri-containerd-20597ddc8e9843c582498636da84452d1c922f92016cc704d43e4296303febdd.scope - libcontainer container 20597ddc8e9843c582498636da84452d1c922f92016cc704d43e4296303febdd. Jun 20 18:42:03.710467 containerd[1720]: time="2025-06-20T18:42:03.710355792Z" level=info msg="StartContainer for \"40368e08c4dd14d570b632d0873a648431c17029f447ded9fcc0bb7b200bd14a\" returns successfully" Jun 20 18:42:03.759594 containerd[1720]: time="2025-06-20T18:42:03.759266958Z" level=info msg="StartContainer for \"2c89390bc4c13bce019111f4d3e107fe2d39bf48e7cc0ed9993e5642cb260d24\" returns successfully" Jun 20 18:42:03.759594 containerd[1720]: time="2025-06-20T18:42:03.759389238Z" level=info msg="StartContainer for \"20597ddc8e9843c582498636da84452d1c922f92016cc704d43e4296303febdd\" returns successfully" Jun 20 18:42:03.766198 kubelet[2887]: E0620 18:42:03.766160 2887 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.0-a-c483281568\" not found" node="ci-4230.2.0-a-c483281568" Jun 20 18:42:03.771567 kubelet[2887]: E0620 18:42:03.771161 2887 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.0-a-c483281568\" not found" node="ci-4230.2.0-a-c483281568" Jun 20 18:42:03.773405 kubelet[2887]: E0620 18:42:03.773366 2887 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.0-a-c483281568\" not found" node="ci-4230.2.0-a-c483281568" Jun 20 18:42:04.776675 kubelet[2887]: E0620 18:42:04.776279 2887 kubelet.go:3305] "No need to create a mirror pod, since failed to 
get node info from the cluster" err="node \"ci-4230.2.0-a-c483281568\" not found" node="ci-4230.2.0-a-c483281568" Jun 20 18:42:04.776675 kubelet[2887]: E0620 18:42:04.776574 2887 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.0-a-c483281568\" not found" node="ci-4230.2.0-a-c483281568" Jun 20 18:42:04.891921 kubelet[2887]: I0620 18:42:04.891145 2887 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.0-a-c483281568" Jun 20 18:42:06.261793 kubelet[2887]: E0620 18:42:06.261741 2887 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.2.0-a-c483281568\" not found" node="ci-4230.2.0-a-c483281568" Jun 20 18:42:06.362841 kubelet[2887]: I0620 18:42:06.361834 2887 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.0-a-c483281568" Jun 20 18:42:06.410940 kubelet[2887]: I0620 18:42:06.410698 2887 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.0-a-c483281568" Jun 20 18:42:06.423736 kubelet[2887]: E0620 18:42:06.423696 2887 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.2.0-a-c483281568\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230.2.0-a-c483281568" Jun 20 18:42:06.424078 kubelet[2887]: I0620 18:42:06.423926 2887 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.0-a-c483281568" Jun 20 18:42:06.428168 kubelet[2887]: E0620 18:42:06.428146 2887 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.0-a-c483281568\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230.2.0-a-c483281568" Jun 20 18:42:06.428431 kubelet[2887]: I0620 18:42:06.428270 2887 kubelet.go:3309] "Creating a mirror pod for static 
pod" pod="kube-system/kube-apiserver-ci-4230.2.0-a-c483281568" Jun 20 18:42:06.430104 kubelet[2887]: E0620 18:42:06.430069 2887 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.0-a-c483281568\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230.2.0-a-c483281568" Jun 20 18:42:06.443300 kubelet[2887]: I0620 18:42:06.443062 2887 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.0-a-c483281568" Jun 20 18:42:06.446379 kubelet[2887]: E0620 18:42:06.446016 2887 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.2.0-a-c483281568\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230.2.0-a-c483281568" Jun 20 18:42:06.703907 kubelet[2887]: I0620 18:42:06.703876 2887 apiserver.go:52] "Watching apiserver" Jun 20 18:42:06.709518 kubelet[2887]: I0620 18:42:06.709469 2887 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 18:42:08.484344 systemd[1]: Reload requested from client PID 3170 ('systemctl') (unit session-9.scope)... Jun 20 18:42:08.484359 systemd[1]: Reloading... Jun 20 18:42:08.589796 zram_generator::config[3222]: No configuration found. Jun 20 18:42:08.691365 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:42:08.806083 systemd[1]: Reloading finished in 321 ms. Jun 20 18:42:08.829078 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:42:08.843697 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 18:42:08.843977 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 18:42:08.844036 systemd[1]: kubelet.service: Consumed 762ms CPU time, 125M memory peak. Jun 20 18:42:08.851092 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:42:08.959561 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:42:08.964299 (kubelet)[3281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 18:42:09.561160 kubelet[3281]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:42:09.561160 kubelet[3281]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 18:42:09.561160 kubelet[3281]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 20 18:42:09.561160 kubelet[3281]: I0620 18:42:09.560204 3281 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 18:42:09.566937 kubelet[3281]: I0620 18:42:09.566906 3281 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jun 20 18:42:09.566937 kubelet[3281]: I0620 18:42:09.566929 3281 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 18:42:09.567140 kubelet[3281]: I0620 18:42:09.567118 3281 server.go:956] "Client rotation is on, will bootstrap in background" Jun 20 18:42:09.568346 kubelet[3281]: I0620 18:42:09.568324 3281 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jun 20 18:42:09.570772 kubelet[3281]: I0620 18:42:09.570542 3281 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 18:42:09.575049 kubelet[3281]: E0620 18:42:09.573812 3281 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jun 20 18:42:09.575049 kubelet[3281]: I0620 18:42:09.573834 3281 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jun 20 18:42:09.577873 kubelet[3281]: I0620 18:42:09.577807 3281 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 18:42:09.578068 kubelet[3281]: I0620 18:42:09.578038 3281 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 18:42:09.578231 kubelet[3281]: I0620 18:42:09.578064 3281 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.0-a-c483281568","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 18:42:09.578309 kubelet[3281]: I0620 18:42:09.578237 3281 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 
18:42:09.578309 kubelet[3281]: I0620 18:42:09.578246 3281 container_manager_linux.go:303] "Creating device plugin manager" Jun 20 18:42:09.578309 kubelet[3281]: I0620 18:42:09.578283 3281 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:42:09.578444 kubelet[3281]: I0620 18:42:09.578427 3281 kubelet.go:480] "Attempting to sync node with API server" Jun 20 18:42:09.578476 kubelet[3281]: I0620 18:42:09.578447 3281 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 18:42:09.578476 kubelet[3281]: I0620 18:42:09.578469 3281 kubelet.go:386] "Adding apiserver pod source" Jun 20 18:42:09.578476 kubelet[3281]: I0620 18:42:09.578482 3281 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 18:42:09.583773 kubelet[3281]: I0620 18:42:09.581712 3281 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jun 20 18:42:09.587256 kubelet[3281]: I0620 18:42:09.585378 3281 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jun 20 18:42:09.604579 kubelet[3281]: I0620 18:42:09.604545 3281 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 18:42:09.604579 kubelet[3281]: I0620 18:42:09.604588 3281 server.go:1289] "Started kubelet" Jun 20 18:42:09.606709 kubelet[3281]: I0620 18:42:09.606269 3281 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 18:42:09.610458 kubelet[3281]: I0620 18:42:09.610370 3281 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 18:42:09.613544 kubelet[3281]: I0620 18:42:09.613526 3281 server.go:317] "Adding debug handlers to kubelet server" Jun 20 18:42:09.618693 kubelet[3281]: I0620 18:42:09.618647 3281 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 18:42:09.618969 kubelet[3281]: I0620 18:42:09.618954 3281 
server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 18:42:09.619242 kubelet[3281]: I0620 18:42:09.619225 3281 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 18:42:09.619414 kubelet[3281]: I0620 18:42:09.619376 3281 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jun 20 18:42:09.619661 kubelet[3281]: I0620 18:42:09.619648 3281 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 18:42:09.620131 kubelet[3281]: E0620 18:42:09.620113 3281 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.0-a-c483281568\" not found" Jun 20 18:42:09.621064 kubelet[3281]: I0620 18:42:09.621026 3281 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 18:42:09.624937 kubelet[3281]: I0620 18:42:09.623008 3281 reconciler.go:26] "Reconciler: start to sync state" Jun 20 18:42:09.629102 kubelet[3281]: I0620 18:42:09.629073 3281 factory.go:223] Registration of the systemd container factory successfully Jun 20 18:42:09.629204 kubelet[3281]: I0620 18:42:09.629181 3281 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 18:42:09.635024 kubelet[3281]: E0620 18:42:09.634875 3281 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 18:42:09.640012 kubelet[3281]: I0620 18:42:09.639711 3281 factory.go:223] Registration of the containerd container factory successfully Jun 20 18:42:09.640445 kubelet[3281]: I0620 18:42:09.640336 3281 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jun 20 18:42:09.640684 kubelet[3281]: I0620 18:42:09.640643 3281 status_manager.go:230] "Starting to sync pod status with apiserver" Jun 20 18:42:09.641053 kubelet[3281]: I0620 18:42:09.640903 3281 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 20 18:42:09.641053 kubelet[3281]: I0620 18:42:09.640916 3281 kubelet.go:2436] "Starting kubelet main sync loop" Jun 20 18:42:09.641624 kubelet[3281]: E0620 18:42:09.641488 3281 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 18:42:09.700860 kubelet[3281]: I0620 18:42:09.700813 3281 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 18:42:09.700860 kubelet[3281]: I0620 18:42:09.700834 3281 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 18:42:09.700860 kubelet[3281]: I0620 18:42:09.700855 3281 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:42:09.701271 kubelet[3281]: I0620 18:42:09.700982 3281 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 20 18:42:09.701271 kubelet[3281]: I0620 18:42:09.700992 3281 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 20 18:42:09.701271 kubelet[3281]: I0620 18:42:09.701009 3281 policy_none.go:49] "None policy: Start" Jun 20 18:42:09.701271 kubelet[3281]: I0620 18:42:09.701017 3281 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 18:42:09.701271 kubelet[3281]: I0620 18:42:09.701025 3281 state_mem.go:35] "Initializing new in-memory state store" Jun 20 18:42:09.701271 kubelet[3281]: I0620 18:42:09.701102 3281 state_mem.go:75] "Updated machine memory state" Jun 20 18:42:09.707180 kubelet[3281]: E0620 18:42:09.707029 3281 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jun 20 18:42:09.707424 kubelet[3281]: I0620 
18:42:09.707387 3281 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 18:42:09.707593 kubelet[3281]: I0620 18:42:09.707405 3281 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 18:42:09.708887 kubelet[3281]: E0620 18:42:09.708638 3281 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 20 18:42:09.709795 kubelet[3281]: I0620 18:42:09.709643 3281 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 18:42:09.743086 kubelet[3281]: I0620 18:42:09.743050 3281 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.0-a-c483281568" Jun 20 18:42:09.743434 kubelet[3281]: I0620 18:42:09.743402 3281 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.0-a-c483281568" Jun 20 18:42:09.743568 kubelet[3281]: I0620 18:42:09.743548 3281 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.0-a-c483281568" Jun 20 18:42:09.759217 kubelet[3281]: I0620 18:42:09.759182 3281 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 18:42:09.759924 kubelet[3281]: I0620 18:42:09.759887 3281 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 18:42:09.760026 kubelet[3281]: I0620 18:42:09.760002 3281 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 18:42:09.810395 kubelet[3281]: I0620 18:42:09.810055 3281 kubelet_node_status.go:75] "Attempting to register node" 
node="ci-4230.2.0-a-c483281568" Jun 20 18:42:09.824787 kubelet[3281]: I0620 18:42:09.824087 3281 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230.2.0-a-c483281568" Jun 20 18:42:09.824787 kubelet[3281]: I0620 18:42:09.824170 3281 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.0-a-c483281568" Jun 20 18:42:09.825990 kubelet[3281]: I0620 18:42:09.825210 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1f55c1fba06a046757587f9853441fed-ca-certs\") pod \"kube-controller-manager-ci-4230.2.0-a-c483281568\" (UID: \"1f55c1fba06a046757587f9853441fed\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-c483281568" Jun 20 18:42:09.825990 kubelet[3281]: I0620 18:42:09.825243 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1f55c1fba06a046757587f9853441fed-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.0-a-c483281568\" (UID: \"1f55c1fba06a046757587f9853441fed\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-c483281568" Jun 20 18:42:09.825990 kubelet[3281]: I0620 18:42:09.825260 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1f55c1fba06a046757587f9853441fed-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.0-a-c483281568\" (UID: \"1f55c1fba06a046757587f9853441fed\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-c483281568" Jun 20 18:42:09.825990 kubelet[3281]: I0620 18:42:09.825287 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1f55c1fba06a046757587f9853441fed-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.0-a-c483281568\" (UID: 
\"1f55c1fba06a046757587f9853441fed\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-c483281568" Jun 20 18:42:09.825990 kubelet[3281]: I0620 18:42:09.825306 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1f55c1fba06a046757587f9853441fed-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.0-a-c483281568\" (UID: \"1f55c1fba06a046757587f9853441fed\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-c483281568" Jun 20 18:42:09.826163 kubelet[3281]: I0620 18:42:09.825323 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5cdac2d57d044135e3d601a451a36750-kubeconfig\") pod \"kube-scheduler-ci-4230.2.0-a-c483281568\" (UID: \"5cdac2d57d044135e3d601a451a36750\") " pod="kube-system/kube-scheduler-ci-4230.2.0-a-c483281568" Jun 20 18:42:09.826163 kubelet[3281]: I0620 18:42:09.825338 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/195b5c6e828227afefae212154245bae-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.0-a-c483281568\" (UID: \"195b5c6e828227afefae212154245bae\") " pod="kube-system/kube-apiserver-ci-4230.2.0-a-c483281568" Jun 20 18:42:09.826163 kubelet[3281]: I0620 18:42:09.825352 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/195b5c6e828227afefae212154245bae-ca-certs\") pod \"kube-apiserver-ci-4230.2.0-a-c483281568\" (UID: \"195b5c6e828227afefae212154245bae\") " pod="kube-system/kube-apiserver-ci-4230.2.0-a-c483281568" Jun 20 18:42:09.826163 kubelet[3281]: I0620 18:42:09.825365 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/195b5c6e828227afefae212154245bae-k8s-certs\") pod \"kube-apiserver-ci-4230.2.0-a-c483281568\" (UID: \"195b5c6e828227afefae212154245bae\") " pod="kube-system/kube-apiserver-ci-4230.2.0-a-c483281568" Jun 20 18:42:10.021696 sudo[3317]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 20 18:42:10.022374 sudo[3317]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jun 20 18:42:10.496835 sudo[3317]: pam_unix(sudo:session): session closed for user root Jun 20 18:42:10.579791 kubelet[3281]: I0620 18:42:10.579356 3281 apiserver.go:52] "Watching apiserver" Jun 20 18:42:10.625808 kubelet[3281]: I0620 18:42:10.625768 3281 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 18:42:10.671029 kubelet[3281]: I0620 18:42:10.670931 3281 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.0-a-c483281568" Jun 20 18:42:10.684319 kubelet[3281]: I0620 18:42:10.683905 3281 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 18:42:10.684319 kubelet[3281]: E0620 18:42:10.683959 3281 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.0-a-c483281568\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.0-a-c483281568" Jun 20 18:42:10.710573 kubelet[3281]: I0620 18:42:10.708269 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.2.0-a-c483281568" podStartSLOduration=1.708250988 podStartE2EDuration="1.708250988s" podCreationTimestamp="2025-06-20 18:42:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:42:10.694350252 +0000 UTC m=+1.726626010" 
watchObservedRunningTime="2025-06-20 18:42:10.708250988 +0000 UTC m=+1.740526786" Jun 20 18:42:10.729575 kubelet[3281]: I0620 18:42:10.729526 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.2.0-a-c483281568" podStartSLOduration=1.729488973 podStartE2EDuration="1.729488973s" podCreationTimestamp="2025-06-20 18:42:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:42:10.712222073 +0000 UTC m=+1.744497871" watchObservedRunningTime="2025-06-20 18:42:10.729488973 +0000 UTC m=+1.761764771" Jun 20 18:42:10.743962 kubelet[3281]: I0620 18:42:10.743913 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.2.0-a-c483281568" podStartSLOduration=1.743897271 podStartE2EDuration="1.743897271s" podCreationTimestamp="2025-06-20 18:42:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:42:10.729988334 +0000 UTC m=+1.762264132" watchObservedRunningTime="2025-06-20 18:42:10.743897271 +0000 UTC m=+1.776173069" Jun 20 18:42:12.484669 sudo[2251]: pam_unix(sudo:session): session closed for user root Jun 20 18:42:12.562540 sshd[2250]: Connection closed by 10.200.16.10 port 45470 Jun 20 18:42:12.562434 sshd-session[2248]: pam_unix(sshd:session): session closed for user core Jun 20 18:42:12.565831 systemd[1]: sshd@6-10.200.20.15:22-10.200.16.10:45470.service: Deactivated successfully. Jun 20 18:42:12.567987 systemd[1]: session-9.scope: Deactivated successfully. Jun 20 18:42:12.568731 systemd[1]: session-9.scope: Consumed 6.413s CPU time, 263.8M memory peak. Jun 20 18:42:12.570354 systemd-logind[1701]: Session 9 logged out. Waiting for processes to exit. Jun 20 18:42:12.571272 systemd-logind[1701]: Removed session 9. 
Jun 20 18:42:14.469845 kubelet[3281]: I0620 18:42:14.469806 3281 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 20 18:42:14.470233 containerd[1720]: time="2025-06-20T18:42:14.470078212Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 20 18:42:14.470399 kubelet[3281]: I0620 18:42:14.470235 3281 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 20 18:42:15.042120 systemd[1]: Created slice kubepods-besteffort-pode7a053f3_2adc_4378_ae33_861a182d7927.slice - libcontainer container kubepods-besteffort-pode7a053f3_2adc_4378_ae33_861a182d7927.slice. Jun 20 18:42:15.052216 systemd[1]: Created slice kubepods-burstable-pod049bac15_f17c_4d07_9365_996cce339bd4.slice - libcontainer container kubepods-burstable-pod049bac15_f17c_4d07_9365_996cce339bd4.slice. Jun 20 18:42:15.057479 kubelet[3281]: I0620 18:42:15.057448 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/049bac15-f17c-4d07-9365-996cce339bd4-hubble-tls\") pod \"cilium-tgn7k\" (UID: \"049bac15-f17c-4d07-9365-996cce339bd4\") " pod="kube-system/cilium-tgn7k" Jun 20 18:42:15.058110 kubelet[3281]: I0620 18:42:15.057620 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-cilium-run\") pod \"cilium-tgn7k\" (UID: \"049bac15-f17c-4d07-9365-996cce339bd4\") " pod="kube-system/cilium-tgn7k" Jun 20 18:42:15.058110 kubelet[3281]: I0620 18:42:15.057649 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e7a053f3-2adc-4378-ae33-861a182d7927-kube-proxy\") pod \"kube-proxy-pl558\" (UID: \"e7a053f3-2adc-4378-ae33-861a182d7927\") " 
pod="kube-system/kube-proxy-pl558" Jun 20 18:42:15.058110 kubelet[3281]: I0620 18:42:15.057692 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-lib-modules\") pod \"cilium-tgn7k\" (UID: \"049bac15-f17c-4d07-9365-996cce339bd4\") " pod="kube-system/cilium-tgn7k" Jun 20 18:42:15.058110 kubelet[3281]: I0620 18:42:15.057712 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7a053f3-2adc-4378-ae33-861a182d7927-xtables-lock\") pod \"kube-proxy-pl558\" (UID: \"e7a053f3-2adc-4378-ae33-861a182d7927\") " pod="kube-system/kube-proxy-pl558" Jun 20 18:42:15.058430 kubelet[3281]: I0620 18:42:15.057732 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x9s8\" (UniqueName: \"kubernetes.io/projected/e7a053f3-2adc-4378-ae33-861a182d7927-kube-api-access-5x9s8\") pod \"kube-proxy-pl558\" (UID: \"e7a053f3-2adc-4378-ae33-861a182d7927\") " pod="kube-system/kube-proxy-pl558" Jun 20 18:42:15.058430 kubelet[3281]: I0620 18:42:15.058346 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-host-proc-sys-kernel\") pod \"cilium-tgn7k\" (UID: \"049bac15-f17c-4d07-9365-996cce339bd4\") " pod="kube-system/cilium-tgn7k" Jun 20 18:42:15.058430 kubelet[3281]: I0620 18:42:15.058369 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlw2x\" (UniqueName: \"kubernetes.io/projected/049bac15-f17c-4d07-9365-996cce339bd4-kube-api-access-rlw2x\") pod \"cilium-tgn7k\" (UID: \"049bac15-f17c-4d07-9365-996cce339bd4\") " pod="kube-system/cilium-tgn7k" Jun 20 18:42:15.058714 kubelet[3281]: I0620 
18:42:15.058393 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-bpf-maps\") pod \"cilium-tgn7k\" (UID: \"049bac15-f17c-4d07-9365-996cce339bd4\") " pod="kube-system/cilium-tgn7k" Jun 20 18:42:15.058714 kubelet[3281]: I0620 18:42:15.058577 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-hostproc\") pod \"cilium-tgn7k\" (UID: \"049bac15-f17c-4d07-9365-996cce339bd4\") " pod="kube-system/cilium-tgn7k" Jun 20 18:42:15.059662 kubelet[3281]: I0620 18:42:15.058594 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-cni-path\") pod \"cilium-tgn7k\" (UID: \"049bac15-f17c-4d07-9365-996cce339bd4\") " pod="kube-system/cilium-tgn7k" Jun 20 18:42:15.059896 kubelet[3281]: I0620 18:42:15.059742 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-etc-cni-netd\") pod \"cilium-tgn7k\" (UID: \"049bac15-f17c-4d07-9365-996cce339bd4\") " pod="kube-system/cilium-tgn7k" Jun 20 18:42:15.059896 kubelet[3281]: I0620 18:42:15.059798 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7a053f3-2adc-4378-ae33-861a182d7927-lib-modules\") pod \"kube-proxy-pl558\" (UID: \"e7a053f3-2adc-4378-ae33-861a182d7927\") " pod="kube-system/kube-proxy-pl558" Jun 20 18:42:15.059896 kubelet[3281]: I0620 18:42:15.059850 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-cilium-cgroup\") pod \"cilium-tgn7k\" (UID: \"049bac15-f17c-4d07-9365-996cce339bd4\") " pod="kube-system/cilium-tgn7k" Jun 20 18:42:15.059896 kubelet[3281]: I0620 18:42:15.059869 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-xtables-lock\") pod \"cilium-tgn7k\" (UID: \"049bac15-f17c-4d07-9365-996cce339bd4\") " pod="kube-system/cilium-tgn7k" Jun 20 18:42:15.060072 kubelet[3281]: I0620 18:42:15.060018 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/049bac15-f17c-4d07-9365-996cce339bd4-clustermesh-secrets\") pod \"cilium-tgn7k\" (UID: \"049bac15-f17c-4d07-9365-996cce339bd4\") " pod="kube-system/cilium-tgn7k" Jun 20 18:42:15.060072 kubelet[3281]: I0620 18:42:15.060054 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/049bac15-f17c-4d07-9365-996cce339bd4-cilium-config-path\") pod \"cilium-tgn7k\" (UID: \"049bac15-f17c-4d07-9365-996cce339bd4\") " pod="kube-system/cilium-tgn7k" Jun 20 18:42:15.060146 kubelet[3281]: I0620 18:42:15.060104 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-host-proc-sys-net\") pod \"cilium-tgn7k\" (UID: \"049bac15-f17c-4d07-9365-996cce339bd4\") " pod="kube-system/cilium-tgn7k" Jun 20 18:42:15.350878 containerd[1720]: time="2025-06-20T18:42:15.350773239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pl558,Uid:e7a053f3-2adc-4378-ae33-861a182d7927,Namespace:kube-system,Attempt:0,}" Jun 20 18:42:15.357378 containerd[1720]: 
time="2025-06-20T18:42:15.357157212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tgn7k,Uid:049bac15-f17c-4d07-9365-996cce339bd4,Namespace:kube-system,Attempt:0,}" Jun 20 18:42:15.411391 containerd[1720]: time="2025-06-20T18:42:15.411311122Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:42:15.411501 containerd[1720]: time="2025-06-20T18:42:15.411407562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:42:15.411501 containerd[1720]: time="2025-06-20T18:42:15.411442802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:42:15.411612 containerd[1720]: time="2025-06-20T18:42:15.411582082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:42:15.427914 systemd[1]: Started cri-containerd-6afec02532c37b6926c4874dde2536a373e81dea2e8d603deab885ee1656b772.scope - libcontainer container 6afec02532c37b6926c4874dde2536a373e81dea2e8d603deab885ee1656b772. Jun 20 18:42:15.448895 containerd[1720]: time="2025-06-20T18:42:15.448816598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:42:15.449269 containerd[1720]: time="2025-06-20T18:42:15.449219759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:42:15.449562 containerd[1720]: time="2025-06-20T18:42:15.449531200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:42:15.450628 containerd[1720]: time="2025-06-20T18:42:15.450480202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:42:15.464968 containerd[1720]: time="2025-06-20T18:42:15.464647791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pl558,Uid:e7a053f3-2adc-4378-ae33-861a182d7927,Namespace:kube-system,Attempt:0,} returns sandbox id \"6afec02532c37b6926c4874dde2536a373e81dea2e8d603deab885ee1656b772\"" Jun 20 18:42:15.470935 systemd[1]: Started cri-containerd-5af161cc0fa09f13219972f40b1a38e6db4ba039291769ceefc966c20cbf3bfa.scope - libcontainer container 5af161cc0fa09f13219972f40b1a38e6db4ba039291769ceefc966c20cbf3bfa. Jun 20 18:42:15.478980 containerd[1720]: time="2025-06-20T18:42:15.478923940Z" level=info msg="CreateContainer within sandbox \"6afec02532c37b6926c4874dde2536a373e81dea2e8d603deab885ee1656b772\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 20 18:42:15.493829 containerd[1720]: time="2025-06-20T18:42:15.493735610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tgn7k,Uid:049bac15-f17c-4d07-9365-996cce339bd4,Namespace:kube-system,Attempt:0,} returns sandbox id \"5af161cc0fa09f13219972f40b1a38e6db4ba039291769ceefc966c20cbf3bfa\"" Jun 20 18:42:15.496738 containerd[1720]: time="2025-06-20T18:42:15.496607136Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 20 18:42:15.541238 containerd[1720]: time="2025-06-20T18:42:15.541187546Z" level=info msg="CreateContainer within sandbox \"6afec02532c37b6926c4874dde2536a373e81dea2e8d603deab885ee1656b772\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8ae022d6ec6de41b127e2f1ae27d659a4bda0bae10c857b92aa4947a16785e0f\"" Jun 20 18:42:15.542575 containerd[1720]: 
time="2025-06-20T18:42:15.542540309Z" level=info msg="StartContainer for \"8ae022d6ec6de41b127e2f1ae27d659a4bda0bae10c857b92aa4947a16785e0f\"" Jun 20 18:42:15.573911 systemd[1]: Started cri-containerd-8ae022d6ec6de41b127e2f1ae27d659a4bda0bae10c857b92aa4947a16785e0f.scope - libcontainer container 8ae022d6ec6de41b127e2f1ae27d659a4bda0bae10c857b92aa4947a16785e0f. Jun 20 18:42:15.606887 containerd[1720]: time="2025-06-20T18:42:15.606646280Z" level=info msg="StartContainer for \"8ae022d6ec6de41b127e2f1ae27d659a4bda0bae10c857b92aa4947a16785e0f\" returns successfully" Jun 20 18:42:15.633965 systemd[1]: Created slice kubepods-besteffort-podd84f1a56_c0ae_4220_897d_625acf9e4c43.slice - libcontainer container kubepods-besteffort-podd84f1a56_c0ae_4220_897d_625acf9e4c43.slice. Jun 20 18:42:15.665265 kubelet[3281]: I0620 18:42:15.665203 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mpj6\" (UniqueName: \"kubernetes.io/projected/d84f1a56-c0ae-4220-897d-625acf9e4c43-kube-api-access-5mpj6\") pod \"cilium-operator-6c4d7847fc-mqbrr\" (UID: \"d84f1a56-c0ae-4220-897d-625acf9e4c43\") " pod="kube-system/cilium-operator-6c4d7847fc-mqbrr" Jun 20 18:42:15.665265 kubelet[3281]: I0620 18:42:15.665262 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d84f1a56-c0ae-4220-897d-625acf9e4c43-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-mqbrr\" (UID: \"d84f1a56-c0ae-4220-897d-625acf9e4c43\") " pod="kube-system/cilium-operator-6c4d7847fc-mqbrr" Jun 20 18:42:15.937516 containerd[1720]: time="2025-06-20T18:42:15.937388073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-mqbrr,Uid:d84f1a56-c0ae-4220-897d-625acf9e4c43,Namespace:kube-system,Attempt:0,}" Jun 20 18:42:15.991208 containerd[1720]: time="2025-06-20T18:42:15.990985422Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:42:15.991208 containerd[1720]: time="2025-06-20T18:42:15.991045942Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:42:15.991208 containerd[1720]: time="2025-06-20T18:42:15.991057982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:42:15.991208 containerd[1720]: time="2025-06-20T18:42:15.991144263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:42:16.013943 systemd[1]: Started cri-containerd-98c273deb1c9ddce91b3319872c67804a5e8a1fd45315ab72794b8c9db94b57c.scope - libcontainer container 98c273deb1c9ddce91b3319872c67804a5e8a1fd45315ab72794b8c9db94b57c. Jun 20 18:42:16.045086 containerd[1720]: time="2025-06-20T18:42:16.045044932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-mqbrr,Uid:d84f1a56-c0ae-4220-897d-625acf9e4c43,Namespace:kube-system,Attempt:0,} returns sandbox id \"98c273deb1c9ddce91b3319872c67804a5e8a1fd45315ab72794b8c9db94b57c\"" Jun 20 18:42:21.123864 kubelet[3281]: I0620 18:42:21.123738 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pl558" podStartSLOduration=6.123722565 podStartE2EDuration="6.123722565s" podCreationTimestamp="2025-06-20 18:42:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:42:15.70030415 +0000 UTC m=+6.732579948" watchObservedRunningTime="2025-06-20 18:42:21.123722565 +0000 UTC m=+12.155998363" Jun 20 18:42:21.356244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2231186579.mount: Deactivated successfully. 
Jun 20 18:42:23.887593 containerd[1720]: time="2025-06-20T18:42:23.887532162Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:42:23.892771 containerd[1720]: time="2025-06-20T18:42:23.892643769Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jun 20 18:42:23.897078 containerd[1720]: time="2025-06-20T18:42:23.897025015Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:42:23.898691 containerd[1720]: time="2025-06-20T18:42:23.898578737Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.401935721s" Jun 20 18:42:23.898691 containerd[1720]: time="2025-06-20T18:42:23.898611177Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jun 20 18:42:23.900288 containerd[1720]: time="2025-06-20T18:42:23.900145419Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jun 20 18:42:23.908142 containerd[1720]: time="2025-06-20T18:42:23.908008950Z" level=info msg="CreateContainer within sandbox \"5af161cc0fa09f13219972f40b1a38e6db4ba039291769ceefc966c20cbf3bfa\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 18:42:23.951120 containerd[1720]: time="2025-06-20T18:42:23.951055291Z" level=info msg="CreateContainer within sandbox \"5af161cc0fa09f13219972f40b1a38e6db4ba039291769ceefc966c20cbf3bfa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5a19ff5ccdb7bb41fe9c900451ed7faa8f2dac42367d73663aec959778d09e9d\"" Jun 20 18:42:23.951925 containerd[1720]: time="2025-06-20T18:42:23.951787292Z" level=info msg="StartContainer for \"5a19ff5ccdb7bb41fe9c900451ed7faa8f2dac42367d73663aec959778d09e9d\"" Jun 20 18:42:23.980903 systemd[1]: Started cri-containerd-5a19ff5ccdb7bb41fe9c900451ed7faa8f2dac42367d73663aec959778d09e9d.scope - libcontainer container 5a19ff5ccdb7bb41fe9c900451ed7faa8f2dac42367d73663aec959778d09e9d. Jun 20 18:42:24.007453 containerd[1720]: time="2025-06-20T18:42:24.006803289Z" level=info msg="StartContainer for \"5a19ff5ccdb7bb41fe9c900451ed7faa8f2dac42367d73663aec959778d09e9d\" returns successfully" Jun 20 18:42:24.013817 systemd[1]: cri-containerd-5a19ff5ccdb7bb41fe9c900451ed7faa8f2dac42367d73663aec959778d09e9d.scope: Deactivated successfully. Jun 20 18:42:24.033419 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a19ff5ccdb7bb41fe9c900451ed7faa8f2dac42367d73663aec959778d09e9d-rootfs.mount: Deactivated successfully. 
Jun 20 18:42:27.499159 containerd[1720]: time="2025-06-20T18:42:27.499068337Z" level=info msg="shim disconnected" id=5a19ff5ccdb7bb41fe9c900451ed7faa8f2dac42367d73663aec959778d09e9d namespace=k8s.io Jun 20 18:42:27.499159 containerd[1720]: time="2025-06-20T18:42:27.499148377Z" level=warning msg="cleaning up after shim disconnected" id=5a19ff5ccdb7bb41fe9c900451ed7faa8f2dac42367d73663aec959778d09e9d namespace=k8s.io Jun 20 18:42:27.499159 containerd[1720]: time="2025-06-20T18:42:27.499167617Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:42:27.714266 containerd[1720]: time="2025-06-20T18:42:27.713861198Z" level=info msg="CreateContainer within sandbox \"5af161cc0fa09f13219972f40b1a38e6db4ba039291769ceefc966c20cbf3bfa\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 18:42:27.740224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2066123324.mount: Deactivated successfully. Jun 20 18:42:27.751315 containerd[1720]: time="2025-06-20T18:42:27.751056970Z" level=info msg="CreateContainer within sandbox \"5af161cc0fa09f13219972f40b1a38e6db4ba039291769ceefc966c20cbf3bfa\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"50ad09cf5de5dda6a34201c3145a92b6e6c06893aaed0d74e937af9d6f79f6d8\"" Jun 20 18:42:27.753114 containerd[1720]: time="2025-06-20T18:42:27.752284612Z" level=info msg="StartContainer for \"50ad09cf5de5dda6a34201c3145a92b6e6c06893aaed0d74e937af9d6f79f6d8\"" Jun 20 18:42:27.780951 systemd[1]: Started cri-containerd-50ad09cf5de5dda6a34201c3145a92b6e6c06893aaed0d74e937af9d6f79f6d8.scope - libcontainer container 50ad09cf5de5dda6a34201c3145a92b6e6c06893aaed0d74e937af9d6f79f6d8. Jun 20 18:42:27.811703 containerd[1720]: time="2025-06-20T18:42:27.811659695Z" level=info msg="StartContainer for \"50ad09cf5de5dda6a34201c3145a92b6e6c06893aaed0d74e937af9d6f79f6d8\" returns successfully" Jun 20 18:42:27.816803 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jun 20 18:42:27.817029 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:42:27.817362 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jun 20 18:42:27.823105 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 18:42:27.828167 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 20 18:42:27.828866 systemd[1]: cri-containerd-50ad09cf5de5dda6a34201c3145a92b6e6c06893aaed0d74e937af9d6f79f6d8.scope: Deactivated successfully. Jun 20 18:42:27.843063 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:42:27.872834 containerd[1720]: time="2025-06-20T18:42:27.872769460Z" level=info msg="shim disconnected" id=50ad09cf5de5dda6a34201c3145a92b6e6c06893aaed0d74e937af9d6f79f6d8 namespace=k8s.io Jun 20 18:42:27.872834 containerd[1720]: time="2025-06-20T18:42:27.872824980Z" level=warning msg="cleaning up after shim disconnected" id=50ad09cf5de5dda6a34201c3145a92b6e6c06893aaed0d74e937af9d6f79f6d8 namespace=k8s.io Jun 20 18:42:27.872834 containerd[1720]: time="2025-06-20T18:42:27.872834060Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:42:28.720555 containerd[1720]: time="2025-06-20T18:42:28.720517647Z" level=info msg="CreateContainer within sandbox \"5af161cc0fa09f13219972f40b1a38e6db4ba039291769ceefc966c20cbf3bfa\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 20 18:42:28.737850 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50ad09cf5de5dda6a34201c3145a92b6e6c06893aaed0d74e937af9d6f79f6d8-rootfs.mount: Deactivated successfully. 
Jun 20 18:42:29.047706 containerd[1720]: time="2025-06-20T18:42:29.047599705Z" level=info msg="CreateContainer within sandbox \"5af161cc0fa09f13219972f40b1a38e6db4ba039291769ceefc966c20cbf3bfa\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8231c1c339a94dfe7dc69d8c39029ce59606c96bc882a1cb98733b28faa4eeed\"" Jun 20 18:42:29.048836 containerd[1720]: time="2025-06-20T18:42:29.048804066Z" level=info msg="StartContainer for \"8231c1c339a94dfe7dc69d8c39029ce59606c96bc882a1cb98733b28faa4eeed\"" Jun 20 18:42:29.083032 systemd[1]: Started cri-containerd-8231c1c339a94dfe7dc69d8c39029ce59606c96bc882a1cb98733b28faa4eeed.scope - libcontainer container 8231c1c339a94dfe7dc69d8c39029ce59606c96bc882a1cb98733b28faa4eeed. Jun 20 18:42:29.120011 systemd[1]: cri-containerd-8231c1c339a94dfe7dc69d8c39029ce59606c96bc882a1cb98733b28faa4eeed.scope: Deactivated successfully. Jun 20 18:42:29.124629 containerd[1720]: time="2025-06-20T18:42:29.124464572Z" level=info msg="StartContainer for \"8231c1c339a94dfe7dc69d8c39029ce59606c96bc882a1cb98733b28faa4eeed\" returns successfully" Jun 20 18:42:29.247783 containerd[1720]: time="2025-06-20T18:42:29.247617945Z" level=info msg="shim disconnected" id=8231c1c339a94dfe7dc69d8c39029ce59606c96bc882a1cb98733b28faa4eeed namespace=k8s.io Jun 20 18:42:29.247783 containerd[1720]: time="2025-06-20T18:42:29.247670385Z" level=warning msg="cleaning up after shim disconnected" id=8231c1c339a94dfe7dc69d8c39029ce59606c96bc882a1cb98733b28faa4eeed namespace=k8s.io Jun 20 18:42:29.247783 containerd[1720]: time="2025-06-20T18:42:29.247679345Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:42:29.507907 containerd[1720]: time="2025-06-20T18:42:29.507849709Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:42:29.511725 containerd[1720]: 
time="2025-06-20T18:42:29.511679234Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jun 20 18:42:29.516799 containerd[1720]: time="2025-06-20T18:42:29.516355441Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:42:29.517686 containerd[1720]: time="2025-06-20T18:42:29.517573643Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 5.617394024s" Jun 20 18:42:29.517686 containerd[1720]: time="2025-06-20T18:42:29.517611203Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jun 20 18:42:29.525258 containerd[1720]: time="2025-06-20T18:42:29.525179573Z" level=info msg="CreateContainer within sandbox \"98c273deb1c9ddce91b3319872c67804a5e8a1fd45315ab72794b8c9db94b57c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jun 20 18:42:29.557580 containerd[1720]: time="2025-06-20T18:42:29.557531099Z" level=info msg="CreateContainer within sandbox \"98c273deb1c9ddce91b3319872c67804a5e8a1fd45315ab72794b8c9db94b57c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2f89d3e014333fd21e0c4f5f6345fa2e89cf58930495c14b7bd7777eb1421211\"" Jun 20 18:42:29.558718 containerd[1720]: time="2025-06-20T18:42:29.558676020Z" level=info msg="StartContainer for 
\"2f89d3e014333fd21e0c4f5f6345fa2e89cf58930495c14b7bd7777eb1421211\"" Jun 20 18:42:29.582917 systemd[1]: Started cri-containerd-2f89d3e014333fd21e0c4f5f6345fa2e89cf58930495c14b7bd7777eb1421211.scope - libcontainer container 2f89d3e014333fd21e0c4f5f6345fa2e89cf58930495c14b7bd7777eb1421211. Jun 20 18:42:29.614832 containerd[1720]: time="2025-06-20T18:42:29.614713259Z" level=info msg="StartContainer for \"2f89d3e014333fd21e0c4f5f6345fa2e89cf58930495c14b7bd7777eb1421211\" returns successfully" Jun 20 18:42:29.739842 containerd[1720]: time="2025-06-20T18:42:29.739794314Z" level=info msg="CreateContainer within sandbox \"5af161cc0fa09f13219972f40b1a38e6db4ba039291769ceefc966c20cbf3bfa\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 20 18:42:29.741196 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8231c1c339a94dfe7dc69d8c39029ce59606c96bc882a1cb98733b28faa4eeed-rootfs.mount: Deactivated successfully. Jun 20 18:42:29.781928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2565192574.mount: Deactivated successfully. Jun 20 18:42:29.802212 containerd[1720]: time="2025-06-20T18:42:29.802172401Z" level=info msg="CreateContainer within sandbox \"5af161cc0fa09f13219972f40b1a38e6db4ba039291769ceefc966c20cbf3bfa\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2774b9a77d8f88da189d9d1483c6cd523f948f16528208b2bbc57d502755938c\"" Jun 20 18:42:29.802986 containerd[1720]: time="2025-06-20T18:42:29.802873322Z" level=info msg="StartContainer for \"2774b9a77d8f88da189d9d1483c6cd523f948f16528208b2bbc57d502755938c\"" Jun 20 18:42:29.837946 systemd[1]: Started cri-containerd-2774b9a77d8f88da189d9d1483c6cd523f948f16528208b2bbc57d502755938c.scope - libcontainer container 2774b9a77d8f88da189d9d1483c6cd523f948f16528208b2bbc57d502755938c. 
Jun 20 18:42:29.840586 kubelet[3281]: I0620 18:42:29.840507 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-mqbrr" podStartSLOduration=1.368183026 podStartE2EDuration="14.840493415s" podCreationTimestamp="2025-06-20 18:42:15 +0000 UTC" firstStartedPulling="2025-06-20 18:42:16.046328775 +0000 UTC m=+7.078604573" lastFinishedPulling="2025-06-20 18:42:29.518639164 +0000 UTC m=+20.550914962" observedRunningTime="2025-06-20 18:42:29.763838107 +0000 UTC m=+20.796113905" watchObservedRunningTime="2025-06-20 18:42:29.840493415 +0000 UTC m=+20.872769213" Jun 20 18:42:29.910023 systemd[1]: cri-containerd-2774b9a77d8f88da189d9d1483c6cd523f948f16528208b2bbc57d502755938c.scope: Deactivated successfully. Jun 20 18:42:29.915989 containerd[1720]: time="2025-06-20T18:42:29.915263439Z" level=info msg="StartContainer for \"2774b9a77d8f88da189d9d1483c6cd523f948f16528208b2bbc57d502755938c\" returns successfully" Jun 20 18:42:30.550103 containerd[1720]: time="2025-06-20T18:42:30.550045168Z" level=info msg="shim disconnected" id=2774b9a77d8f88da189d9d1483c6cd523f948f16528208b2bbc57d502755938c namespace=k8s.io Jun 20 18:42:30.550103 containerd[1720]: time="2025-06-20T18:42:30.550094688Z" level=warning msg="cleaning up after shim disconnected" id=2774b9a77d8f88da189d9d1483c6cd523f948f16528208b2bbc57d502755938c namespace=k8s.io Jun 20 18:42:30.550103 containerd[1720]: time="2025-06-20T18:42:30.550103088Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:42:30.738571 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2774b9a77d8f88da189d9d1483c6cd523f948f16528208b2bbc57d502755938c-rootfs.mount: Deactivated successfully. 
Jun 20 18:42:30.744852 containerd[1720]: time="2025-06-20T18:42:30.744809521Z" level=info msg="CreateContainer within sandbox \"5af161cc0fa09f13219972f40b1a38e6db4ba039291769ceefc966c20cbf3bfa\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 20 18:42:30.790787 containerd[1720]: time="2025-06-20T18:42:30.790692105Z" level=info msg="CreateContainer within sandbox \"5af161cc0fa09f13219972f40b1a38e6db4ba039291769ceefc966c20cbf3bfa\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"490fe7611025efe2cf9f7575d44727d3a29fd61fe34dcea6207d6e0eff66c430\"" Jun 20 18:42:30.792146 containerd[1720]: time="2025-06-20T18:42:30.791534746Z" level=info msg="StartContainer for \"490fe7611025efe2cf9f7575d44727d3a29fd61fe34dcea6207d6e0eff66c430\"" Jun 20 18:42:30.818899 systemd[1]: Started cri-containerd-490fe7611025efe2cf9f7575d44727d3a29fd61fe34dcea6207d6e0eff66c430.scope - libcontainer container 490fe7611025efe2cf9f7575d44727d3a29fd61fe34dcea6207d6e0eff66c430. Jun 20 18:42:30.845688 containerd[1720]: time="2025-06-20T18:42:30.845612462Z" level=info msg="StartContainer for \"490fe7611025efe2cf9f7575d44727d3a29fd61fe34dcea6207d6e0eff66c430\" returns successfully" Jun 20 18:42:30.966458 kubelet[3281]: I0620 18:42:30.964309 3281 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 20 18:42:31.017214 systemd[1]: Created slice kubepods-burstable-pod84eb89f8_bcf2_4d5a_b336_ff0b9ea53480.slice - libcontainer container kubepods-burstable-pod84eb89f8_bcf2_4d5a_b336_ff0b9ea53480.slice. Jun 20 18:42:31.025092 systemd[1]: Created slice kubepods-burstable-podc7cee4b6_412f_4e64_a562_8281979cd791.slice - libcontainer container kubepods-burstable-podc7cee4b6_412f_4e64_a562_8281979cd791.slice. 
Jun 20 18:42:31.081826 kubelet[3281]: I0620 18:42:31.081700 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phfr9\" (UniqueName: \"kubernetes.io/projected/84eb89f8-bcf2-4d5a-b336-ff0b9ea53480-kube-api-access-phfr9\") pod \"coredns-674b8bbfcf-d7f56\" (UID: \"84eb89f8-bcf2-4d5a-b336-ff0b9ea53480\") " pod="kube-system/coredns-674b8bbfcf-d7f56" Jun 20 18:42:31.081826 kubelet[3281]: I0620 18:42:31.081758 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njrks\" (UniqueName: \"kubernetes.io/projected/c7cee4b6-412f-4e64-a562-8281979cd791-kube-api-access-njrks\") pod \"coredns-674b8bbfcf-n59ft\" (UID: \"c7cee4b6-412f-4e64-a562-8281979cd791\") " pod="kube-system/coredns-674b8bbfcf-n59ft" Jun 20 18:42:31.081826 kubelet[3281]: I0620 18:42:31.081780 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84eb89f8-bcf2-4d5a-b336-ff0b9ea53480-config-volume\") pod \"coredns-674b8bbfcf-d7f56\" (UID: \"84eb89f8-bcf2-4d5a-b336-ff0b9ea53480\") " pod="kube-system/coredns-674b8bbfcf-d7f56" Jun 20 18:42:31.081826 kubelet[3281]: I0620 18:42:31.081799 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7cee4b6-412f-4e64-a562-8281979cd791-config-volume\") pod \"coredns-674b8bbfcf-n59ft\" (UID: \"c7cee4b6-412f-4e64-a562-8281979cd791\") " pod="kube-system/coredns-674b8bbfcf-n59ft" Jun 20 18:42:31.322637 containerd[1720]: time="2025-06-20T18:42:31.322583250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d7f56,Uid:84eb89f8-bcf2-4d5a-b336-ff0b9ea53480,Namespace:kube-system,Attempt:0,}" Jun 20 18:42:31.329470 containerd[1720]: time="2025-06-20T18:42:31.329239939Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-n59ft,Uid:c7cee4b6-412f-4e64-a562-8281979cd791,Namespace:kube-system,Attempt:0,}" Jun 20 18:42:31.765879 kubelet[3281]: I0620 18:42:31.765811 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tgn7k" podStartSLOduration=8.361824946 podStartE2EDuration="16.765793311s" podCreationTimestamp="2025-06-20 18:42:15 +0000 UTC" firstStartedPulling="2025-06-20 18:42:15.495959254 +0000 UTC m=+6.528235052" lastFinishedPulling="2025-06-20 18:42:23.899927659 +0000 UTC m=+14.932203417" observedRunningTime="2025-06-20 18:42:31.76554731 +0000 UTC m=+22.797823108" watchObservedRunningTime="2025-06-20 18:42:31.765793311 +0000 UTC m=+22.798069109" Jun 20 18:42:32.971558 systemd-networkd[1618]: cilium_host: Link UP Jun 20 18:42:32.972371 systemd-networkd[1618]: cilium_net: Link UP Jun 20 18:42:32.972381 systemd-networkd[1618]: cilium_net: Gained carrier Jun 20 18:42:32.973522 systemd-networkd[1618]: cilium_host: Gained carrier Jun 20 18:42:32.973937 systemd-networkd[1618]: cilium_net: Gained IPv6LL Jun 20 18:42:33.095425 systemd-networkd[1618]: cilium_vxlan: Link UP Jun 20 18:42:33.095435 systemd-networkd[1618]: cilium_vxlan: Gained carrier Jun 20 18:42:33.402826 kernel: NET: Registered PF_ALG protocol family Jun 20 18:42:33.599947 systemd-networkd[1618]: cilium_host: Gained IPv6LL Jun 20 18:42:34.044993 systemd-networkd[1618]: lxc_health: Link UP Jun 20 18:42:34.051488 systemd-networkd[1618]: lxc_health: Gained carrier Jun 20 18:42:34.406502 systemd-networkd[1618]: lxcb024a46d7ac8: Link UP Jun 20 18:42:34.420851 kernel: eth0: renamed from tmp666fb Jun 20 18:42:34.430240 systemd-networkd[1618]: lxcb024a46d7ac8: Gained carrier Jun 20 18:42:34.438893 systemd-networkd[1618]: lxc60b53215b74c: Link UP Jun 20 18:42:34.451813 kernel: eth0: renamed from tmped167 Jun 20 18:42:34.458721 systemd-networkd[1618]: lxc60b53215b74c: Gained carrier Jun 20 18:42:35.135933 systemd-networkd[1618]: cilium_vxlan: 
Gained IPv6LL Jun 20 18:42:35.583935 systemd-networkd[1618]: lxcb024a46d7ac8: Gained IPv6LL Jun 20 18:42:35.840979 systemd-networkd[1618]: lxc60b53215b74c: Gained IPv6LL Jun 20 18:42:36.032865 systemd-networkd[1618]: lxc_health: Gained IPv6LL Jun 20 18:42:38.002939 containerd[1720]: time="2025-06-20T18:42:38.002823849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:42:38.005061 containerd[1720]: time="2025-06-20T18:42:38.002949450Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:42:38.005061 containerd[1720]: time="2025-06-20T18:42:38.002981090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:42:38.005061 containerd[1720]: time="2025-06-20T18:42:38.003098450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:42:38.037863 systemd[1]: Started cri-containerd-ed16735a6f46dd7bce9bb6dc0a796f68e50c9d23cfb7c65f6fb4d23eeafaf953.scope - libcontainer container ed16735a6f46dd7bce9bb6dc0a796f68e50c9d23cfb7c65f6fb4d23eeafaf953. Jun 20 18:42:38.038156 containerd[1720]: time="2025-06-20T18:42:38.036905177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:42:38.038156 containerd[1720]: time="2025-06-20T18:42:38.036957137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:42:38.038156 containerd[1720]: time="2025-06-20T18:42:38.036967657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:42:38.038156 containerd[1720]: time="2025-06-20T18:42:38.037042137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:42:38.070920 systemd[1]: Started cri-containerd-666fb6501b9b77fb3262c141b8c507fd51cdfaf4bd88c664946895d19afa1c5e.scope - libcontainer container 666fb6501b9b77fb3262c141b8c507fd51cdfaf4bd88c664946895d19afa1c5e. Jun 20 18:42:38.097068 containerd[1720]: time="2025-06-20T18:42:38.097035341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n59ft,Uid:c7cee4b6-412f-4e64-a562-8281979cd791,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed16735a6f46dd7bce9bb6dc0a796f68e50c9d23cfb7c65f6fb4d23eeafaf953\"" Jun 20 18:42:38.107903 containerd[1720]: time="2025-06-20T18:42:38.107843556Z" level=info msg="CreateContainer within sandbox \"ed16735a6f46dd7bce9bb6dc0a796f68e50c9d23cfb7c65f6fb4d23eeafaf953\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 18:42:38.119560 containerd[1720]: time="2025-06-20T18:42:38.119505093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d7f56,Uid:84eb89f8-bcf2-4d5a-b336-ff0b9ea53480,Namespace:kube-system,Attempt:0,} returns sandbox id \"666fb6501b9b77fb3262c141b8c507fd51cdfaf4bd88c664946895d19afa1c5e\"" Jun 20 18:42:38.128773 containerd[1720]: time="2025-06-20T18:42:38.128702946Z" level=info msg="CreateContainer within sandbox \"666fb6501b9b77fb3262c141b8c507fd51cdfaf4bd88c664946895d19afa1c5e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 18:42:38.178868 containerd[1720]: time="2025-06-20T18:42:38.178824256Z" level=info msg="CreateContainer within sandbox \"666fb6501b9b77fb3262c141b8c507fd51cdfaf4bd88c664946895d19afa1c5e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ab4671581592df423cf0748a161ddb8019705efbf1dce2f018d2cc1712a44db1\"" Jun 20 
18:42:38.179708 containerd[1720]: time="2025-06-20T18:42:38.179676177Z" level=info msg="StartContainer for \"ab4671581592df423cf0748a161ddb8019705efbf1dce2f018d2cc1712a44db1\"" Jun 20 18:42:38.193740 containerd[1720]: time="2025-06-20T18:42:38.193694877Z" level=info msg="CreateContainer within sandbox \"ed16735a6f46dd7bce9bb6dc0a796f68e50c9d23cfb7c65f6fb4d23eeafaf953\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"91cd3eac1c3664284166d75331ad3ae93866d401eb82e2040a6a46ec411ada19\"" Jun 20 18:42:38.193740 containerd[1720]: time="2025-06-20T18:42:38.194611878Z" level=info msg="StartContainer for \"91cd3eac1c3664284166d75331ad3ae93866d401eb82e2040a6a46ec411ada19\"" Jun 20 18:42:38.205914 systemd[1]: Started cri-containerd-ab4671581592df423cf0748a161ddb8019705efbf1dce2f018d2cc1712a44db1.scope - libcontainer container ab4671581592df423cf0748a161ddb8019705efbf1dce2f018d2cc1712a44db1. Jun 20 18:42:38.226361 systemd[1]: Started cri-containerd-91cd3eac1c3664284166d75331ad3ae93866d401eb82e2040a6a46ec411ada19.scope - libcontainer container 91cd3eac1c3664284166d75331ad3ae93866d401eb82e2040a6a46ec411ada19. 
Jun 20 18:42:38.245777 containerd[1720]: time="2025-06-20T18:42:38.245585069Z" level=info msg="StartContainer for \"ab4671581592df423cf0748a161ddb8019705efbf1dce2f018d2cc1712a44db1\" returns successfully" Jun 20 18:42:38.261806 containerd[1720]: time="2025-06-20T18:42:38.260598851Z" level=info msg="StartContainer for \"91cd3eac1c3664284166d75331ad3ae93866d401eb82e2040a6a46ec411ada19\" returns successfully" Jun 20 18:42:38.776515 kubelet[3281]: I0620 18:42:38.776421 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-n59ft" podStartSLOduration=23.776406133000002 podStartE2EDuration="23.776406133s" podCreationTimestamp="2025-06-20 18:42:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:42:38.771795087 +0000 UTC m=+29.804070885" watchObservedRunningTime="2025-06-20 18:42:38.776406133 +0000 UTC m=+29.808681891" Jun 20 18:42:38.817993 kubelet[3281]: I0620 18:42:38.816809 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-d7f56" podStartSLOduration=23.81679171 podStartE2EDuration="23.81679171s" podCreationTimestamp="2025-06-20 18:42:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:42:38.791852155 +0000 UTC m=+29.824127953" watchObservedRunningTime="2025-06-20 18:42:38.81679171 +0000 UTC m=+29.849067508" Jun 20 18:43:48.380005 systemd[1]: Started sshd@7-10.200.20.15:22-10.200.16.10:53534.service - OpenSSH per-connection server daemon (10.200.16.10:53534). 
Jun 20 18:43:48.843129 sshd[4678]: Accepted publickey for core from 10.200.16.10 port 53534 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:43:48.843797 sshd-session[4678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:43:48.848102 systemd-logind[1701]: New session 10 of user core. Jun 20 18:43:48.854915 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 20 18:43:49.251553 sshd[4680]: Connection closed by 10.200.16.10 port 53534 Jun 20 18:43:49.252491 sshd-session[4678]: pam_unix(sshd:session): session closed for user core Jun 20 18:43:49.256330 systemd-logind[1701]: Session 10 logged out. Waiting for processes to exit. Jun 20 18:43:49.256364 systemd[1]: sshd@7-10.200.20.15:22-10.200.16.10:53534.service: Deactivated successfully. Jun 20 18:43:49.258720 systemd[1]: session-10.scope: Deactivated successfully. Jun 20 18:43:49.260051 systemd-logind[1701]: Removed session 10. Jun 20 18:43:54.338841 systemd[1]: Started sshd@8-10.200.20.15:22-10.200.16.10:56594.service - OpenSSH per-connection server daemon (10.200.16.10:56594). Jun 20 18:43:54.795041 sshd[4693]: Accepted publickey for core from 10.200.16.10 port 56594 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:43:54.796216 sshd-session[4693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:43:54.800374 systemd-logind[1701]: New session 11 of user core. Jun 20 18:43:54.804961 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 20 18:43:55.192669 sshd[4695]: Connection closed by 10.200.16.10 port 56594 Jun 20 18:43:55.193328 sshd-session[4693]: pam_unix(sshd:session): session closed for user core Jun 20 18:43:55.196866 systemd[1]: sshd@8-10.200.20.15:22-10.200.16.10:56594.service: Deactivated successfully. Jun 20 18:43:55.198937 systemd[1]: session-11.scope: Deactivated successfully. Jun 20 18:43:55.200689 systemd-logind[1701]: Session 11 logged out. 
Waiting for processes to exit. Jun 20 18:43:55.201698 systemd-logind[1701]: Removed session 11. Jun 20 18:44:00.279988 systemd[1]: Started sshd@9-10.200.20.15:22-10.200.16.10:35944.service - OpenSSH per-connection server daemon (10.200.16.10:35944). Jun 20 18:44:00.735695 sshd[4709]: Accepted publickey for core from 10.200.16.10 port 35944 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:44:00.736995 sshd-session[4709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:44:00.741035 systemd-logind[1701]: New session 12 of user core. Jun 20 18:44:00.745878 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 20 18:44:01.129124 sshd[4711]: Connection closed by 10.200.16.10 port 35944 Jun 20 18:44:01.129700 sshd-session[4709]: pam_unix(sshd:session): session closed for user core Jun 20 18:44:01.133902 systemd-logind[1701]: Session 12 logged out. Waiting for processes to exit. Jun 20 18:44:01.134631 systemd[1]: sshd@9-10.200.20.15:22-10.200.16.10:35944.service: Deactivated successfully. Jun 20 18:44:01.137412 systemd[1]: session-12.scope: Deactivated successfully. Jun 20 18:44:01.138595 systemd-logind[1701]: Removed session 12. Jun 20 18:44:06.232023 systemd[1]: Started sshd@10-10.200.20.15:22-10.200.16.10:35960.service - OpenSSH per-connection server daemon (10.200.16.10:35960). Jun 20 18:44:06.766812 sshd[4725]: Accepted publickey for core from 10.200.16.10 port 35960 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:44:06.768104 sshd-session[4725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:44:06.773222 systemd-logind[1701]: New session 13 of user core. Jun 20 18:44:06.779902 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jun 20 18:44:07.212800 sshd[4727]: Connection closed by 10.200.16.10 port 35960 Jun 20 18:44:07.213329 sshd-session[4725]: pam_unix(sshd:session): session closed for user core Jun 20 18:44:07.216857 systemd[1]: sshd@10-10.200.20.15:22-10.200.16.10:35960.service: Deactivated successfully. Jun 20 18:44:07.219306 systemd[1]: session-13.scope: Deactivated successfully. Jun 20 18:44:07.220296 systemd-logind[1701]: Session 13 logged out. Waiting for processes to exit. Jun 20 18:44:07.222377 systemd-logind[1701]: Removed session 13. Jun 20 18:44:07.302005 systemd[1]: Started sshd@11-10.200.20.15:22-10.200.16.10:35970.service - OpenSSH per-connection server daemon (10.200.16.10:35970). Jun 20 18:44:07.752452 sshd[4740]: Accepted publickey for core from 10.200.16.10 port 35970 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:44:07.753798 sshd-session[4740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:44:07.757831 systemd-logind[1701]: New session 14 of user core. Jun 20 18:44:07.765918 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 20 18:44:08.183378 sshd[4742]: Connection closed by 10.200.16.10 port 35970 Jun 20 18:44:08.183952 sshd-session[4740]: pam_unix(sshd:session): session closed for user core Jun 20 18:44:08.187495 systemd[1]: sshd@11-10.200.20.15:22-10.200.16.10:35970.service: Deactivated successfully. Jun 20 18:44:08.190337 systemd[1]: session-14.scope: Deactivated successfully. Jun 20 18:44:08.191648 systemd-logind[1701]: Session 14 logged out. Waiting for processes to exit. Jun 20 18:44:08.192697 systemd-logind[1701]: Removed session 14. Jun 20 18:44:08.267121 systemd[1]: Started sshd@12-10.200.20.15:22-10.200.16.10:35986.service - OpenSSH per-connection server daemon (10.200.16.10:35986). 
Jun 20 18:44:08.732735 sshd[4752]: Accepted publickey for core from 10.200.16.10 port 35986 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0
Jun 20 18:44:08.735299 sshd-session[4752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:44:08.739726 systemd-logind[1701]: New session 15 of user core.
Jun 20 18:44:08.745913 systemd[1]: Started session-15.scope - Session 15 of User core.
Jun 20 18:44:09.130592 sshd[4754]: Connection closed by 10.200.16.10 port 35986
Jun 20 18:44:09.131343 sshd-session[4752]: pam_unix(sshd:session): session closed for user core
Jun 20 18:44:09.134343 systemd-logind[1701]: Session 15 logged out. Waiting for processes to exit.
Jun 20 18:44:09.136159 systemd[1]: sshd@12-10.200.20.15:22-10.200.16.10:35986.service: Deactivated successfully.
Jun 20 18:44:09.138162 systemd[1]: session-15.scope: Deactivated successfully.
Jun 20 18:44:09.139535 systemd-logind[1701]: Removed session 15.
Jun 20 18:44:14.227165 systemd[1]: Started sshd@13-10.200.20.15:22-10.200.16.10:53984.service - OpenSSH per-connection server daemon (10.200.16.10:53984).
Jun 20 18:44:14.759827 sshd[4767]: Accepted publickey for core from 10.200.16.10 port 53984 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0
Jun 20 18:44:14.761462 sshd-session[4767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:44:14.765935 systemd-logind[1701]: New session 16 of user core.
Jun 20 18:44:14.770923 systemd[1]: Started session-16.scope - Session 16 of User core.
Jun 20 18:44:15.199479 sshd[4769]: Connection closed by 10.200.16.10 port 53984
Jun 20 18:44:15.200046 sshd-session[4767]: pam_unix(sshd:session): session closed for user core
Jun 20 18:44:15.203512 systemd[1]: sshd@13-10.200.20.15:22-10.200.16.10:53984.service: Deactivated successfully.
Jun 20 18:44:15.207223 systemd[1]: session-16.scope: Deactivated successfully.
Jun 20 18:44:15.207966 systemd-logind[1701]: Session 16 logged out. Waiting for processes to exit.
Jun 20 18:44:15.209247 systemd-logind[1701]: Removed session 16.
Jun 20 18:44:20.283665 systemd[1]: Started sshd@14-10.200.20.15:22-10.200.16.10:36508.service - OpenSSH per-connection server daemon (10.200.16.10:36508).
Jun 20 18:44:20.743692 sshd[4783]: Accepted publickey for core from 10.200.16.10 port 36508 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0
Jun 20 18:44:20.744981 sshd-session[4783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:44:20.749785 systemd-logind[1701]: New session 17 of user core.
Jun 20 18:44:20.755900 systemd[1]: Started session-17.scope - Session 17 of User core.
Jun 20 18:44:21.137506 sshd[4785]: Connection closed by 10.200.16.10 port 36508
Jun 20 18:44:21.137350 sshd-session[4783]: pam_unix(sshd:session): session closed for user core
Jun 20 18:44:21.140948 systemd[1]: sshd@14-10.200.20.15:22-10.200.16.10:36508.service: Deactivated successfully.
Jun 20 18:44:21.142520 systemd[1]: session-17.scope: Deactivated successfully.
Jun 20 18:44:21.143249 systemd-logind[1701]: Session 17 logged out. Waiting for processes to exit.
Jun 20 18:44:21.144363 systemd-logind[1701]: Removed session 17.
Jun 20 18:44:21.219489 systemd[1]: Started sshd@15-10.200.20.15:22-10.200.16.10:36516.service - OpenSSH per-connection server daemon (10.200.16.10:36516).
Jun 20 18:44:21.677741 sshd[4796]: Accepted publickey for core from 10.200.16.10 port 36516 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0
Jun 20 18:44:21.679018 sshd-session[4796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:44:21.683960 systemd-logind[1701]: New session 18 of user core.
Jun 20 18:44:21.691889 systemd[1]: Started session-18.scope - Session 18 of User core.
Jun 20 18:44:22.120741 sshd[4798]: Connection closed by 10.200.16.10 port 36516
Jun 20 18:44:22.121085 sshd-session[4796]: pam_unix(sshd:session): session closed for user core
Jun 20 18:44:22.125619 systemd-logind[1701]: Session 18 logged out. Waiting for processes to exit.
Jun 20 18:44:22.125866 systemd[1]: sshd@15-10.200.20.15:22-10.200.16.10:36516.service: Deactivated successfully.
Jun 20 18:44:22.127444 systemd[1]: session-18.scope: Deactivated successfully.
Jun 20 18:44:22.129546 systemd-logind[1701]: Removed session 18.
Jun 20 18:44:22.213155 systemd[1]: Started sshd@16-10.200.20.15:22-10.200.16.10:36528.service - OpenSSH per-connection server daemon (10.200.16.10:36528).
Jun 20 18:44:22.705464 sshd[4808]: Accepted publickey for core from 10.200.16.10 port 36528 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0
Jun 20 18:44:22.706687 sshd-session[4808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:44:22.711134 systemd-logind[1701]: New session 19 of user core.
Jun 20 18:44:22.719886 systemd[1]: Started session-19.scope - Session 19 of User core.
Jun 20 18:44:23.763449 sshd[4810]: Connection closed by 10.200.16.10 port 36528
Jun 20 18:44:23.762999 sshd-session[4808]: pam_unix(sshd:session): session closed for user core
Jun 20 18:44:23.766335 systemd[1]: sshd@16-10.200.20.15:22-10.200.16.10:36528.service: Deactivated successfully.
Jun 20 18:44:23.768436 systemd[1]: session-19.scope: Deactivated successfully.
Jun 20 18:44:23.769308 systemd-logind[1701]: Session 19 logged out. Waiting for processes to exit.
Jun 20 18:44:23.770348 systemd-logind[1701]: Removed session 19.
Jun 20 18:44:23.854214 systemd[1]: Started sshd@17-10.200.20.15:22-10.200.16.10:36544.service - OpenSSH per-connection server daemon (10.200.16.10:36544).
Jun 20 18:44:24.342617 sshd[4827]: Accepted publickey for core from 10.200.16.10 port 36544 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0
Jun 20 18:44:24.344233 sshd-session[4827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:44:24.348734 systemd-logind[1701]: New session 20 of user core.
Jun 20 18:44:24.355962 systemd[1]: Started session-20.scope - Session 20 of User core.
Jun 20 18:44:24.864689 sshd[4829]: Connection closed by 10.200.16.10 port 36544
Jun 20 18:44:24.864598 sshd-session[4827]: pam_unix(sshd:session): session closed for user core
Jun 20 18:44:24.868367 systemd-logind[1701]: Session 20 logged out. Waiting for processes to exit.
Jun 20 18:44:24.868935 systemd[1]: sshd@17-10.200.20.15:22-10.200.16.10:36544.service: Deactivated successfully.
Jun 20 18:44:24.870885 systemd[1]: session-20.scope: Deactivated successfully.
Jun 20 18:44:24.872195 systemd-logind[1701]: Removed session 20.
Jun 20 18:44:24.963707 systemd[1]: Started sshd@18-10.200.20.15:22-10.200.16.10:36552.service - OpenSSH per-connection server daemon (10.200.16.10:36552).
Jun 20 18:44:25.420074 sshd[4839]: Accepted publickey for core from 10.200.16.10 port 36552 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0
Jun 20 18:44:25.421586 sshd-session[4839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:44:25.425838 systemd-logind[1701]: New session 21 of user core.
Jun 20 18:44:25.428885 systemd[1]: Started session-21.scope - Session 21 of User core.
Jun 20 18:44:25.803581 sshd[4841]: Connection closed by 10.200.16.10 port 36552
Jun 20 18:44:25.803491 sshd-session[4839]: pam_unix(sshd:session): session closed for user core
Jun 20 18:44:25.806729 systemd-logind[1701]: Session 21 logged out. Waiting for processes to exit.
Jun 20 18:44:25.807686 systemd[1]: sshd@18-10.200.20.15:22-10.200.16.10:36552.service: Deactivated successfully.
Jun 20 18:44:25.809634 systemd[1]: session-21.scope: Deactivated successfully.
Jun 20 18:44:25.810493 systemd-logind[1701]: Removed session 21.
Jun 20 18:44:30.892021 systemd[1]: Started sshd@19-10.200.20.15:22-10.200.16.10:55934.service - OpenSSH per-connection server daemon (10.200.16.10:55934).
Jun 20 18:44:31.344339 sshd[4854]: Accepted publickey for core from 10.200.16.10 port 55934 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0
Jun 20 18:44:31.345645 sshd-session[4854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:44:31.349842 systemd-logind[1701]: New session 22 of user core.
Jun 20 18:44:31.357905 systemd[1]: Started session-22.scope - Session 22 of User core.
Jun 20 18:44:31.739885 sshd[4856]: Connection closed by 10.200.16.10 port 55934
Jun 20 18:44:31.740438 sshd-session[4854]: pam_unix(sshd:session): session closed for user core
Jun 20 18:44:31.744052 systemd[1]: sshd@19-10.200.20.15:22-10.200.16.10:55934.service: Deactivated successfully.
Jun 20 18:44:31.744189 systemd-logind[1701]: Session 22 logged out. Waiting for processes to exit.
Jun 20 18:44:31.746687 systemd[1]: session-22.scope: Deactivated successfully.
Jun 20 18:44:31.747873 systemd-logind[1701]: Removed session 22.
Jun 20 18:44:36.831984 systemd[1]: Started sshd@20-10.200.20.15:22-10.200.16.10:55936.service - OpenSSH per-connection server daemon (10.200.16.10:55936).
Jun 20 18:44:37.287790 sshd[4868]: Accepted publickey for core from 10.200.16.10 port 55936 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0
Jun 20 18:44:37.289003 sshd-session[4868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:44:37.292949 systemd-logind[1701]: New session 23 of user core.
Jun 20 18:44:37.300921 systemd[1]: Started session-23.scope - Session 23 of User core.
Jun 20 18:44:37.677005 sshd[4870]: Connection closed by 10.200.16.10 port 55936
Jun 20 18:44:37.677174 sshd-session[4868]: pam_unix(sshd:session): session closed for user core
Jun 20 18:44:37.679627 systemd[1]: sshd@20-10.200.20.15:22-10.200.16.10:55936.service: Deactivated successfully.
Jun 20 18:44:37.681391 systemd[1]: session-23.scope: Deactivated successfully.
Jun 20 18:44:37.683800 systemd-logind[1701]: Session 23 logged out. Waiting for processes to exit.
Jun 20 18:44:37.684866 systemd-logind[1701]: Removed session 23.
Jun 20 18:44:42.772009 systemd[1]: Started sshd@21-10.200.20.15:22-10.200.16.10:44512.service - OpenSSH per-connection server daemon (10.200.16.10:44512).
Jun 20 18:44:43.259312 sshd[4881]: Accepted publickey for core from 10.200.16.10 port 44512 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0
Jun 20 18:44:43.260598 sshd-session[4881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:44:43.264625 systemd-logind[1701]: New session 24 of user core.
Jun 20 18:44:43.269888 systemd[1]: Started session-24.scope - Session 24 of User core.
Jun 20 18:44:43.665858 sshd[4883]: Connection closed by 10.200.16.10 port 44512
Jun 20 18:44:43.666362 sshd-session[4881]: pam_unix(sshd:session): session closed for user core
Jun 20 18:44:43.669839 systemd[1]: sshd@21-10.200.20.15:22-10.200.16.10:44512.service: Deactivated successfully.
Jun 20 18:44:43.671593 systemd[1]: session-24.scope: Deactivated successfully.
Jun 20 18:44:43.672954 systemd-logind[1701]: Session 24 logged out. Waiting for processes to exit.
Jun 20 18:44:43.674173 systemd-logind[1701]: Removed session 24.
Jun 20 18:44:43.759068 systemd[1]: Started sshd@22-10.200.20.15:22-10.200.16.10:44526.service - OpenSSH per-connection server daemon (10.200.16.10:44526).
Jun 20 18:44:44.246377 sshd[4894]: Accepted publickey for core from 10.200.16.10 port 44526 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0
Jun 20 18:44:44.247699 sshd-session[4894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:44:44.253293 systemd-logind[1701]: New session 25 of user core.
Jun 20 18:44:44.254948 systemd[1]: Started session-25.scope - Session 25 of User core.
Jun 20 18:44:46.090772 containerd[1720]: time="2025-06-20T18:44:46.089972598Z" level=info msg="StopContainer for \"2f89d3e014333fd21e0c4f5f6345fa2e89cf58930495c14b7bd7777eb1421211\" with timeout 30 (s)"
Jun 20 18:44:46.091884 containerd[1720]: time="2025-06-20T18:44:46.091527720Z" level=info msg="Stop container \"2f89d3e014333fd21e0c4f5f6345fa2e89cf58930495c14b7bd7777eb1421211\" with signal terminated"
Jun 20 18:44:46.099845 containerd[1720]: time="2025-06-20T18:44:46.099812811Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 20 18:44:46.107878 containerd[1720]: time="2025-06-20T18:44:46.107838621Z" level=info msg="StopContainer for \"490fe7611025efe2cf9f7575d44727d3a29fd61fe34dcea6207d6e0eff66c430\" with timeout 2 (s)"
Jun 20 18:44:46.108624 containerd[1720]: time="2025-06-20T18:44:46.108527822Z" level=info msg="Stop container \"490fe7611025efe2cf9f7575d44727d3a29fd61fe34dcea6207d6e0eff66c430\" with signal terminated"
Jun 20 18:44:46.109712 systemd[1]: cri-containerd-2f89d3e014333fd21e0c4f5f6345fa2e89cf58930495c14b7bd7777eb1421211.scope: Deactivated successfully.
Jun 20 18:44:46.120266 systemd-networkd[1618]: lxc_health: Link DOWN
Jun 20 18:44:46.120282 systemd-networkd[1618]: lxc_health: Lost carrier
Jun 20 18:44:46.137699 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f89d3e014333fd21e0c4f5f6345fa2e89cf58930495c14b7bd7777eb1421211-rootfs.mount: Deactivated successfully.
Jun 20 18:44:46.140134 systemd[1]: cri-containerd-490fe7611025efe2cf9f7575d44727d3a29fd61fe34dcea6207d6e0eff66c430.scope: Deactivated successfully.
Jun 20 18:44:46.140426 systemd[1]: cri-containerd-490fe7611025efe2cf9f7575d44727d3a29fd61fe34dcea6207d6e0eff66c430.scope: Consumed 6.229s CPU time, 122.6M memory peak, 136K read from disk, 12.9M written to disk.
Jun 20 18:44:46.160514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-490fe7611025efe2cf9f7575d44727d3a29fd61fe34dcea6207d6e0eff66c430-rootfs.mount: Deactivated successfully.
Jun 20 18:44:46.205640 containerd[1720]: time="2025-06-20T18:44:46.205492508Z" level=info msg="shim disconnected" id=2f89d3e014333fd21e0c4f5f6345fa2e89cf58930495c14b7bd7777eb1421211 namespace=k8s.io
Jun 20 18:44:46.205640 containerd[1720]: time="2025-06-20T18:44:46.205568908Z" level=warning msg="cleaning up after shim disconnected" id=2f89d3e014333fd21e0c4f5f6345fa2e89cf58930495c14b7bd7777eb1421211 namespace=k8s.io
Jun 20 18:44:46.205640 containerd[1720]: time="2025-06-20T18:44:46.205587988Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 18:44:46.206418 containerd[1720]: time="2025-06-20T18:44:46.206299669Z" level=info msg="shim disconnected" id=490fe7611025efe2cf9f7575d44727d3a29fd61fe34dcea6207d6e0eff66c430 namespace=k8s.io
Jun 20 18:44:46.206418 containerd[1720]: time="2025-06-20T18:44:46.206343749Z" level=warning msg="cleaning up after shim disconnected" id=490fe7611025efe2cf9f7575d44727d3a29fd61fe34dcea6207d6e0eff66c430 namespace=k8s.io
Jun 20 18:44:46.206418 containerd[1720]: time="2025-06-20T18:44:46.206351749Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 18:44:46.230066 containerd[1720]: time="2025-06-20T18:44:46.230024579Z" level=info msg="StopContainer for \"2f89d3e014333fd21e0c4f5f6345fa2e89cf58930495c14b7bd7777eb1421211\" returns successfully"
Jun 20 18:44:46.230938 containerd[1720]: time="2025-06-20T18:44:46.230834061Z" level=info msg="StopPodSandbox for \"98c273deb1c9ddce91b3319872c67804a5e8a1fd45315ab72794b8c9db94b57c\""
Jun 20 18:44:46.230938 containerd[1720]: time="2025-06-20T18:44:46.230896301Z" level=info msg="Container to stop \"2f89d3e014333fd21e0c4f5f6345fa2e89cf58930495c14b7bd7777eb1421211\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 18:44:46.231459 containerd[1720]: time="2025-06-20T18:44:46.231426061Z" level=info msg="StopContainer for \"490fe7611025efe2cf9f7575d44727d3a29fd61fe34dcea6207d6e0eff66c430\" returns successfully"
Jun 20 18:44:46.233070 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-98c273deb1c9ddce91b3319872c67804a5e8a1fd45315ab72794b8c9db94b57c-shm.mount: Deactivated successfully.
Jun 20 18:44:46.233762 containerd[1720]: time="2025-06-20T18:44:46.233625424Z" level=info msg="StopPodSandbox for \"5af161cc0fa09f13219972f40b1a38e6db4ba039291769ceefc966c20cbf3bfa\""
Jun 20 18:44:46.233762 containerd[1720]: time="2025-06-20T18:44:46.233659584Z" level=info msg="Container to stop \"2774b9a77d8f88da189d9d1483c6cd523f948f16528208b2bbc57d502755938c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 18:44:46.233762 containerd[1720]: time="2025-06-20T18:44:46.233695744Z" level=info msg="Container to stop \"5a19ff5ccdb7bb41fe9c900451ed7faa8f2dac42367d73663aec959778d09e9d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 18:44:46.233762 containerd[1720]: time="2025-06-20T18:44:46.233707424Z" level=info msg="Container to stop \"50ad09cf5de5dda6a34201c3145a92b6e6c06893aaed0d74e937af9d6f79f6d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 18:44:46.233762 containerd[1720]: time="2025-06-20T18:44:46.233715664Z" level=info msg="Container to stop \"8231c1c339a94dfe7dc69d8c39029ce59606c96bc882a1cb98733b28faa4eeed\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 18:44:46.233762 containerd[1720]: time="2025-06-20T18:44:46.233723464Z" level=info msg="Container to stop \"490fe7611025efe2cf9f7575d44727d3a29fd61fe34dcea6207d6e0eff66c430\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 18:44:46.236736 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5af161cc0fa09f13219972f40b1a38e6db4ba039291769ceefc966c20cbf3bfa-shm.mount: Deactivated successfully.
Jun 20 18:44:46.241429 systemd[1]: cri-containerd-98c273deb1c9ddce91b3319872c67804a5e8a1fd45315ab72794b8c9db94b57c.scope: Deactivated successfully.
Jun 20 18:44:46.251197 systemd[1]: cri-containerd-5af161cc0fa09f13219972f40b1a38e6db4ba039291769ceefc966c20cbf3bfa.scope: Deactivated successfully.
Jun 20 18:44:46.279484 containerd[1720]: time="2025-06-20T18:44:46.279324363Z" level=info msg="shim disconnected" id=5af161cc0fa09f13219972f40b1a38e6db4ba039291769ceefc966c20cbf3bfa namespace=k8s.io
Jun 20 18:44:46.279484 containerd[1720]: time="2025-06-20T18:44:46.279397164Z" level=warning msg="cleaning up after shim disconnected" id=5af161cc0fa09f13219972f40b1a38e6db4ba039291769ceefc966c20cbf3bfa namespace=k8s.io
Jun 20 18:44:46.279484 containerd[1720]: time="2025-06-20T18:44:46.279405724Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 18:44:46.280389 containerd[1720]: time="2025-06-20T18:44:46.280027284Z" level=info msg="shim disconnected" id=98c273deb1c9ddce91b3319872c67804a5e8a1fd45315ab72794b8c9db94b57c namespace=k8s.io
Jun 20 18:44:46.281512 containerd[1720]: time="2025-06-20T18:44:46.281470126Z" level=warning msg="cleaning up after shim disconnected" id=98c273deb1c9ddce91b3319872c67804a5e8a1fd45315ab72794b8c9db94b57c namespace=k8s.io
Jun 20 18:44:46.281512 containerd[1720]: time="2025-06-20T18:44:46.281498326Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 18:44:46.293659 containerd[1720]: time="2025-06-20T18:44:46.293533422Z" level=info msg="TearDown network for sandbox \"5af161cc0fa09f13219972f40b1a38e6db4ba039291769ceefc966c20cbf3bfa\" successfully"
Jun 20 18:44:46.293659 containerd[1720]: time="2025-06-20T18:44:46.293576102Z" level=info msg="StopPodSandbox for \"5af161cc0fa09f13219972f40b1a38e6db4ba039291769ceefc966c20cbf3bfa\" returns successfully"
Jun 20 18:44:46.296200 containerd[1720]: time="2025-06-20T18:44:46.296132105Z" level=info msg="TearDown network for sandbox \"98c273deb1c9ddce91b3319872c67804a5e8a1fd45315ab72794b8c9db94b57c\" successfully"
Jun 20 18:44:46.296200 containerd[1720]: time="2025-06-20T18:44:46.296158625Z" level=info msg="StopPodSandbox for \"98c273deb1c9ddce91b3319872c67804a5e8a1fd45315ab72794b8c9db94b57c\" returns successfully"
Jun 20 18:44:46.460504 kubelet[3281]: I0620 18:44:46.460454 3281 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-etc-cni-netd\") pod \"049bac15-f17c-4d07-9365-996cce339bd4\" (UID: \"049bac15-f17c-4d07-9365-996cce339bd4\") "
Jun 20 18:44:46.460956 kubelet[3281]: I0620 18:44:46.460553 3281 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/049bac15-f17c-4d07-9365-996cce339bd4-cilium-config-path\") pod \"049bac15-f17c-4d07-9365-996cce339bd4\" (UID: \"049bac15-f17c-4d07-9365-996cce339bd4\") "
Jun 20 18:44:46.460956 kubelet[3281]: I0620 18:44:46.460573 3281 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-cilium-run\") pod \"049bac15-f17c-4d07-9365-996cce339bd4\" (UID: \"049bac15-f17c-4d07-9365-996cce339bd4\") "
Jun 20 18:44:46.460956 kubelet[3281]: I0620 18:44:46.460588 3281 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-lib-modules\") pod \"049bac15-f17c-4d07-9365-996cce339bd4\" (UID: \"049bac15-f17c-4d07-9365-996cce339bd4\") "
Jun 20 18:44:46.460956 kubelet[3281]: I0620 18:44:46.460601 3281 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-host-proc-sys-kernel\") pod \"049bac15-f17c-4d07-9365-996cce339bd4\" (UID: \"049bac15-f17c-4d07-9365-996cce339bd4\") "
Jun 20 18:44:46.460956 kubelet[3281]: I0620 18:44:46.460614 3281 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-host-proc-sys-net\") pod \"049bac15-f17c-4d07-9365-996cce339bd4\" (UID: \"049bac15-f17c-4d07-9365-996cce339bd4\") "
Jun 20 18:44:46.460956 kubelet[3281]: I0620 18:44:46.460630 3281 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d84f1a56-c0ae-4220-897d-625acf9e4c43-cilium-config-path\") pod \"d84f1a56-c0ae-4220-897d-625acf9e4c43\" (UID: \"d84f1a56-c0ae-4220-897d-625acf9e4c43\") "
Jun 20 18:44:46.461104 kubelet[3281]: I0620 18:44:46.460644 3281 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-cni-path\") pod \"049bac15-f17c-4d07-9365-996cce339bd4\" (UID: \"049bac15-f17c-4d07-9365-996cce339bd4\") "
Jun 20 18:44:46.461104 kubelet[3281]: I0620 18:44:46.460657 3281 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-xtables-lock\") pod \"049bac15-f17c-4d07-9365-996cce339bd4\" (UID: \"049bac15-f17c-4d07-9365-996cce339bd4\") "
Jun 20 18:44:46.461104 kubelet[3281]: I0620 18:44:46.460676 3281 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/049bac15-f17c-4d07-9365-996cce339bd4-hubble-tls\") pod \"049bac15-f17c-4d07-9365-996cce339bd4\" (UID: \"049bac15-f17c-4d07-9365-996cce339bd4\") "
Jun 20 18:44:46.461104 kubelet[3281]: I0620 18:44:46.460690 3281 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-hostproc\") pod \"049bac15-f17c-4d07-9365-996cce339bd4\" (UID: \"049bac15-f17c-4d07-9365-996cce339bd4\") "
Jun 20 18:44:46.461104 kubelet[3281]: I0620 18:44:46.460706 3281 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/049bac15-f17c-4d07-9365-996cce339bd4-clustermesh-secrets\") pod \"049bac15-f17c-4d07-9365-996cce339bd4\" (UID: \"049bac15-f17c-4d07-9365-996cce339bd4\") "
Jun 20 18:44:46.461104 kubelet[3281]: I0620 18:44:46.460722 3281 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mpj6\" (UniqueName: \"kubernetes.io/projected/d84f1a56-c0ae-4220-897d-625acf9e4c43-kube-api-access-5mpj6\") pod \"d84f1a56-c0ae-4220-897d-625acf9e4c43\" (UID: \"d84f1a56-c0ae-4220-897d-625acf9e4c43\") "
Jun 20 18:44:46.461231 kubelet[3281]: I0620 18:44:46.460743 3281 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlw2x\" (UniqueName: \"kubernetes.io/projected/049bac15-f17c-4d07-9365-996cce339bd4-kube-api-access-rlw2x\") pod \"049bac15-f17c-4d07-9365-996cce339bd4\" (UID: \"049bac15-f17c-4d07-9365-996cce339bd4\") "
Jun 20 18:44:46.461231 kubelet[3281]: I0620 18:44:46.460780 3281 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-bpf-maps\") pod \"049bac15-f17c-4d07-9365-996cce339bd4\" (UID: \"049bac15-f17c-4d07-9365-996cce339bd4\") "
Jun 20 18:44:46.461231 kubelet[3281]: I0620 18:44:46.460794 3281 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-cilium-cgroup\") pod \"049bac15-f17c-4d07-9365-996cce339bd4\" (UID: \"049bac15-f17c-4d07-9365-996cce339bd4\") "
Jun 20 18:44:46.461231 kubelet[3281]: I0620 18:44:46.460877 3281 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "049bac15-f17c-4d07-9365-996cce339bd4" (UID: "049bac15-f17c-4d07-9365-996cce339bd4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 18:44:46.461231 kubelet[3281]: I0620 18:44:46.460913 3281 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "049bac15-f17c-4d07-9365-996cce339bd4" (UID: "049bac15-f17c-4d07-9365-996cce339bd4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 18:44:46.462779 kubelet[3281]: I0620 18:44:46.461373 3281 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "049bac15-f17c-4d07-9365-996cce339bd4" (UID: "049bac15-f17c-4d07-9365-996cce339bd4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 18:44:46.462779 kubelet[3281]: I0620 18:44:46.461405 3281 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "049bac15-f17c-4d07-9365-996cce339bd4" (UID: "049bac15-f17c-4d07-9365-996cce339bd4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 18:44:46.462779 kubelet[3281]: I0620 18:44:46.461418 3281 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "049bac15-f17c-4d07-9365-996cce339bd4" (UID: "049bac15-f17c-4d07-9365-996cce339bd4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 18:44:46.462779 kubelet[3281]: I0620 18:44:46.461430 3281 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "049bac15-f17c-4d07-9365-996cce339bd4" (UID: "049bac15-f17c-4d07-9365-996cce339bd4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 18:44:46.462779 kubelet[3281]: I0620 18:44:46.461444 3281 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "049bac15-f17c-4d07-9365-996cce339bd4" (UID: "049bac15-f17c-4d07-9365-996cce339bd4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 18:44:46.462934 kubelet[3281]: I0620 18:44:46.462743 3281 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/049bac15-f17c-4d07-9365-996cce339bd4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "049bac15-f17c-4d07-9365-996cce339bd4" (UID: "049bac15-f17c-4d07-9365-996cce339bd4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jun 20 18:44:46.463613 kubelet[3281]: I0620 18:44:46.463588 3281 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d84f1a56-c0ae-4220-897d-625acf9e4c43-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d84f1a56-c0ae-4220-897d-625acf9e4c43" (UID: "d84f1a56-c0ae-4220-897d-625acf9e4c43"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jun 20 18:44:46.463727 kubelet[3281]: I0620 18:44:46.463713 3281 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-cni-path" (OuterVolumeSpecName: "cni-path") pod "049bac15-f17c-4d07-9365-996cce339bd4" (UID: "049bac15-f17c-4d07-9365-996cce339bd4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 18:44:46.464436 kubelet[3281]: I0620 18:44:46.464413 3281 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-hostproc" (OuterVolumeSpecName: "hostproc") pod "049bac15-f17c-4d07-9365-996cce339bd4" (UID: "049bac15-f17c-4d07-9365-996cce339bd4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 18:44:46.465211 kubelet[3281]: I0620 18:44:46.465180 3281 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/049bac15-f17c-4d07-9365-996cce339bd4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "049bac15-f17c-4d07-9365-996cce339bd4" (UID: "049bac15-f17c-4d07-9365-996cce339bd4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jun 20 18:44:46.465293 kubelet[3281]: I0620 18:44:46.465230 3281 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "049bac15-f17c-4d07-9365-996cce339bd4" (UID: "049bac15-f17c-4d07-9365-996cce339bd4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 18:44:46.467052 kubelet[3281]: I0620 18:44:46.467016 3281 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/049bac15-f17c-4d07-9365-996cce339bd4-kube-api-access-rlw2x" (OuterVolumeSpecName: "kube-api-access-rlw2x") pod "049bac15-f17c-4d07-9365-996cce339bd4" (UID: "049bac15-f17c-4d07-9365-996cce339bd4"). InnerVolumeSpecName "kube-api-access-rlw2x". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jun 20 18:44:46.468585 kubelet[3281]: I0620 18:44:46.468564 3281 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/049bac15-f17c-4d07-9365-996cce339bd4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "049bac15-f17c-4d07-9365-996cce339bd4" (UID: "049bac15-f17c-4d07-9365-996cce339bd4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jun 20 18:44:46.468724 kubelet[3281]: I0620 18:44:46.468703 3281 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d84f1a56-c0ae-4220-897d-625acf9e4c43-kube-api-access-5mpj6" (OuterVolumeSpecName: "kube-api-access-5mpj6") pod "d84f1a56-c0ae-4220-897d-625acf9e4c43" (UID: "d84f1a56-c0ae-4220-897d-625acf9e4c43"). InnerVolumeSpecName "kube-api-access-5mpj6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jun 20 18:44:46.562067 kubelet[3281]: I0620 18:44:46.561884 3281 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-etc-cni-netd\") on node \"ci-4230.2.0-a-c483281568\" DevicePath \"\""
Jun 20 18:44:46.562067 kubelet[3281]: I0620 18:44:46.561924 3281 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/049bac15-f17c-4d07-9365-996cce339bd4-cilium-config-path\") on node \"ci-4230.2.0-a-c483281568\" DevicePath \"\""
Jun 20 18:44:46.562067 kubelet[3281]: I0620 18:44:46.561935 3281 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-cilium-run\") on node \"ci-4230.2.0-a-c483281568\" DevicePath \"\""
Jun 20 18:44:46.562067 kubelet[3281]: I0620 18:44:46.561946 3281 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-lib-modules\") on node \"ci-4230.2.0-a-c483281568\" DevicePath \"\""
Jun 20 18:44:46.562067 kubelet[3281]: I0620 18:44:46.561954 3281 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-host-proc-sys-kernel\") on node \"ci-4230.2.0-a-c483281568\" DevicePath \"\""
Jun 20 18:44:46.562067 kubelet[3281]: I0620 18:44:46.561963 3281 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-host-proc-sys-net\") on node \"ci-4230.2.0-a-c483281568\" DevicePath \"\""
Jun 20 18:44:46.562067 kubelet[3281]: I0620 18:44:46.561972 3281 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d84f1a56-c0ae-4220-897d-625acf9e4c43-cilium-config-path\") on node \"ci-4230.2.0-a-c483281568\" DevicePath \"\""
Jun 20 18:44:46.562067 kubelet[3281]: I0620 18:44:46.561979 3281 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-cni-path\") on node \"ci-4230.2.0-a-c483281568\" DevicePath \"\""
Jun 20 18:44:46.562347 kubelet[3281]: I0620 18:44:46.561987 3281 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-xtables-lock\") on node \"ci-4230.2.0-a-c483281568\" DevicePath \"\""
Jun 20 18:44:46.562347 kubelet[3281]: I0620 18:44:46.561996 3281 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/049bac15-f17c-4d07-9365-996cce339bd4-hubble-tls\") on node \"ci-4230.2.0-a-c483281568\" DevicePath \"\""
Jun 20 18:44:46.562347 kubelet[3281]: I0620 18:44:46.562004 3281 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-hostproc\") on node \"ci-4230.2.0-a-c483281568\" DevicePath \"\""
Jun 20 18:44:46.562347 kubelet[3281]: I0620 18:44:46.562012 3281 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/049bac15-f17c-4d07-9365-996cce339bd4-clustermesh-secrets\") on node \"ci-4230.2.0-a-c483281568\" DevicePath \"\""
Jun 20 18:44:46.562347 kubelet[3281]: I0620 18:44:46.562021 3281 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5mpj6\" (UniqueName: \"kubernetes.io/projected/d84f1a56-c0ae-4220-897d-625acf9e4c43-kube-api-access-5mpj6\") on node \"ci-4230.2.0-a-c483281568\" DevicePath \"\""
Jun 20 18:44:46.562347 kubelet[3281]: I0620 18:44:46.562028 3281 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rlw2x\" (UniqueName:
\"kubernetes.io/projected/049bac15-f17c-4d07-9365-996cce339bd4-kube-api-access-rlw2x\") on node \"ci-4230.2.0-a-c483281568\" DevicePath \"\"" Jun 20 18:44:46.562347 kubelet[3281]: I0620 18:44:46.562037 3281 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-bpf-maps\") on node \"ci-4230.2.0-a-c483281568\" DevicePath \"\"" Jun 20 18:44:46.562347 kubelet[3281]: I0620 18:44:46.562047 3281 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/049bac15-f17c-4d07-9365-996cce339bd4-cilium-cgroup\") on node \"ci-4230.2.0-a-c483281568\" DevicePath \"\"" Jun 20 18:44:46.994066 kubelet[3281]: I0620 18:44:46.992847 3281 scope.go:117] "RemoveContainer" containerID="490fe7611025efe2cf9f7575d44727d3a29fd61fe34dcea6207d6e0eff66c430" Jun 20 18:44:46.997602 containerd[1720]: time="2025-06-20T18:44:46.997521855Z" level=info msg="RemoveContainer for \"490fe7611025efe2cf9f7575d44727d3a29fd61fe34dcea6207d6e0eff66c430\"" Jun 20 18:44:46.998724 systemd[1]: Removed slice kubepods-burstable-pod049bac15_f17c_4d07_9365_996cce339bd4.slice - libcontainer container kubepods-burstable-pod049bac15_f17c_4d07_9365_996cce339bd4.slice. Jun 20 18:44:46.998841 systemd[1]: kubepods-burstable-pod049bac15_f17c_4d07_9365_996cce339bd4.slice: Consumed 6.298s CPU time, 123M memory peak, 136K read from disk, 12.9M written to disk. Jun 20 18:44:47.003241 systemd[1]: Removed slice kubepods-besteffort-podd84f1a56_c0ae_4220_897d_625acf9e4c43.slice - libcontainer container kubepods-besteffort-podd84f1a56_c0ae_4220_897d_625acf9e4c43.slice. 
Jun 20 18:44:47.011515 containerd[1720]: time="2025-06-20T18:44:47.011476193Z" level=info msg="RemoveContainer for \"490fe7611025efe2cf9f7575d44727d3a29fd61fe34dcea6207d6e0eff66c430\" returns successfully" Jun 20 18:44:47.011951 kubelet[3281]: I0620 18:44:47.011924 3281 scope.go:117] "RemoveContainer" containerID="2774b9a77d8f88da189d9d1483c6cd523f948f16528208b2bbc57d502755938c" Jun 20 18:44:47.013156 containerd[1720]: time="2025-06-20T18:44:47.013124555Z" level=info msg="RemoveContainer for \"2774b9a77d8f88da189d9d1483c6cd523f948f16528208b2bbc57d502755938c\"" Jun 20 18:44:47.021323 containerd[1720]: time="2025-06-20T18:44:47.021259366Z" level=info msg="RemoveContainer for \"2774b9a77d8f88da189d9d1483c6cd523f948f16528208b2bbc57d502755938c\" returns successfully" Jun 20 18:44:47.021592 kubelet[3281]: I0620 18:44:47.021579 3281 scope.go:117] "RemoveContainer" containerID="8231c1c339a94dfe7dc69d8c39029ce59606c96bc882a1cb98733b28faa4eeed" Jun 20 18:44:47.023470 containerd[1720]: time="2025-06-20T18:44:47.023124848Z" level=info msg="RemoveContainer for \"8231c1c339a94dfe7dc69d8c39029ce59606c96bc882a1cb98733b28faa4eeed\"" Jun 20 18:44:47.031311 containerd[1720]: time="2025-06-20T18:44:47.030841818Z" level=info msg="RemoveContainer for \"8231c1c339a94dfe7dc69d8c39029ce59606c96bc882a1cb98733b28faa4eeed\" returns successfully" Jun 20 18:44:47.031596 kubelet[3281]: I0620 18:44:47.031508 3281 scope.go:117] "RemoveContainer" containerID="50ad09cf5de5dda6a34201c3145a92b6e6c06893aaed0d74e937af9d6f79f6d8" Jun 20 18:44:47.032866 containerd[1720]: time="2025-06-20T18:44:47.032838061Z" level=info msg="RemoveContainer for \"50ad09cf5de5dda6a34201c3145a92b6e6c06893aaed0d74e937af9d6f79f6d8\"" Jun 20 18:44:47.042081 containerd[1720]: time="2025-06-20T18:44:47.042050993Z" level=info msg="RemoveContainer for \"50ad09cf5de5dda6a34201c3145a92b6e6c06893aaed0d74e937af9d6f79f6d8\" returns successfully" Jun 20 18:44:47.042315 kubelet[3281]: I0620 18:44:47.042290 3281 scope.go:117] 
"RemoveContainer" containerID="5a19ff5ccdb7bb41fe9c900451ed7faa8f2dac42367d73663aec959778d09e9d" Jun 20 18:44:47.043406 containerd[1720]: time="2025-06-20T18:44:47.043348914Z" level=info msg="RemoveContainer for \"5a19ff5ccdb7bb41fe9c900451ed7faa8f2dac42367d73663aec959778d09e9d\"" Jun 20 18:44:47.051187 containerd[1720]: time="2025-06-20T18:44:47.051158445Z" level=info msg="RemoveContainer for \"5a19ff5ccdb7bb41fe9c900451ed7faa8f2dac42367d73663aec959778d09e9d\" returns successfully" Jun 20 18:44:47.051524 kubelet[3281]: I0620 18:44:47.051430 3281 scope.go:117] "RemoveContainer" containerID="490fe7611025efe2cf9f7575d44727d3a29fd61fe34dcea6207d6e0eff66c430" Jun 20 18:44:47.051789 containerd[1720]: time="2025-06-20T18:44:47.051731445Z" level=error msg="ContainerStatus for \"490fe7611025efe2cf9f7575d44727d3a29fd61fe34dcea6207d6e0eff66c430\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"490fe7611025efe2cf9f7575d44727d3a29fd61fe34dcea6207d6e0eff66c430\": not found" Jun 20 18:44:47.051902 kubelet[3281]: E0620 18:44:47.051879 3281 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"490fe7611025efe2cf9f7575d44727d3a29fd61fe34dcea6207d6e0eff66c430\": not found" containerID="490fe7611025efe2cf9f7575d44727d3a29fd61fe34dcea6207d6e0eff66c430" Jun 20 18:44:47.051956 kubelet[3281]: I0620 18:44:47.051910 3281 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"490fe7611025efe2cf9f7575d44727d3a29fd61fe34dcea6207d6e0eff66c430"} err="failed to get container status \"490fe7611025efe2cf9f7575d44727d3a29fd61fe34dcea6207d6e0eff66c430\": rpc error: code = NotFound desc = an error occurred when try to find container \"490fe7611025efe2cf9f7575d44727d3a29fd61fe34dcea6207d6e0eff66c430\": not found" Jun 20 18:44:47.051956 kubelet[3281]: I0620 18:44:47.051954 3281 scope.go:117] "RemoveContainer" 
containerID="2774b9a77d8f88da189d9d1483c6cd523f948f16528208b2bbc57d502755938c" Jun 20 18:44:47.052215 containerd[1720]: time="2025-06-20T18:44:47.052130326Z" level=error msg="ContainerStatus for \"2774b9a77d8f88da189d9d1483c6cd523f948f16528208b2bbc57d502755938c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2774b9a77d8f88da189d9d1483c6cd523f948f16528208b2bbc57d502755938c\": not found" Jun 20 18:44:47.052301 kubelet[3281]: E0620 18:44:47.052263 3281 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2774b9a77d8f88da189d9d1483c6cd523f948f16528208b2bbc57d502755938c\": not found" containerID="2774b9a77d8f88da189d9d1483c6cd523f948f16528208b2bbc57d502755938c" Jun 20 18:44:47.052365 kubelet[3281]: I0620 18:44:47.052333 3281 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2774b9a77d8f88da189d9d1483c6cd523f948f16528208b2bbc57d502755938c"} err="failed to get container status \"2774b9a77d8f88da189d9d1483c6cd523f948f16528208b2bbc57d502755938c\": rpc error: code = NotFound desc = an error occurred when try to find container \"2774b9a77d8f88da189d9d1483c6cd523f948f16528208b2bbc57d502755938c\": not found" Jun 20 18:44:47.052365 kubelet[3281]: I0620 18:44:47.052354 3281 scope.go:117] "RemoveContainer" containerID="8231c1c339a94dfe7dc69d8c39029ce59606c96bc882a1cb98733b28faa4eeed" Jun 20 18:44:47.052628 containerd[1720]: time="2025-06-20T18:44:47.052599126Z" level=error msg="ContainerStatus for \"8231c1c339a94dfe7dc69d8c39029ce59606c96bc882a1cb98733b28faa4eeed\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8231c1c339a94dfe7dc69d8c39029ce59606c96bc882a1cb98733b28faa4eeed\": not found" Jun 20 18:44:47.052898 kubelet[3281]: E0620 18:44:47.052794 3281 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"8231c1c339a94dfe7dc69d8c39029ce59606c96bc882a1cb98733b28faa4eeed\": not found" containerID="8231c1c339a94dfe7dc69d8c39029ce59606c96bc882a1cb98733b28faa4eeed" Jun 20 18:44:47.052898 kubelet[3281]: I0620 18:44:47.052840 3281 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8231c1c339a94dfe7dc69d8c39029ce59606c96bc882a1cb98733b28faa4eeed"} err="failed to get container status \"8231c1c339a94dfe7dc69d8c39029ce59606c96bc882a1cb98733b28faa4eeed\": rpc error: code = NotFound desc = an error occurred when try to find container \"8231c1c339a94dfe7dc69d8c39029ce59606c96bc882a1cb98733b28faa4eeed\": not found" Jun 20 18:44:47.052898 kubelet[3281]: I0620 18:44:47.052854 3281 scope.go:117] "RemoveContainer" containerID="50ad09cf5de5dda6a34201c3145a92b6e6c06893aaed0d74e937af9d6f79f6d8" Jun 20 18:44:47.053385 containerd[1720]: time="2025-06-20T18:44:47.053323487Z" level=error msg="ContainerStatus for \"50ad09cf5de5dda6a34201c3145a92b6e6c06893aaed0d74e937af9d6f79f6d8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"50ad09cf5de5dda6a34201c3145a92b6e6c06893aaed0d74e937af9d6f79f6d8\": not found" Jun 20 18:44:47.053477 kubelet[3281]: E0620 18:44:47.053450 3281 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"50ad09cf5de5dda6a34201c3145a92b6e6c06893aaed0d74e937af9d6f79f6d8\": not found" containerID="50ad09cf5de5dda6a34201c3145a92b6e6c06893aaed0d74e937af9d6f79f6d8" Jun 20 18:44:47.053514 kubelet[3281]: I0620 18:44:47.053493 3281 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"50ad09cf5de5dda6a34201c3145a92b6e6c06893aaed0d74e937af9d6f79f6d8"} err="failed to get container status \"50ad09cf5de5dda6a34201c3145a92b6e6c06893aaed0d74e937af9d6f79f6d8\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"50ad09cf5de5dda6a34201c3145a92b6e6c06893aaed0d74e937af9d6f79f6d8\": not found" Jun 20 18:44:47.053514 kubelet[3281]: I0620 18:44:47.053509 3281 scope.go:117] "RemoveContainer" containerID="5a19ff5ccdb7bb41fe9c900451ed7faa8f2dac42367d73663aec959778d09e9d" Jun 20 18:44:47.053790 containerd[1720]: time="2025-06-20T18:44:47.053709488Z" level=error msg="ContainerStatus for \"5a19ff5ccdb7bb41fe9c900451ed7faa8f2dac42367d73663aec959778d09e9d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5a19ff5ccdb7bb41fe9c900451ed7faa8f2dac42367d73663aec959778d09e9d\": not found" Jun 20 18:44:47.053839 kubelet[3281]: E0620 18:44:47.053803 3281 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5a19ff5ccdb7bb41fe9c900451ed7faa8f2dac42367d73663aec959778d09e9d\": not found" containerID="5a19ff5ccdb7bb41fe9c900451ed7faa8f2dac42367d73663aec959778d09e9d" Jun 20 18:44:47.053839 kubelet[3281]: I0620 18:44:47.053820 3281 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5a19ff5ccdb7bb41fe9c900451ed7faa8f2dac42367d73663aec959778d09e9d"} err="failed to get container status \"5a19ff5ccdb7bb41fe9c900451ed7faa8f2dac42367d73663aec959778d09e9d\": rpc error: code = NotFound desc = an error occurred when try to find container \"5a19ff5ccdb7bb41fe9c900451ed7faa8f2dac42367d73663aec959778d09e9d\": not found" Jun 20 18:44:47.053839 kubelet[3281]: I0620 18:44:47.053831 3281 scope.go:117] "RemoveContainer" containerID="2f89d3e014333fd21e0c4f5f6345fa2e89cf58930495c14b7bd7777eb1421211" Jun 20 18:44:47.054856 containerd[1720]: time="2025-06-20T18:44:47.054833369Z" level=info msg="RemoveContainer for \"2f89d3e014333fd21e0c4f5f6345fa2e89cf58930495c14b7bd7777eb1421211\"" Jun 20 18:44:47.070211 containerd[1720]: time="2025-06-20T18:44:47.070155989Z" level=info msg="RemoveContainer for 
\"2f89d3e014333fd21e0c4f5f6345fa2e89cf58930495c14b7bd7777eb1421211\" returns successfully" Jun 20 18:44:47.070783 kubelet[3281]: I0620 18:44:47.070453 3281 scope.go:117] "RemoveContainer" containerID="2f89d3e014333fd21e0c4f5f6345fa2e89cf58930495c14b7bd7777eb1421211" Jun 20 18:44:47.070860 containerd[1720]: time="2025-06-20T18:44:47.070703910Z" level=error msg="ContainerStatus for \"2f89d3e014333fd21e0c4f5f6345fa2e89cf58930495c14b7bd7777eb1421211\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2f89d3e014333fd21e0c4f5f6345fa2e89cf58930495c14b7bd7777eb1421211\": not found" Jun 20 18:44:47.071035 kubelet[3281]: E0620 18:44:47.070978 3281 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2f89d3e014333fd21e0c4f5f6345fa2e89cf58930495c14b7bd7777eb1421211\": not found" containerID="2f89d3e014333fd21e0c4f5f6345fa2e89cf58930495c14b7bd7777eb1421211" Jun 20 18:44:47.071137 kubelet[3281]: I0620 18:44:47.071108 3281 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2f89d3e014333fd21e0c4f5f6345fa2e89cf58930495c14b7bd7777eb1421211"} err="failed to get container status \"2f89d3e014333fd21e0c4f5f6345fa2e89cf58930495c14b7bd7777eb1421211\": rpc error: code = NotFound desc = an error occurred when try to find container \"2f89d3e014333fd21e0c4f5f6345fa2e89cf58930495c14b7bd7777eb1421211\": not found" Jun 20 18:44:47.081237 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98c273deb1c9ddce91b3319872c67804a5e8a1fd45315ab72794b8c9db94b57c-rootfs.mount: Deactivated successfully. Jun 20 18:44:47.081367 systemd[1]: var-lib-kubelet-pods-d84f1a56\x2dc0ae\x2d4220\x2d897d\x2d625acf9e4c43-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5mpj6.mount: Deactivated successfully. 
Jun 20 18:44:47.081432 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5af161cc0fa09f13219972f40b1a38e6db4ba039291769ceefc966c20cbf3bfa-rootfs.mount: Deactivated successfully. Jun 20 18:44:47.081490 systemd[1]: var-lib-kubelet-pods-049bac15\x2df17c\x2d4d07\x2d9365\x2d996cce339bd4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drlw2x.mount: Deactivated successfully. Jun 20 18:44:47.081545 systemd[1]: var-lib-kubelet-pods-049bac15\x2df17c\x2d4d07\x2d9365\x2d996cce339bd4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jun 20 18:44:47.081601 systemd[1]: var-lib-kubelet-pods-049bac15\x2df17c\x2d4d07\x2d9365\x2d996cce339bd4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jun 20 18:44:47.644222 kubelet[3281]: I0620 18:44:47.644113 3281 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="049bac15-f17c-4d07-9365-996cce339bd4" path="/var/lib/kubelet/pods/049bac15-f17c-4d07-9365-996cce339bd4/volumes" Jun 20 18:44:47.644769 kubelet[3281]: I0620 18:44:47.644661 3281 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d84f1a56-c0ae-4220-897d-625acf9e4c43" path="/var/lib/kubelet/pods/d84f1a56-c0ae-4220-897d-625acf9e4c43/volumes" Jun 20 18:44:48.093210 sshd[4896]: Connection closed by 10.200.16.10 port 44526 Jun 20 18:44:48.093650 sshd-session[4894]: pam_unix(sshd:session): session closed for user core Jun 20 18:44:48.097111 systemd-logind[1701]: Session 25 logged out. Waiting for processes to exit. Jun 20 18:44:48.097260 systemd[1]: sshd@22-10.200.20.15:22-10.200.16.10:44526.service: Deactivated successfully. Jun 20 18:44:48.100661 systemd[1]: session-25.scope: Deactivated successfully. Jun 20 18:44:48.103728 systemd-logind[1701]: Removed session 25. Jun 20 18:44:48.176396 systemd[1]: Started sshd@23-10.200.20.15:22-10.200.16.10:44542.service - OpenSSH per-connection server daemon (10.200.16.10:44542). 
Jun 20 18:44:48.635765 sshd[5062]: Accepted publickey for core from 10.200.16.10 port 44542 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:44:48.637094 sshd-session[5062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:44:48.641328 systemd-logind[1701]: New session 26 of user core. Jun 20 18:44:48.651938 systemd[1]: Started session-26.scope - Session 26 of User core. Jun 20 18:44:49.753419 kubelet[3281]: E0620 18:44:49.753288 3281 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 20 18:44:49.958151 systemd[1]: Created slice kubepods-burstable-poda8025268_4005_4091_a63b_4e034c0e998c.slice - libcontainer container kubepods-burstable-poda8025268_4005_4091_a63b_4e034c0e998c.slice. Jun 20 18:44:49.978222 sshd[5064]: Connection closed by 10.200.16.10 port 44542 Jun 20 18:44:49.980947 sshd-session[5062]: pam_unix(sshd:session): session closed for user core Jun 20 18:44:49.987053 systemd-logind[1701]: Session 26 logged out. Waiting for processes to exit. Jun 20 18:44:49.988982 systemd[1]: sshd@23-10.200.20.15:22-10.200.16.10:44542.service: Deactivated successfully. Jun 20 18:44:49.990461 systemd[1]: session-26.scope: Deactivated successfully. Jun 20 18:44:49.993344 systemd-logind[1701]: Removed session 26. Jun 20 18:44:50.070972 systemd[1]: Started sshd@24-10.200.20.15:22-10.200.16.10:52626.service - OpenSSH per-connection server daemon (10.200.16.10:52626). 
Jun 20 18:44:50.081935 kubelet[3281]: I0620 18:44:50.081590 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a8025268-4005-4091-a63b-4e034c0e998c-cilium-run\") pod \"cilium-7vq6f\" (UID: \"a8025268-4005-4091-a63b-4e034c0e998c\") " pod="kube-system/cilium-7vq6f" Jun 20 18:44:50.081935 kubelet[3281]: I0620 18:44:50.081631 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a8025268-4005-4091-a63b-4e034c0e998c-host-proc-sys-net\") pod \"cilium-7vq6f\" (UID: \"a8025268-4005-4091-a63b-4e034c0e998c\") " pod="kube-system/cilium-7vq6f" Jun 20 18:44:50.081935 kubelet[3281]: I0620 18:44:50.081648 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a8025268-4005-4091-a63b-4e034c0e998c-host-proc-sys-kernel\") pod \"cilium-7vq6f\" (UID: \"a8025268-4005-4091-a63b-4e034c0e998c\") " pod="kube-system/cilium-7vq6f" Jun 20 18:44:50.081935 kubelet[3281]: I0620 18:44:50.081663 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a8025268-4005-4091-a63b-4e034c0e998c-etc-cni-netd\") pod \"cilium-7vq6f\" (UID: \"a8025268-4005-4091-a63b-4e034c0e998c\") " pod="kube-system/cilium-7vq6f" Jun 20 18:44:50.081935 kubelet[3281]: I0620 18:44:50.081678 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8025268-4005-4091-a63b-4e034c0e998c-cilium-config-path\") pod \"cilium-7vq6f\" (UID: \"a8025268-4005-4091-a63b-4e034c0e998c\") " pod="kube-system/cilium-7vq6f" Jun 20 18:44:50.081935 kubelet[3281]: I0620 18:44:50.081693 3281 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a8025268-4005-4091-a63b-4e034c0e998c-hostproc\") pod \"cilium-7vq6f\" (UID: \"a8025268-4005-4091-a63b-4e034c0e998c\") " pod="kube-system/cilium-7vq6f" Jun 20 18:44:50.082150 kubelet[3281]: I0620 18:44:50.081707 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a8025268-4005-4091-a63b-4e034c0e998c-cilium-cgroup\") pod \"cilium-7vq6f\" (UID: \"a8025268-4005-4091-a63b-4e034c0e998c\") " pod="kube-system/cilium-7vq6f" Jun 20 18:44:50.082150 kubelet[3281]: I0620 18:44:50.081722 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a8025268-4005-4091-a63b-4e034c0e998c-cni-path\") pod \"cilium-7vq6f\" (UID: \"a8025268-4005-4091-a63b-4e034c0e998c\") " pod="kube-system/cilium-7vq6f" Jun 20 18:44:50.082150 kubelet[3281]: I0620 18:44:50.081736 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8025268-4005-4091-a63b-4e034c0e998c-lib-modules\") pod \"cilium-7vq6f\" (UID: \"a8025268-4005-4091-a63b-4e034c0e998c\") " pod="kube-system/cilium-7vq6f" Jun 20 18:44:50.082150 kubelet[3281]: I0620 18:44:50.081762 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a8025268-4005-4091-a63b-4e034c0e998c-clustermesh-secrets\") pod \"cilium-7vq6f\" (UID: \"a8025268-4005-4091-a63b-4e034c0e998c\") " pod="kube-system/cilium-7vq6f" Jun 20 18:44:50.082150 kubelet[3281]: I0620 18:44:50.081782 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbdg6\" (UniqueName: 
\"kubernetes.io/projected/a8025268-4005-4091-a63b-4e034c0e998c-kube-api-access-kbdg6\") pod \"cilium-7vq6f\" (UID: \"a8025268-4005-4091-a63b-4e034c0e998c\") " pod="kube-system/cilium-7vq6f" Jun 20 18:44:50.082150 kubelet[3281]: I0620 18:44:50.081799 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a8025268-4005-4091-a63b-4e034c0e998c-bpf-maps\") pod \"cilium-7vq6f\" (UID: \"a8025268-4005-4091-a63b-4e034c0e998c\") " pod="kube-system/cilium-7vq6f" Jun 20 18:44:50.082266 kubelet[3281]: I0620 18:44:50.081812 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a8025268-4005-4091-a63b-4e034c0e998c-cilium-ipsec-secrets\") pod \"cilium-7vq6f\" (UID: \"a8025268-4005-4091-a63b-4e034c0e998c\") " pod="kube-system/cilium-7vq6f" Jun 20 18:44:50.082266 kubelet[3281]: I0620 18:44:50.081825 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a8025268-4005-4091-a63b-4e034c0e998c-hubble-tls\") pod \"cilium-7vq6f\" (UID: \"a8025268-4005-4091-a63b-4e034c0e998c\") " pod="kube-system/cilium-7vq6f" Jun 20 18:44:50.082266 kubelet[3281]: I0620 18:44:50.081840 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8025268-4005-4091-a63b-4e034c0e998c-xtables-lock\") pod \"cilium-7vq6f\" (UID: \"a8025268-4005-4091-a63b-4e034c0e998c\") " pod="kube-system/cilium-7vq6f" Jun 20 18:44:50.264042 containerd[1720]: time="2025-06-20T18:44:50.263992408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7vq6f,Uid:a8025268-4005-4091-a63b-4e034c0e998c,Namespace:kube-system,Attempt:0,}" Jun 20 18:44:50.306229 containerd[1720]: time="2025-06-20T18:44:50.306074848Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:44:50.306229 containerd[1720]: time="2025-06-20T18:44:50.306130489Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:44:50.306229 containerd[1720]: time="2025-06-20T18:44:50.306145849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:44:50.306872 containerd[1720]: time="2025-06-20T18:44:50.306328809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:44:50.326936 systemd[1]: Started cri-containerd-6529905a296bc8c873a398c742d6a93c92150704cbc3eb8ebeadcdc54fcfc9b6.scope - libcontainer container 6529905a296bc8c873a398c742d6a93c92150704cbc3eb8ebeadcdc54fcfc9b6. Jun 20 18:44:50.349398 containerd[1720]: time="2025-06-20T18:44:50.349206770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7vq6f,Uid:a8025268-4005-4091-a63b-4e034c0e998c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6529905a296bc8c873a398c742d6a93c92150704cbc3eb8ebeadcdc54fcfc9b6\"" Jun 20 18:44:50.360091 containerd[1720]: time="2025-06-20T18:44:50.360052111Z" level=info msg="CreateContainer within sandbox \"6529905a296bc8c873a398c742d6a93c92150704cbc3eb8ebeadcdc54fcfc9b6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 18:44:50.392872 containerd[1720]: time="2025-06-20T18:44:50.392829613Z" level=info msg="CreateContainer within sandbox \"6529905a296bc8c873a398c742d6a93c92150704cbc3eb8ebeadcdc54fcfc9b6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cfda54c6d9c31a9879aeb5b4f2c18a983825c9a69cf815ec36ff2f011adfdc2f\"" Jun 20 18:44:50.393836 containerd[1720]: time="2025-06-20T18:44:50.393542455Z" level=info msg="StartContainer for 
\"cfda54c6d9c31a9879aeb5b4f2c18a983825c9a69cf815ec36ff2f011adfdc2f\"" Jun 20 18:44:50.420933 systemd[1]: Started cri-containerd-cfda54c6d9c31a9879aeb5b4f2c18a983825c9a69cf815ec36ff2f011adfdc2f.scope - libcontainer container cfda54c6d9c31a9879aeb5b4f2c18a983825c9a69cf815ec36ff2f011adfdc2f. Jun 20 18:44:50.447188 containerd[1720]: time="2025-06-20T18:44:50.447076076Z" level=info msg="StartContainer for \"cfda54c6d9c31a9879aeb5b4f2c18a983825c9a69cf815ec36ff2f011adfdc2f\" returns successfully" Jun 20 18:44:50.453986 systemd[1]: cri-containerd-cfda54c6d9c31a9879aeb5b4f2c18a983825c9a69cf815ec36ff2f011adfdc2f.scope: Deactivated successfully. Jun 20 18:44:50.498314 containerd[1720]: time="2025-06-20T18:44:50.498244414Z" level=info msg="shim disconnected" id=cfda54c6d9c31a9879aeb5b4f2c18a983825c9a69cf815ec36ff2f011adfdc2f namespace=k8s.io Jun 20 18:44:50.498314 containerd[1720]: time="2025-06-20T18:44:50.498297774Z" level=warning msg="cleaning up after shim disconnected" id=cfda54c6d9c31a9879aeb5b4f2c18a983825c9a69cf815ec36ff2f011adfdc2f namespace=k8s.io Jun 20 18:44:50.498314 containerd[1720]: time="2025-06-20T18:44:50.498306974Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:44:50.559683 sshd[5075]: Accepted publickey for core from 10.200.16.10 port 52626 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0 Jun 20 18:44:50.561310 sshd-session[5075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:44:50.565071 systemd-logind[1701]: New session 27 of user core. Jun 20 18:44:50.569894 systemd[1]: Started session-27.scope - Session 27 of User core. Jun 20 18:44:50.907945 sshd[5184]: Connection closed by 10.200.16.10 port 52626 Jun 20 18:44:50.908593 sshd-session[5075]: pam_unix(sshd:session): session closed for user core Jun 20 18:44:50.912310 systemd-logind[1701]: Session 27 logged out. Waiting for processes to exit. 
Jun 20 18:44:50.912987 systemd[1]: sshd@24-10.200.20.15:22-10.200.16.10:52626.service: Deactivated successfully. Jun 20 18:44:50.915553 systemd[1]: session-27.scope: Deactivated successfully. Jun 20 18:44:50.917211 systemd-logind[1701]: Removed session 27. Jun 20 18:44:50.998101 systemd[1]: Started sshd@25-10.200.20.15:22-10.200.16.10:52634.service - OpenSSH per-connection server daemon (10.200.16.10:52634). Jun 20 18:44:51.023075 containerd[1720]: time="2025-06-20T18:44:51.023015451Z" level=info msg="CreateContainer within sandbox \"6529905a296bc8c873a398c742d6a93c92150704cbc3eb8ebeadcdc54fcfc9b6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 18:44:51.057354 containerd[1720]: time="2025-06-20T18:44:51.057233076Z" level=info msg="CreateContainer within sandbox \"6529905a296bc8c873a398c742d6a93c92150704cbc3eb8ebeadcdc54fcfc9b6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f518b09fca8b310c8ed5e909c0b857674052f467653c00614cd45aab76e82653\"" Jun 20 18:44:51.058220 containerd[1720]: time="2025-06-20T18:44:51.058188238Z" level=info msg="StartContainer for \"f518b09fca8b310c8ed5e909c0b857674052f467653c00614cd45aab76e82653\"" Jun 20 18:44:51.082950 systemd[1]: Started cri-containerd-f518b09fca8b310c8ed5e909c0b857674052f467653c00614cd45aab76e82653.scope - libcontainer container f518b09fca8b310c8ed5e909c0b857674052f467653c00614cd45aab76e82653. Jun 20 18:44:51.110220 containerd[1720]: time="2025-06-20T18:44:51.110169777Z" level=info msg="StartContainer for \"f518b09fca8b310c8ed5e909c0b857674052f467653c00614cd45aab76e82653\" returns successfully" Jun 20 18:44:51.114241 systemd[1]: cri-containerd-f518b09fca8b310c8ed5e909c0b857674052f467653c00614cd45aab76e82653.scope: Deactivated successfully. 
Jun 20 18:44:51.147138 containerd[1720]: time="2025-06-20T18:44:51.147054247Z" level=info msg="shim disconnected" id=f518b09fca8b310c8ed5e909c0b857674052f467653c00614cd45aab76e82653 namespace=k8s.io
Jun 20 18:44:51.147138 containerd[1720]: time="2025-06-20T18:44:51.147130647Z" level=warning msg="cleaning up after shim disconnected" id=f518b09fca8b310c8ed5e909c0b857674052f467653c00614cd45aab76e82653 namespace=k8s.io
Jun 20 18:44:51.147138 containerd[1720]: time="2025-06-20T18:44:51.147140087Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 18:44:51.461595 sshd[5191]: Accepted publickey for core from 10.200.16.10 port 52634 ssh2: RSA SHA256:V0PIl5bgEVSoJBEasTsvuXRLX17TbdMMyIxacTZE6p0
Jun 20 18:44:51.462903 sshd-session[5191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:44:51.467456 systemd-logind[1701]: New session 28 of user core.
Jun 20 18:44:51.472959 systemd[1]: Started session-28.scope - Session 28 of User core.
Jun 20 18:44:52.020381 containerd[1720]: time="2025-06-20T18:44:52.020237907Z" level=info msg="CreateContainer within sandbox \"6529905a296bc8c873a398c742d6a93c92150704cbc3eb8ebeadcdc54fcfc9b6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jun 20 18:44:52.059792 containerd[1720]: time="2025-06-20T18:44:52.059676239Z" level=info msg="CreateContainer within sandbox \"6529905a296bc8c873a398c742d6a93c92150704cbc3eb8ebeadcdc54fcfc9b6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"956d50dd7a41f43064735272097ba4c92e606506dd1f3f70a7d3053d018d3d26\""
Jun 20 18:44:52.060536 containerd[1720]: time="2025-06-20T18:44:52.060505520Z" level=info msg="StartContainer for \"956d50dd7a41f43064735272097ba4c92e606506dd1f3f70a7d3053d018d3d26\""
Jun 20 18:44:52.085913 systemd[1]: Started cri-containerd-956d50dd7a41f43064735272097ba4c92e606506dd1f3f70a7d3053d018d3d26.scope - libcontainer container 956d50dd7a41f43064735272097ba4c92e606506dd1f3f70a7d3053d018d3d26.
Jun 20 18:44:52.117281 systemd[1]: cri-containerd-956d50dd7a41f43064735272097ba4c92e606506dd1f3f70a7d3053d018d3d26.scope: Deactivated successfully.
Jun 20 18:44:52.121219 containerd[1720]: time="2025-06-20T18:44:52.121114080Z" level=info msg="StartContainer for \"956d50dd7a41f43064735272097ba4c92e606506dd1f3f70a7d3053d018d3d26\" returns successfully"
Jun 20 18:44:52.154148 containerd[1720]: time="2025-06-20T18:44:52.154089964Z" level=info msg="shim disconnected" id=956d50dd7a41f43064735272097ba4c92e606506dd1f3f70a7d3053d018d3d26 namespace=k8s.io
Jun 20 18:44:52.154555 containerd[1720]: time="2025-06-20T18:44:52.154401564Z" level=warning msg="cleaning up after shim disconnected" id=956d50dd7a41f43064735272097ba4c92e606506dd1f3f70a7d3053d018d3d26 namespace=k8s.io
Jun 20 18:44:52.154555 containerd[1720]: time="2025-06-20T18:44:52.154417964Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 18:44:52.186280 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-956d50dd7a41f43064735272097ba4c92e606506dd1f3f70a7d3053d018d3d26-rootfs.mount: Deactivated successfully.
Jun 20 18:44:52.718088 kubelet[3281]: I0620 18:44:52.717924 3281 setters.go:618] "Node became not ready" node="ci-4230.2.0-a-c483281568" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-06-20T18:44:52Z","lastTransitionTime":"2025-06-20T18:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jun 20 18:44:53.030045 containerd[1720]: time="2025-06-20T18:44:53.029868841Z" level=info msg="CreateContainer within sandbox \"6529905a296bc8c873a398c742d6a93c92150704cbc3eb8ebeadcdc54fcfc9b6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jun 20 18:44:53.073136 containerd[1720]: time="2025-06-20T18:44:53.073046818Z" level=info msg="CreateContainer within sandbox \"6529905a296bc8c873a398c742d6a93c92150704cbc3eb8ebeadcdc54fcfc9b6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d1d0e62f48aa7fc35fc239add64591825df1c2dc7bc7560a5a8f261bd741daa7\""
Jun 20 18:44:53.073699 containerd[1720]: time="2025-06-20T18:44:53.073539259Z" level=info msg="StartContainer for \"d1d0e62f48aa7fc35fc239add64591825df1c2dc7bc7560a5a8f261bd741daa7\""
Jun 20 18:44:53.102931 systemd[1]: Started cri-containerd-d1d0e62f48aa7fc35fc239add64591825df1c2dc7bc7560a5a8f261bd741daa7.scope - libcontainer container d1d0e62f48aa7fc35fc239add64591825df1c2dc7bc7560a5a8f261bd741daa7.
Jun 20 18:44:53.124692 systemd[1]: cri-containerd-d1d0e62f48aa7fc35fc239add64591825df1c2dc7bc7560a5a8f261bd741daa7.scope: Deactivated successfully.
Jun 20 18:44:53.131818 containerd[1720]: time="2025-06-20T18:44:53.131701296Z" level=info msg="StartContainer for \"d1d0e62f48aa7fc35fc239add64591825df1c2dc7bc7560a5a8f261bd741daa7\" returns successfully"
Jun 20 18:44:53.162466 containerd[1720]: time="2025-06-20T18:44:53.162395496Z" level=info msg="shim disconnected" id=d1d0e62f48aa7fc35fc239add64591825df1c2dc7bc7560a5a8f261bd741daa7 namespace=k8s.io
Jun 20 18:44:53.162466 containerd[1720]: time="2025-06-20T18:44:53.162455336Z" level=warning msg="cleaning up after shim disconnected" id=d1d0e62f48aa7fc35fc239add64591825df1c2dc7bc7560a5a8f261bd741daa7 namespace=k8s.io
Jun 20 18:44:53.162466 containerd[1720]: time="2025-06-20T18:44:53.162463896Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 18:44:53.186896 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1d0e62f48aa7fc35fc239add64591825df1c2dc7bc7560a5a8f261bd741daa7-rootfs.mount: Deactivated successfully.
Jun 20 18:44:54.036350 containerd[1720]: time="2025-06-20T18:44:54.036246411Z" level=info msg="CreateContainer within sandbox \"6529905a296bc8c873a398c742d6a93c92150704cbc3eb8ebeadcdc54fcfc9b6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jun 20 18:44:54.080272 containerd[1720]: time="2025-06-20T18:44:54.080192909Z" level=info msg="CreateContainer within sandbox \"6529905a296bc8c873a398c742d6a93c92150704cbc3eb8ebeadcdc54fcfc9b6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b46d2f766f816a82408a54bc19acf0dec8349ef5eaa5eaefd1002a71c1565003\""
Jun 20 18:44:54.080844 containerd[1720]: time="2025-06-20T18:44:54.080638510Z" level=info msg="StartContainer for \"b46d2f766f816a82408a54bc19acf0dec8349ef5eaa5eaefd1002a71c1565003\""
Jun 20 18:44:54.106889 systemd[1]: Started cri-containerd-b46d2f766f816a82408a54bc19acf0dec8349ef5eaa5eaefd1002a71c1565003.scope - libcontainer container b46d2f766f816a82408a54bc19acf0dec8349ef5eaa5eaefd1002a71c1565003.
Jun 20 18:44:54.144763 containerd[1720]: time="2025-06-20T18:44:54.144666635Z" level=info msg="StartContainer for \"b46d2f766f816a82408a54bc19acf0dec8349ef5eaa5eaefd1002a71c1565003\" returns successfully"
Jun 20 18:44:54.478794 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jun 20 18:44:55.901460 systemd[1]: run-containerd-runc-k8s.io-b46d2f766f816a82408a54bc19acf0dec8349ef5eaa5eaefd1002a71c1565003-runc.ykxB7U.mount: Deactivated successfully.
Jun 20 18:44:57.146473 systemd-networkd[1618]: lxc_health: Link UP
Jun 20 18:44:57.150156 systemd-networkd[1618]: lxc_health: Gained carrier
Jun 20 18:44:58.132891 kubelet[3281]: E0620 18:44:58.132518 3281 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:53308->127.0.0.1:42481: write tcp 127.0.0.1:53308->127.0.0.1:42481: write: broken pipe
Jun 20 18:44:58.290785 kubelet[3281]: I0620 18:44:58.290414 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7vq6f" podStartSLOduration=9.290396409 podStartE2EDuration="9.290396409s" podCreationTimestamp="2025-06-20 18:44:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:44:55.049520551 +0000 UTC m=+166.081796349" watchObservedRunningTime="2025-06-20 18:44:58.290396409 +0000 UTC m=+169.322672207"
Jun 20 18:44:58.367935 systemd-networkd[1618]: lxc_health: Gained IPv6LL
Jun 20 18:45:00.235241 systemd[1]: run-containerd-runc-k8s.io-b46d2f766f816a82408a54bc19acf0dec8349ef5eaa5eaefd1002a71c1565003-runc.pAv5CS.mount: Deactivated successfully.
Jun 20 18:45:02.476614 sshd[5254]: Connection closed by 10.200.16.10 port 52634
Jun 20 18:45:02.477269 sshd-session[5191]: pam_unix(sshd:session): session closed for user core
Jun 20 18:45:02.480305 systemd-logind[1701]: Session 28 logged out. Waiting for processes to exit.
Jun 20 18:45:02.481453 systemd[1]: sshd@25-10.200.20.15:22-10.200.16.10:52634.service: Deactivated successfully.
Jun 20 18:45:02.483608 systemd[1]: session-28.scope: Deactivated successfully.
Jun 20 18:45:02.484809 systemd-logind[1701]: Removed session 28.