Feb 13 19:51:12.375391 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 19:51:12.375414 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Feb 13 17:39:57 -00 2025
Feb 13 19:51:12.375423 kernel: KASLR enabled
Feb 13 19:51:12.375428 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Feb 13 19:51:12.375436 kernel: printk: bootconsole [pl11] enabled
Feb 13 19:51:12.375441 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:51:12.375448 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Feb 13 19:51:12.375454 kernel: random: crng init done
Feb 13 19:51:12.375460 kernel: secureboot: Secure boot disabled
Feb 13 19:51:12.375466 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:51:12.375472 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Feb 13 19:51:12.375478 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 19:51:12.375484 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 19:51:12.375492 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Feb 13 19:51:12.375499 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 19:51:12.375506 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 19:51:12.375512 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 19:51:12.375519 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 19:51:12.375526 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 19:51:12.375532 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 19:51:12.375538 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Feb 13 19:51:12.375545 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 19:51:12.375551 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Feb 13 19:51:12.375557 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Feb 13 19:51:12.375564 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Feb 13 19:51:12.375570 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Feb 13 19:51:12.375576 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Feb 13 19:51:12.375582 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Feb 13 19:51:12.375603 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Feb 13 19:51:12.375610 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Feb 13 19:51:12.375616 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Feb 13 19:51:12.375622 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Feb 13 19:51:12.375628 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Feb 13 19:51:12.375635 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Feb 13 19:51:12.375641 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Feb 13 19:51:12.375647 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Feb 13 19:51:12.375654 kernel: Zone ranges:
Feb 13 19:51:12.375660 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Feb 13 19:51:12.375666 kernel: DMA32 empty
Feb 13 19:51:12.375672 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Feb 13 19:51:12.375683 kernel: Movable zone start for each node
Feb 13 19:51:12.375690 kernel: Early memory node ranges
Feb 13 19:51:12.375697 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Feb 13 19:51:12.375703 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Feb 13 19:51:12.375710 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Feb 13 19:51:12.375718 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Feb 13 19:51:12.375724 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Feb 13 19:51:12.375731 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Feb 13 19:51:12.375738 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Feb 13 19:51:12.375744 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Feb 13 19:51:12.375751 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Feb 13 19:51:12.375757 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Feb 13 19:51:12.375764 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Feb 13 19:51:12.375771 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:51:12.375777 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 19:51:12.375784 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:51:12.375791 kernel: psci: MIGRATE_INFO_TYPE not supported.
Feb 13 19:51:12.375799 kernel: psci: SMC Calling Convention v1.4
Feb 13 19:51:12.375805 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Feb 13 19:51:12.375812 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Feb 13 19:51:12.375819 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:51:12.375825 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:51:12.375832 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 19:51:12.375838 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:51:12.375845 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:51:12.375852 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 19:51:12.375858 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:51:12.375865 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 19:51:12.375874 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 19:51:12.375880 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 19:51:12.375887 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Feb 13 19:51:12.375894 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 19:51:12.375900 kernel: alternatives: applying boot alternatives
Feb 13 19:51:12.375908 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=f06bad36699a22ae88c1968cd72b62b3503d97da521712e50a4b744320b1ba33
Feb 13 19:51:12.375916 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:51:12.375922 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:51:12.375929 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:51:12.375936 kernel: Fallback order for Node 0: 0
Feb 13 19:51:12.375942 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Feb 13 19:51:12.375951 kernel: Policy zone: Normal
Feb 13 19:51:12.375957 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:51:12.375964 kernel: software IO TLB: area num 2.
Feb 13 19:51:12.375970 kernel: software IO TLB: mapped [mem 0x0000000036550000-0x000000003a550000] (64MB)
Feb 13 19:51:12.375977 kernel: Memory: 3983656K/4194160K available (10304K kernel code, 2186K rwdata, 8092K rodata, 38336K init, 897K bss, 210504K reserved, 0K cma-reserved)
Feb 13 19:51:12.375984 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 19:51:12.375991 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:51:12.375998 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:51:12.376005 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 19:51:12.376011 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:51:12.376018 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:51:12.376026 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:51:12.376033 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 19:51:12.376040 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:51:12.376046 kernel: GICv3: 960 SPIs implemented
Feb 13 19:51:12.376053 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:51:12.376060 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:51:12.376066 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 19:51:12.376073 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Feb 13 19:51:12.376079 kernel: ITS: No ITS available, not enabling LPIs
Feb 13 19:51:12.376086 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:51:12.376093 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:51:12.376099 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 19:51:12.376107 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 19:51:12.376114 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 19:51:12.376121 kernel: Console: colour dummy device 80x25
Feb 13 19:51:12.376128 kernel: printk: console [tty1] enabled
Feb 13 19:51:12.376135 kernel: ACPI: Core revision 20230628
Feb 13 19:51:12.376142 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 19:51:12.376149 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:51:12.376156 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:51:12.376163 kernel: landlock: Up and running.
Feb 13 19:51:12.376171 kernel: SELinux: Initializing.
Feb 13 19:51:12.376178 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:51:12.376185 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:51:12.376191 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:51:12.376198 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:51:12.376205 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Feb 13 19:51:12.376212 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Feb 13 19:51:12.376225 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Feb 13 19:51:12.376233 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:51:12.376240 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:51:12.376247 kernel: Remapping and enabling EFI services.
Feb 13 19:51:12.376254 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:51:12.376263 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:51:12.376270 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Feb 13 19:51:12.376277 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:51:12.376284 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 19:51:12.376292 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 19:51:12.376301 kernel: SMP: Total of 2 processors activated.
Feb 13 19:51:12.376308 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:51:12.376315 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Feb 13 19:51:12.376323 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 19:51:12.376330 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:51:12.376337 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 19:51:12.376344 kernel: CPU features: detected: LSE atomic instructions
Feb 13 19:51:12.376352 kernel: CPU features: detected: Privileged Access Never
Feb 13 19:51:12.376359 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:51:12.376368 kernel: alternatives: applying system-wide alternatives
Feb 13 19:51:12.376375 kernel: devtmpfs: initialized
Feb 13 19:51:12.376382 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:51:12.376389 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 19:51:12.376396 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:51:12.376403 kernel: SMBIOS 3.1.0 present.
Feb 13 19:51:12.376410 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Feb 13 19:51:12.376418 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:51:12.376425 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:51:12.376433 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:51:12.376441 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:51:12.376448 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:51:12.376455 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Feb 13 19:51:12.376462 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:51:12.376469 kernel: cpuidle: using governor menu
Feb 13 19:51:12.376476 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:51:12.376484 kernel: ASID allocator initialised with 32768 entries
Feb 13 19:51:12.376491 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:51:12.376499 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:51:12.376507 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 19:51:12.376514 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 19:51:12.376521 kernel: Modules: 509280 pages in range for PLT usage
Feb 13 19:51:12.376528 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:51:12.376535 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:51:12.376542 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:51:12.376549 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:51:12.376557 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:51:12.376566 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:51:12.376573 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:51:12.376580 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:51:12.378656 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:51:12.378681 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:51:12.378690 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:51:12.378697 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:51:12.378705 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:51:12.378713 kernel: ACPI: Interpreter enabled
Feb 13 19:51:12.378727 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:51:12.378734 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 19:51:12.378741 kernel: printk: console [ttyAMA0] enabled
Feb 13 19:51:12.378749 kernel: printk: bootconsole [pl11] disabled
Feb 13 19:51:12.378756 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Feb 13 19:51:12.378763 kernel: iommu: Default domain type: Translated
Feb 13 19:51:12.378771 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:51:12.378778 kernel: efivars: Registered efivars operations
Feb 13 19:51:12.378785 kernel: vgaarb: loaded
Feb 13 19:51:12.378794 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:51:12.378801 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:51:12.378809 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:51:12.378816 kernel: pnp: PnP ACPI init
Feb 13 19:51:12.378823 kernel: pnp: PnP ACPI: found 0 devices
Feb 13 19:51:12.378830 kernel: NET: Registered PF_INET protocol family
Feb 13 19:51:12.378837 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:51:12.378845 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:51:12.378852 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:51:12.378861 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:51:12.378869 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:51:12.378876 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:51:12.378883 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:51:12.378891 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:51:12.378898 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:51:12.378905 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:51:12.378913 kernel: kvm [1]: HYP mode not available
Feb 13 19:51:12.378920 kernel: Initialise system trusted keyrings
Feb 13 19:51:12.378928 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:51:12.378936 kernel: Key type asymmetric registered
Feb 13 19:51:12.378943 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:51:12.378950 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:51:12.378957 kernel: io scheduler mq-deadline registered
Feb 13 19:51:12.378964 kernel: io scheduler kyber registered
Feb 13 19:51:12.378972 kernel: io scheduler bfq registered
Feb 13 19:51:12.378979 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:51:12.378986 kernel: thunder_xcv, ver 1.0
Feb 13 19:51:12.378995 kernel: thunder_bgx, ver 1.0
Feb 13 19:51:12.379002 kernel: nicpf, ver 1.0
Feb 13 19:51:12.379009 kernel: nicvf, ver 1.0
Feb 13 19:51:12.379177 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:51:12.379254 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:51:11 UTC (1739476271)
Feb 13 19:51:12.379264 kernel: efifb: probing for efifb
Feb 13 19:51:12.379272 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 13 19:51:12.379279 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 13 19:51:12.379289 kernel: efifb: scrolling: redraw
Feb 13 19:51:12.379296 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 13 19:51:12.379304 kernel: Console: switching to colour frame buffer device 128x48
Feb 13 19:51:12.379311 kernel: fb0: EFI VGA frame buffer device
Feb 13 19:51:12.379318 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Feb 13 19:51:12.379325 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:51:12.379333 kernel: No ACPI PMU IRQ for CPU0
Feb 13 19:51:12.379340 kernel: No ACPI PMU IRQ for CPU1
Feb 13 19:51:12.379347 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Feb 13 19:51:12.379356 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:51:12.379364 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:51:12.379371 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:51:12.379378 kernel: Segment Routing with IPv6
Feb 13 19:51:12.379386 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:51:12.379393 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:51:12.379400 kernel: Key type dns_resolver registered
Feb 13 19:51:12.379408 kernel: registered taskstats version 1
Feb 13 19:51:12.379415 kernel: Loading compiled-in X.509 certificates
Feb 13 19:51:12.379424 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 58bec1a0c6b8a133d1af4ea745973da0351f7027'
Feb 13 19:51:12.379431 kernel: Key type .fscrypt registered
Feb 13 19:51:12.379439 kernel: Key type fscrypt-provisioning registered
Feb 13 19:51:12.379446 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:51:12.379453 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:51:12.379460 kernel: ima: No architecture policies found
Feb 13 19:51:12.379468 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:51:12.379475 kernel: clk: Disabling unused clocks
Feb 13 19:51:12.379482 kernel: Freeing unused kernel memory: 38336K
Feb 13 19:51:12.379491 kernel: Run /init as init process
Feb 13 19:51:12.379499 kernel: with arguments:
Feb 13 19:51:12.379506 kernel: /init
Feb 13 19:51:12.379513 kernel: with environment:
Feb 13 19:51:12.379520 kernel: HOME=/
Feb 13 19:51:12.379527 kernel: TERM=linux
Feb 13 19:51:12.379534 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:51:12.379543 systemd[1]: Successfully made /usr/ read-only.
Feb 13 19:51:12.379556 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 19:51:12.379564 systemd[1]: Detected virtualization microsoft.
Feb 13 19:51:12.379572 systemd[1]: Detected architecture arm64.
Feb 13 19:51:12.379579 systemd[1]: Running in initrd.
Feb 13 19:51:12.379587 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:51:12.379618 systemd[1]: Hostname set to .
Feb 13 19:51:12.379626 systemd[1]: Initializing machine ID from random generator.
Feb 13 19:51:12.379634 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:51:12.379644 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:51:12.379652 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:51:12.379661 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:51:12.379669 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:51:12.379677 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:51:12.379686 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:51:12.379695 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:51:12.379705 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:51:12.379713 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:51:12.379720 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:51:12.379728 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:51:12.379736 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:51:12.379743 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:51:12.379751 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:51:12.379759 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:51:12.379769 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:51:12.379777 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:51:12.379785 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Feb 13 19:51:12.379793 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:51:12.379801 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:51:12.379809 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:51:12.379817 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:51:12.379831 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:51:12.379840 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:51:12.379850 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:51:12.379858 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:51:12.379866 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:51:12.379887 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:51:12.379921 systemd-journald[218]: Collecting audit messages is disabled.
Feb 13 19:51:12.379943 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:51:12.379952 systemd-journald[218]: Journal started
Feb 13 19:51:12.379971 systemd-journald[218]: Runtime Journal (/run/log/journal/51b8b2bcfa834439823192e1c74127a1) is 8M, max 78.5M, 70.5M free.
Feb 13 19:51:12.380614 systemd-modules-load[221]: Inserted module 'overlay'
Feb 13 19:51:12.414235 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:51:12.414291 kernel: Bridge firewalling registered
Feb 13 19:51:12.414303 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:51:12.414374 systemd-modules-load[221]: Inserted module 'br_netfilter'
Feb 13 19:51:12.431017 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:51:12.437061 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:51:12.445114 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:51:12.455219 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:51:12.468404 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:51:12.502886 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:51:12.510749 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:51:12.531771 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:51:12.556853 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:51:12.573370 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:51:12.589182 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:51:12.596010 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:51:12.619641 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:51:12.640891 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:51:12.650107 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:51:12.664795 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:51:12.689722 dracut-cmdline[252]: dracut-dracut-053
Feb 13 19:51:12.695698 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=f06bad36699a22ae88c1968cd72b62b3503d97da521712e50a4b744320b1ba33
Feb 13 19:51:12.732782 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:51:12.745946 systemd-resolved[253]: Positive Trust Anchors:
Feb 13 19:51:12.745957 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:51:12.745987 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:51:12.749407 systemd-resolved[253]: Defaulting to hostname 'linux'.
Feb 13 19:51:12.750286 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:51:12.763520 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:51:12.878740 kernel: SCSI subsystem initialized
Feb 13 19:51:12.892398 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:51:12.905619 kernel: iscsi: registered transport (tcp)
Feb 13 19:51:12.924026 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:51:12.924090 kernel: QLogic iSCSI HBA Driver
Feb 13 19:51:12.957910 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:51:12.972740 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:51:13.005612 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:51:13.005674 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:51:13.012070 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:51:13.061634 kernel: raid6: neonx8 gen() 15750 MB/s
Feb 13 19:51:13.081610 kernel: raid6: neonx4 gen() 15811 MB/s
Feb 13 19:51:13.101605 kernel: raid6: neonx2 gen() 13195 MB/s
Feb 13 19:51:13.122620 kernel: raid6: neonx1 gen() 10445 MB/s
Feb 13 19:51:13.142601 kernel: raid6: int64x8 gen() 6796 MB/s
Feb 13 19:51:13.162600 kernel: raid6: int64x4 gen() 7350 MB/s
Feb 13 19:51:13.183619 kernel: raid6: int64x2 gen() 6111 MB/s
Feb 13 19:51:13.207039 kernel: raid6: int64x1 gen() 5059 MB/s
Feb 13 19:51:13.207050 kernel: raid6: using algorithm neonx4 gen() 15811 MB/s
Feb 13 19:51:13.230919 kernel: raid6: .... xor() 12474 MB/s, rmw enabled
Feb 13 19:51:13.230983 kernel: raid6: using neon recovery algorithm
Feb 13 19:51:13.243193 kernel: xor: measuring software checksum speed
Feb 13 19:51:13.243212 kernel: 8regs : 21636 MB/sec
Feb 13 19:51:13.246703 kernel: 32regs : 21670 MB/sec
Feb 13 19:51:13.250331 kernel: arm64_neon : 27917 MB/sec
Feb 13 19:51:13.254508 kernel: xor: using function: arm64_neon (27917 MB/sec)
Feb 13 19:51:13.306624 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:51:13.318679 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:51:13.337782 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:51:13.363986 systemd-udevd[438]: Using default interface naming scheme 'v255'.
Feb 13 19:51:13.369623 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:51:13.395749 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:51:13.417411 dracut-pre-trigger[452]: rd.md=0: removing MD RAID activation
Feb 13 19:51:13.449640 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:51:13.468872 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:51:13.511619 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:51:13.535782 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:51:13.567335 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:51:13.585148 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:51:13.600375 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:51:13.617872 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:51:13.628295 kernel: hv_vmbus: Vmbus version:5.3
Feb 13 19:51:13.628325 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 13 19:51:13.634133 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Feb 13 19:51:13.651005 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:51:13.672057 kernel: hv_vmbus: registering driver hv_netvsc
Feb 13 19:51:13.693841 kernel: hv_vmbus: registering driver hv_storvsc
Feb 13 19:51:13.693895 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 13 19:51:13.687011 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:51:13.714191 kernel: scsi host1: storvsc_host_t
Feb 13 19:51:13.714428 kernel: hv_vmbus: registering driver hid_hyperv
Feb 13 19:51:13.714456 kernel: scsi host0: storvsc_host_t
Feb 13 19:51:13.723586 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Feb 13 19:51:13.716067 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:51:13.758576 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 13 19:51:13.758632 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Feb 13 19:51:13.758667 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Feb 13 19:51:13.716256 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:51:13.785258 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 13 19:51:13.792786 kernel: hv_netvsc 000d3af6-09c4-000d-3af6-09c4000d3af6 eth0: VF slot 1 added
Feb 13 19:51:13.767665 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:51:13.793117 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:51:13.793336 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:51:13.808676 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:51:13.846631 kernel: hv_vmbus: registering driver hv_pci
Feb 13 19:51:13.846679 kernel: PTP clock support registered
Feb 13 19:51:13.847074 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:51:13.603029 kernel: hv_utils: Registering HyperV Utility Driver
Feb 13 19:51:13.616177 kernel: hv_vmbus: registering driver hv_utils
Feb 13 19:51:13.616197 kernel: hv_utils: Shutdown IC version 3.2
Feb 13 19:51:13.616205 kernel: hv_utils: Heartbeat IC version 3.0
Feb 13 19:51:13.616216 kernel: hv_utils: TimeSync IC version 4.0
Feb 13 19:51:13.616223 kernel: hv_pci 34721e84-92f9-41c9-9232-32efc3246c57: PCI VMBus probing: Using version 0x10004
Feb 13 19:51:13.750203 systemd-journald[218]: Time jumped backwards, rotating.
Feb 13 19:51:13.752674 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 13 19:51:13.752846 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 19:51:13.752856 kernel: hv_pci 34721e84-92f9-41c9-9232-32efc3246c57: PCI host bridge to bus 92f9:00
Feb 13 19:51:13.752945 kernel: pci_bus 92f9:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Feb 13 19:51:13.753040 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 13 19:51:13.753142 kernel: pci_bus 92f9:00: No busn resource found for root bus, will use [bus 00-ff]
Feb 13 19:51:13.753227 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 13 19:51:13.753329 kernel: pci 92f9:00:02.0: [15b3:1018] type 00 class 0x020000
Feb 13 19:51:13.753444 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 13 19:51:13.753565 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 13 19:51:13.753679 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 13 19:51:13.753873 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 13 19:51:13.753973 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 19:51:13.753982 kernel: pci 92f9:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 13 19:51:13.754092 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 13 19:51:13.754185 kernel: pci 92f9:00:02.0: enabling Extended Tags
Feb 13 19:51:13.754284 kernel: pci 92f9:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 92f9:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Feb 13 19:51:13.754382 kernel: pci_bus 92f9:00: busn_res: [bus 00-ff] end is updated to 00
Feb 13 19:51:13.754468 kernel: pci 92f9:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 13 19:51:13.885602 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:51:13.885712 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:51:13.592107 systemd-resolved[253]: Clock change detected. Flushing caches.
Feb 13 19:51:13.615752 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Feb 13 19:51:13.629894 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:51:13.710954 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:51:13.836700 kernel: mlx5_core 92f9:00:02.0: enabling device (0000 -> 0002)
Feb 13 19:51:14.058197 kernel: mlx5_core 92f9:00:02.0: firmware version: 16.30.1284
Feb 13 19:51:14.058338 kernel: hv_netvsc 000d3af6-09c4-000d-3af6-09c4000d3af6 eth0: VF registering: eth1
Feb 13 19:51:14.058464 kernel: mlx5_core 92f9:00:02.0 eth1: joined to eth0
Feb 13 19:51:14.058568 kernel: mlx5_core 92f9:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Feb 13 19:51:13.761901 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:51:13.806258 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:51:14.082655 kernel: mlx5_core 92f9:00:02.0 enP37625s1: renamed from eth1
Feb 13 19:51:14.330362 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Feb 13 19:51:14.415673 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (486)
Feb 13 19:51:14.436973 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Feb 13 19:51:14.460768 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Feb 13 19:51:14.553940 kernel: BTRFS: device fsid 4fff035f-dd55-45d8-9bb7-2a61f21b22d5 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (495)
Feb 13 19:51:14.570403 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Feb 13 19:51:14.578821 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Feb 13 19:51:14.610873 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:51:14.638675 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 19:51:15.652135 disk-uuid[606]: The operation has completed successfully.
Feb 13 19:51:15.657388 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 19:51:15.714636 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:51:15.716805 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:51:15.767769 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:51:15.780843 sh[692]: Success
Feb 13 19:51:15.808685 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 19:51:16.057526 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:51:16.068763 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:51:16.081580 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:51:16.115965 kernel: BTRFS info (device dm-0): first mount of filesystem 4fff035f-dd55-45d8-9bb7-2a61f21b22d5
Feb 13 19:51:16.116020 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:51:16.124150 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:51:16.130073 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:51:16.134941 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:51:16.509011 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:51:16.514687 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:51:16.535873 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:51:16.544367 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:51:16.590479 kernel: BTRFS info (device sda6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:51:16.590542 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:51:16.595202 kernel: BTRFS info (device sda6): using free space tree
Feb 13 19:51:16.616431 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 19:51:16.633199 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:51:16.638714 kernel: BTRFS info (device sda6): last unmount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:51:16.646885 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:51:16.662881 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:51:16.671820 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:51:16.692600 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:51:16.733245 systemd-networkd[877]: lo: Link UP
Feb 13 19:51:16.733253 systemd-networkd[877]: lo: Gained carrier
Feb 13 19:51:16.734982 systemd-networkd[877]: Enumeration completed
Feb 13 19:51:16.737316 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:51:16.743867 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:51:16.743871 systemd-networkd[877]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:51:16.746777 systemd[1]: Reached target network.target - Network.
Feb 13 19:51:16.807643 kernel: mlx5_core 92f9:00:02.0 enP37625s1: Link up
Feb 13 19:51:16.853663 kernel: hv_netvsc 000d3af6-09c4-000d-3af6-09c4000d3af6 eth0: Data path switched to VF: enP37625s1
Feb 13 19:51:16.853865 systemd-networkd[877]: enP37625s1: Link UP
Feb 13 19:51:16.853935 systemd-networkd[877]: eth0: Link UP
Feb 13 19:51:16.854027 systemd-networkd[877]: eth0: Gained carrier
Feb 13 19:51:16.854035 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:51:16.878175 systemd-networkd[877]: enP37625s1: Gained carrier
Feb 13 19:51:16.901657 systemd-networkd[877]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16
Feb 13 19:51:17.955607 ignition[875]: Ignition 2.20.0
Feb 13 19:51:17.955635 ignition[875]: Stage: fetch-offline
Feb 13 19:51:17.960191 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:51:17.955674 ignition[875]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:51:17.955683 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 19:51:17.955779 ignition[875]: parsed url from cmdline: ""
Feb 13 19:51:17.955782 ignition[875]: no config URL provided
Feb 13 19:51:17.955787 ignition[875]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:51:17.955795 ignition[875]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:51:17.992908 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 19:51:17.955799 ignition[875]: failed to fetch config: resource requires networking
Feb 13 19:51:17.955992 ignition[875]: Ignition finished successfully
Feb 13 19:51:18.018676 ignition[888]: Ignition 2.20.0
Feb 13 19:51:18.018682 ignition[888]: Stage: fetch
Feb 13 19:51:18.019354 ignition[888]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:51:18.019366 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 19:51:18.019470 ignition[888]: parsed url from cmdline: ""
Feb 13 19:51:18.019473 ignition[888]: no config URL provided
Feb 13 19:51:18.019478 ignition[888]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:51:18.019485 ignition[888]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:51:18.019512 ignition[888]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Feb 13 19:51:18.115269 ignition[888]: GET result: OK
Feb 13 19:51:18.115295 ignition[888]: failed to retrieve userdata from IMDS, falling back to custom data: not a config (empty)
Feb 13 19:51:18.151009 ignition[888]: opening config device: "/dev/sr0"
Feb 13 19:51:18.152512 ignition[888]: getting drive status for "/dev/sr0"
Feb 13 19:51:18.152579 ignition[888]: drive status: OK
Feb 13 19:51:18.152708 ignition[888]: mounting config device
Feb 13 19:51:18.152722 ignition[888]: op(1): [started] mounting "/dev/sr0" at "/tmp/ignition-azure2550873840"
Feb 13 19:51:18.177637 kernel: UDF-fs: INFO Mounting volume 'UDF Volume', timestamp 2025/02/14 00:00 (1000)
Feb 13 19:51:18.177732 ignition[888]: op(1): [finished] mounting "/dev/sr0" at "/tmp/ignition-azure2550873840"
Feb 13 19:51:18.177742 ignition[888]: checking for config drive
Feb 13 19:51:18.186242 systemd[1]: tmp-ignition\x2dazure2550873840.mount: Deactivated successfully.
Feb 13 19:51:18.178234 ignition[888]: reading config
Feb 13 19:51:18.190394 unknown[888]: fetched base config from "system"
Feb 13 19:51:18.185820 ignition[888]: op(2): [started] unmounting "/dev/sr0" at "/tmp/ignition-azure2550873840"
Feb 13 19:51:18.190401 unknown[888]: fetched base config from "system"
Feb 13 19:51:18.185938 ignition[888]: op(2): [finished] unmounting "/dev/sr0" at "/tmp/ignition-azure2550873840"
Feb 13 19:51:18.190407 unknown[888]: fetched user config from "azure"
Feb 13 19:51:18.185955 ignition[888]: config has been read from custom data
Feb 13 19:51:18.195075 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 19:51:18.185995 ignition[888]: parsing config with SHA512: f0a1db3ace979099504f1493fe89339caca7dfe789460b4bac7726e46923f2ba673da78183dfd75290320aa792330e9b87a24681d15033bd6342754c2ae92e7d
Feb 13 19:51:18.219891 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:51:18.190782 ignition[888]: fetch: fetch complete
Feb 13 19:51:18.190787 ignition[888]: fetch: fetch passed
Feb 13 19:51:18.190827 ignition[888]: Ignition finished successfully
Feb 13 19:51:18.261698 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:51:18.254749 ignition[896]: Ignition 2.20.0
Feb 13 19:51:18.254757 ignition[896]: Stage: kargs
Feb 13 19:51:18.254992 ignition[896]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:51:18.255001 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 19:51:18.283904 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:51:18.256097 ignition[896]: kargs: kargs passed
Feb 13 19:51:18.256150 ignition[896]: Ignition finished successfully
Feb 13 19:51:18.306576 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:51:18.304090 ignition[903]: Ignition 2.20.0
Feb 13 19:51:18.315449 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:51:18.304096 ignition[903]: Stage: disks
Feb 13 19:51:18.325072 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:51:18.304293 ignition[903]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:51:18.338317 systemd-networkd[877]: eth0: Gained IPv6LL
Feb 13 19:51:18.304303 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 19:51:18.339274 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:51:18.305464 ignition[903]: disks: disks passed
Feb 13 19:51:18.350421 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:51:18.305524 ignition[903]: Ignition finished successfully
Feb 13 19:51:18.362189 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:51:18.381889 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:51:18.401735 systemd-networkd[877]: enP37625s1: Gained IPv6LL
Feb 13 19:51:18.493098 systemd-fsck[912]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Feb 13 19:51:18.504362 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:51:18.523870 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:51:18.585675 kernel: EXT4-fs (sda9): mounted filesystem 24882d04-b1a5-4a27-95f1-925956e69b18 r/w with ordered data mode. Quota mode: none.
Feb 13 19:51:18.586483 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:51:18.591521 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:51:18.631703 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:51:18.642420 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:51:18.652886 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Feb 13 19:51:18.659944 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:51:18.659984 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:51:18.667887 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:51:18.718728 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (923)
Feb 13 19:51:18.718793 kernel: BTRFS info (device sda6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:51:18.724962 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:51:18.725476 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:51:18.740846 kernel: BTRFS info (device sda6): using free space tree
Feb 13 19:51:18.747637 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 19:51:18.749066 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:51:19.682588 coreos-metadata[925]: Feb 13 19:51:19.682 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 13 19:51:19.691558 coreos-metadata[925]: Feb 13 19:51:19.691 INFO Fetch successful
Feb 13 19:51:19.691558 coreos-metadata[925]: Feb 13 19:51:19.691 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Feb 13 19:51:19.709543 coreos-metadata[925]: Feb 13 19:51:19.708 INFO Fetch successful
Feb 13 19:51:19.709543 coreos-metadata[925]: Feb 13 19:51:19.708 INFO wrote hostname ci-4230.0.1-a-4092b3335a to /sysroot/etc/hostname
Feb 13 19:51:19.710113 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 19:51:20.074800 initrd-setup-root[953]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:51:20.101284 initrd-setup-root[960]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:51:20.111381 initrd-setup-root[967]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:51:20.122375 initrd-setup-root[974]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:51:21.367549 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:51:21.385088 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:51:21.395855 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:51:21.415068 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:51:21.424720 kernel: BTRFS info (device sda6): last unmount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:51:21.445265 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:51:21.457276 ignition[1042]: INFO : Ignition 2.20.0
Feb 13 19:51:21.457276 ignition[1042]: INFO : Stage: mount
Feb 13 19:51:21.466309 ignition[1042]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:51:21.466309 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 19:51:21.466309 ignition[1042]: INFO : mount: mount passed
Feb 13 19:51:21.466309 ignition[1042]: INFO : Ignition finished successfully
Feb 13 19:51:21.463056 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:51:21.486809 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:51:21.507869 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:51:21.543707 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1054) Feb 13 19:51:21.562481 kernel: BTRFS info (device sda6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d Feb 13 19:51:21.562546 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:51:21.562558 kernel: BTRFS info (device sda6): using free space tree Feb 13 19:51:21.570632 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 19:51:21.572794 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 19:51:21.597174 ignition[1072]: INFO : Ignition 2.20.0 Feb 13 19:51:21.601767 ignition[1072]: INFO : Stage: files Feb 13 19:51:21.601767 ignition[1072]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:51:21.601767 ignition[1072]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 19:51:21.601767 ignition[1072]: DEBUG : files: compiled without relabeling support, skipping Feb 13 19:51:21.649869 ignition[1072]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 19:51:21.649869 ignition[1072]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 19:51:21.752573 ignition[1072]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 19:51:21.760779 ignition[1072]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 19:51:21.760779 ignition[1072]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 19:51:21.753068 unknown[1072]: wrote ssh authorized keys file for user: core Feb 13 19:51:21.782298 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Feb 13 19:51:21.782298 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Feb 13 19:51:21.862466 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 19:51:22.020975 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Feb 13 19:51:22.020975 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 19:51:22.042497 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Feb 13 19:51:22.513797 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 19:51:22.578833 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 19:51:22.588947 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 19:51:22.588947 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 19:51:22.588947 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:51:22.588947 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:51:22.588947 ignition[1072]: INFO : files: 
createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:51:22.588947 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:51:22.588947 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:51:22.588947 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:51:22.588947 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:51:22.588947 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:51:22.588947 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 19:51:22.588947 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 19:51:22.588947 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 19:51:22.588947 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 Feb 13 19:51:22.991768 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 19:51:23.360564 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 19:51:23.360564 ignition[1072]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Feb 13 19:51:23.396726 ignition[1072]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:51:23.409983 ignition[1072]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:51:23.409983 ignition[1072]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Feb 13 19:51:23.409983 ignition[1072]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Feb 13 19:51:23.409983 ignition[1072]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 19:51:23.409983 ignition[1072]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:51:23.409983 ignition[1072]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:51:23.409983 ignition[1072]: INFO : files: files passed Feb 13 19:51:23.409983 ignition[1072]: INFO : Ignition finished successfully Feb 13 19:51:23.408884 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 19:51:23.444380 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
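[Note on the Ignition "files" stage above: it is driven by a provisioning config delivered via the Azure platform, and the config itself is not reproduced in this log. A Butane sketch that would yield roughly these operations (the SSH key for "core", the helm download, the kubernetes sysext image plus its /etc/extensions symlink, and the enabled prepare-helm.service unit) could look like the following; the SSH key and the unit's ExecStart are placeholders, since the log only records that the files and the unit were written. The cilium archive and the yaml/install.sh files would follow the same storage.files pattern.]

    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-ed25519 AAAA...  # placeholder; the actual key is not in the log
    storage:
      files:
        - path: /opt/helm-v3.17.0-linux-arm64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz
        - path: /opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw
          contents:
            source: https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true
          contents: |
            [Unit]
            Description=Unpack helm to /opt/bin
            [Service]
            Type=oneshot
            # ExecStart is an assumption; only the unit name, its description,
            # and the enabled preset appear in the log.
            ExecStart=/usr/bin/tar -C /opt/bin -xzf /opt/helm-v3.17.0-linux-arm64.tar.gz
            [Install]
            WantedBy=multi-user.target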
Feb 13 19:51:23.458871 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 19:51:23.488157 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:51:23.553070 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:51:23.553070 initrd-setup-root-after-ignition[1098]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:51:23.488251 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 19:51:23.586328 initrd-setup-root-after-ignition[1102]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:51:23.514306 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:51:23.531552 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:51:23.557714 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:51:23.598627 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:51:23.600946 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:51:23.612742 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:51:23.625323 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:51:23.639451 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:51:23.655899 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:51:23.698692 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:51:23.715870 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:51:23.739021 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:51:23.753574 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:51:23.760836 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:51:23.766650 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:51:23.766837 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:51:23.785590 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:51:23.791862 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:51:23.803145 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:51:23.817562 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:51:23.829849 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:51:23.841137 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:51:23.853640 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:51:23.867531 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:51:23.879905 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:51:23.891179 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:51:23.902632 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:51:23.902810 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:51:23.920435 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Feb 13 19:51:23.932906 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:51:23.945162 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:51:23.945274 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:51:23.959221 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:51:23.959406 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:51:23.978549 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:51:23.978762 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:51:23.987023 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:51:23.987179 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:51:23.998450 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 19:51:23.998650 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 19:51:24.074379 ignition[1123]: INFO : Ignition 2.20.0 Feb 13 19:51:24.074379 ignition[1123]: INFO : Stage: umount Feb 13 19:51:24.074379 ignition[1123]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:51:24.074379 ignition[1123]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 19:51:24.074379 ignition[1123]: INFO : umount: umount passed Feb 13 19:51:24.074379 ignition[1123]: INFO : Ignition finished successfully Feb 13 19:51:24.047752 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:51:24.074792 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:51:24.085584 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:51:24.085793 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:51:24.098333 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:51:24.098503 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:51:24.121256 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:51:24.121367 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:51:24.132867 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:51:24.134283 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:51:24.143374 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:51:24.143483 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:51:24.153712 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:51:24.153784 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:51:24.164720 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 19:51:24.164786 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 19:51:24.184067 systemd[1]: Stopped target network.target - Network. Feb 13 19:51:24.195353 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:51:24.195445 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:51:24.208351 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:51:24.219257 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Feb 13 19:51:24.229646 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:51:24.237400 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:51:24.249972 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:51:24.261167 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:51:24.261224 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:51:24.272594 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:51:24.272640 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:51:24.284460 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:51:24.284524 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:51:24.296106 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:51:24.296163 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:51:24.308662 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:51:24.321442 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:51:24.334252 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:51:24.334852 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:51:24.334962 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:51:24.354736 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Feb 13 19:51:24.354991 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:51:24.355185 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:51:24.363506 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:51:24.363603 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:51:24.607535 kernel: hv_netvsc 000d3af6-09c4-000d-3af6-09c4000d3af6 eth0: Data path switched from VF: enP37625s1 Feb 13 19:51:24.385347 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Feb 13 19:51:24.386627 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:51:24.386685 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:51:24.403494 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:51:24.403571 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:51:24.434806 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:51:24.444641 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:51:24.444723 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:51:24.458038 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:51:24.458098 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:51:24.475509 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:51:24.475565 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:51:24.482314 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:51:24.482377 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:51:24.501930 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Feb 13 19:51:24.513674 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 19:51:24.513749 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Feb 13 19:51:24.528081 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:51:24.528242 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:51:24.543773 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:51:24.543841 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:51:24.550679 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:51:24.550723 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:51:24.564276 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:51:24.564334 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:51:24.590060 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:51:24.590143 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:51:24.607577 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:51:24.607677 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:51:24.646861 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:51:24.662920 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:51:24.663003 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:51:24.684116 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 19:51:24.684181 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:51:24.691556 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:51:24.691607 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:51:24.703825 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:51:24.703888 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:51:24.723685 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 13 19:51:24.943465 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). Feb 13 19:51:24.723755 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Feb 13 19:51:24.724116 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:51:24.724202 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:51:24.735091 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:51:24.735183 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:51:24.748549 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:51:24.785879 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:51:24.805554 systemd[1]: Switching root. 
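[Note: everything from the "Stopped target ..." entries through "Switching root." is the standard initrd hand-off: units are torn down in reverse order, the udev database is cleaned up, and PID 1 finally isolates initrd-switch-root.target. The unit performing the pivot is essentially the stock systemd one, roughly as follows (paraphrased from upstream systemd, not read from this image):]

    [Unit]
    Description=Switch Root
    DefaultDependencies=no
    AllowIsolate=yes

    [Service]
    Type=oneshot
    # Moves the mount tree at /sysroot over / and re-executes systemd there.
    # journald receives SIGTERM during the transition, which is the
    # "Received SIGTERM from PID 1 (systemd)" entry above, and restarts
    # inside the new root.
    ExecStart=systemctl --no-block switch-root /sysroot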
Feb 13 19:51:24.994243 systemd-journald[218]: Journal stopped Feb 13 19:51:31.184040 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:51:31.184067 kernel: SELinux: policy capability open_perms=1 Feb 13 19:51:31.184078 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:51:31.184086 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:51:31.184099 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:51:31.184106 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:51:31.184115 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:51:31.184123 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:51:31.184131 kernel: audit: type=1403 audit(1739476286.483:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:51:31.184141 systemd[1]: Successfully loaded SELinux policy in 310.816ms. Feb 13 19:51:31.184153 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.264ms. Feb 13 19:51:31.184163 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 19:51:31.184171 systemd[1]: Detected virtualization microsoft. Feb 13 19:51:31.184180 systemd[1]: Detected architecture arm64. Feb 13 19:51:31.184190 systemd[1]: Detected first boot. Feb 13 19:51:31.184201 systemd[1]: Hostname set to <ci-4230.0.1-a-4092b3335a>. Feb 13 19:51:31.184210 systemd[1]: Initializing machine ID from random generator. Feb 13 19:51:31.184219 zram_generator::config[1167]: No configuration found. Feb 13 19:51:31.184228 kernel: NET: Registered PF_VSOCK protocol family Feb 13 19:51:31.184237 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:51:31.184247 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Feb 13 19:51:31.184256 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:51:31.184267 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:51:31.184276 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:51:31.184285 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:51:31.184295 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:51:31.184304 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:51:31.184313 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:51:31.184322 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:51:31.184333 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:51:31.184342 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:51:31.184351 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:51:31.184360 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:51:31.184370 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:51:31.184379 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Feb 13 19:51:31.184388 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:51:31.184397 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:51:31.184408 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:51:31.184417 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 19:51:31.184426 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:51:31.184437 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:51:31.184447 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:51:31.184456 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:51:31.184466 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:51:31.184475 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:51:31.184487 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:51:31.184498 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:51:31.184507 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:51:31.184516 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:51:31.184541 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:51:31.184550 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Feb 13 19:51:31.184562 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:51:31.184572 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:51:31.184581 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:51:31.184590 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:51:31.184600 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:51:31.184610 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:51:31.184629 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:51:31.184641 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:51:31.184650 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:51:31.184660 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:51:31.184669 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:51:31.184679 systemd[1]: Reached target machines.target - Containers. Feb 13 19:51:31.184688 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:51:31.184698 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:51:31.184707 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:51:31.184718 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:51:31.184730 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:51:31.184739 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Feb 13 19:51:31.184748 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:51:31.184758 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:51:31.184767 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:51:31.184777 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:51:31.184786 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:51:31.184797 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:51:31.184807 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:51:31.184816 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:51:31.184825 kernel: loop: module loaded Feb 13 19:51:31.184834 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:51:31.184843 kernel: fuse: init (API version 7.39) Feb 13 19:51:31.184852 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:51:31.184862 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:51:31.184871 kernel: ACPI: bus type drm_connector registered Feb 13 19:51:31.184881 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:51:31.184891 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:51:31.184922 systemd-journald[1264]: Collecting audit messages is disabled. Feb 13 19:51:31.184944 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Feb 13 19:51:31.184956 systemd-journald[1264]: Journal started Feb 13 19:51:31.184978 systemd-journald[1264]: Runtime Journal (/run/log/journal/c3f176746e6c41d6a47db69a5ed2c6ed) is 8M, max 78.5M, 70.5M free. Feb 13 19:51:30.108861 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:51:30.113436 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Feb 13 19:51:30.113847 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:51:30.114209 systemd[1]: systemd-journald.service: Consumed 3.529s CPU time. Feb 13 19:51:31.213425 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:51:31.225391 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:51:31.225449 systemd[1]: Stopped verity-setup.service. Feb 13 19:51:31.245094 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:51:31.245754 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:51:31.252852 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:51:31.259540 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:51:31.265219 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:51:31.271344 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:51:31.278136 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:51:31.285783 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:51:31.293272 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Feb 13 19:51:31.301245 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:51:31.301416 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:51:31.308932 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:51:31.310652 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:51:31.319270 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:51:31.319438 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:51:31.326856 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:51:31.327027 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:51:31.334601 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:51:31.334794 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:51:31.341552 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:51:31.341726 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:51:31.348896 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:51:31.355939 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:51:31.363737 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:51:31.371961 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Feb 13 19:51:31.388833 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:51:31.401699 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:51:31.414720 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:51:31.422929 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:51:31.429816 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:51:31.429967 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:51:31.437387 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Feb 13 19:51:31.448772 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:51:31.456962 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:51:31.463142 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:51:31.464343 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:51:31.472146 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:51:31.479289 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:51:31.480814 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:51:31.487261 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:51:31.488828 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:51:31.504812 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
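[Note: the modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop start-and-finish pairs above are all instances of a single template unit; systemd expands the instance name into the modprobe invocation. Roughly as follows (paraphrased from the upstream modprobe@.service, not read from this image):]

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    # %I is the unescaped instance name, e.g. "loop" for modprobe@loop.service;
    # the leading "-" tells systemd to ignore a non-zero exit status.
    ExecStart=-/sbin/modprobe -abq %I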
Feb 13 19:51:31.514905 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:51:31.530453 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:51:31.541713 systemd-journald[1264]: Time spent on flushing to /var/log/journal/c3f176746e6c41d6a47db69a5ed2c6ed is 49.671ms for 931 entries. Feb 13 19:51:31.541713 systemd-journald[1264]: System Journal (/var/log/journal/c3f176746e6c41d6a47db69a5ed2c6ed) is 11.8M, max 2.6G, 2.6G free. Feb 13 19:51:31.686166 systemd-journald[1264]: Received client request to flush runtime journal. Feb 13 19:51:31.686262 kernel: loop0: detected capacity change from 0 to 113512 Feb 13 19:51:31.686306 systemd-journald[1264]: /var/log/journal/c3f176746e6c41d6a47db69a5ed2c6ed/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Feb 13 19:51:31.686354 systemd-journald[1264]: Rotating system journal. Feb 13 19:51:31.543820 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:51:31.562124 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:51:31.570161 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:51:31.592323 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:51:31.614707 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:51:31.638058 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Feb 13 19:51:31.645803 udevadm[1310]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 19:51:31.667173 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:51:31.688954 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:51:31.706145 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:51:31.706840 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Feb 13 19:51:31.715366 systemd-tmpfiles[1309]: ACLs are not supported, ignoring. Feb 13 19:51:31.715386 systemd-tmpfiles[1309]: ACLs are not supported, ignoring. Feb 13 19:51:31.721118 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:51:31.734776 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:51:32.167807 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:51:32.185959 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:51:32.203571 systemd-tmpfiles[1327]: ACLs are not supported, ignoring. Feb 13 19:51:32.203592 systemd-tmpfiles[1327]: ACLs are not supported, ignoring. Feb 13 19:51:32.207727 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:51:32.335642 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:51:32.413911 kernel: loop1: detected capacity change from 0 to 28720 Feb 13 19:51:33.160235 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:51:33.172780 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:51:33.197340 systemd-udevd[1333]: Using default interface naming scheme 'v255'. 
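[Note: the journal sizing reported above (runtime journal capped at 78.5M, system journal at 2.6G) comes from journald's defaults, which are computed as a fraction of the backing filesystem. The caps can be pinned explicitly in journald.conf instead; an illustrative snippet with assumed values, not this machine's configuration:]

    [Journal]
    Storage=persistent
    SystemMaxUse=2G
    RuntimeMaxUse=64M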
Feb 13 19:51:33.294656 kernel: loop2: detected capacity change from 0 to 201592 Feb 13 19:51:33.364642 kernel: loop3: detected capacity change from 0 to 123192 Feb 13 19:51:33.678191 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:51:33.699095 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:51:33.757885 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:51:33.766136 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 19:51:33.823665 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 19:51:33.867106 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:51:33.892983 kernel: hv_vmbus: registering driver hv_balloon Feb 13 19:51:33.893081 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 13 19:51:33.897228 kernel: hv_balloon: Memory hot add disabled on ARM64 Feb 13 19:51:33.931655 kernel: hv_vmbus: registering driver hyperv_fb Feb 13 19:51:33.943938 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 13 19:51:33.944064 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 13 19:51:33.954722 kernel: Console: switching to colour dummy device 80x25 Feb 13 19:51:33.954807 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 19:51:33.977025 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:51:34.005693 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1344) Feb 13 19:51:34.025990 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:51:34.028760 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:51:34.042547 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Feb 13 19:51:34.045343 systemd-networkd[1346]: lo: Link UP Feb 13 19:51:34.046753 systemd-networkd[1346]: lo: Gained carrier Feb 13 19:51:34.051325 systemd-networkd[1346]: Enumeration completed Feb 13 19:51:34.052398 systemd-networkd[1346]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:51:34.054048 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:51:34.055267 systemd-networkd[1346]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:51:34.081736 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Feb 13 19:51:34.107822 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:51:34.116512 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:51:34.137761 kernel: mlx5_core 92f9:00:02.0 enP37625s1: Link up Feb 13 19:51:34.145686 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Feb 13 19:51:34.166843 kernel: hv_netvsc 000d3af6-09c4-000d-3af6-09c4000d3af6 eth0: Data path switched to VF: enP37625s1 Feb 13 19:51:34.166530 systemd-networkd[1346]: enP37625s1: Link UP Feb 13 19:51:34.166652 systemd-networkd[1346]: eth0: Link UP Feb 13 19:51:34.166655 systemd-networkd[1346]: eth0: Gained carrier Feb 13 19:51:34.166671 systemd-networkd[1346]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
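[Note: eth0 and enP37625s1 matched /usr/lib/systemd/network/zz-default.network, Flatcar's catch-all network policy; the "based on potentially unpredictable interface name" remark is networkd warning that the match is keyed to a kernel-assigned name such as eth0 rather than a predictable one. In essence the file amounts to a wildcard DHCP configuration, sketched here rather than copied from the image:]

    [Match]
    Name=*

    [Network]
    DHCP=yes

[The enP37625s1 entries are the Mellanox (mlx5) virtual function used for Azure accelerated networking; hv_netvsc pairs it with the synthetic eth0, which is why the kernel reports the data path switching to VF: enP37625s1.]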
Feb 13 19:51:34.169930 systemd-networkd[1346]: enP37625s1: Gained carrier Feb 13 19:51:34.172843 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:51:34.186806 systemd-networkd[1346]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 13 19:51:34.189215 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Feb 13 19:51:34.226044 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:51:34.237719 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:51:34.253791 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:51:34.275658 kernel: loop4: detected capacity change from 0 to 113512 Feb 13 19:51:34.287646 kernel: loop5: detected capacity change from 0 to 28720 Feb 13 19:51:34.296636 kernel: loop6: detected capacity change from 0 to 201592 Feb 13 19:51:34.315651 kernel: loop7: detected capacity change from 0 to 123192 Feb 13 19:51:34.321104 (sd-merge)[1459]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Feb 13 19:51:34.321576 (sd-merge)[1459]: Merged extensions into '/usr'. Feb 13 19:51:34.325860 systemd[1]: Reload requested from client PID 1307 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:51:34.325876 systemd[1]: Reloading... Feb 13 19:51:34.345280 lvm[1458]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:51:34.386645 zram_generator::config[1487]: No configuration found. Feb 13 19:51:34.547465 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:51:34.643228 systemd[1]: Reloading finished in 316 ms. Feb 13 19:51:34.660641 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:51:34.668548 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:51:34.676371 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:51:34.687947 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:51:34.698898 systemd[1]: Starting ensure-sysext.service... Feb 13 19:51:34.704851 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:51:34.716850 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:51:34.717481 lvm[1551]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:51:34.739326 systemd[1]: Reload requested from client PID 1550 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:51:34.739350 systemd[1]: Reloading... Feb 13 19:51:34.747313 systemd-tmpfiles[1552]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:51:34.748098 systemd-tmpfiles[1552]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:51:34.749036 systemd-tmpfiles[1552]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:51:34.749415 systemd-tmpfiles[1552]: ACLs are not supported, ignoring. 
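[Note: the (sd-merge) entries above record systemd-sysext overlaying the four extension images onto /usr. The symlinks under /etc/extensions (such as kubernetes.raw, written by Ignition earlier) point at raw images, and an image is only accepted if it ships an extension-release file matching the host OS. The layout inside an image such as kubernetes-v1.32.0-arm64.raw is roughly as follows (assumed for illustration, not read from the actual image):]

    usr/
      bin/kubelet                    (plus the other Kubernetes binaries)
      lib/extension-release.d/extension-release.kubernetes
          containing e.g.   ID=flatcar
                            SYSEXT_LEVEL=1.0

[After the merge, "systemd-sysext status" would list the active extensions, and the "Reload requested from client PID 1307 ('systemd-sysext')" entry is the daemon reload that makes the units shipped inside the images visible to PID 1.]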
Feb 13 19:51:34.749540 systemd-tmpfiles[1552]: ACLs are not supported, ignoring. Feb 13 19:51:34.774746 systemd-tmpfiles[1552]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:51:34.774874 systemd-tmpfiles[1552]: Skipping /boot Feb 13 19:51:34.786089 systemd-tmpfiles[1552]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:51:34.786384 systemd-tmpfiles[1552]: Skipping /boot Feb 13 19:51:34.827702 zram_generator::config[1585]: No configuration found. Feb 13 19:51:34.931782 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:51:35.029916 systemd[1]: Reloading finished in 290 ms. Feb 13 19:51:35.059714 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:51:35.071718 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:51:35.089858 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:51:35.111890 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:51:35.123491 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:51:35.144942 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:51:35.153299 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:51:35.163575 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:51:35.166918 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:51:35.176985 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:51:35.187278 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:51:35.194489 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:51:35.194647 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:51:35.197749 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:51:35.197945 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:51:35.206078 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:51:35.206823 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:51:35.233445 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:51:35.233663 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:51:35.241828 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:51:35.251566 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:51:35.260671 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:51:35.276978 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
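[Note: the "Duplicate line for path" warnings above mean that two tmpfiles.d fragments declare the same path and the later occurrence is ignored, and "Skipping /boot" records that tmpfiles leaves the autofs boot mount alone; both are informational. tmpfiles.d lines follow the format "type path mode user group age argument", so the /var/log/journal entry behind one of those warnings looks roughly like this (illustrative, not copied from the image):]

    d /var/log/journal 2755 root systemd-journal - -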
Feb 13 19:51:35.284662 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:51:35.284949 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:51:35.285099 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:51:35.291491 augenrules[1678]: No rules Feb 13 19:51:35.294729 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:51:35.295762 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:51:35.302861 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:51:35.312100 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:51:35.312381 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:51:35.320115 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:51:35.320279 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:51:35.327574 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:51:35.335987 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:51:35.336167 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:51:35.347608 systemd[1]: Finished ensure-sysext.service. Feb 13 19:51:35.359052 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:51:35.359132 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:51:35.425757 systemd-networkd[1346]: enP37625s1: Gained IPv6LL Feb 13 19:51:35.474560 systemd-resolved[1659]: Positive Trust Anchors: Feb 13 19:51:35.474585 systemd-resolved[1659]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:51:35.474641 systemd-resolved[1659]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:51:35.489744 systemd-networkd[1346]: eth0: Gained IPv6LL Feb 13 19:51:35.494592 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:51:35.504667 systemd-resolved[1659]: Using system hostname 'ci-4230.0.1-a-4092b3335a'. Feb 13 19:51:35.506383 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:51:35.513088 systemd[1]: Reached target network.target - Network. Feb 13 19:51:35.518338 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:51:35.525199 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:51:36.043870 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
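[Note: the trust-anchor dump above is systemd-resolved loading its built-in DNSSEC anchors. The positive anchor is the DS record of the root zone's 2017 KSK (key tag 20326), and the negative anchors exempt private and reserved zones (10.in-addr.arpa, home.arpa, and so on) from validation. Whether validation is actually enforced depends on the DNSSEC= setting in resolved.conf; an illustrative snippet, not this machine's configuration:]

    [Resolve]
    # Accepted values are yes, allow-downgrade and no; allow-downgrade validates
    # when the upstream server supports DNSSEC and falls back otherwise.
    DNSSEC=allow-downgrade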
Feb 13 19:51:36.052138 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:51:39.436046 ldconfig[1302]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:51:39.729678 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:51:39.741765 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:51:39.756661 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:51:39.763468 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:51:39.770049 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:51:39.776868 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:51:39.784117 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:51:39.790221 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:51:39.797086 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:51:39.804205 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:51:39.804241 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:51:39.809156 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:51:39.814948 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:51:39.822918 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:51:39.830341 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Feb 13 19:51:39.837707 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Feb 13 19:51:39.844728 systemd[1]: Reached target ssh-access.target - SSH Access Available. Feb 13 19:51:39.853161 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:51:39.860314 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Feb 13 19:51:39.869230 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:51:39.875281 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:51:39.881400 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:51:39.886822 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:51:39.886852 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:51:39.895762 systemd[1]: Starting chronyd.service - NTP client/server... Feb 13 19:51:39.904788 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:51:39.916811 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 19:51:39.927845 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:51:39.935034 (chronyd)[1697]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Feb 13 19:51:39.943829 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Feb 13 19:51:39.951812 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:51:39.953778 jq[1704]: false Feb 13 19:51:39.960541 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:51:39.960587 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Feb 13 19:51:39.963851 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Feb 13 19:51:39.970588 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Feb 13 19:51:39.971917 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:51:39.981159 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:51:39.987675 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:51:39.997806 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:51:40.004839 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:51:40.013813 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:51:40.023540 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:51:40.030693 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:51:40.031458 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:51:40.033828 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:51:40.057755 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:51:40.066857 jq[1720]: true Feb 13 19:51:40.067774 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:51:40.068123 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:51:40.078032 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:51:40.081548 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:51:40.100933 KVP[1706]: KVP starting; pid is:1706 Feb 13 19:51:40.108648 jq[1724]: true Feb 13 19:51:40.170719 chronyd[1749]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Feb 13 19:51:40.174033 KVP[1706]: KVP LIC Version: 3.1 Feb 13 19:51:40.175135 kernel: hv_utils: KVP IC version 4.0 Feb 13 19:51:40.217042 (ntainerd)[1753]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:51:40.217117 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:51:40.217787 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Feb 13 19:51:40.534457 tar[1723]: linux-arm64/LICENSE Feb 13 19:51:40.534457 tar[1723]: linux-arm64/helm Feb 13 19:51:40.562021 update_engine[1719]: I20250213 19:51:40.561288 1719 main.cc:92] Flatcar Update Engine starting Feb 13 19:51:40.579010 extend-filesystems[1705]: Found loop4 Feb 13 19:51:40.579010 extend-filesystems[1705]: Found loop5 Feb 13 19:51:40.579010 extend-filesystems[1705]: Found loop6 Feb 13 19:51:40.579010 extend-filesystems[1705]: Found loop7 Feb 13 19:51:40.579010 extend-filesystems[1705]: Found sda Feb 13 19:51:40.579010 extend-filesystems[1705]: Found sda1 Feb 13 19:51:40.579010 extend-filesystems[1705]: Found sda2 Feb 13 19:51:40.579010 extend-filesystems[1705]: Found sda3 Feb 13 19:51:40.579010 extend-filesystems[1705]: Found usr Feb 13 19:51:40.579010 extend-filesystems[1705]: Found sda4 Feb 13 19:51:40.666590 extend-filesystems[1705]: Found sda6 Feb 13 19:51:40.666590 extend-filesystems[1705]: Found sda7 Feb 13 19:51:40.666590 extend-filesystems[1705]: Found sda9 Feb 13 19:51:40.666590 extend-filesystems[1705]: Checking size of /dev/sda9 Feb 13 19:51:40.586970 systemd-logind[1717]: New seat seat0. Feb 13 19:51:40.595282 systemd-logind[1717]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 19:51:40.598709 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:51:40.632042 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:51:40.709982 chronyd[1749]: Timezone right/UTC failed leap second check, ignoring Feb 13 19:51:40.710233 chronyd[1749]: Loaded seccomp filter (level 2) Feb 13 19:51:40.712921 systemd[1]: Started chronyd.service - NTP client/server. Feb 13 19:51:41.040523 extend-filesystems[1705]: Old size kept for /dev/sda9 Feb 13 19:51:41.040523 extend-filesystems[1705]: Found sr0 Feb 13 19:51:41.057137 dbus-daemon[1700]: [system] SELinux support is enabled Feb 13 19:51:41.041410 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:51:41.041650 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:51:41.075084 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:51:41.081582 update_engine[1719]: I20250213 19:51:41.081414 1719 update_check_scheduler.cc:74] Next update check in 7m0s Feb 13 19:51:41.089042 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:51:41.089074 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:51:41.098564 dbus-daemon[1700]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 19:51:41.101978 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:51:41.102004 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:51:41.110091 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:51:41.124931 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:51:41.240651 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1772) Feb 13 19:51:41.372386 tar[1723]: linux-arm64/README.md Feb 13 19:51:41.385475 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
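[Note: update_engine ("Next update check in 7m0s") and locksmithd, started above, both take their policy from /etc/flatcar/update.conf, the file Ignition wrote during the files stage. Its contents on this host are not shown in the log; the documented key/value form is, for example:]

    GROUP=stable
    REBOOT_STRATEGY=reboot

[A REBOOT_STRATEGY of "reboot" would match the strategy locksmithd reports when it starts just below.]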
Feb 13 19:51:41.395775 (kubelet)[1837]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:51:41.529017 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:51:41.622065 coreos-metadata[1699]: Feb 13 19:51:41.621 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 13 19:51:41.626215 coreos-metadata[1699]: Feb 13 19:51:41.625 INFO Fetch successful Feb 13 19:51:41.626215 coreos-metadata[1699]: Feb 13 19:51:41.625 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Feb 13 19:51:41.630275 coreos-metadata[1699]: Feb 13 19:51:41.629 INFO Fetch successful Feb 13 19:51:41.630412 coreos-metadata[1699]: Feb 13 19:51:41.630 INFO Fetching http://168.63.129.16/machine/e0c8ba0d-4b9d-43c3-8f80-b7a088f98cca/a899c839%2D1e1f%2D478f%2Daef8%2Dd422e47f97bc.%5Fci%2D4230.0.1%2Da%2D4092b3335a?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Feb 13 19:51:41.632973 coreos-metadata[1699]: Feb 13 19:51:41.632 INFO Fetch successful Feb 13 19:51:41.633355 coreos-metadata[1699]: Feb 13 19:51:41.633 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Feb 13 19:51:41.644827 coreos-metadata[1699]: Feb 13 19:51:41.644 INFO Fetch successful Feb 13 19:51:41.676597 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 19:51:41.685077 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:51:41.934458 kubelet[1837]: E0213 19:51:41.907832 1837 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:51:41.909431 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:51:41.909562 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:51:41.910062 systemd[1]: kubelet.service: Consumed 705ms CPU time, 250.1M memory peak. Feb 13 19:51:42.128943 sshd_keygen[1761]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:51:42.147781 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:51:42.159867 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:51:42.166851 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Feb 13 19:51:42.172997 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:51:42.174670 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:51:42.188932 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:51:42.196789 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Feb 13 19:51:42.640310 locksmithd[1789]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:51:43.085894 containerd[1753]: time="2025-02-13T19:51:43.026064220Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:51:43.085894 containerd[1753]: time="2025-02-13T19:51:43.049977900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 19:51:43.085894 containerd[1753]: time="2025-02-13T19:51:43.051354860Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:51:43.085894 containerd[1753]: time="2025-02-13T19:51:43.051383860Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:51:43.085894 containerd[1753]: time="2025-02-13T19:51:43.051400340Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:51:43.085894 containerd[1753]: time="2025-02-13T19:51:43.051562260Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:51:43.085894 containerd[1753]: time="2025-02-13T19:51:43.051577940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:51:43.085894 containerd[1753]: time="2025-02-13T19:51:43.051667260Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:51:43.085894 containerd[1753]: time="2025-02-13T19:51:43.051681260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:51:43.085894 containerd[1753]: time="2025-02-13T19:51:43.051878020Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:51:43.085894 containerd[1753]: time="2025-02-13T19:51:43.051892660Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:51:42.644250 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:51:43.086375 containerd[1753]: time="2025-02-13T19:51:43.051905780Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:51:43.086375 containerd[1753]: time="2025-02-13T19:51:43.051914460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:51:43.086375 containerd[1753]: time="2025-02-13T19:51:43.051991740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:51:43.086375 containerd[1753]: time="2025-02-13T19:51:43.052185180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:51:43.086375 containerd[1753]: time="2025-02-13T19:51:43.052302580Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:51:43.086375 containerd[1753]: time="2025-02-13T19:51:43.052315060Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Feb 13 19:51:43.086375 containerd[1753]: time="2025-02-13T19:51:43.052381500Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:51:43.086375 containerd[1753]: time="2025-02-13T19:51:43.052433620Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:51:42.658268 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:51:42.665375 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 19:51:42.673231 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:51:43.094220 bash[1746]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:51:43.095780 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:51:43.105129 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 19:51:43.117546 containerd[1753]: time="2025-02-13T19:51:43.117489980Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:51:43.117686 containerd[1753]: time="2025-02-13T19:51:43.117577180Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:51:43.117686 containerd[1753]: time="2025-02-13T19:51:43.117596620Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:51:43.117686 containerd[1753]: time="2025-02-13T19:51:43.117639300Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:51:43.117686 containerd[1753]: time="2025-02-13T19:51:43.117656980Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:51:43.117886 containerd[1753]: time="2025-02-13T19:51:43.117861740Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:51:43.118146 containerd[1753]: time="2025-02-13T19:51:43.118125860Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:51:43.118255 containerd[1753]: time="2025-02-13T19:51:43.118231900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:51:43.118284 containerd[1753]: time="2025-02-13T19:51:43.118255540Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:51:43.118284 containerd[1753]: time="2025-02-13T19:51:43.118271140Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:51:43.118321 containerd[1753]: time="2025-02-13T19:51:43.118284380Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:51:43.118321 containerd[1753]: time="2025-02-13T19:51:43.118297780Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:51:43.118321 containerd[1753]: time="2025-02-13T19:51:43.118309860Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:51:43.118376 containerd[1753]: time="2025-02-13T19:51:43.118322780Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Feb 13 19:51:43.118376 containerd[1753]: time="2025-02-13T19:51:43.118337380Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:51:43.118376 containerd[1753]: time="2025-02-13T19:51:43.118352620Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:51:43.118376 containerd[1753]: time="2025-02-13T19:51:43.118366260Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:51:43.118441 containerd[1753]: time="2025-02-13T19:51:43.118377500Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:51:43.118441 containerd[1753]: time="2025-02-13T19:51:43.118402060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:51:43.118441 containerd[1753]: time="2025-02-13T19:51:43.118415980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:51:43.118441 containerd[1753]: time="2025-02-13T19:51:43.118428660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:51:43.118441 containerd[1753]: time="2025-02-13T19:51:43.118441580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:51:43.118531 containerd[1753]: time="2025-02-13T19:51:43.118453740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:51:43.118531 containerd[1753]: time="2025-02-13T19:51:43.118466740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:51:43.118531 containerd[1753]: time="2025-02-13T19:51:43.118477860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:51:43.118531 containerd[1753]: time="2025-02-13T19:51:43.118490420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:51:43.118531 containerd[1753]: time="2025-02-13T19:51:43.118502820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:51:43.118531 containerd[1753]: time="2025-02-13T19:51:43.118516180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:51:43.118531 containerd[1753]: time="2025-02-13T19:51:43.118528020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:51:43.118662 containerd[1753]: time="2025-02-13T19:51:43.118540100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:51:43.118662 containerd[1753]: time="2025-02-13T19:51:43.118551900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:51:43.118662 containerd[1753]: time="2025-02-13T19:51:43.118565980Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:51:43.118662 containerd[1753]: time="2025-02-13T19:51:43.118587060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Feb 13 19:51:43.118662 containerd[1753]: time="2025-02-13T19:51:43.118600020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:51:43.118662 containerd[1753]: time="2025-02-13T19:51:43.118635660Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:51:43.118778 containerd[1753]: time="2025-02-13T19:51:43.118691540Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:51:43.118778 containerd[1753]: time="2025-02-13T19:51:43.118710660Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:51:43.118778 containerd[1753]: time="2025-02-13T19:51:43.118721300Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:51:43.118778 containerd[1753]: time="2025-02-13T19:51:43.118733220Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:51:43.118778 containerd[1753]: time="2025-02-13T19:51:43.118741980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:51:43.118778 containerd[1753]: time="2025-02-13T19:51:43.118753780Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:51:43.118778 containerd[1753]: time="2025-02-13T19:51:43.118763260Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:51:43.118778 containerd[1753]: time="2025-02-13T19:51:43.118774340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 19:51:43.119101 containerd[1753]: time="2025-02-13T19:51:43.119049620Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:51:43.119216 containerd[1753]: time="2025-02-13T19:51:43.119102140Z" level=info msg="Connect containerd service" Feb 13 19:51:43.119216 containerd[1753]: time="2025-02-13T19:51:43.119135340Z" level=info msg="using legacy CRI server" Feb 13 19:51:43.119216 containerd[1753]: time="2025-02-13T19:51:43.119141860Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:51:43.119282 containerd[1753]: time="2025-02-13T19:51:43.119246140Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:51:43.119963 containerd[1753]: time="2025-02-13T19:51:43.119925740Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:51:43.120090 
containerd[1753]: time="2025-02-13T19:51:43.120063620Z" level=info msg="Start subscribing containerd event" Feb 13 19:51:43.120123 containerd[1753]: time="2025-02-13T19:51:43.120102060Z" level=info msg="Start recovering state" Feb 13 19:51:43.120377 containerd[1753]: time="2025-02-13T19:51:43.120162140Z" level=info msg="Start event monitor" Feb 13 19:51:43.120377 containerd[1753]: time="2025-02-13T19:51:43.120182820Z" level=info msg="Start snapshots syncer" Feb 13 19:51:43.120377 containerd[1753]: time="2025-02-13T19:51:43.120195580Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:51:43.120377 containerd[1753]: time="2025-02-13T19:51:43.120203660Z" level=info msg="Start streaming server" Feb 13 19:51:43.120487 containerd[1753]: time="2025-02-13T19:51:43.120396660Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:51:43.120487 containerd[1753]: time="2025-02-13T19:51:43.120443500Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:51:43.120523 containerd[1753]: time="2025-02-13T19:51:43.120497020Z" level=info msg="containerd successfully booted in 0.188478s" Feb 13 19:51:43.120670 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:51:43.128056 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:51:43.136182 systemd[1]: Startup finished in 727ms (kernel) + 14.691s (initrd) + 16.962s (userspace) = 32.381s. Feb 13 19:51:44.543849 login[1885]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Feb 13 19:51:44.545050 login[1884]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:44.556013 systemd-logind[1717]: New session 2 of user core. Feb 13 19:51:44.556416 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:51:44.561834 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:51:44.573378 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:51:44.581943 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:51:44.584853 (systemd)[1896]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:51:44.587261 systemd-logind[1717]: New session c1 of user core. Feb 13 19:51:44.808009 systemd[1896]: Queued start job for default target default.target. Feb 13 19:51:44.820499 systemd[1896]: Created slice app.slice - User Application Slice. Feb 13 19:51:44.820895 systemd[1896]: Reached target paths.target - Paths. Feb 13 19:51:44.820958 systemd[1896]: Reached target timers.target - Timers. Feb 13 19:51:44.822251 systemd[1896]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:51:44.831283 systemd[1896]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:51:44.831338 systemd[1896]: Reached target sockets.target - Sockets. Feb 13 19:51:44.831374 systemd[1896]: Reached target basic.target - Basic System. Feb 13 19:51:44.831402 systemd[1896]: Reached target default.target - Main User Target. Feb 13 19:51:44.831427 systemd[1896]: Startup finished in 238ms. Feb 13 19:51:44.832186 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:51:44.833541 systemd[1]: Started session-2.scope - Session 2 of User core. 
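
The containerd startup above dumps its effective CRI configuration, including SystemdCgroup:true for the runc runtime, and logs one expected error: no CNI config exists in /etc/cni/net.d until a network plugin is installed, and the cni conf syncer retries until one appears. A sketch of the config.toml fragment that yields the SystemdCgroup setting (containerd 1.7 syntax; values mirror the dump above):

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
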
Feb 13 19:51:45.545341 login[1885]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:45.550208 systemd-logind[1717]: New session 1 of user core. Feb 13 19:51:45.555810 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:51:50.085134 waagent[1876]: 2025-02-13T19:51:50.085036Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Feb 13 19:51:50.091219 waagent[1876]: 2025-02-13T19:51:50.091157Z INFO Daemon Daemon OS: flatcar 4230.0.1 Feb 13 19:51:50.096003 waagent[1876]: 2025-02-13T19:51:50.095949Z INFO Daemon Daemon Python: 3.11.11 Feb 13 19:51:50.100756 waagent[1876]: 2025-02-13T19:51:50.100696Z INFO Daemon Daemon Run daemon Feb 13 19:51:50.104894 waagent[1876]: 2025-02-13T19:51:50.104842Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.0.1' Feb 13 19:51:50.114237 waagent[1876]: 2025-02-13T19:51:50.114176Z INFO Daemon Daemon Using waagent for provisioning Feb 13 19:51:50.119594 waagent[1876]: 2025-02-13T19:51:50.119548Z INFO Daemon Daemon Activate resource disk Feb 13 19:51:50.124283 waagent[1876]: 2025-02-13T19:51:50.124233Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 13 19:51:50.136872 waagent[1876]: 2025-02-13T19:51:50.136812Z INFO Daemon Daemon Found device: None Feb 13 19:51:50.141511 waagent[1876]: 2025-02-13T19:51:50.141464Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 13 19:51:50.150678 waagent[1876]: 2025-02-13T19:51:50.150609Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 13 19:51:50.163018 waagent[1876]: 2025-02-13T19:51:50.162966Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 13 19:51:50.168899 waagent[1876]: 2025-02-13T19:51:50.168849Z INFO Daemon Daemon Running default provisioning handler Feb 13 19:51:50.181288 waagent[1876]: 2025-02-13T19:51:50.181211Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Feb 13 19:51:50.196268 waagent[1876]: 2025-02-13T19:51:50.196199Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 13 19:51:50.206800 waagent[1876]: 2025-02-13T19:51:50.206736Z INFO Daemon Daemon cloud-init is enabled: False Feb 13 19:51:50.212139 waagent[1876]: 2025-02-13T19:51:50.212083Z INFO Daemon Daemon Copying ovf-env.xml Feb 13 19:51:50.230687 waagent[1876]: 2025-02-13T19:51:50.227829Z INFO Daemon Daemon Successfully mounted dvd Feb 13 19:51:50.246246 waagent[1876]: 2025-02-13T19:51:50.244437Z INFO Daemon Daemon Detect protocol endpoint Feb 13 19:51:50.244781 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 13 19:51:50.249724 waagent[1876]: 2025-02-13T19:51:50.249660Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 13 19:51:50.256027 waagent[1876]: 2025-02-13T19:51:50.255964Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 13 19:51:50.262935 waagent[1876]: 2025-02-13T19:51:50.262878Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 13 19:51:50.268517 waagent[1876]: 2025-02-13T19:51:50.268461Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 13 19:51:50.274134 waagent[1876]: 2025-02-13T19:51:50.274078Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 13 19:51:50.336247 waagent[1876]: 2025-02-13T19:51:50.336151Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 13 19:51:50.342873 waagent[1876]: 2025-02-13T19:51:50.342842Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 13 19:51:50.348181 waagent[1876]: 2025-02-13T19:51:50.348136Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 13 19:51:50.484788 waagent[1876]: 2025-02-13T19:51:50.484677Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 13 19:51:50.492839 waagent[1876]: 2025-02-13T19:51:50.492763Z INFO Daemon Daemon Forcing an update of the goal state. Feb 13 19:51:50.502728 waagent[1876]: 2025-02-13T19:51:50.502668Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 13 19:51:50.578300 waagent[1876]: 2025-02-13T19:51:50.578251Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Feb 13 19:51:50.585160 waagent[1876]: 2025-02-13T19:51:50.585109Z INFO Daemon Feb 13 19:51:50.588581 waagent[1876]: 2025-02-13T19:51:50.588500Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 6a36a033-1fb2-45f8-a5c0-5de792c41aef eTag: 13960261099523463156 source: Fabric] Feb 13 19:51:50.601852 waagent[1876]: 2025-02-13T19:51:50.601800Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Feb 13 19:51:50.610288 waagent[1876]: 2025-02-13T19:51:50.610237Z INFO Daemon Feb 13 19:51:50.613587 waagent[1876]: 2025-02-13T19:51:50.613541Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Feb 13 19:51:50.628024 waagent[1876]: 2025-02-13T19:51:50.627986Z INFO Daemon Daemon Downloading artifacts profile blob Feb 13 19:51:50.724660 waagent[1876]: 2025-02-13T19:51:50.723998Z INFO Daemon Downloaded certificate {'thumbprint': '68384E37955B4DC7B1AE36801B5339D8763E8E1B', 'hasPrivateKey': True} Feb 13 19:51:50.736408 waagent[1876]: 2025-02-13T19:51:50.736126Z INFO Daemon Downloaded certificate {'thumbprint': '3A1AEC3E4FAD3A007DD2EFFAE1982860DE763219', 'hasPrivateKey': False} Feb 13 19:51:50.746443 waagent[1876]: 2025-02-13T19:51:50.746386Z INFO Daemon Fetch goal state completed Feb 13 19:51:50.758856 waagent[1876]: 2025-02-13T19:51:50.758810Z INFO Daemon Daemon Starting provisioning Feb 13 19:51:50.765730 waagent[1876]: 2025-02-13T19:51:50.765161Z INFO Daemon Daemon Handle ovf-env.xml. Feb 13 19:51:50.771465 waagent[1876]: 2025-02-13T19:51:50.771380Z INFO Daemon Daemon Set hostname [ci-4230.0.1-a-4092b3335a] Feb 13 19:51:51.839654 waagent[1876]: 2025-02-13T19:51:51.839490Z INFO Daemon Daemon Publish hostname [ci-4230.0.1-a-4092b3335a] Feb 13 19:51:51.852658 waagent[1876]: 2025-02-13T19:51:51.846628Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 13 19:51:51.853687 waagent[1876]: 2025-02-13T19:51:51.853610Z INFO Daemon Daemon Primary interface is [eth0] Feb 13 19:51:51.866150 systemd-networkd[1346]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:51:51.866158 systemd-networkd[1346]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
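
systemd-networkd matched eth0 against /usr/lib/systemd/network/zz-default.network, Flatcar's lowest-priority catch-all. Its effective content is approximately the following (a sketch; the shipped file carries additional options):

    [Match]
    Name=*

    [Network]
    DHCP=yes

Because the match is by wildcard name, networkd warns the pairing is "based on potentially unpredictable interface name"; pinning a .network file to a MAC address would avoid that warning.
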
Feb 13 19:51:51.866184 systemd-networkd[1346]: eth0: DHCP lease lost Feb 13 19:51:51.868066 waagent[1876]: 2025-02-13T19:51:51.867993Z INFO Daemon Daemon Create user account if not exists Feb 13 19:51:51.873931 waagent[1876]: 2025-02-13T19:51:51.873862Z INFO Daemon Daemon User core already exists, skip useradd Feb 13 19:51:51.879788 waagent[1876]: 2025-02-13T19:51:51.879726Z INFO Daemon Daemon Configure sudoer Feb 13 19:51:51.884658 waagent[1876]: 2025-02-13T19:51:51.884560Z INFO Daemon Daemon Configure sshd Feb 13 19:51:51.889399 waagent[1876]: 2025-02-13T19:51:51.889323Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Feb 13 19:51:51.902220 waagent[1876]: 2025-02-13T19:51:51.902146Z INFO Daemon Daemon Deploy ssh public key. Feb 13 19:51:51.917714 systemd-networkd[1346]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 13 19:51:51.951962 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:51:51.962838 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:51:52.151699 waagent[1876]: 2025-02-13T19:51:52.149384Z INFO Daemon Daemon Decode custom data Feb 13 19:51:52.154017 waagent[1876]: 2025-02-13T19:51:52.153965Z INFO Daemon Daemon Save custom data Feb 13 19:51:52.216526 waagent[1876]: 2025-02-13T19:51:52.216462Z INFO Daemon Daemon Provisioning complete Feb 13 19:51:52.236749 waagent[1876]: 2025-02-13T19:51:52.236698Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 13 19:51:52.242896 waagent[1876]: 2025-02-13T19:51:52.242838Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
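
The "Configure sshd" step above reports a snippet that disables password-based authentication and keeps connections alive via client probing. A sketch of sshd_config directives matching that description (the exact file contents are not shown in the log):

    PasswordAuthentication no
    ClientAliveInterval 180

waagent then deploys the public key from the provisioning data, so only key-based logins reach the "core" user.
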
Feb 13 19:51:52.254409 waagent[1876]: 2025-02-13T19:51:52.254344Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Feb 13 19:51:52.393409 waagent[1952]: 2025-02-13T19:51:52.392873Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 13 19:51:52.393409 waagent[1952]: 2025-02-13T19:51:52.393029Z INFO ExtHandler ExtHandler OS: flatcar 4230.0.1 Feb 13 19:51:52.393409 waagent[1952]: 2025-02-13T19:51:52.393082Z INFO ExtHandler ExtHandler Python: 3.11.11 Feb 13 19:51:52.997646 waagent[1952]: 2025-02-13T19:51:52.995361Z INFO ExtHandler ExtHandler Distro: flatcar-4230.0.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 13 19:51:52.997646 waagent[1952]: 2025-02-13T19:51:52.995639Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 19:51:52.997646 waagent[1952]: 2025-02-13T19:51:52.995714Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 19:51:53.004039 waagent[1952]: 2025-02-13T19:51:53.003966Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 13 19:51:53.014209 waagent[1952]: 2025-02-13T19:51:53.014159Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Feb 13 19:51:53.014908 waagent[1952]: 2025-02-13T19:51:53.014857Z INFO ExtHandler Feb 13 19:51:53.015081 waagent[1952]: 2025-02-13T19:51:53.015045Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: c77be4a6-332b-4166-ac78-6d9ed11afdbc eTag: 13960261099523463156 source: Fabric] Feb 13 19:51:53.015446 waagent[1952]: 2025-02-13T19:51:53.015405Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Feb 13 19:51:53.313928 waagent[1952]: 2025-02-13T19:51:53.313760Z INFO ExtHandler Feb 13 19:51:53.314016 waagent[1952]: 2025-02-13T19:51:53.313960Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 13 19:51:53.319957 waagent[1952]: 2025-02-13T19:51:53.319921Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 13 19:51:54.258697 waagent[1952]: 2025-02-13T19:51:54.257939Z INFO ExtHandler Downloaded certificate {'thumbprint': '68384E37955B4DC7B1AE36801B5339D8763E8E1B', 'hasPrivateKey': True} Feb 13 19:51:54.258697 waagent[1952]: 2025-02-13T19:51:54.258425Z INFO ExtHandler Downloaded certificate {'thumbprint': '3A1AEC3E4FAD3A007DD2EFFAE1982860DE763219', 'hasPrivateKey': False} Feb 13 19:51:54.259402 waagent[1952]: 2025-02-13T19:51:54.259349Z INFO ExtHandler Fetch goal state completed Feb 13 19:51:54.323648 waagent[1952]: 2025-02-13T19:51:54.323283Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1952 Feb 13 19:51:54.323648 waagent[1952]: 2025-02-13T19:51:54.323488Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Feb 13 19:51:54.323869 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
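
The agent notes "AutoUpdate.Enabled is set to False, not processing the operation", so the goal-state agent stays at 2.9.1.1 instead of self-upgrading. That switch lives in the agent's configuration file; a sketch of the relevant line (path and key per upstream waagent, not shown in this log):

    # /etc/waagent.conf
    AutoUpdate.Enabled=n
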
Feb 13 19:51:54.326532 waagent[1952]: 2025-02-13T19:51:54.326465Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.0.1', '', 'Flatcar Container Linux by Kinvolk'] Feb 13 19:51:54.326773 (kubelet)[1971]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:51:54.329259 waagent[1952]: 2025-02-13T19:51:54.328386Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 13 19:51:54.384163 kubelet[1971]: E0213 19:51:54.384079 1971 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:51:54.387194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:51:54.387346 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:51:54.387921 systemd[1]: kubelet.service: Consumed 130ms CPU time, 102.3M memory peak. Feb 13 19:51:56.334520 waagent[1952]: 2025-02-13T19:51:56.334466Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 13 19:51:56.334852 waagent[1952]: 2025-02-13T19:51:56.334709Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 13 19:51:56.340647 waagent[1952]: 2025-02-13T19:51:56.340449Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 13 19:51:56.347316 systemd[1]: Reload requested from client PID 1983 ('systemctl') (unit waagent.service)... Feb 13 19:51:56.347329 systemd[1]: Reloading... Feb 13 19:51:56.442656 zram_generator::config[2025]: No configuration found. Feb 13 19:51:56.545214 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:51:56.638801 systemd[1]: Reloading finished in 291 ms. Feb 13 19:51:56.651348 waagent[1952]: 2025-02-13T19:51:56.650961Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Feb 13 19:51:56.657065 systemd[1]: Reload requested from client PID 2076 ('systemctl') (unit waagent.service)... Feb 13 19:51:56.657080 systemd[1]: Reloading... Feb 13 19:51:56.735638 zram_generator::config[2116]: No configuration found. Feb 13 19:51:56.855731 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:51:56.949757 systemd[1]: Reloading finished in 292 ms. Feb 13 19:51:56.962654 waagent[1952]: 2025-02-13T19:51:56.962011Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Feb 13 19:51:56.962654 waagent[1952]: 2025-02-13T19:51:56.962179Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Feb 13 19:51:57.423651 waagent[1952]: 2025-02-13T19:51:57.422210Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
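
During the two daemon-reloads above, systemd warns that docker.socket still references the legacy /var/run path and rewrites it on the fly. The permanent fix is a one-line change in the socket unit:

    [Socket]
    ListenStream=/run/docker.sock

/var/run is a symlink to /run, so behaviour is identical either way; the warning is purely about the unit file using the legacy alias.
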
Feb 13 19:51:57.423651 waagent[1952]: 2025-02-13T19:51:57.422993Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 13 19:51:57.424369 waagent[1952]: 2025-02-13T19:51:57.424307Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 13 19:51:57.424541 waagent[1952]: 2025-02-13T19:51:57.424486Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 19:51:57.425070 waagent[1952]: 2025-02-13T19:51:57.425024Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 19:51:57.425216 waagent[1952]: 2025-02-13T19:51:57.425154Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 13 19:51:57.425520 waagent[1952]: 2025-02-13T19:51:57.425466Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 13 19:51:57.425910 waagent[1952]: 2025-02-13T19:51:57.425861Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 13 19:51:57.426201 waagent[1952]: 2025-02-13T19:51:57.426156Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 13 19:51:57.426201 waagent[1952]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 13 19:51:57.426201 waagent[1952]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 13 19:51:57.426201 waagent[1952]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 13 19:51:57.426201 waagent[1952]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 13 19:51:57.426201 waagent[1952]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 13 19:51:57.426201 waagent[1952]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 13 19:51:57.426677 waagent[1952]: 2025-02-13T19:51:57.426629Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 19:51:57.426759 waagent[1952]: 2025-02-13T19:51:57.426712Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 19:51:57.426898 waagent[1952]: 2025-02-13T19:51:57.426856Z INFO EnvHandler ExtHandler Configure routes Feb 13 19:51:57.426958 waagent[1952]: 2025-02-13T19:51:57.426929Z INFO EnvHandler ExtHandler Gateway:None Feb 13 19:51:57.427007 waagent[1952]: 2025-02-13T19:51:57.426980Z INFO EnvHandler ExtHandler Routes:None Feb 13 19:51:57.427295 waagent[1952]: 2025-02-13T19:51:57.426532Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 13 19:51:57.427593 waagent[1952]: 2025-02-13T19:51:57.427529Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 13 19:51:57.427716 waagent[1952]: 2025-02-13T19:51:57.427594Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
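
The routing table the MonitorHandler prints comes straight from /proc/net/route, which stores addresses as little-endian hex. An illustrative one-liner to decode them:

    # 0114C80A -> 10.200.20.1 (the default gateway; matches the DHCP lease logged earlier)
    python3 -c 'import socket,struct;print(socket.inet_ntoa(struct.pack("<I",int("0114C80A",16))))'

Decoded, the first row is the default route via 10.200.20.1, while 10813FA8 and FEA9FEA9 are host routes to 168.63.129.16 (the Azure wire server) and 169.254.169.254 (IMDS).
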
Feb 13 19:51:57.428506 waagent[1952]: 2025-02-13T19:51:57.428072Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 13 19:51:57.439040 waagent[1952]: 2025-02-13T19:51:57.438972Z INFO ExtHandler ExtHandler Feb 13 19:51:57.439153 waagent[1952]: 2025-02-13T19:51:57.439119Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: e23286b1-9a38-47e6-b916-b4e03013d828 correlation 418044bd-151d-44f9-beb8-e2b45c320f37 created: 2025-02-13T19:50:26.162430Z] Feb 13 19:51:57.439587 waagent[1952]: 2025-02-13T19:51:57.439526Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 13 19:51:57.440223 waagent[1952]: 2025-02-13T19:51:57.440176Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Feb 13 19:51:57.538485 waagent[1952]: 2025-02-13T19:51:57.538421Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 2543F2BA-E694-458E-ADA6-16EFDFF563C7;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Feb 13 19:51:57.562344 waagent[1952]: 2025-02-13T19:51:57.562252Z INFO MonitorHandler ExtHandler Network interfaces: Feb 13 19:51:57.562344 waagent[1952]: Executing ['ip', '-a', '-o', 'link']: Feb 13 19:51:57.562344 waagent[1952]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 13 19:51:57.562344 waagent[1952]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f6:09:c4 brd ff:ff:ff:ff:ff:ff Feb 13 19:51:57.562344 waagent[1952]: 3: enP37625s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f6:09:c4 brd ff:ff:ff:ff:ff:ff\ altname enP37625p0s2 Feb 13 19:51:57.562344 waagent[1952]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 13 19:51:57.562344 waagent[1952]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 13 19:51:57.562344 waagent[1952]: 2: eth0 inet 10.200.20.12/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 13 19:51:57.562344 waagent[1952]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 13 19:51:57.562344 waagent[1952]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Feb 13 19:51:57.562344 waagent[1952]: 2: eth0 inet6 fe80::20d:3aff:fef6:9c4/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Feb 13 19:51:57.562344 waagent[1952]: 3: enP37625s1 inet6 fe80::20d:3aff:fef6:9c4/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Feb 13 19:51:57.626653 waagent[1952]: 2025-02-13T19:51:57.626408Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Feb 13 19:51:57.626653 waagent[1952]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 19:51:57.626653 waagent[1952]: pkts bytes target prot opt in out source destination Feb 13 19:51:57.626653 waagent[1952]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 13 19:51:57.626653 waagent[1952]: pkts bytes target prot opt in out source destination Feb 13 19:51:57.626653 waagent[1952]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 19:51:57.626653 waagent[1952]: pkts bytes target prot opt in out source destination Feb 13 19:51:57.626653 waagent[1952]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 13 19:51:57.626653 waagent[1952]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 13 19:51:57.626653 waagent[1952]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 13 19:51:57.629680 waagent[1952]: 2025-02-13T19:51:57.629583Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 13 19:51:57.629680 waagent[1952]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 19:51:57.629680 waagent[1952]: pkts bytes target prot opt in out source destination Feb 13 19:51:57.629680 waagent[1952]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 13 19:51:57.629680 waagent[1952]: pkts bytes target prot opt in out source destination Feb 13 19:51:57.629680 waagent[1952]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 19:51:57.629680 waagent[1952]: pkts bytes target prot opt in out source destination Feb 13 19:51:57.629680 waagent[1952]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 13 19:51:57.629680 waagent[1952]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 13 19:51:57.629680 waagent[1952]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 13 19:51:57.629945 waagent[1952]: 2025-02-13T19:51:57.629906Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 13 19:52:04.452010 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 19:52:04.458799 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:52:04.511775 chronyd[1749]: Selected source PHC0 Feb 13 19:52:04.569258 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:52:04.576894 (kubelet)[2209]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:52:04.657168 kubelet[2209]: E0213 19:52:04.657112 2209 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:52:04.659901 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:52:04.660154 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:52:04.660509 systemd[1]: kubelet.service: Consumed 123ms CPU time, 100.2M memory peak. Feb 13 19:52:14.702127 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 19:52:14.709242 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:52:15.014093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
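
The three OUTPUT rules waagent programs (printed twice above, once right after insertion and once by the EnvHandler) restrict access to the wire server. Hedged iptables equivalents of the rules as shown, in the same order (the table they were added to is not named in the log):

    iptables -A OUTPUT -d 168.63.129.16 -p tcp --dport 53 -j ACCEPT
    iptables -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT
    iptables -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP

Net effect: root (and DNS on tcp/53) may reach 168.63.129.16, while new connections from unprivileged processes are dropped, keeping workloads away from the host agent endpoint.
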
Feb 13 19:52:15.023922 (kubelet)[2224]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:52:15.060650 kubelet[2224]: E0213 19:52:15.059989 2224 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:52:15.062253 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:52:15.062407 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:52:15.062861 systemd[1]: kubelet.service: Consumed 129ms CPU time, 102.2M memory peak. Feb 13 19:52:22.003509 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Feb 13 19:52:25.202159 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Feb 13 19:52:25.206817 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:52:25.546484 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:52:25.550352 (kubelet)[2238]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:52:25.587035 kubelet[2238]: E0213 19:52:25.586914 2238 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:52:25.589131 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:52:25.589284 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:52:25.589859 systemd[1]: kubelet.service: Consumed 126ms CPU time, 100.1M memory peak. Feb 13 19:52:26.777751 update_engine[1719]: I20250213 19:52:26.777660 1719 update_attempter.cc:509] Updating boot flags... Feb 13 19:52:26.849087 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2261) Feb 13 19:52:26.978706 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2260) Feb 13 19:52:33.006724 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:52:33.007951 systemd[1]: Started sshd@0-10.200.20.12:22-10.200.16.10:53992.service - OpenSSH per-connection server daemon (10.200.16.10:53992). Feb 13 19:52:33.650562 sshd[2361]: Accepted publickey for core from 10.200.16.10 port 53992 ssh2: RSA SHA256:LTmo/6k/2cyRFZrv4Ga+drA+aFwEaiiiTQilASdJKcU Feb 13 19:52:33.651919 sshd-session[2361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:33.657168 systemd-logind[1717]: New session 3 of user core. Feb 13 19:52:33.662813 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:52:34.050895 systemd[1]: Started sshd@1-10.200.20.12:22-10.200.16.10:53994.service - OpenSSH per-connection server daemon (10.200.16.10:53994). 
Feb 13 19:52:34.496180 sshd[2366]: Accepted publickey for core from 10.200.16.10 port 53994 ssh2: RSA SHA256:LTmo/6k/2cyRFZrv4Ga+drA+aFwEaiiiTQilASdJKcU Feb 13 19:52:34.497480 sshd-session[2366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:34.502270 systemd-logind[1717]: New session 4 of user core. Feb 13 19:52:34.508772 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:52:34.851236 sshd[2368]: Connection closed by 10.200.16.10 port 53994 Feb 13 19:52:34.851927 sshd-session[2366]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:34.854793 systemd-logind[1717]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:52:34.855840 systemd[1]: sshd@1-10.200.20.12:22-10.200.16.10:53994.service: Deactivated successfully. Feb 13 19:52:34.858019 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:52:34.860248 systemd-logind[1717]: Removed session 4. Feb 13 19:52:34.936882 systemd[1]: Started sshd@2-10.200.20.12:22-10.200.16.10:54008.service - OpenSSH per-connection server daemon (10.200.16.10:54008). Feb 13 19:52:35.365274 sshd[2374]: Accepted publickey for core from 10.200.16.10 port 54008 ssh2: RSA SHA256:LTmo/6k/2cyRFZrv4Ga+drA+aFwEaiiiTQilASdJKcU Feb 13 19:52:35.366547 sshd-session[2374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:35.370831 systemd-logind[1717]: New session 5 of user core. Feb 13 19:52:35.379004 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:52:35.681196 sshd[2376]: Connection closed by 10.200.16.10 port 54008 Feb 13 19:52:35.680456 sshd-session[2374]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:35.684333 systemd[1]: sshd@2-10.200.20.12:22-10.200.16.10:54008.service: Deactivated successfully. Feb 13 19:52:35.686386 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:52:35.688959 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Feb 13 19:52:35.689658 systemd-logind[1717]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:52:35.696837 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:52:35.698046 systemd-logind[1717]: Removed session 5. Feb 13 19:52:35.776895 systemd[1]: Started sshd@3-10.200.20.12:22-10.200.16.10:54010.service - OpenSSH per-connection server daemon (10.200.16.10:54010). Feb 13 19:52:36.010965 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:52:36.015283 (kubelet)[2392]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:52:36.053646 kubelet[2392]: E0213 19:52:36.053162 2392 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:52:36.055402 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:52:36.055559 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:52:36.056079 systemd[1]: kubelet.service: Consumed 130ms CPU time, 102.1M memory peak. 
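
The kubelet is in a deliberate crash-loop through this stretch of the log: each exit is followed by "Scheduled restart job, restart counter is at N", with roughly ten seconds between attempts, because /var/lib/kubelet/config.yaml does not exist until kubeadm init/join writes a KubeletConfiguration there. A service-stanza sketch consistent with the observed cadence (values inferred from the timestamps, not read from the unit file):

    [Service]
    Restart=always
    RestartSec=10

Once kubeadm drops the config file, the next scheduled restart succeeds and the loop ends; until then each attempt costs ~130ms CPU, as logged.
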
Feb 13 19:52:36.223047 sshd[2385]: Accepted publickey for core from 10.200.16.10 port 54010 ssh2: RSA SHA256:LTmo/6k/2cyRFZrv4Ga+drA+aFwEaiiiTQilASdJKcU Feb 13 19:52:36.224419 sshd-session[2385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:36.229104 systemd-logind[1717]: New session 6 of user core. Feb 13 19:52:36.236791 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:52:36.553658 sshd[2399]: Connection closed by 10.200.16.10 port 54010 Feb 13 19:52:36.553481 sshd-session[2385]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:36.557485 systemd[1]: sshd@3-10.200.20.12:22-10.200.16.10:54010.service: Deactivated successfully. Feb 13 19:52:36.559170 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:52:36.560424 systemd-logind[1717]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:52:36.561544 systemd-logind[1717]: Removed session 6. Feb 13 19:52:36.629434 systemd[1]: Started sshd@4-10.200.20.12:22-10.200.16.10:54026.service - OpenSSH per-connection server daemon (10.200.16.10:54026). Feb 13 19:52:37.083057 sshd[2405]: Accepted publickey for core from 10.200.16.10 port 54026 ssh2: RSA SHA256:LTmo/6k/2cyRFZrv4Ga+drA+aFwEaiiiTQilASdJKcU Feb 13 19:52:37.084260 sshd-session[2405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:37.089697 systemd-logind[1717]: New session 7 of user core. Feb 13 19:52:37.094773 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:52:37.471128 sudo[2408]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:52:37.471419 sudo[2408]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:52:37.515093 sudo[2408]: pam_unix(sudo:session): session closed for user root Feb 13 19:52:37.612372 sshd[2407]: Connection closed by 10.200.16.10 port 54026 Feb 13 19:52:37.612210 sshd-session[2405]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:37.615738 systemd-logind[1717]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:52:37.616017 systemd[1]: sshd@4-10.200.20.12:22-10.200.16.10:54026.service: Deactivated successfully. Feb 13 19:52:37.618130 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:52:37.619830 systemd-logind[1717]: Removed session 7. Feb 13 19:52:37.709871 systemd[1]: Started sshd@5-10.200.20.12:22-10.200.16.10:54040.service - OpenSSH per-connection server daemon (10.200.16.10:54040). Feb 13 19:52:38.198338 sshd[2414]: Accepted publickey for core from 10.200.16.10 port 54040 ssh2: RSA SHA256:LTmo/6k/2cyRFZrv4Ga+drA+aFwEaiiiTQilASdJKcU Feb 13 19:52:38.199763 sshd-session[2414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:38.205188 systemd-logind[1717]: New session 8 of user core. Feb 13 19:52:38.210812 systemd[1]: Started session-8.scope - Session 8 of User core. 
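
The sudo entries above show the provisioned "core" user escalating without a password prompt, consistent with the earlier "Configure sudoer" provisioning step. waagent typically drops a rule of this shape (illustrative; the actual drop-in file is not shown in the log):

    # /etc/sudoers.d/waagent (hypothetical path)
    core ALL = (ALL) NOPASSWD: ALL
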
Feb 13 19:52:38.473998 sudo[2418]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:52:38.474275 sudo[2418]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:52:38.477451 sudo[2418]: pam_unix(sudo:session): session closed for user root Feb 13 19:52:38.482858 sudo[2417]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:52:38.483152 sudo[2417]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:52:38.498918 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:52:38.522807 augenrules[2440]: No rules Feb 13 19:52:38.524039 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:52:38.525671 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:52:38.527019 sudo[2417]: pam_unix(sudo:session): session closed for user root Feb 13 19:52:38.620169 sshd[2416]: Connection closed by 10.200.16.10 port 54040 Feb 13 19:52:38.620754 sshd-session[2414]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:38.623607 systemd[1]: sshd@5-10.200.20.12:22-10.200.16.10:54040.service: Deactivated successfully. Feb 13 19:52:38.625359 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:52:38.628044 systemd-logind[1717]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:52:38.629424 systemd-logind[1717]: Removed session 8. Feb 13 19:52:38.707874 systemd[1]: Started sshd@6-10.200.20.12:22-10.200.16.10:54054.service - OpenSSH per-connection server daemon (10.200.16.10:54054). Feb 13 19:52:39.136483 sshd[2449]: Accepted publickey for core from 10.200.16.10 port 54054 ssh2: RSA SHA256:LTmo/6k/2cyRFZrv4Ga+drA+aFwEaiiiTQilASdJKcU Feb 13 19:52:39.137798 sshd-session[2449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:39.141963 systemd-logind[1717]: New session 9 of user core. Feb 13 19:52:39.153766 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:52:39.380651 sudo[2452]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:52:39.380953 sudo[2452]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:52:40.860862 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:52:40.861017 (dockerd)[2469]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:52:42.215992 dockerd[2469]: time="2025-02-13T19:52:42.215934311Z" level=info msg="Starting up" Feb 13 19:52:42.590828 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1196387351-merged.mount: Deactivated successfully. Feb 13 19:52:42.680824 systemd[1]: var-lib-docker-metacopy\x2dcheck3834199893-merged.mount: Deactivated successfully. Feb 13 19:52:42.696365 dockerd[2469]: time="2025-02-13T19:52:42.696317409Z" level=info msg="Loading containers: start." Feb 13 19:52:42.940724 kernel: Initializing XFRM netlink socket Feb 13 19:52:43.132931 systemd-networkd[1346]: docker0: Link UP Feb 13 19:52:43.174648 dockerd[2469]: time="2025-02-13T19:52:43.174071583Z" level=info msg="Loading containers: done." 
Feb 13 19:52:43.202288 dockerd[2469]: time="2025-02-13T19:52:43.202230713Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:52:43.202448 dockerd[2469]: time="2025-02-13T19:52:43.202353433Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 19:52:43.202509 dockerd[2469]: time="2025-02-13T19:52:43.202482354Z" level=info msg="Daemon has completed initialization" Feb 13 19:52:43.253392 dockerd[2469]: time="2025-02-13T19:52:43.253328844Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:52:43.253959 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:52:43.588361 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3561078759-merged.mount: Deactivated successfully. Feb 13 19:52:43.945707 containerd[1753]: time="2025-02-13T19:52:43.945566441Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\"" Feb 13 19:52:45.001728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1720324423.mount: Deactivated successfully. Feb 13 19:52:46.202216 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Feb 13 19:52:46.207811 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:52:46.314828 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:52:46.318670 (kubelet)[2715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:52:46.412917 kubelet[2715]: E0213 19:52:46.412793 2715 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:52:46.415018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:52:46.415165 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:52:46.416694 systemd[1]: kubelet.service: Consumed 126ms CPU time, 100.2M memory peak. 
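The kubelet exit above ("restart counter is at 6") is the expected first-boot crash loop: /var/lib/kubelet/config.yaml is only written later by node provisioning, so every start until then dies on the underlying ENOENT. A stdlib-only Go sketch of the failing step (editorial illustration):

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// The kubelet's --config path. Reading it before provisioning has
    	// written the file surfaces the same "no such file or directory"
    	// error that run.go:72 wraps into the long message above.
    	const path = "/var/lib/kubelet/config.yaml"
    	if _, err := os.ReadFile(path); err != nil {
    		fmt.Printf("failed to load Kubelet config file %s, error: %v\n", path, err)
    		os.Exit(1)
    	}
    	fmt.Println("kubelet config present")
    }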
Feb 13 19:52:46.742911 containerd[1753]: time="2025-02-13T19:52:46.742850747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:46.746484 containerd[1753]: time="2025-02-13T19:52:46.746240593Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=26218236" Feb 13 19:52:46.749924 containerd[1753]: time="2025-02-13T19:52:46.749871399Z" level=info msg="ImageCreate event name:\"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:46.756586 containerd[1753]: time="2025-02-13T19:52:46.756511851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:46.757878 containerd[1753]: time="2025-02-13T19:52:46.757714573Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"26215036\" in 2.812105052s" Feb 13 19:52:46.757878 containerd[1753]: time="2025-02-13T19:52:46.757752013Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\"" Feb 13 19:52:46.758509 containerd[1753]: time="2025-02-13T19:52:46.758485694Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\"" Feb 13 19:52:48.363652 containerd[1753]: time="2025-02-13T19:52:48.363428014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:48.366507 containerd[1753]: time="2025-02-13T19:52:48.366213019Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=22528145" Feb 13 19:52:48.369718 containerd[1753]: time="2025-02-13T19:52:48.369675505Z" level=info msg="ImageCreate event name:\"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:48.375169 containerd[1753]: time="2025-02-13T19:52:48.375117034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:48.376260 containerd[1753]: time="2025-02-13T19:52:48.376094476Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"23968941\" in 1.617502342s" Feb 13 19:52:48.376260 containerd[1753]: time="2025-02-13T19:52:48.376127276Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\"" Feb 13 19:52:48.377201 containerd[1753]: time="2025-02-13T19:52:48.377177478Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\"" Feb 13 19:52:49.927665 containerd[1753]: time="2025-02-13T19:52:49.927244382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:49.930606 containerd[1753]: time="2025-02-13T19:52:49.930379147Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=17480800" Feb 13 19:52:49.937474 containerd[1753]: time="2025-02-13T19:52:49.937417680Z" level=info msg="ImageCreate event name:\"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:49.947876 containerd[1753]: time="2025-02-13T19:52:49.947807378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:49.949936 containerd[1753]: time="2025-02-13T19:52:49.949909301Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"18921614\" in 1.572616103s" Feb 13 19:52:49.950128 containerd[1753]: time="2025-02-13T19:52:49.950035342Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\"" Feb 13 19:52:49.950693 containerd[1753]: time="2025-02-13T19:52:49.950491462Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 19:52:51.250126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3203761777.mount: Deactivated successfully.
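Each pull pairs a "bytes read" counter with a wall-clock duration, so a rough transfer rate falls out directly: kube-apiserver moved 26,218,236 bytes in 2.81s (≈8.9 MiB/s) and kube-controller-manager 22,528,145 bytes in 1.62s (≈13.3 MiB/s). The arithmetic as a small Go sketch, using the figures from the records above:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Figures copied from the containerd pull records above.
    	pulls := []struct {
    		image string
    		bytes float64
    		d     time.Duration
    	}{
    		{"kube-apiserver:v1.32.2", 26218236, 2812105052 * time.Nanosecond},
    		{"kube-controller-manager:v1.32.2", 22528145, 1617502342 * time.Nanosecond},
    	}
    	for _, p := range pulls {
    		fmt.Printf("%s: %.1f MiB/s\n", p.image, p.bytes/p.d.Seconds()/(1<<20))
    	}
    }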
Feb 13 19:52:51.610742 containerd[1753]: time="2025-02-13T19:52:51.610136758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:51.614171 containerd[1753]: time="2025-02-13T19:52:51.614124085Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=27363382" Feb 13 19:52:51.618818 containerd[1753]: time="2025-02-13T19:52:51.618790453Z" level=info msg="ImageCreate event name:\"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:51.623942 containerd[1753]: time="2025-02-13T19:52:51.623889862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:51.624456 containerd[1753]: time="2025-02-13T19:52:51.624427903Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"27362401\" in 1.67390788s" Feb 13 19:52:51.624545 containerd[1753]: time="2025-02-13T19:52:51.624531143Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\"" Feb 13 19:52:51.625224 containerd[1753]: time="2025-02-13T19:52:51.625192664Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Feb 13 19:52:52.410909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4078070922.mount: Deactivated successfully. 
Feb 13 19:52:53.546762 containerd[1753]: time="2025-02-13T19:52:53.546707555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:53.553494 containerd[1753]: time="2025-02-13T19:52:53.553238446Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Feb 13 19:52:53.580255 containerd[1753]: time="2025-02-13T19:52:53.580174775Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:53.587464 containerd[1753]: time="2025-02-13T19:52:53.587122708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:53.589520 containerd[1753]: time="2025-02-13T19:52:53.589488272Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.964104048s" Feb 13 19:52:53.589652 containerd[1753]: time="2025-02-13T19:52:53.589631752Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Feb 13 19:52:53.590148 containerd[1753]: time="2025-02-13T19:52:53.590117993Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 19:52:54.752432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2414506684.mount: Deactivated successfully. 
Feb 13 19:52:54.785286 containerd[1753]: time="2025-02-13T19:52:54.785231551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:54.788025 containerd[1753]: time="2025-02-13T19:52:54.787842955Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Feb 13 19:52:54.793536 containerd[1753]: time="2025-02-13T19:52:54.793491645Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:54.800426 containerd[1753]: time="2025-02-13T19:52:54.800378218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:54.801670 containerd[1753]: time="2025-02-13T19:52:54.801060219Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.210778346s" Feb 13 19:52:54.801670 containerd[1753]: time="2025-02-13T19:52:54.801123419Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 19:52:54.801769 containerd[1753]: time="2025-02-13T19:52:54.801754100Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Feb 13 19:52:55.601273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1683140729.mount: Deactivated successfully. Feb 13 19:52:56.452096 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Feb 13 19:52:56.458809 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:52:56.556140 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:52:56.560315 (kubelet)[2850]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:52:56.598830 kubelet[2850]: E0213 19:52:56.598705 2850 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:52:56.600941 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:52:56.601100 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:52:56.601586 systemd[1]: kubelet.service: Consumed 128ms CPU time, 102.1M memory peak. 
Feb 13 19:52:58.742599 containerd[1753]: time="2025-02-13T19:52:58.742539295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:58.745267 containerd[1753]: time="2025-02-13T19:52:58.745212620Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812429" Feb 13 19:52:58.783040 containerd[1753]: time="2025-02-13T19:52:58.782953728Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:58.790852 containerd[1753]: time="2025-02-13T19:52:58.790775302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:58.792546 containerd[1753]: time="2025-02-13T19:52:58.792063704Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.990283764s" Feb 13 19:52:58.792546 containerd[1753]: time="2025-02-13T19:52:58.792102224Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Feb 13 19:53:04.334318 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:53:04.334971 systemd[1]: kubelet.service: Consumed 128ms CPU time, 102.1M memory peak. Feb 13 19:53:04.343863 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:53:04.371356 systemd[1]: Reload requested from client PID 2890 ('systemctl') (unit session-9.scope)... Feb 13 19:53:04.371373 systemd[1]: Reloading... Feb 13 19:53:04.485652 zram_generator::config[2938]: No configuration found. Feb 13 19:53:04.593545 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:53:04.694333 systemd[1]: Reloading finished in 322 ms. Feb 13 19:53:04.970384 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:53:04.970496 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:53:04.970792 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:53:04.977991 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:53:05.866222 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:53:05.872024 (kubelet)[3001]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:53:05.912823 kubelet[3001]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:53:05.913208 kubelet[3001]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Feb 13 19:53:05.913263 kubelet[3001]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:53:05.913430 kubelet[3001]: I0213 19:53:05.913397 3001 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:53:06.884375 kubelet[3001]: I0213 19:53:06.884329 3001 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:53:06.884375 kubelet[3001]: I0213 19:53:06.884365 3001 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:53:06.884679 kubelet[3001]: I0213 19:53:06.884659 3001 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:53:06.904607 kubelet[3001]: E0213 19:53:06.904558 3001 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:53:06.908690 kubelet[3001]: I0213 19:53:06.907498 3001 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:53:06.915711 kubelet[3001]: E0213 19:53:06.915670 3001 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:53:06.916115 kubelet[3001]: I0213 19:53:06.916096 3001 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:53:06.919537 kubelet[3001]: I0213 19:53:06.919508 3001 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:53:06.920564 kubelet[3001]: I0213 19:53:06.920522 3001 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:53:06.920876 kubelet[3001]: I0213 19:53:06.920681 3001 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.0.1-a-4092b3335a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:53:06.921017 kubelet[3001]: I0213 19:53:06.921003 3001 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:53:06.921072 kubelet[3001]: I0213 19:53:06.921064 3001 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:53:06.921253 kubelet[3001]: I0213 19:53:06.921238 3001 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:53:06.924239 kubelet[3001]: I0213 19:53:06.924216 3001 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:53:06.924355 kubelet[3001]: I0213 19:53:06.924343 3001 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:53:06.924428 kubelet[3001]: I0213 19:53:06.924419 3001 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:53:06.924492 kubelet[3001]: I0213 19:53:06.924483 3001 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:53:06.927991 kubelet[3001]: I0213 19:53:06.927960 3001 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:53:06.928449 kubelet[3001]: I0213 19:53:06.928427 3001 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:53:06.928519 kubelet[3001]: W0213 19:53:06.928488 3001 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
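The nodeConfig blob a few records above is the container manager's configuration serialized into the log; its HardEvictionThresholds carry the kubelet defaults (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). A trimmed Go sketch of decoding those thresholds — editorial and simplified; the real kubelet types use resource.Quantity rather than plain strings:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    )

    // Trimmed editorial model of the nodeConfig blob; only what is needed
    // to read the eviction thresholds is represented here.
    type threshold struct {
    	Signal   string
    	Operator string
    	Value    struct {
    		Quantity   *string // e.g. "100Mi"; null when Percentage is used
    		Percentage float64
    	}
    }

    type nodeConfig struct {
    	NodeName               string
    	CgroupDriver           string
    	HardEvictionThresholds []threshold
    }

    func main() {
    	// Excerpt of the JSON from the container_manager_linux.go record.
    	blob := `{"NodeName":"ci-4230.0.1-a-4092b3335a","CgroupDriver":"systemd",
    	 "HardEvictionThresholds":[
    	  {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
    	  {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}}]}`

    	var cfg nodeConfig
    	if err := json.Unmarshal([]byte(blob), &cfg); err != nil {
    		log.Fatal(err)
    	}
    	for _, t := range cfg.HardEvictionThresholds {
    		if t.Value.Quantity != nil {
    			fmt.Printf("%s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
    		} else {
    			fmt.Printf("%s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
    		}
    	}
    }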
Feb 13 19:53:06.929070 kubelet[3001]: I0213 19:53:06.929046 3001 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:53:06.929405 kubelet[3001]: I0213 19:53:06.929136 3001 server.go:1287] "Started kubelet" Feb 13 19:53:06.929405 kubelet[3001]: W0213 19:53:06.929273 3001 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.1-a-4092b3335a&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Feb 13 19:53:06.929405 kubelet[3001]: E0213 19:53:06.929320 3001 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.1-a-4092b3335a&limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:53:06.933794 kubelet[3001]: I0213 19:53:06.933761 3001 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:53:06.934483 kubelet[3001]: W0213 19:53:06.934431 3001 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Feb 13 19:53:06.934603 kubelet[3001]: E0213 19:53:06.934586 3001 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:53:06.935766 kubelet[3001]: I0213 19:53:06.935739 3001 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:53:06.937648 kubelet[3001]: I0213 19:53:06.936705 3001 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:53:06.937648 kubelet[3001]: I0213 19:53:06.937495 3001 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:53:06.937937 kubelet[3001]: I0213 19:53:06.937920 3001 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:53:06.938314 kubelet[3001]: I0213 19:53:06.938290 3001 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:53:06.939542 kubelet[3001]: I0213 19:53:06.939519 3001 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:53:06.939922 kubelet[3001]: E0213 19:53:06.939786 3001 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.0.1-a-4092b3335a.1823dc8eca163b13 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.0.1-a-4092b3335a,UID:ci-4230.0.1-a-4092b3335a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.0.1-a-4092b3335a,},FirstTimestamp:2025-02-13 19:53:06.929064723 +0000 UTC m=+1.053507311,LastTimestamp:2025-02-13 19:53:06.929064723 +0000 UTC m=+1.053507311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.0.1-a-4092b3335a,}" Feb 13 19:53:06.940413 kubelet[3001]: E0213 19:53:06.940375 3001 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-4092b3335a\" not found" Feb 13 19:53:06.941210 kubelet[3001]: I0213 19:53:06.941181 3001 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:53:06.941282 kubelet[3001]: I0213 19:53:06.941240 3001 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:53:06.941354 kubelet[3001]: E0213 19:53:06.941324 3001 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.1-a-4092b3335a?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="200ms" Feb 13 19:53:06.941639 kubelet[3001]: I0213 19:53:06.941590 3001 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:53:06.941786 kubelet[3001]: I0213 19:53:06.941760 3001 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:53:06.942555 kubelet[3001]: E0213 19:53:06.942531 3001 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:53:06.942915 kubelet[3001]: W0213 19:53:06.942875 3001 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Feb 13 19:53:06.942993 kubelet[3001]: E0213 19:53:06.942930 3001 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:53:06.943060 kubelet[3001]: I0213 19:53:06.943040 3001 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:53:06.963318 kubelet[3001]: I0213 19:53:06.963087 3001 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:53:06.963318 kubelet[3001]: I0213 19:53:06.963113 3001 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:53:06.963318 kubelet[3001]: I0213 19:53:06.963133 3001 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:53:06.965334 kubelet[3001]: I0213 19:53:06.965296 3001 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:53:06.966716 kubelet[3001]: I0213 19:53:06.966506 3001 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:53:06.966716 kubelet[3001]: I0213 19:53:06.966532 3001 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:53:06.966716 kubelet[3001]: I0213 19:53:06.966556 3001 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
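Every failure in this stretch — the reflector list/watch calls, the event post, the lease request — is one condition seen from different callers: nothing is listening on 10.200.20.12:6443 yet, because the API server this kubelet needs is among the static pods it is about to start. A stdlib-only Go sketch of the probe these errors reduce to (editorial illustration):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Probe the endpoint every reflector above is failing against. Until
    	// the static kube-apiserver pod is up, this fails with
    	// "connect: connection refused", exactly as wrapped in the kubelet log.
    	conn, err := net.DialTimeout("tcp", "10.200.20.12:6443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is accepting connections")
    }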
Feb 13 19:53:06.966716 kubelet[3001]: I0213 19:53:06.966565 3001 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:53:06.966716 kubelet[3001]: E0213 19:53:06.966604 3001 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:53:06.971125 kubelet[3001]: I0213 19:53:06.970821 3001 policy_none.go:49] "None policy: Start" Feb 13 19:53:06.971125 kubelet[3001]: I0213 19:53:06.970845 3001 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:53:06.971125 kubelet[3001]: I0213 19:53:06.970856 3001 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:53:06.972863 kubelet[3001]: W0213 19:53:06.972811 3001 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Feb 13 19:53:06.973684 kubelet[3001]: E0213 19:53:06.972872 3001 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:53:06.978773 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:53:06.998482 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:53:07.008990 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:53:07.010464 kubelet[3001]: I0213 19:53:07.010427 3001 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:53:07.011059 kubelet[3001]: I0213 19:53:07.010665 3001 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:53:07.011059 kubelet[3001]: I0213 19:53:07.010683 3001 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:53:07.011059 kubelet[3001]: I0213 19:53:07.010921 3001 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:53:07.012314 kubelet[3001]: E0213 19:53:07.012289 3001 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 19:53:07.013075 kubelet[3001]: E0213 19:53:07.012497 3001 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.0.1-a-4092b3335a\" not found" Feb 13 19:53:07.078267 systemd[1]: Created slice kubepods-burstable-poddb07326c730f9fc27d440b6e383b9810.slice - libcontainer container kubepods-burstable-poddb07326c730f9fc27d440b6e383b9810.slice. Feb 13 19:53:07.096084 kubelet[3001]: E0213 19:53:07.095992 3001 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.0.1-a-4092b3335a\" not found" node="ci-4230.0.1-a-4092b3335a" Feb 13 19:53:07.099921 systemd[1]: Created slice kubepods-burstable-pod268cf901c7f61aa9173d8531ad4a50b4.slice - libcontainer container kubepods-burstable-pod268cf901c7f61aa9173d8531ad4a50b4.slice. 
Feb 13 19:53:07.101914 kubelet[3001]: E0213 19:53:07.101889 3001 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.0.1-a-4092b3335a\" not found" node="ci-4230.0.1-a-4092b3335a" Feb 13 19:53:07.105016 systemd[1]: Created slice kubepods-burstable-pod2319ce70e449a8c319e699e35e557cc0.slice - libcontainer container kubepods-burstable-pod2319ce70e449a8c319e699e35e557cc0.slice. Feb 13 19:53:07.107578 kubelet[3001]: E0213 19:53:07.107393 3001 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.0.1-a-4092b3335a\" not found" node="ci-4230.0.1-a-4092b3335a" Feb 13 19:53:07.112955 kubelet[3001]: I0213 19:53:07.112902 3001 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.0.1-a-4092b3335a" Feb 13 19:53:07.113310 kubelet[3001]: E0213 19:53:07.113283 3001 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4230.0.1-a-4092b3335a" Feb 13 19:53:07.142018 kubelet[3001]: E0213 19:53:07.141877 3001 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.1-a-4092b3335a?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="400ms" Feb 13 19:53:07.143288 kubelet[3001]: I0213 19:53:07.143005 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db07326c730f9fc27d440b6e383b9810-ca-certs\") pod \"kube-controller-manager-ci-4230.0.1-a-4092b3335a\" (UID: \"db07326c730f9fc27d440b6e383b9810\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:07.143288 kubelet[3001]: I0213 19:53:07.143036 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db07326c730f9fc27d440b6e383b9810-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.0.1-a-4092b3335a\" (UID: \"db07326c730f9fc27d440b6e383b9810\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:07.143288 kubelet[3001]: I0213 19:53:07.143064 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db07326c730f9fc27d440b6e383b9810-k8s-certs\") pod \"kube-controller-manager-ci-4230.0.1-a-4092b3335a\" (UID: \"db07326c730f9fc27d440b6e383b9810\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:07.143288 kubelet[3001]: I0213 19:53:07.143081 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db07326c730f9fc27d440b6e383b9810-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.0.1-a-4092b3335a\" (UID: \"db07326c730f9fc27d440b6e383b9810\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:07.143288 kubelet[3001]: I0213 19:53:07.143099 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2319ce70e449a8c319e699e35e557cc0-k8s-certs\") pod \"kube-apiserver-ci-4230.0.1-a-4092b3335a\" (UID: \"2319ce70e449a8c319e699e35e557cc0\") " pod="kube-system/kube-apiserver-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:07.143457 kubelet[3001]: I0213 19:53:07.143114 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db07326c730f9fc27d440b6e383b9810-kubeconfig\") pod \"kube-controller-manager-ci-4230.0.1-a-4092b3335a\" (UID: \"db07326c730f9fc27d440b6e383b9810\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:07.143457 kubelet[3001]: I0213 19:53:07.143129 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/268cf901c7f61aa9173d8531ad4a50b4-kubeconfig\") pod \"kube-scheduler-ci-4230.0.1-a-4092b3335a\" (UID: \"268cf901c7f61aa9173d8531ad4a50b4\") " pod="kube-system/kube-scheduler-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:07.143457 kubelet[3001]: I0213 19:53:07.143147 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2319ce70e449a8c319e699e35e557cc0-ca-certs\") pod \"kube-apiserver-ci-4230.0.1-a-4092b3335a\" (UID: \"2319ce70e449a8c319e699e35e557cc0\") " pod="kube-system/kube-apiserver-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:07.143457 kubelet[3001]: I0213 19:53:07.143163 3001 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2319ce70e449a8c319e699e35e557cc0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.0.1-a-4092b3335a\" (UID: \"2319ce70e449a8c319e699e35e557cc0\") " pod="kube-system/kube-apiserver-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:07.315372 kubelet[3001]: I0213 19:53:07.315325 3001 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.0.1-a-4092b3335a" Feb 13 19:53:07.315778 kubelet[3001]: E0213 19:53:07.315743 3001 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4230.0.1-a-4092b3335a" Feb 13 19:53:07.398398 containerd[1753]: time="2025-02-13T19:53:07.398079076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.0.1-a-4092b3335a,Uid:db07326c730f9fc27d440b6e383b9810,Namespace:kube-system,Attempt:0,}" Feb 13 19:53:07.403162 containerd[1753]: time="2025-02-13T19:53:07.403123604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.0.1-a-4092b3335a,Uid:268cf901c7f61aa9173d8531ad4a50b4,Namespace:kube-system,Attempt:0,}" Feb 13 19:53:07.409095 containerd[1753]: time="2025-02-13T19:53:07.409046815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.0.1-a-4092b3335a,Uid:2319ce70e449a8c319e699e35e557cc0,Namespace:kube-system,Attempt:0,}" Feb 13 19:53:07.542539 kubelet[3001]: E0213 19:53:07.542489 3001 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.1-a-4092b3335a?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="800ms" Feb 13 19:53:07.718385 kubelet[3001]: I0213 19:53:07.718035 3001 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.0.1-a-4092b3335a" Feb 13 19:53:07.718519 kubelet[3001]: E0213 19:53:07.718396 3001 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4230.0.1-a-4092b3335a" Feb 13 19:53:07.839508 kubelet[3001]: W0213 19:53:07.839449 3001 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.1-a-4092b3335a&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Feb 13 19:53:07.839748 kubelet[3001]: E0213 19:53:07.839720 3001 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.1-a-4092b3335a&limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:53:07.982844 kubelet[3001]: W0213 19:53:07.982681 3001 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Feb 13 19:53:07.982844 kubelet[3001]: E0213 19:53:07.982748 3001 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:53:08.101267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount707312426.mount: Deactivated successfully.
Feb 13 19:53:08.145899 containerd[1753]: time="2025-02-13T19:53:08.145838283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:53:08.161076 containerd[1753]: time="2025-02-13T19:53:08.161021670Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 19:53:08.167472 containerd[1753]: time="2025-02-13T19:53:08.167427241Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:53:08.176287 containerd[1753]: time="2025-02-13T19:53:08.175162855Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:53:08.184664 containerd[1753]: time="2025-02-13T19:53:08.184487991Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:53:08.193867 containerd[1753]: time="2025-02-13T19:53:08.193817768Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:53:08.195471 containerd[1753]: time="2025-02-13T19:53:08.195433251Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:53:08.200291 containerd[1753]: time="2025-02-13T19:53:08.200193019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:53:08.201383 containerd[1753]: time="2025-02-13T19:53:08.201129501Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 797.914416ms" Feb 13 19:53:08.203226 containerd[1753]: time="2025-02-13T19:53:08.203083065Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 804.926348ms" Feb 13 19:53:08.211443 containerd[1753]: time="2025-02-13T19:53:08.211398799Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 802.271184ms" Feb 13 19:53:08.343457 kubelet[3001]: E0213 19:53:08.343068 3001 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.1-a-4092b3335a?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="1.6s" 
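Note how the lease controller's retry interval doubles across these records: 200ms at 19:53:06.94, 400ms at 19:53:07.14, 800ms at 19:53:07.54, and 1.6s here — exponential backoff while the API server stays unreachable. A toy Go sketch of the doubling pattern (an illustration of what the log shows, not kubelet source):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Reproduces the doubling visible in the "Failed to ensure lease
    	// exists, will retry" records: 200ms -> 400ms -> 800ms -> 1.6s.
    	interval := 200 * time.Millisecond
    	for i := 0; i < 4; i++ {
    		fmt.Println("retry interval:", interval)
    		interval *= 2
    	}
    }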
Feb 13 19:53:08.419036 kubelet[3001]: W0213 19:53:08.418939 3001 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Feb 13 19:53:08.419036 kubelet[3001]: E0213 19:53:08.419006 3001 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:53:08.419510 kubelet[3001]: W0213 19:53:08.419449 3001 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Feb 13 19:53:08.419510 kubelet[3001]: E0213 19:53:08.419495 3001 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:53:08.521025 kubelet[3001]: I0213 19:53:08.520997 3001 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.0.1-a-4092b3335a" Feb 13 19:53:08.521604 kubelet[3001]: E0213 19:53:08.521578 3001 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4230.0.1-a-4092b3335a" Feb 13 19:53:08.918209 kubelet[3001]: E0213 19:53:08.918107 3001 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.0.1-a-4092b3335a.1823dc8eca163b13 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.0.1-a-4092b3335a,UID:ci-4230.0.1-a-4092b3335a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.0.1-a-4092b3335a,},FirstTimestamp:2025-02-13 19:53:06.929064723 +0000 UTC m=+1.053507311,LastTimestamp:2025-02-13 19:53:06.929064723 +0000 UTC m=+1.053507311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.0.1-a-4092b3335a,}" Feb 13 19:53:09.012430 containerd[1753]: time="2025-02-13T19:53:09.012181241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:53:09.012430 containerd[1753]: time="2025-02-13T19:53:09.012259162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:53:09.012430 containerd[1753]: time="2025-02-13T19:53:09.012270042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:09.012430 containerd[1753]: time="2025-02-13T19:53:09.012350162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:09.016968 containerd[1753]: time="2025-02-13T19:53:09.016858449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:53:09.016968 containerd[1753]: time="2025-02-13T19:53:09.016932329Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:53:09.017143 containerd[1753]: time="2025-02-13T19:53:09.016944809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:09.017143 containerd[1753]: time="2025-02-13T19:53:09.017019889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:09.020041 containerd[1753]: time="2025-02-13T19:53:09.019978694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:53:09.020223 containerd[1753]: time="2025-02-13T19:53:09.020196094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:53:09.020342 containerd[1753]: time="2025-02-13T19:53:09.020316695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:09.021043 containerd[1753]: time="2025-02-13T19:53:09.021001696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:09.073866 systemd[1]: Started cri-containerd-8d6beee477b40d025b41da7b3bc60e2fee8e253e78846643cfdda3d349f8328a.scope - libcontainer container 8d6beee477b40d025b41da7b3bc60e2fee8e253e78846643cfdda3d349f8328a. Feb 13 19:53:09.077467 kubelet[3001]: E0213 19:53:09.077409 3001 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:53:09.079742 systemd[1]: Started cri-containerd-522a4eea1960715f8328ba48a6c6c926385802f692b98b132e37b513bd152947.scope - libcontainer container 522a4eea1960715f8328ba48a6c6c926385802f692b98b132e37b513bd152947. Feb 13 19:53:09.081364 systemd[1]: Started cri-containerd-b205d3957f22f8e9b89b381dada42f50f833d63bdb6ea3001cb66082f5bec540.scope - libcontainer container b205d3957f22f8e9b89b381dada42f50f833d63bdb6ea3001cb66082f5bec540.
Feb 13 19:53:09.136473 containerd[1753]: time="2025-02-13T19:53:09.136156483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.0.1-a-4092b3335a,Uid:2319ce70e449a8c319e699e35e557cc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"522a4eea1960715f8328ba48a6c6c926385802f692b98b132e37b513bd152947\"" Feb 13 19:53:09.143223 containerd[1753]: time="2025-02-13T19:53:09.142890454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.0.1-a-4092b3335a,Uid:268cf901c7f61aa9173d8531ad4a50b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d6beee477b40d025b41da7b3bc60e2fee8e253e78846643cfdda3d349f8328a\"" Feb 13 19:53:09.143716 containerd[1753]: time="2025-02-13T19:53:09.143479295Z" level=info msg="CreateContainer within sandbox \"522a4eea1960715f8328ba48a6c6c926385802f692b98b132e37b513bd152947\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:53:09.145856 containerd[1753]: time="2025-02-13T19:53:09.145762259Z" level=info msg="CreateContainer within sandbox \"8d6beee477b40d025b41da7b3bc60e2fee8e253e78846643cfdda3d349f8328a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:53:09.150097 containerd[1753]: time="2025-02-13T19:53:09.150004546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.0.1-a-4092b3335a,Uid:db07326c730f9fc27d440b6e383b9810,Namespace:kube-system,Attempt:0,} returns sandbox id \"b205d3957f22f8e9b89b381dada42f50f833d63bdb6ea3001cb66082f5bec540\"" Feb 13 19:53:09.152810 containerd[1753]: time="2025-02-13T19:53:09.152779830Z" level=info msg="CreateContainer within sandbox \"b205d3957f22f8e9b89b381dada42f50f833d63bdb6ea3001cb66082f5bec540\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:53:09.184044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount997133411.mount: Deactivated successfully. Feb 13 19:53:09.205465 containerd[1753]: time="2025-02-13T19:53:09.205417876Z" level=info msg="CreateContainer within sandbox \"522a4eea1960715f8328ba48a6c6c926385802f692b98b132e37b513bd152947\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8c9b89537c4c9a5937f50f09009687606c50982d47d81ad55e89d6f2d4c326be\"" Feb 13 19:53:09.206430 containerd[1753]: time="2025-02-13T19:53:09.206106237Z" level=info msg="StartContainer for \"8c9b89537c4c9a5937f50f09009687606c50982d47d81ad55e89d6f2d4c326be\"" Feb 13 19:53:09.228424 containerd[1753]: time="2025-02-13T19:53:09.228285513Z" level=info msg="CreateContainer within sandbox \"8d6beee477b40d025b41da7b3bc60e2fee8e253e78846643cfdda3d349f8328a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0e22b0f5e4df844982724c6416678e0de3c352cfce4f8ef1a5603b4c7972a0b6\"" Feb 13 19:53:09.228958 systemd[1]: Started cri-containerd-8c9b89537c4c9a5937f50f09009687606c50982d47d81ad55e89d6f2d4c326be.scope - libcontainer container 8c9b89537c4c9a5937f50f09009687606c50982d47d81ad55e89d6f2d4c326be. 
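The RunPodSandbox → CreateContainer → StartContainer progression in these containerd records is the CRI RuntimeService call order the kubelet follows for each static pod. A minimal Go sketch of a client on that same interface, issuing only the read-only Version call — editorial, assuming k8s.io/cri-api and google.golang.org/grpc (grpc.NewClient needs grpc-go >= 1.63; older releases use grpc.Dial):

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Connect to containerd's CRI socket; RunPodSandbox, CreateContainer
    	// and StartContainer in the log are calls on this same RuntimeService.
    	conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	v, err := rt.Version(context.Background(), &runtimeapi.VersionRequest{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(v.RuntimeName, v.RuntimeVersion) // e.g. containerd v1.7.23
    }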
Feb 13 19:53:09.230214 containerd[1753]: time="2025-02-13T19:53:09.229692996Z" level=info msg="StartContainer for \"0e22b0f5e4df844982724c6416678e0de3c352cfce4f8ef1a5603b4c7972a0b6\"" Feb 13 19:53:09.235173 containerd[1753]: time="2025-02-13T19:53:09.233564762Z" level=info msg="CreateContainer within sandbox \"b205d3957f22f8e9b89b381dada42f50f833d63bdb6ea3001cb66082f5bec540\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c913ab0100d32ea3ab0e72119be93481381d84845869cea13c775fbfbb07b1ab\"" Feb 13 19:53:09.235874 containerd[1753]: time="2025-02-13T19:53:09.235709006Z" level=info msg="StartContainer for \"c913ab0100d32ea3ab0e72119be93481381d84845869cea13c775fbfbb07b1ab\"" Feb 13 19:53:09.261881 systemd[1]: Started cri-containerd-0e22b0f5e4df844982724c6416678e0de3c352cfce4f8ef1a5603b4c7972a0b6.scope - libcontainer container 0e22b0f5e4df844982724c6416678e0de3c352cfce4f8ef1a5603b4c7972a0b6. Feb 13 19:53:09.285780 containerd[1753]: time="2025-02-13T19:53:09.285710247Z" level=info msg="StartContainer for \"8c9b89537c4c9a5937f50f09009687606c50982d47d81ad55e89d6f2d4c326be\" returns successfully" Feb 13 19:53:09.287123 systemd[1]: Started cri-containerd-c913ab0100d32ea3ab0e72119be93481381d84845869cea13c775fbfbb07b1ab.scope - libcontainer container c913ab0100d32ea3ab0e72119be93481381d84845869cea13c775fbfbb07b1ab. Feb 13 19:53:09.340005 containerd[1753]: time="2025-02-13T19:53:09.337157651Z" level=info msg="StartContainer for \"0e22b0f5e4df844982724c6416678e0de3c352cfce4f8ef1a5603b4c7972a0b6\" returns successfully" Feb 13 19:53:09.356357 containerd[1753]: time="2025-02-13T19:53:09.356120562Z" level=info msg="StartContainer for \"c913ab0100d32ea3ab0e72119be93481381d84845869cea13c775fbfbb07b1ab\" returns successfully" Feb 13 19:53:09.983125 kubelet[3001]: E0213 19:53:09.982896 3001 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.0.1-a-4092b3335a\" not found" node="ci-4230.0.1-a-4092b3335a" Feb 13 19:53:09.986577 kubelet[3001]: E0213 19:53:09.986543 3001 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.0.1-a-4092b3335a\" not found" node="ci-4230.0.1-a-4092b3335a" Feb 13 19:53:09.988655 kubelet[3001]: E0213 19:53:09.988160 3001 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.0.1-a-4092b3335a\" not found" node="ci-4230.0.1-a-4092b3335a" Feb 13 19:53:10.123646 kubelet[3001]: I0213 19:53:10.123602 3001 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.0.1-a-4092b3335a" Feb 13 19:53:10.992063 kubelet[3001]: E0213 19:53:10.992031 3001 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.0.1-a-4092b3335a\" not found" node="ci-4230.0.1-a-4092b3335a" Feb 13 19:53:10.993892 kubelet[3001]: E0213 19:53:10.993868 3001 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.0.1-a-4092b3335a\" not found" node="ci-4230.0.1-a-4092b3335a" Feb 13 19:53:12.075842 kubelet[3001]: E0213 19:53:12.075800 3001 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.0.1-a-4092b3335a\" not found" node="ci-4230.0.1-a-4092b3335a" Feb 13 19:53:12.240405 kubelet[3001]: I0213 19:53:12.240360 3001 kubelet_node_status.go:79] "Successfully registered node" node="ci-4230.0.1-a-4092b3335a" Feb 13 
19:53:12.240405 kubelet[3001]: E0213 19:53:12.240405 3001 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ci-4230.0.1-a-4092b3335a\": node \"ci-4230.0.1-a-4092b3335a\" not found" Feb 13 19:53:12.341513 kubelet[3001]: I0213 19:53:12.341162 3001 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:12.357280 kubelet[3001]: E0213 19:53:12.357235 3001 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.0.1-a-4092b3335a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:12.357280 kubelet[3001]: I0213 19:53:12.357275 3001 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:12.360025 kubelet[3001]: E0213 19:53:12.359791 3001 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.0.1-a-4092b3335a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:12.360025 kubelet[3001]: I0213 19:53:12.359832 3001 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:12.368190 kubelet[3001]: E0213 19:53:12.368154 3001 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.0.1-a-4092b3335a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:12.553732 kubelet[3001]: I0213 19:53:12.553425 3001 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:12.555731 kubelet[3001]: E0213 19:53:12.555698 3001 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.0.1-a-4092b3335a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:12.861727 kubelet[3001]: I0213 19:53:12.861338 3001 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:12.863478 kubelet[3001]: E0213 19:53:12.863452 3001 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.0.1-a-4092b3335a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:12.937467 kubelet[3001]: I0213 19:53:12.937420 3001 apiserver.go:52] "Watching apiserver" Feb 13 19:53:12.942198 kubelet[3001]: I0213 19:53:12.942158 3001 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:53:13.594999 kubelet[3001]: I0213 19:53:13.594775 3001 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:13.607473 kubelet[3001]: W0213 19:53:13.607413 3001 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 19:53:14.387082 systemd[1]: Reload requested from client PID 3278 ('systemctl') (unit session-9.scope)... Feb 13 19:53:14.387104 systemd[1]: Reloading... 
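The repeated "no PriorityClass with name system-node-critical was found" failures above are a bootstrap race, not a fault: the kubelet cannot create mirror pods for its static control-plane pods until the API server's bootstrap controller has installed the built-in priority classes, and the retries eventually succeed. For illustration only, the equivalent object could be created by hand with client-go as in the sketch below; in a real cluster this class is created automatically, 2000001000 is its built-in value, and the kubeconfig path is an assumption.

```go
// Illustrative only: system-node-critical is normally installed by the
// API server's bootstrap controller, not created by hand.
package main

import (
	"context"
	"log"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pc := &schedulingv1.PriorityClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "system-node-critical"},
		Value:       2000001000, // built-in value for node-critical workloads
		Description: "Used for system critical pods that must not be moved from their current node.",
	}
	if _, err := cs.SchedulingV1().PriorityClasses().Create(context.Background(), pc, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
}
```

The kubelet restart that follows re-executes the binary with flags it immediately warns are deprecated (--container-runtime-endpoint, --volume-plugin-dir, --pod-infra-container-image). The first two have config-file equivalents; below is a sketch emitting them with the upstream KubeletConfiguration Go types, where the endpoint and plugin directory are assumptions rather than values read from this host. --pod-infra-container-image has no config-file replacement, since sandbox-image handling moves to the CRI.

```go
// Sketch: config-file equivalents of the deprecated kubelet flags
// warned about in the restart below. Paths are assumptions.
package main

import (
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg := kubeletv1beta1.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "kubelet.config.k8s.io/v1beta1",
			Kind:       "KubeletConfiguration",
		},
		// replaces --container-runtime-endpoint
		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
		// replaces --volume-plugin-dir
		VolumePluginDir: "/var/lib/kubelet/volumeplugins",
	}
	out, err := yaml.Marshal(cfg)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}
```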
Feb 13 19:53:14.508683 zram_generator::config[3328]: No configuration found. Feb 13 19:53:14.608883 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:53:14.719844 systemd[1]: Reloading finished in 332 ms. Feb 13 19:53:14.745812 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:53:14.759657 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:53:14.759946 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:53:14.760005 systemd[1]: kubelet.service: Consumed 1.404s CPU time, 121.9M memory peak. Feb 13 19:53:14.767333 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:53:15.012879 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:53:15.019404 (kubelet)[3389]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:53:15.065872 kubelet[3389]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:53:15.065872 kubelet[3389]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:53:15.065872 kubelet[3389]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:53:15.065872 kubelet[3389]: I0213 19:53:15.064561 3389 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:53:15.071754 kubelet[3389]: I0213 19:53:15.071714 3389 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:53:15.071754 kubelet[3389]: I0213 19:53:15.071744 3389 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:53:15.072040 kubelet[3389]: I0213 19:53:15.072017 3389 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:53:15.073339 kubelet[3389]: I0213 19:53:15.073317 3389 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:53:15.076054 kubelet[3389]: I0213 19:53:15.075861 3389 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:53:15.081990 kubelet[3389]: E0213 19:53:15.081933 3389 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:53:15.081990 kubelet[3389]: I0213 19:53:15.081981 3389 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:53:15.087066 kubelet[3389]: I0213 19:53:15.086274 3389 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:53:15.087066 kubelet[3389]: I0213 19:53:15.086434 3389 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:53:15.087066 kubelet[3389]: I0213 19:53:15.086456 3389 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.0.1-a-4092b3335a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:53:15.087066 kubelet[3389]: I0213 19:53:15.086698 3389 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:53:15.087294 kubelet[3389]: I0213 19:53:15.086707 3389 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:53:15.087294 kubelet[3389]: I0213 19:53:15.086754 3389 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:53:15.087294 kubelet[3389]: I0213 19:53:15.086876 3389 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:53:15.087294 kubelet[3389]: I0213 19:53:15.086889 3389 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:53:15.087294 kubelet[3389]: I0213 19:53:15.086919 3389 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:53:15.087294 kubelet[3389]: I0213 19:53:15.086930 3389 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:53:15.092259 kubelet[3389]: I0213 19:53:15.092206 3389 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:53:15.094557 kubelet[3389]: I0213 19:53:15.094524 3389 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:53:15.098794 kubelet[3389]: I0213 19:53:15.098760 3389 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:53:15.098896 kubelet[3389]: I0213 19:53:15.098805 3389 server.go:1287] "Started kubelet" Feb 13 19:53:15.104489 kubelet[3389]: I0213 19:53:15.104455 3389 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:53:15.105277 kubelet[3389]: I0213 19:53:15.105246 3389 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:53:15.106833 kubelet[3389]: I0213 19:53:15.106814 3389 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:53:15.115423 kubelet[3389]: I0213 19:53:15.106995 3389 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:53:15.117873 kubelet[3389]: I0213 19:53:15.117811 3389 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:53:15.119542 kubelet[3389]: I0213 19:53:15.108062 3389 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:53:15.122865 kubelet[3389]: I0213 19:53:15.108071 3389 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:53:15.122865 kubelet[3389]: E0213 19:53:15.108193 3389 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-4092b3335a\" not found" Feb 13 19:53:15.122865 kubelet[3389]: I0213 19:53:15.107305 3389 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:53:15.122865 kubelet[3389]: I0213 19:53:15.122275 3389 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:53:15.123224 kubelet[3389]: I0213 19:53:15.121047 3389 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:53:15.123575 kubelet[3389]: I0213 19:53:15.123470 3389 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:53:15.129094 kubelet[3389]: I0213 19:53:15.129068 3389 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:53:15.132161 kubelet[3389]: E0213 19:53:15.131912 3389 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:53:15.154510 kubelet[3389]: I0213 19:53:15.154464 3389 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:53:15.156528 kubelet[3389]: I0213 19:53:15.156492 3389 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:53:15.156528 kubelet[3389]: I0213 19:53:15.156518 3389 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:53:15.156636 kubelet[3389]: I0213 19:53:15.156538 3389 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Feb 13 19:53:15.156636 kubelet[3389]: I0213 19:53:15.156544 3389 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:53:15.156636 kubelet[3389]: E0213 19:53:15.156582 3389 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:53:15.194719 kubelet[3389]: I0213 19:53:15.193749 3389 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:53:15.194719 kubelet[3389]: I0213 19:53:15.193766 3389 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:53:15.194719 kubelet[3389]: I0213 19:53:15.193786 3389 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:53:15.194719 kubelet[3389]: I0213 19:53:15.193948 3389 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:53:15.194719 kubelet[3389]: I0213 19:53:15.193958 3389 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:53:15.194719 kubelet[3389]: I0213 19:53:15.193975 3389 policy_none.go:49] "None policy: Start" Feb 13 19:53:15.194719 kubelet[3389]: I0213 19:53:15.193983 3389 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:53:15.194719 kubelet[3389]: I0213 19:53:15.193991 3389 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:53:15.194719 kubelet[3389]: I0213 19:53:15.194098 3389 state_mem.go:75] "Updated machine memory state" Feb 13 19:53:15.199243 kubelet[3389]: I0213 19:53:15.198719 3389 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:53:15.199243 kubelet[3389]: I0213 19:53:15.198923 3389 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:53:15.199243 kubelet[3389]: I0213 19:53:15.198934 3389 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:53:15.199486 kubelet[3389]: I0213 19:53:15.199451 3389 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:53:15.202105 kubelet[3389]: E0213 19:53:15.202081 3389 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Feb 13 19:53:15.257300 kubelet[3389]: I0213 19:53:15.257261 3389 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:15.257996 kubelet[3389]: I0213 19:53:15.257750 3389 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:15.258585 kubelet[3389]: I0213 19:53:15.258401 3389 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:15.272673 kubelet[3389]: W0213 19:53:15.271593 3389 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 19:53:15.273107 kubelet[3389]: W0213 19:53:15.271596 3389 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 19:53:15.273107 kubelet[3389]: E0213 19:53:15.272999 3389 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.0.1-a-4092b3335a\" already exists" pod="kube-system/kube-scheduler-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:15.273107 kubelet[3389]: W0213 19:53:15.271667 3389 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 19:53:15.302596 kubelet[3389]: I0213 19:53:15.302440 3389 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.0.1-a-4092b3335a" Feb 13 19:53:15.314411 kubelet[3389]: I0213 19:53:15.314352 3389 kubelet_node_status.go:125] "Node was previously registered" node="ci-4230.0.1-a-4092b3335a" Feb 13 19:53:15.314552 kubelet[3389]: I0213 19:53:15.314477 3389 kubelet_node_status.go:79] "Successfully registered node" node="ci-4230.0.1-a-4092b3335a" Feb 13 19:53:15.399857 sudo[3420]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 19:53:15.400138 sudo[3420]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 19:53:15.423123 kubelet[3389]: I0213 19:53:15.423079 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db07326c730f9fc27d440b6e383b9810-kubeconfig\") pod \"kube-controller-manager-ci-4230.0.1-a-4092b3335a\" (UID: \"db07326c730f9fc27d440b6e383b9810\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:15.423123 kubelet[3389]: I0213 19:53:15.423125 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/268cf901c7f61aa9173d8531ad4a50b4-kubeconfig\") pod \"kube-scheduler-ci-4230.0.1-a-4092b3335a\" (UID: \"268cf901c7f61aa9173d8531ad4a50b4\") " pod="kube-system/kube-scheduler-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:15.423123 kubelet[3389]: I0213 19:53:15.423146 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2319ce70e449a8c319e699e35e557cc0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.0.1-a-4092b3335a\" (UID: \"2319ce70e449a8c319e699e35e557cc0\") " pod="kube-system/kube-apiserver-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:15.423715 kubelet[3389]: I0213 
19:53:15.423443 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2319ce70e449a8c319e699e35e557cc0-k8s-certs\") pod \"kube-apiserver-ci-4230.0.1-a-4092b3335a\" (UID: \"2319ce70e449a8c319e699e35e557cc0\") " pod="kube-system/kube-apiserver-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:15.423715 kubelet[3389]: I0213 19:53:15.423550 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db07326c730f9fc27d440b6e383b9810-ca-certs\") pod \"kube-controller-manager-ci-4230.0.1-a-4092b3335a\" (UID: \"db07326c730f9fc27d440b6e383b9810\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:15.423715 kubelet[3389]: I0213 19:53:15.423571 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db07326c730f9fc27d440b6e383b9810-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.0.1-a-4092b3335a\" (UID: \"db07326c730f9fc27d440b6e383b9810\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:15.423715 kubelet[3389]: I0213 19:53:15.423656 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db07326c730f9fc27d440b6e383b9810-k8s-certs\") pod \"kube-controller-manager-ci-4230.0.1-a-4092b3335a\" (UID: \"db07326c730f9fc27d440b6e383b9810\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:15.423715 kubelet[3389]: I0213 19:53:15.423676 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db07326c730f9fc27d440b6e383b9810-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.0.1-a-4092b3335a\" (UID: \"db07326c730f9fc27d440b6e383b9810\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:15.423882 kubelet[3389]: I0213 19:53:15.423692 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2319ce70e449a8c319e699e35e557cc0-ca-certs\") pod \"kube-apiserver-ci-4230.0.1-a-4092b3335a\" (UID: \"2319ce70e449a8c319e699e35e557cc0\") " pod="kube-system/kube-apiserver-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:15.880194 sudo[3420]: pam_unix(sudo:session): session closed for user root Feb 13 19:53:16.088067 kubelet[3389]: I0213 19:53:16.088026 3389 apiserver.go:52] "Watching apiserver" Feb 13 19:53:16.122753 kubelet[3389]: I0213 19:53:16.122683 3389 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:53:16.133736 kubelet[3389]: I0213 19:53:16.132749 3389 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.0.1-a-4092b3335a" podStartSLOduration=1.132729761 podStartE2EDuration="1.132729761s" podCreationTimestamp="2025-02-13 19:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:53:16.131780079 +0000 UTC m=+1.108524327" watchObservedRunningTime="2025-02-13 19:53:16.132729761 +0000 UTC m=+1.109474009" Feb 13 19:53:16.134096 kubelet[3389]: I0213 19:53:16.133944 3389 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.0.1-a-4092b3335a" podStartSLOduration=3.1339336429999998 podStartE2EDuration="3.133933643s" podCreationTimestamp="2025-02-13 19:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:53:16.116195334 +0000 UTC m=+1.092939582" watchObservedRunningTime="2025-02-13 19:53:16.133933643 +0000 UTC m=+1.110677891" Feb 13 19:53:16.160373 kubelet[3389]: I0213 19:53:16.159720 3389 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.0.1-a-4092b3335a" podStartSLOduration=1.159702685 podStartE2EDuration="1.159702685s" podCreationTimestamp="2025-02-13 19:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:53:16.157785562 +0000 UTC m=+1.134529810" watchObservedRunningTime="2025-02-13 19:53:16.159702685 +0000 UTC m=+1.136446933" Feb 13 19:53:16.181738 kubelet[3389]: I0213 19:53:16.178982 3389 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:16.188133 kubelet[3389]: W0213 19:53:16.188094 3389 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 19:53:16.188715 kubelet[3389]: E0213 19:53:16.188637 3389 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.0.1-a-4092b3335a\" already exists" pod="kube-system/kube-apiserver-ci-4230.0.1-a-4092b3335a" Feb 13 19:53:18.236742 sudo[2452]: pam_unix(sudo:session): session closed for user root Feb 13 19:53:18.333153 sshd[2451]: Connection closed by 10.200.16.10 port 54054 Feb 13 19:53:18.333715 sshd-session[2449]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:18.339326 systemd-logind[1717]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:53:18.339750 systemd[1]: sshd@6-10.200.20.12:22-10.200.16.10:54054.service: Deactivated successfully. Feb 13 19:53:18.342501 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:53:18.343821 systemd[1]: session-9.scope: Consumed 8.095s CPU time, 261.7M memory peak. Feb 13 19:53:18.345629 systemd-logind[1717]: Removed session 9. Feb 13 19:53:20.183244 kubelet[3389]: I0213 19:53:20.183212 3389 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:53:20.183945 kubelet[3389]: I0213 19:53:20.183714 3389 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:53:20.184011 containerd[1753]: time="2025-02-13T19:53:20.183505468Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:53:21.239427 systemd[1]: Created slice kubepods-besteffort-podd61b9e40_6399_4677_a184_f3cb3ae9c032.slice - libcontainer container kubepods-besteffort-podd61b9e40_6399_4677_a184_f3cb3ae9c032.slice. 
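Just above, the kubelet pushes the node's pod CIDR (192.168.0.0/24) to containerd through the CRI runtime config and then waits for a CNI provider, Cilium here, to write its config. The CIDR itself originates on the Node object; a minimal client-go sketch reading it back, with the node name taken from this log and the kubeconfig path assumed:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "ci-4230.0.1-a-4092b3335a", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Expected to print 192.168.0.0/24, matching the kubelet_network.go line above.
	fmt.Println("PodCIDR:", node.Spec.PodCIDR, "PodCIDRs:", node.Spec.PodCIDRs)
}
```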
Feb 13 19:53:21.256555 kubelet[3389]: I0213 19:53:21.256441 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d61b9e40-6399-4677-a184-f3cb3ae9c032-lib-modules\") pod \"kube-proxy-c24d5\" (UID: \"d61b9e40-6399-4677-a184-f3cb3ae9c032\") " pod="kube-system/kube-proxy-c24d5" Feb 13 19:53:21.256555 kubelet[3389]: I0213 19:53:21.256474 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5lmn\" (UniqueName: \"kubernetes.io/projected/d61b9e40-6399-4677-a184-f3cb3ae9c032-kube-api-access-t5lmn\") pod \"kube-proxy-c24d5\" (UID: \"d61b9e40-6399-4677-a184-f3cb3ae9c032\") " pod="kube-system/kube-proxy-c24d5" Feb 13 19:53:21.256555 kubelet[3389]: I0213 19:53:21.256496 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d61b9e40-6399-4677-a184-f3cb3ae9c032-kube-proxy\") pod \"kube-proxy-c24d5\" (UID: \"d61b9e40-6399-4677-a184-f3cb3ae9c032\") " pod="kube-system/kube-proxy-c24d5" Feb 13 19:53:21.256555 kubelet[3389]: I0213 19:53:21.256519 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d61b9e40-6399-4677-a184-f3cb3ae9c032-xtables-lock\") pod \"kube-proxy-c24d5\" (UID: \"d61b9e40-6399-4677-a184-f3cb3ae9c032\") " pod="kube-system/kube-proxy-c24d5" Feb 13 19:53:21.266254 systemd[1]: Created slice kubepods-burstable-podc8663a93_7722_4665_9b64_5be417c5f887.slice - libcontainer container kubepods-burstable-podc8663a93_7722_4665_9b64_5be417c5f887.slice. Feb 13 19:53:21.349207 systemd[1]: Created slice kubepods-besteffort-podf744bc96_f465_451c_a924_5877886bb38c.slice - libcontainer container kubepods-besteffort-podf744bc96_f465_451c_a924_5877886bb38c.slice. 
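The VerifyControllerAttachedVolume lines above map one-to-one onto the volumes declared in the kube-proxy pod spec: two hostPath mounts (lib-modules, xtables-lock), the kube-proxy ConfigMap, and a projected service-account token (kube-api-access-t5lmn). As a rough illustration of how two of those are declared with the corev1 types, where the host path is an assumption and this is a fragment of a pod spec rather than a complete manifest:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	volumes := []corev1.Volume{
		{
			Name: "lib-modules",
			VolumeSource: corev1.VolumeSource{
				// Assumed host path; the log only shows the volume name.
				HostPath: &corev1.HostPathVolumeSource{Path: "/lib/modules"},
			},
		},
		{
			Name: "kube-proxy",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "kube-proxy"},
				},
			},
		},
	}
	for _, v := range volumes {
		fmt.Println("volume:", v.Name)
	}
}
```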
Feb 13 19:53:21.357336 kubelet[3389]: I0213 19:53:21.357175 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-cilium-cgroup\") pod \"cilium-pps7j\" (UID: \"c8663a93-7722-4665-9b64-5be417c5f887\") " pod="kube-system/cilium-pps7j" Feb 13 19:53:21.357336 kubelet[3389]: I0213 19:53:21.357247 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-bpf-maps\") pod \"cilium-pps7j\" (UID: \"c8663a93-7722-4665-9b64-5be417c5f887\") " pod="kube-system/cilium-pps7j" Feb 13 19:53:21.357336 kubelet[3389]: I0213 19:53:21.357267 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87cv4\" (UniqueName: \"kubernetes.io/projected/c8663a93-7722-4665-9b64-5be417c5f887-kube-api-access-87cv4\") pod \"cilium-pps7j\" (UID: \"c8663a93-7722-4665-9b64-5be417c5f887\") " pod="kube-system/cilium-pps7j" Feb 13 19:53:21.357336 kubelet[3389]: I0213 19:53:21.357287 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f744bc96-f465-451c-a924-5877886bb38c-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-nbtkn\" (UID: \"f744bc96-f465-451c-a924-5877886bb38c\") " pod="kube-system/cilium-operator-6c4d7847fc-nbtkn" Feb 13 19:53:21.357336 kubelet[3389]: I0213 19:53:21.357305 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-xtables-lock\") pod \"cilium-pps7j\" (UID: \"c8663a93-7722-4665-9b64-5be417c5f887\") " pod="kube-system/cilium-pps7j" Feb 13 19:53:21.357850 kubelet[3389]: I0213 19:53:21.357325 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c8663a93-7722-4665-9b64-5be417c5f887-cilium-config-path\") pod \"cilium-pps7j\" (UID: \"c8663a93-7722-4665-9b64-5be417c5f887\") " pod="kube-system/cilium-pps7j" Feb 13 19:53:21.357850 kubelet[3389]: I0213 19:53:21.357340 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-host-proc-sys-kernel\") pod \"cilium-pps7j\" (UID: \"c8663a93-7722-4665-9b64-5be417c5f887\") " pod="kube-system/cilium-pps7j" Feb 13 19:53:21.357850 kubelet[3389]: I0213 19:53:21.357355 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-cni-path\") pod \"cilium-pps7j\" (UID: \"c8663a93-7722-4665-9b64-5be417c5f887\") " pod="kube-system/cilium-pps7j" Feb 13 19:53:21.357850 kubelet[3389]: I0213 19:53:21.357369 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-lib-modules\") pod \"cilium-pps7j\" (UID: \"c8663a93-7722-4665-9b64-5be417c5f887\") " pod="kube-system/cilium-pps7j" Feb 13 19:53:21.357850 kubelet[3389]: I0213 19:53:21.357384 3389 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c8663a93-7722-4665-9b64-5be417c5f887-hubble-tls\") pod \"cilium-pps7j\" (UID: \"c8663a93-7722-4665-9b64-5be417c5f887\") " pod="kube-system/cilium-pps7j" Feb 13 19:53:21.357850 kubelet[3389]: I0213 19:53:21.357399 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-etc-cni-netd\") pod \"cilium-pps7j\" (UID: \"c8663a93-7722-4665-9b64-5be417c5f887\") " pod="kube-system/cilium-pps7j" Feb 13 19:53:21.357983 kubelet[3389]: I0213 19:53:21.357416 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-cilium-run\") pod \"cilium-pps7j\" (UID: \"c8663a93-7722-4665-9b64-5be417c5f887\") " pod="kube-system/cilium-pps7j" Feb 13 19:53:21.357983 kubelet[3389]: I0213 19:53:21.357448 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-hostproc\") pod \"cilium-pps7j\" (UID: \"c8663a93-7722-4665-9b64-5be417c5f887\") " pod="kube-system/cilium-pps7j" Feb 13 19:53:21.357983 kubelet[3389]: I0213 19:53:21.357479 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnph7\" (UniqueName: \"kubernetes.io/projected/f744bc96-f465-451c-a924-5877886bb38c-kube-api-access-xnph7\") pod \"cilium-operator-6c4d7847fc-nbtkn\" (UID: \"f744bc96-f465-451c-a924-5877886bb38c\") " pod="kube-system/cilium-operator-6c4d7847fc-nbtkn" Feb 13 19:53:21.357983 kubelet[3389]: I0213 19:53:21.357554 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-host-proc-sys-net\") pod \"cilium-pps7j\" (UID: \"c8663a93-7722-4665-9b64-5be417c5f887\") " pod="kube-system/cilium-pps7j" Feb 13 19:53:21.357983 kubelet[3389]: I0213 19:53:21.357573 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c8663a93-7722-4665-9b64-5be417c5f887-clustermesh-secrets\") pod \"cilium-pps7j\" (UID: \"c8663a93-7722-4665-9b64-5be417c5f887\") " pod="kube-system/cilium-pps7j" Feb 13 19:53:21.548782 containerd[1753]: time="2025-02-13T19:53:21.548659599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c24d5,Uid:d61b9e40-6399-4677-a184-f3cb3ae9c032,Namespace:kube-system,Attempt:0,}" Feb 13 19:53:21.574454 containerd[1753]: time="2025-02-13T19:53:21.574403964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pps7j,Uid:c8663a93-7722-4665-9b64-5be417c5f887,Namespace:kube-system,Attempt:0,}" Feb 13 19:53:21.607883 containerd[1753]: time="2025-02-13T19:53:21.607748062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:53:21.607883 containerd[1753]: time="2025-02-13T19:53:21.607815502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:53:21.607883 containerd[1753]: time="2025-02-13T19:53:21.607849462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:21.608250 containerd[1753]: time="2025-02-13T19:53:21.607936662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:21.626819 systemd[1]: Started cri-containerd-84dee987230935966fade2e904baf448f4c201e7a20b74a80ed13f1431487740.scope - libcontainer container 84dee987230935966fade2e904baf448f4c201e7a20b74a80ed13f1431487740. Feb 13 19:53:21.638875 containerd[1753]: time="2025-02-13T19:53:21.638740075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:53:21.638875 containerd[1753]: time="2025-02-13T19:53:21.638816636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:53:21.639137 containerd[1753]: time="2025-02-13T19:53:21.638843556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:21.639137 containerd[1753]: time="2025-02-13T19:53:21.638971156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:21.656443 containerd[1753]: time="2025-02-13T19:53:21.656392746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-nbtkn,Uid:f744bc96-f465-451c-a924-5877886bb38c,Namespace:kube-system,Attempt:0,}" Feb 13 19:53:21.662191 systemd[1]: Started cri-containerd-ef515ac533053ba6a297971b2791a19a74b4573a8de525ca95b0949198e8f404.scope - libcontainer container ef515ac533053ba6a297971b2791a19a74b4573a8de525ca95b0949198e8f404. Feb 13 19:53:21.663212 containerd[1753]: time="2025-02-13T19:53:21.663152358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c24d5,Uid:d61b9e40-6399-4677-a184-f3cb3ae9c032,Namespace:kube-system,Attempt:0,} returns sandbox id \"84dee987230935966fade2e904baf448f4c201e7a20b74a80ed13f1431487740\"" Feb 13 19:53:21.670494 containerd[1753]: time="2025-02-13T19:53:21.670439490Z" level=info msg="CreateContainer within sandbox \"84dee987230935966fade2e904baf448f4c201e7a20b74a80ed13f1431487740\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:53:21.689361 containerd[1753]: time="2025-02-13T19:53:21.689310803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pps7j,Uid:c8663a93-7722-4665-9b64-5be417c5f887,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef515ac533053ba6a297971b2791a19a74b4573a8de525ca95b0949198e8f404\"" Feb 13 19:53:21.692050 containerd[1753]: time="2025-02-13T19:53:21.692000888Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 19:53:21.710676 containerd[1753]: time="2025-02-13T19:53:21.710539560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:53:21.710676 containerd[1753]: time="2025-02-13T19:53:21.710599040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:53:21.710676 containerd[1753]: time="2025-02-13T19:53:21.710610560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:21.711099 containerd[1753]: time="2025-02-13T19:53:21.710707840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:21.727834 systemd[1]: Started cri-containerd-b0971812217fdda3ed576d333a25e369e4c50a6a7abe8440ed72b140d340daa6.scope - libcontainer container b0971812217fdda3ed576d333a25e369e4c50a6a7abe8440ed72b140d340daa6. Feb 13 19:53:21.729164 containerd[1753]: time="2025-02-13T19:53:21.728856032Z" level=info msg="CreateContainer within sandbox \"84dee987230935966fade2e904baf448f4c201e7a20b74a80ed13f1431487740\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cb31f84e1846fde6f48a14d72412868373d650a246a809f469d7bf0c0f874018\"" Feb 13 19:53:21.730643 containerd[1753]: time="2025-02-13T19:53:21.730306514Z" level=info msg="StartContainer for \"cb31f84e1846fde6f48a14d72412868373d650a246a809f469d7bf0c0f874018\"" Feb 13 19:53:21.761005 systemd[1]: Started cri-containerd-cb31f84e1846fde6f48a14d72412868373d650a246a809f469d7bf0c0f874018.scope - libcontainer container cb31f84e1846fde6f48a14d72412868373d650a246a809f469d7bf0c0f874018. Feb 13 19:53:21.774406 containerd[1753]: time="2025-02-13T19:53:21.774236311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-nbtkn,Uid:f744bc96-f465-451c-a924-5877886bb38c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0971812217fdda3ed576d333a25e369e4c50a6a7abe8440ed72b140d340daa6\"" Feb 13 19:53:21.803316 containerd[1753]: time="2025-02-13T19:53:21.803207041Z" level=info msg="StartContainer for \"cb31f84e1846fde6f48a14d72412868373d650a246a809f469d7bf0c0f874018\" returns successfully" Feb 13 19:53:23.621702 kubelet[3389]: I0213 19:53:23.621584 3389 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c24d5" podStartSLOduration=2.621564318 podStartE2EDuration="2.621564318s" podCreationTimestamp="2025-02-13 19:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:53:22.218825883 +0000 UTC m=+7.195570171" watchObservedRunningTime="2025-02-13 19:53:23.621564318 +0000 UTC m=+8.598308526" Feb 13 19:53:27.625565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3015078996.mount: Deactivated successfully. 
Feb 13 19:53:29.752651 containerd[1753]: time="2025-02-13T19:53:29.752509960Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:29.755674 containerd[1753]: time="2025-02-13T19:53:29.755608845Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 19:53:29.760241 containerd[1753]: time="2025-02-13T19:53:29.760189133Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:29.761997 containerd[1753]: time="2025-02-13T19:53:29.761872176Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.069828888s" Feb 13 19:53:29.761997 containerd[1753]: time="2025-02-13T19:53:29.761906376Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 19:53:29.764178 containerd[1753]: time="2025-02-13T19:53:29.763939379Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 19:53:29.765812 containerd[1753]: time="2025-02-13T19:53:29.765695062Z" level=info msg="CreateContainer within sandbox \"ef515ac533053ba6a297971b2791a19a74b4573a8de525ca95b0949198e8f404\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:53:29.795061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3095465658.mount: Deactivated successfully. Feb 13 19:53:29.804247 containerd[1753]: time="2025-02-13T19:53:29.804205446Z" level=info msg="CreateContainer within sandbox \"ef515ac533053ba6a297971b2791a19a74b4573a8de525ca95b0949198e8f404\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c2a07a265d3fbfac8199d146de185c4a7d8728852c973b4356df2252e32c9c7d\"" Feb 13 19:53:29.805205 containerd[1753]: time="2025-02-13T19:53:29.805122768Z" level=info msg="StartContainer for \"c2a07a265d3fbfac8199d146de185c4a7d8728852c973b4356df2252e32c9c7d\"" Feb 13 19:53:29.838830 systemd[1]: Started cri-containerd-c2a07a265d3fbfac8199d146de185c4a7d8728852c973b4356df2252e32c9c7d.scope - libcontainer container c2a07a265d3fbfac8199d146de185c4a7d8728852c973b4356df2252e32c9c7d. Feb 13 19:53:29.871578 containerd[1753]: time="2025-02-13T19:53:29.871517718Z" level=info msg="StartContainer for \"c2a07a265d3fbfac8199d146de185c4a7d8728852c973b4356df2252e32c9c7d\" returns successfully" Feb 13 19:53:29.879305 systemd[1]: cri-containerd-c2a07a265d3fbfac8199d146de185c4a7d8728852c973b4356df2252e32c9c7d.scope: Deactivated successfully. Feb 13 19:53:30.792569 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2a07a265d3fbfac8199d146de185c4a7d8728852c973b4356df2252e32c9c7d-rootfs.mount: Deactivated successfully. 
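The Cilium agent image above is pulled by digest, roughly 158 MB (157,646,710 bytes read) in about 8.07 s, before the mount-cgroup init container runs from it. The equivalent pull through the containerd Go client, in the k8s.io namespace that CRI-managed images live in, would look roughly like this sketch; the socket path is the conventional one, assumed rather than read from this host:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock") // assumed socket
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	ref := "quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled:", img.Name(), "digest:", img.Target().Digest)
}
```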
Feb 13 19:53:31.089672 containerd[1753]: time="2025-02-13T19:53:31.089433066Z" level=info msg="shim disconnected" id=c2a07a265d3fbfac8199d146de185c4a7d8728852c973b4356df2252e32c9c7d namespace=k8s.io Feb 13 19:53:31.089672 containerd[1753]: time="2025-02-13T19:53:31.089488306Z" level=warning msg="cleaning up after shim disconnected" id=c2a07a265d3fbfac8199d146de185c4a7d8728852c973b4356df2252e32c9c7d namespace=k8s.io Feb 13 19:53:31.089672 containerd[1753]: time="2025-02-13T19:53:31.089498186Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:53:31.220340 containerd[1753]: time="2025-02-13T19:53:31.220088644Z" level=info msg="CreateContainer within sandbox \"ef515ac533053ba6a297971b2791a19a74b4573a8de525ca95b0949198e8f404\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:53:31.268057 containerd[1753]: time="2025-02-13T19:53:31.267998604Z" level=info msg="CreateContainer within sandbox \"ef515ac533053ba6a297971b2791a19a74b4573a8de525ca95b0949198e8f404\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2695f55ec8356ebe4b701bb30bb94893735236204f0bf874cef36696ddb65753\"" Feb 13 19:53:31.268805 containerd[1753]: time="2025-02-13T19:53:31.268761885Z" level=info msg="StartContainer for \"2695f55ec8356ebe4b701bb30bb94893735236204f0bf874cef36696ddb65753\"" Feb 13 19:53:31.297791 systemd[1]: Started cri-containerd-2695f55ec8356ebe4b701bb30bb94893735236204f0bf874cef36696ddb65753.scope - libcontainer container 2695f55ec8356ebe4b701bb30bb94893735236204f0bf874cef36696ddb65753. Feb 13 19:53:31.324319 containerd[1753]: time="2025-02-13T19:53:31.324185657Z" level=info msg="StartContainer for \"2695f55ec8356ebe4b701bb30bb94893735236204f0bf874cef36696ddb65753\" returns successfully" Feb 13 19:53:31.333637 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:53:31.334260 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:53:31.334520 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:53:31.339933 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:53:31.340106 systemd[1]: cri-containerd-2695f55ec8356ebe4b701bb30bb94893735236204f0bf874cef36696ddb65753.scope: Deactivated successfully. Feb 13 19:53:31.359121 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:53:31.374225 containerd[1753]: time="2025-02-13T19:53:31.374160301Z" level=info msg="shim disconnected" id=2695f55ec8356ebe4b701bb30bb94893735236204f0bf874cef36696ddb65753 namespace=k8s.io Feb 13 19:53:31.374225 containerd[1753]: time="2025-02-13T19:53:31.374227861Z" level=warning msg="cleaning up after shim disconnected" id=2695f55ec8356ebe4b701bb30bb94893735236204f0bf874cef36696ddb65753 namespace=k8s.io Feb 13 19:53:31.374374 containerd[1753]: time="2025-02-13T19:53:31.374236301Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:53:31.792911 systemd[1]: run-containerd-runc-k8s.io-2695f55ec8356ebe4b701bb30bb94893735236204f0bf874cef36696ddb65753-runc.EYADLq.mount: Deactivated successfully. Feb 13 19:53:31.793018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2695f55ec8356ebe4b701bb30bb94893735236204f0bf874cef36696ddb65753-rootfs.mount: Deactivated successfully. 
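mount-cgroup and apply-sysctl-overwrites above are Cilium init containers: each starts, runs to completion, its systemd scope deactivates, and containerd's shim tears down, which is what the "shim disconnected" and "cleaning up dead shim" lines record. A hedged sketch of observing that run-to-completion lifecycle with the containerd client follows; the container ID is a placeholder and the image tag is hypothetical and must already be present locally:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock") // assumed socket
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.GetImage(ctx, "quay.io/cilium/cilium:v1.12.5") // hypothetical ref, must be pulled already
	if err != nil {
		log.Fatal(err)
	}
	container, err := client.NewContainer(ctx, "init-demo",
		containerd.WithNewSnapshot("init-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	exitCh, err := task.Wait(ctx) // register before Start so a fast exit is not missed
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	status := <-exitCh // corresponds to "scope: Deactivated successfully" above
	code, _, err := status.Result()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("exit code:", code)
}
```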
Feb 13 19:53:32.224877 containerd[1753]: time="2025-02-13T19:53:32.223352275Z" level=info msg="CreateContainer within sandbox \"ef515ac533053ba6a297971b2791a19a74b4573a8de525ca95b0949198e8f404\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:53:32.310335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3148276931.mount: Deactivated successfully. Feb 13 19:53:32.367481 containerd[1753]: time="2025-02-13T19:53:32.367428075Z" level=info msg="CreateContainer within sandbox \"ef515ac533053ba6a297971b2791a19a74b4573a8de525ca95b0949198e8f404\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0f7c6fe1bdffe72cc3b613fa34d0ff3289da43365755c2b59f880f1cc923b543\"" Feb 13 19:53:32.368508 containerd[1753]: time="2025-02-13T19:53:32.368273356Z" level=info msg="StartContainer for \"0f7c6fe1bdffe72cc3b613fa34d0ff3289da43365755c2b59f880f1cc923b543\"" Feb 13 19:53:32.400845 systemd[1]: Started cri-containerd-0f7c6fe1bdffe72cc3b613fa34d0ff3289da43365755c2b59f880f1cc923b543.scope - libcontainer container 0f7c6fe1bdffe72cc3b613fa34d0ff3289da43365755c2b59f880f1cc923b543. Feb 13 19:53:32.431332 systemd[1]: cri-containerd-0f7c6fe1bdffe72cc3b613fa34d0ff3289da43365755c2b59f880f1cc923b543.scope: Deactivated successfully. Feb 13 19:53:32.434738 containerd[1753]: time="2025-02-13T19:53:32.434576666Z" level=info msg="StartContainer for \"0f7c6fe1bdffe72cc3b613fa34d0ff3289da43365755c2b59f880f1cc923b543\" returns successfully" Feb 13 19:53:32.477199 containerd[1753]: time="2025-02-13T19:53:32.476700497Z" level=info msg="shim disconnected" id=0f7c6fe1bdffe72cc3b613fa34d0ff3289da43365755c2b59f880f1cc923b543 namespace=k8s.io Feb 13 19:53:32.477199 containerd[1753]: time="2025-02-13T19:53:32.476757937Z" level=warning msg="cleaning up after shim disconnected" id=0f7c6fe1bdffe72cc3b613fa34d0ff3289da43365755c2b59f880f1cc923b543 namespace=k8s.io Feb 13 19:53:32.477199 containerd[1753]: time="2025-02-13T19:53:32.476767457Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:53:32.819301 containerd[1753]: time="2025-02-13T19:53:32.819008525Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:32.823355 containerd[1753]: time="2025-02-13T19:53:32.823302213Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 19:53:32.830410 containerd[1753]: time="2025-02-13T19:53:32.830224145Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:32.831943 containerd[1753]: time="2025-02-13T19:53:32.831715628Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.067740409s" Feb 13 19:53:32.831943 containerd[1753]: time="2025-02-13T19:53:32.831750268Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns 
image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 19:53:32.835378 containerd[1753]: time="2025-02-13T19:53:32.835340234Z" level=info msg="CreateContainer within sandbox \"b0971812217fdda3ed576d333a25e369e4c50a6a7abe8440ed72b140d340daa6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 19:53:32.880911 containerd[1753]: time="2025-02-13T19:53:32.880857354Z" level=info msg="CreateContainer within sandbox \"b0971812217fdda3ed576d333a25e369e4c50a6a7abe8440ed72b140d340daa6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"22c1a1236275515cf6eb04ef519a56d154381cae94e2deddc5f29c8d5ec559fe\"" Feb 13 19:53:32.881986 containerd[1753]: time="2025-02-13T19:53:32.881834036Z" level=info msg="StartContainer for \"22c1a1236275515cf6eb04ef519a56d154381cae94e2deddc5f29c8d5ec559fe\"" Feb 13 19:53:32.908818 systemd[1]: Started cri-containerd-22c1a1236275515cf6eb04ef519a56d154381cae94e2deddc5f29c8d5ec559fe.scope - libcontainer container 22c1a1236275515cf6eb04ef519a56d154381cae94e2deddc5f29c8d5ec559fe. Feb 13 19:53:32.935211 containerd[1753]: time="2025-02-13T19:53:32.935160331Z" level=info msg="StartContainer for \"22c1a1236275515cf6eb04ef519a56d154381cae94e2deddc5f29c8d5ec559fe\" returns successfully" Feb 13 19:53:33.232235 containerd[1753]: time="2025-02-13T19:53:33.232176576Z" level=info msg="CreateContainer within sandbox \"ef515ac533053ba6a297971b2791a19a74b4573a8de525ca95b0949198e8f404\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:53:33.280589 containerd[1753]: time="2025-02-13T19:53:33.280526261Z" level=info msg="CreateContainer within sandbox \"ef515ac533053ba6a297971b2791a19a74b4573a8de525ca95b0949198e8f404\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f7db6d54dd1dacee48a68222f3675769f6804ac4ad32905412284e4c8578785c\"" Feb 13 19:53:33.281497 containerd[1753]: time="2025-02-13T19:53:33.281463263Z" level=info msg="StartContainer for \"f7db6d54dd1dacee48a68222f3675769f6804ac4ad32905412284e4c8578785c\"" Feb 13 19:53:33.326224 systemd[1]: Started cri-containerd-f7db6d54dd1dacee48a68222f3675769f6804ac4ad32905412284e4c8578785c.scope - libcontainer container f7db6d54dd1dacee48a68222f3675769f6804ac4ad32905412284e4c8578785c. Feb 13 19:53:33.377548 systemd[1]: cri-containerd-f7db6d54dd1dacee48a68222f3675769f6804ac4ad32905412284e4c8578785c.scope: Deactivated successfully. 
Feb 13 19:53:33.382667 containerd[1753]: time="2025-02-13T19:53:33.382039001Z" level=info msg="StartContainer for \"f7db6d54dd1dacee48a68222f3675769f6804ac4ad32905412284e4c8578785c\" returns successfully" Feb 13 19:53:33.405269 kubelet[3389]: I0213 19:53:33.405209 3389 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-nbtkn" podStartSLOduration=1.349761449 podStartE2EDuration="12.405193722s" podCreationTimestamp="2025-02-13 19:53:21 +0000 UTC" firstStartedPulling="2025-02-13 19:53:21.777104516 +0000 UTC m=+6.753848764" lastFinishedPulling="2025-02-13 19:53:32.832536789 +0000 UTC m=+17.809281037" observedRunningTime="2025-02-13 19:53:33.269745842 +0000 UTC m=+18.246490090" watchObservedRunningTime="2025-02-13 19:53:33.405193722 +0000 UTC m=+18.381937970" Feb 13 19:53:33.687028 containerd[1753]: time="2025-02-13T19:53:33.686893580Z" level=info msg="shim disconnected" id=f7db6d54dd1dacee48a68222f3675769f6804ac4ad32905412284e4c8578785c namespace=k8s.io Feb 13 19:53:33.688654 containerd[1753]: time="2025-02-13T19:53:33.687249941Z" level=warning msg="cleaning up after shim disconnected" id=f7db6d54dd1dacee48a68222f3675769f6804ac4ad32905412284e4c8578785c namespace=k8s.io Feb 13 19:53:33.688654 containerd[1753]: time="2025-02-13T19:53:33.687269821Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:53:34.238643 containerd[1753]: time="2025-02-13T19:53:34.236466032Z" level=info msg="CreateContainer within sandbox \"ef515ac533053ba6a297971b2791a19a74b4573a8de525ca95b0949198e8f404\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:53:34.289319 containerd[1753]: time="2025-02-13T19:53:34.289269125Z" level=info msg="CreateContainer within sandbox \"ef515ac533053ba6a297971b2791a19a74b4573a8de525ca95b0949198e8f404\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"18c03a0fc5098419418be825e043da58bd8bd6a20aa725e21cddfc9ce197f8e7\"" Feb 13 19:53:34.290309 containerd[1753]: time="2025-02-13T19:53:34.290264087Z" level=info msg="StartContainer for \"18c03a0fc5098419418be825e043da58bd8bd6a20aa725e21cddfc9ce197f8e7\"" Feb 13 19:53:34.315843 systemd[1]: Started cri-containerd-18c03a0fc5098419418be825e043da58bd8bd6a20aa725e21cddfc9ce197f8e7.scope - libcontainer container 18c03a0fc5098419418be825e043da58bd8bd6a20aa725e21cddfc9ce197f8e7. Feb 13 19:53:34.352476 containerd[1753]: time="2025-02-13T19:53:34.352432637Z" level=info msg="StartContainer for \"18c03a0fc5098419418be825e043da58bd8bd6a20aa725e21cddfc9ce197f8e7\" returns successfully" Feb 13 19:53:34.534884 kubelet[3389]: I0213 19:53:34.534367 3389 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 19:53:34.614732 systemd[1]: Created slice kubepods-burstable-pode199ca42_471a_4f35_ba4d_0ce0e800d4ba.slice - libcontainer container kubepods-burstable-pode199ca42_471a_4f35_ba4d_0ce0e800d4ba.slice. Feb 13 19:53:34.627663 systemd[1]: Created slice kubepods-burstable-pod8d78d9b0_0dc5_4a92_974b_8191470d5f19.slice - libcontainer container kubepods-burstable-pod8d78d9b0_0dc5_4a92_974b_8191470d5f19.slice. 
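The kubelet_node_status.go:502 line above is the kubelet flipping the node to Ready once cilium-agent is up and CNI is functional; the CoreDNS pods are admitted immediately afterwards, which is what the kubepods-burstable slices just created are for. Checking the NodeReady condition with client-go, kubeconfig path assumed:

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "ci-4230.0.1-a-4092b3335a", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s reason=%s\n", c.Status, c.Reason)
		}
	}
}
```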
Feb 13 19:53:34.642747 kubelet[3389]: I0213 19:53:34.642531 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn2hz\" (UniqueName: \"kubernetes.io/projected/e199ca42-471a-4f35-ba4d-0ce0e800d4ba-kube-api-access-jn2hz\") pod \"coredns-668d6bf9bc-xjc7n\" (UID: \"e199ca42-471a-4f35-ba4d-0ce0e800d4ba\") " pod="kube-system/coredns-668d6bf9bc-xjc7n" Feb 13 19:53:34.642747 kubelet[3389]: I0213 19:53:34.642578 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d78d9b0-0dc5-4a92-974b-8191470d5f19-config-volume\") pod \"coredns-668d6bf9bc-w2fdb\" (UID: \"8d78d9b0-0dc5-4a92-974b-8191470d5f19\") " pod="kube-system/coredns-668d6bf9bc-w2fdb" Feb 13 19:53:34.642747 kubelet[3389]: I0213 19:53:34.642599 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpmq9\" (UniqueName: \"kubernetes.io/projected/8d78d9b0-0dc5-4a92-974b-8191470d5f19-kube-api-access-gpmq9\") pod \"coredns-668d6bf9bc-w2fdb\" (UID: \"8d78d9b0-0dc5-4a92-974b-8191470d5f19\") " pod="kube-system/coredns-668d6bf9bc-w2fdb" Feb 13 19:53:34.642747 kubelet[3389]: I0213 19:53:34.642656 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e199ca42-471a-4f35-ba4d-0ce0e800d4ba-config-volume\") pod \"coredns-668d6bf9bc-xjc7n\" (UID: \"e199ca42-471a-4f35-ba4d-0ce0e800d4ba\") " pod="kube-system/coredns-668d6bf9bc-xjc7n" Feb 13 19:53:34.920481 containerd[1753]: time="2025-02-13T19:53:34.920359641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xjc7n,Uid:e199ca42-471a-4f35-ba4d-0ce0e800d4ba,Namespace:kube-system,Attempt:0,}" Feb 13 19:53:34.933015 containerd[1753]: time="2025-02-13T19:53:34.932971264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w2fdb,Uid:8d78d9b0-0dc5-4a92-974b-8191470d5f19,Namespace:kube-system,Attempt:0,}" Feb 13 19:53:36.495223 systemd-networkd[1346]: cilium_host: Link UP Feb 13 19:53:36.495342 systemd-networkd[1346]: cilium_net: Link UP Feb 13 19:53:36.495476 systemd-networkd[1346]: cilium_net: Gained carrier Feb 13 19:53:36.495582 systemd-networkd[1346]: cilium_host: Gained carrier Feb 13 19:53:36.495700 systemd-networkd[1346]: cilium_net: Gained IPv6LL Feb 13 19:53:36.496747 systemd-networkd[1346]: cilium_host: Gained IPv6LL Feb 13 19:53:36.689381 systemd-networkd[1346]: cilium_vxlan: Link UP Feb 13 19:53:36.689925 systemd-networkd[1346]: cilium_vxlan: Gained carrier Feb 13 19:53:37.058876 kernel: NET: Registered PF_ALG protocol family Feb 13 19:53:37.710036 systemd-networkd[1346]: lxc_health: Link UP Feb 13 19:53:37.721769 systemd-networkd[1346]: lxc_health: Gained carrier Feb 13 19:53:38.018424 systemd-networkd[1346]: lxc2e3222541a12: Link UP Feb 13 19:53:38.026647 kernel: eth0: renamed from tmp6b65b Feb 13 19:53:38.031020 systemd-networkd[1346]: lxc2e3222541a12: Gained carrier Feb 13 19:53:38.061651 kernel: eth0: renamed from tmpb0595 Feb 13 19:53:38.069550 systemd-networkd[1346]: lxc1ee7f9c429b5: Link UP Feb 13 19:53:38.072392 systemd-networkd[1346]: lxc1ee7f9c429b5: Gained carrier Feb 13 19:53:38.369815 systemd-networkd[1346]: cilium_vxlan: Gained IPv6LL Feb 13 19:53:39.266781 systemd-networkd[1346]: lxc2e3222541a12: Gained IPv6LL Feb 13 19:53:39.329778 systemd-networkd[1346]: lxc1ee7f9c429b5: Gained IPv6LL Feb 13 
19:53:39.521759 systemd-networkd[1346]: lxc_health: Gained IPv6LL Feb 13 19:53:39.595890 kubelet[3389]: I0213 19:53:39.595502 3389 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pps7j" podStartSLOduration=10.522971576 podStartE2EDuration="18.595485989s" podCreationTimestamp="2025-02-13 19:53:21 +0000 UTC" firstStartedPulling="2025-02-13 19:53:21.690622605 +0000 UTC m=+6.667366853" lastFinishedPulling="2025-02-13 19:53:29.763137018 +0000 UTC m=+14.739881266" observedRunningTime="2025-02-13 19:53:35.260958924 +0000 UTC m=+20.237703172" watchObservedRunningTime="2025-02-13 19:53:39.595485989 +0000 UTC m=+24.572230197" Feb 13 19:53:41.702342 containerd[1753]: time="2025-02-13T19:53:41.700772373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:53:41.702342 containerd[1753]: time="2025-02-13T19:53:41.700898973Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:53:41.702342 containerd[1753]: time="2025-02-13T19:53:41.700925973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:41.702342 containerd[1753]: time="2025-02-13T19:53:41.701029173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:41.728953 systemd[1]: Started cri-containerd-b0595603c54c3f28d726a246bd865b1e977f143d14b087f62634bbd188914791.scope - libcontainer container b0595603c54c3f28d726a246bd865b1e977f143d14b087f62634bbd188914791. Feb 13 19:53:41.732794 containerd[1753]: time="2025-02-13T19:53:41.732223349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:53:41.732794 containerd[1753]: time="2025-02-13T19:53:41.732279269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:53:41.732794 containerd[1753]: time="2025-02-13T19:53:41.732293869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:41.732794 containerd[1753]: time="2025-02-13T19:53:41.732367749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:41.761773 systemd[1]: Started cri-containerd-6b65b3924038decf8831e79ec34b7707d94f6517c32cad425d550f7cef2581fa.scope - libcontainer container 6b65b3924038decf8831e79ec34b7707d94f6517c32cad425d550f7cef2581fa. 
Feb 13 19:53:41.808082 containerd[1753]: time="2025-02-13T19:53:41.807974004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xjc7n,Uid:e199ca42-471a-4f35-ba4d-0ce0e800d4ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b65b3924038decf8831e79ec34b7707d94f6517c32cad425d550f7cef2581fa\"" Feb 13 19:53:41.815460 containerd[1753]: time="2025-02-13T19:53:41.815266177Z" level=info msg="CreateContainer within sandbox \"6b65b3924038decf8831e79ec34b7707d94f6517c32cad425d550f7cef2581fa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:53:41.822659 containerd[1753]: time="2025-02-13T19:53:41.822591070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w2fdb,Uid:8d78d9b0-0dc5-4a92-974b-8191470d5f19,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0595603c54c3f28d726a246bd865b1e977f143d14b087f62634bbd188914791\"" Feb 13 19:53:41.827550 containerd[1753]: time="2025-02-13T19:53:41.827516239Z" level=info msg="CreateContainer within sandbox \"b0595603c54c3f28d726a246bd865b1e977f143d14b087f62634bbd188914791\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:53:41.853007 kubelet[3389]: I0213 19:53:41.852479 3389 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:53:41.877320 containerd[1753]: time="2025-02-13T19:53:41.877221088Z" level=info msg="CreateContainer within sandbox \"6b65b3924038decf8831e79ec34b7707d94f6517c32cad425d550f7cef2581fa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e9a3ffdce42d95f6828d669d2ea01bcabef9bdf1491308ba1861102e5924b86d\"" Feb 13 19:53:41.878872 containerd[1753]: time="2025-02-13T19:53:41.878807931Z" level=info msg="StartContainer for \"e9a3ffdce42d95f6828d669d2ea01bcabef9bdf1491308ba1861102e5924b86d\"" Feb 13 19:53:41.890119 containerd[1753]: time="2025-02-13T19:53:41.889939951Z" level=info msg="CreateContainer within sandbox \"b0595603c54c3f28d726a246bd865b1e977f143d14b087f62634bbd188914791\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ae95a218092d60895f055cc3b4ac2b71a16aec12ddda0698af044727b5c47c54\"" Feb 13 19:53:41.893417 containerd[1753]: time="2025-02-13T19:53:41.893357597Z" level=info msg="StartContainer for \"ae95a218092d60895f055cc3b4ac2b71a16aec12ddda0698af044727b5c47c54\"" Feb 13 19:53:41.921982 systemd[1]: Started cri-containerd-e9a3ffdce42d95f6828d669d2ea01bcabef9bdf1491308ba1861102e5924b86d.scope - libcontainer container e9a3ffdce42d95f6828d669d2ea01bcabef9bdf1491308ba1861102e5924b86d. Feb 13 19:53:41.934768 systemd[1]: Started cri-containerd-ae95a218092d60895f055cc3b4ac2b71a16aec12ddda0698af044727b5c47c54.scope - libcontainer container ae95a218092d60895f055cc3b4ac2b71a16aec12ddda0698af044727b5c47c54. 
Feb 13 19:53:41.978687 containerd[1753]: time="2025-02-13T19:53:41.978513789Z" level=info msg="StartContainer for \"ae95a218092d60895f055cc3b4ac2b71a16aec12ddda0698af044727b5c47c54\" returns successfully" Feb 13 19:53:41.978687 containerd[1753]: time="2025-02-13T19:53:41.978483549Z" level=info msg="StartContainer for \"e9a3ffdce42d95f6828d669d2ea01bcabef9bdf1491308ba1861102e5924b86d\" returns successfully" Feb 13 19:53:42.270709 kubelet[3389]: I0213 19:53:42.270325 3389 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-w2fdb" podStartSLOduration=21.270308231 podStartE2EDuration="21.270308231s" podCreationTimestamp="2025-02-13 19:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:53:42.268995988 +0000 UTC m=+27.245740236" watchObservedRunningTime="2025-02-13 19:53:42.270308231 +0000 UTC m=+27.247052479" Feb 13 19:53:42.296215 kubelet[3389]: I0213 19:53:42.296150 3389 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xjc7n" podStartSLOduration=21.296107277 podStartE2EDuration="21.296107277s" podCreationTimestamp="2025-02-13 19:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:53:42.295759476 +0000 UTC m=+27.272503724" watchObservedRunningTime="2025-02-13 19:53:42.296107277 +0000 UTC m=+27.272851525" Feb 13 19:55:11.955385 systemd[1]: Started sshd@7-10.200.20.12:22-10.200.16.10:38156.service - OpenSSH per-connection server daemon (10.200.16.10:38156). Feb 13 19:55:12.450909 sshd[4775]: Accepted publickey for core from 10.200.16.10 port 38156 ssh2: RSA SHA256:LTmo/6k/2cyRFZrv4Ga+drA+aFwEaiiiTQilASdJKcU Feb 13 19:55:12.452365 sshd-session[4775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:55:12.457745 systemd-logind[1717]: New session 10 of user core. Feb 13 19:55:12.464879 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:55:12.882477 sshd[4777]: Connection closed by 10.200.16.10 port 38156 Feb 13 19:55:12.884399 sshd-session[4775]: pam_unix(sshd:session): session closed for user core Feb 13 19:55:12.887948 systemd-logind[1717]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:55:12.888587 systemd[1]: sshd@7-10.200.20.12:22-10.200.16.10:38156.service: Deactivated successfully. Feb 13 19:55:12.891271 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:55:12.892691 systemd-logind[1717]: Removed session 10. Feb 13 19:55:17.968928 systemd[1]: Started sshd@8-10.200.20.12:22-10.200.16.10:38170.service - OpenSSH per-connection server daemon (10.200.16.10:38170). Feb 13 19:55:18.415844 sshd[4794]: Accepted publickey for core from 10.200.16.10 port 38170 ssh2: RSA SHA256:LTmo/6k/2cyRFZrv4Ga+drA+aFwEaiiiTQilASdJKcU Feb 13 19:55:18.417031 sshd-session[4794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:55:18.421916 systemd-logind[1717]: New session 11 of user core. Feb 13 19:55:18.429789 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:55:18.830760 sshd[4796]: Connection closed by 10.200.16.10 port 38170 Feb 13 19:55:18.831513 sshd-session[4794]: pam_unix(sshd:session): session closed for user core Feb 13 19:55:18.835573 systemd[1]: sshd@8-10.200.20.12:22-10.200.16.10:38170.service: Deactivated successfully. 
Feb 13 19:55:18.837529 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:55:18.838263 systemd-logind[1717]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:55:18.839225 systemd-logind[1717]: Removed session 11. Feb 13 19:55:23.906507 systemd[1]: Started sshd@9-10.200.20.12:22-10.200.16.10:47482.service - OpenSSH per-connection server daemon (10.200.16.10:47482). Feb 13 19:55:24.338303 sshd[4811]: Accepted publickey for core from 10.200.16.10 port 47482 ssh2: RSA SHA256:LTmo/6k/2cyRFZrv4Ga+drA+aFwEaiiiTQilASdJKcU Feb 13 19:55:24.339737 sshd-session[4811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:55:24.344848 systemd-logind[1717]: New session 12 of user core. Feb 13 19:55:24.351809 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:55:24.724526 sshd[4813]: Connection closed by 10.200.16.10 port 47482 Feb 13 19:55:24.724420 sshd-session[4811]: pam_unix(sshd:session): session closed for user core Feb 13 19:55:24.728577 systemd[1]: sshd@9-10.200.20.12:22-10.200.16.10:47482.service: Deactivated successfully. Feb 13 19:55:24.730847 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:55:24.731867 systemd-logind[1717]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:55:24.732762 systemd-logind[1717]: Removed session 12. Feb 13 19:55:29.806924 systemd[1]: Started sshd@10-10.200.20.12:22-10.200.16.10:45366.service - OpenSSH per-connection server daemon (10.200.16.10:45366). Feb 13 19:55:30.234970 sshd[4826]: Accepted publickey for core from 10.200.16.10 port 45366 ssh2: RSA SHA256:LTmo/6k/2cyRFZrv4Ga+drA+aFwEaiiiTQilASdJKcU Feb 13 19:55:30.238804 sshd-session[4826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:55:30.244410 systemd-logind[1717]: New session 13 of user core. Feb 13 19:55:30.252889 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:55:30.640585 sshd[4828]: Connection closed by 10.200.16.10 port 45366 Feb 13 19:55:30.641509 sshd-session[4826]: pam_unix(sshd:session): session closed for user core Feb 13 19:55:30.645581 systemd[1]: sshd@10-10.200.20.12:22-10.200.16.10:45366.service: Deactivated successfully. Feb 13 19:55:30.650958 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:55:30.652100 systemd-logind[1717]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:55:30.653301 systemd-logind[1717]: Removed session 13. Feb 13 19:55:30.726911 systemd[1]: Started sshd@11-10.200.20.12:22-10.200.16.10:45380.service - OpenSSH per-connection server daemon (10.200.16.10:45380). Feb 13 19:55:31.155213 sshd[4841]: Accepted publickey for core from 10.200.16.10 port 45380 ssh2: RSA SHA256:LTmo/6k/2cyRFZrv4Ga+drA+aFwEaiiiTQilASdJKcU Feb 13 19:55:31.156695 sshd-session[4841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:55:31.161886 systemd-logind[1717]: New session 14 of user core. Feb 13 19:55:31.172813 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:55:31.604761 sshd[4843]: Connection closed by 10.200.16.10 port 45380 Feb 13 19:55:31.605913 sshd-session[4841]: pam_unix(sshd:session): session closed for user core Feb 13 19:55:31.609328 systemd-logind[1717]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:55:31.609496 systemd[1]: sshd@11-10.200.20.12:22-10.200.16.10:45380.service: Deactivated successfully. Feb 13 19:55:31.611554 systemd[1]: session-14.scope: Deactivated successfully. 
Feb 13 19:55:31.613502 systemd-logind[1717]: Removed session 14. Feb 13 19:55:31.691603 systemd[1]: Started sshd@12-10.200.20.12:22-10.200.16.10:45386.service - OpenSSH per-connection server daemon (10.200.16.10:45386). Feb 13 19:55:32.125084 sshd[4853]: Accepted publickey for core from 10.200.16.10 port 45386 ssh2: RSA SHA256:LTmo/6k/2cyRFZrv4Ga+drA+aFwEaiiiTQilASdJKcU Feb 13 19:55:32.126563 sshd-session[4853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:55:32.132185 systemd-logind[1717]: New session 15 of user core. Feb 13 19:55:32.140835 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:55:32.532044 sshd[4855]: Connection closed by 10.200.16.10 port 45386 Feb 13 19:55:32.532678 sshd-session[4853]: pam_unix(sshd:session): session closed for user core Feb 13 19:55:32.536476 systemd[1]: sshd@12-10.200.20.12:22-10.200.16.10:45386.service: Deactivated successfully. Feb 13 19:55:32.538486 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:55:32.539330 systemd-logind[1717]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:55:32.540448 systemd-logind[1717]: Removed session 15. Feb 13 19:55:37.628901 systemd[1]: Started sshd@13-10.200.20.12:22-10.200.16.10:45392.service - OpenSSH per-connection server daemon (10.200.16.10:45392). Feb 13 19:55:38.112423 sshd[4867]: Accepted publickey for core from 10.200.16.10 port 45392 ssh2: RSA SHA256:LTmo/6k/2cyRFZrv4Ga+drA+aFwEaiiiTQilASdJKcU Feb 13 19:55:38.113883 sshd-session[4867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:55:38.118227 systemd-logind[1717]: New session 16 of user core. Feb 13 19:55:38.124908 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:55:38.522316 sshd[4869]: Connection closed by 10.200.16.10 port 45392 Feb 13 19:55:38.522218 sshd-session[4867]: pam_unix(sshd:session): session closed for user core Feb 13 19:55:38.525694 systemd[1]: sshd@13-10.200.20.12:22-10.200.16.10:45392.service: Deactivated successfully. Feb 13 19:55:38.528053 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:55:38.529301 systemd-logind[1717]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:55:38.530258 systemd-logind[1717]: Removed session 16. Feb 13 19:55:43.604471 systemd[1]: Started sshd@14-10.200.20.12:22-10.200.16.10:33122.service - OpenSSH per-connection server daemon (10.200.16.10:33122). Feb 13 19:55:44.075644 sshd[4882]: Accepted publickey for core from 10.200.16.10 port 33122 ssh2: RSA SHA256:LTmo/6k/2cyRFZrv4Ga+drA+aFwEaiiiTQilASdJKcU Feb 13 19:55:44.077053 sshd-session[4882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:55:44.081973 systemd-logind[1717]: New session 17 of user core. Feb 13 19:55:44.086925 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:55:44.498985 sshd[4884]: Connection closed by 10.200.16.10 port 33122 Feb 13 19:55:44.498424 sshd-session[4882]: pam_unix(sshd:session): session closed for user core Feb 13 19:55:44.501363 systemd[1]: sshd@14-10.200.20.12:22-10.200.16.10:33122.service: Deactivated successfully. Feb 13 19:55:44.504045 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:55:44.506653 systemd-logind[1717]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:55:44.507843 systemd-logind[1717]: Removed session 17. 
Feb 13 19:55:44.592953 systemd[1]: Started sshd@15-10.200.20.12:22-10.200.16.10:33124.service - OpenSSH per-connection server daemon (10.200.16.10:33124). Feb 13 19:55:45.023687 sshd[4896]: Accepted publickey for core from 10.200.16.10 port 33124 ssh2: RSA SHA256:LTmo/6k/2cyRFZrv4Ga+drA+aFwEaiiiTQilASdJKcU Feb 13 19:55:45.025036 sshd-session[4896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:55:45.029873 systemd-logind[1717]: New session 18 of user core. Feb 13 19:55:45.037958 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:55:45.437246 sshd[4898]: Connection closed by 10.200.16.10 port 33124 Feb 13 19:55:45.437937 sshd-session[4896]: pam_unix(sshd:session): session closed for user core Feb 13 19:55:45.441199 systemd-logind[1717]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:55:45.441441 systemd[1]: sshd@15-10.200.20.12:22-10.200.16.10:33124.service: Deactivated successfully. Feb 13 19:55:45.443895 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:55:45.446592 systemd-logind[1717]: Removed session 18. Feb 13 19:55:45.519893 systemd[1]: Started sshd@16-10.200.20.12:22-10.200.16.10:33140.service - OpenSSH per-connection server daemon (10.200.16.10:33140). Feb 13 19:55:45.949119 sshd[4908]: Accepted publickey for core from 10.200.16.10 port 33140 ssh2: RSA SHA256:LTmo/6k/2cyRFZrv4Ga+drA+aFwEaiiiTQilASdJKcU Feb 13 19:55:45.950520 sshd-session[4908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:55:45.956722 systemd-logind[1717]: New session 19 of user core. Feb 13 19:55:45.964822 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 19:55:47.173116 sshd[4910]: Connection closed by 10.200.16.10 port 33140 Feb 13 19:55:47.173960 sshd-session[4908]: pam_unix(sshd:session): session closed for user core Feb 13 19:55:47.177940 systemd[1]: sshd@16-10.200.20.12:22-10.200.16.10:33140.service: Deactivated successfully. Feb 13 19:55:47.181215 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:55:47.183491 systemd-logind[1717]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:55:47.185727 systemd-logind[1717]: Removed session 19. Feb 13 19:55:47.266130 systemd[1]: Started sshd@17-10.200.20.12:22-10.200.16.10:33152.service - OpenSSH per-connection server daemon (10.200.16.10:33152). Feb 13 19:55:47.694669 sshd[4927]: Accepted publickey for core from 10.200.16.10 port 33152 ssh2: RSA SHA256:LTmo/6k/2cyRFZrv4Ga+drA+aFwEaiiiTQilASdJKcU Feb 13 19:55:47.696030 sshd-session[4927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:55:47.702497 systemd-logind[1717]: New session 20 of user core. Feb 13 19:55:47.706806 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 19:55:48.197841 sshd[4929]: Connection closed by 10.200.16.10 port 33152 Feb 13 19:55:48.198663 sshd-session[4927]: pam_unix(sshd:session): session closed for user core Feb 13 19:55:48.203306 systemd[1]: sshd@17-10.200.20.12:22-10.200.16.10:33152.service: Deactivated successfully. Feb 13 19:55:48.205959 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:55:48.207318 systemd-logind[1717]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:55:48.208393 systemd-logind[1717]: Removed session 20. Feb 13 19:55:48.288986 systemd[1]: Started sshd@18-10.200.20.12:22-10.200.16.10:33154.service - OpenSSH per-connection server daemon (10.200.16.10:33154). 
Feb 13 19:55:48.737173 sshd[4938]: Accepted publickey for core from 10.200.16.10 port 33154 ssh2: RSA SHA256:LTmo/6k/2cyRFZrv4Ga+drA+aFwEaiiiTQilASdJKcU Feb 13 19:55:48.738849 sshd-session[4938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:55:48.744679 systemd-logind[1717]: New session 21 of user core. Feb 13 19:55:48.749910 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 19:55:49.130371 sshd[4940]: Connection closed by 10.200.16.10 port 33154 Feb 13 19:55:49.131361 sshd-session[4938]: pam_unix(sshd:session): session closed for user core Feb 13 19:55:49.135198 systemd[1]: sshd@18-10.200.20.12:22-10.200.16.10:33154.service: Deactivated successfully. Feb 13 19:55:49.137289 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 19:55:49.138284 systemd-logind[1717]: Session 21 logged out. Waiting for processes to exit. Feb 13 19:55:49.139738 systemd-logind[1717]: Removed session 21. Feb 13 19:55:54.228219 systemd[1]: Started sshd@19-10.200.20.12:22-10.200.16.10:49636.service - OpenSSH per-connection server daemon (10.200.16.10:49636). Feb 13 19:55:54.713176 sshd[4957]: Accepted publickey for core from 10.200.16.10 port 49636 ssh2: RSA SHA256:LTmo/6k/2cyRFZrv4Ga+drA+aFwEaiiiTQilASdJKcU Feb 13 19:55:54.714608 sshd-session[4957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:55:54.719689 systemd-logind[1717]: New session 22 of user core. Feb 13 19:55:54.722840 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 19:55:55.140673 sshd[4959]: Connection closed by 10.200.16.10 port 49636 Feb 13 19:55:55.140469 sshd-session[4957]: pam_unix(sshd:session): session closed for user core Feb 13 19:55:55.143851 systemd-logind[1717]: Session 22 logged out. Waiting for processes to exit. Feb 13 19:55:55.144117 systemd[1]: sshd@19-10.200.20.12:22-10.200.16.10:49636.service: Deactivated successfully. Feb 13 19:55:55.146082 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 19:55:55.148576 systemd-logind[1717]: Removed session 22. Feb 13 19:56:00.221913 systemd[1]: Started sshd@20-10.200.20.12:22-10.200.16.10:35292.service - OpenSSH per-connection server daemon (10.200.16.10:35292). Feb 13 19:56:00.650873 sshd[4971]: Accepted publickey for core from 10.200.16.10 port 35292 ssh2: RSA SHA256:LTmo/6k/2cyRFZrv4Ga+drA+aFwEaiiiTQilASdJKcU Feb 13 19:56:00.652284 sshd-session[4971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:56:00.657282 systemd-logind[1717]: New session 23 of user core. Feb 13 19:56:00.663794 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 19:56:01.041671 sshd[4973]: Connection closed by 10.200.16.10 port 35292 Feb 13 19:56:01.042206 sshd-session[4971]: pam_unix(sshd:session): session closed for user core Feb 13 19:56:01.045730 systemd[1]: sshd@20-10.200.20.12:22-10.200.16.10:35292.service: Deactivated successfully. Feb 13 19:56:01.047953 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 19:56:01.048790 systemd-logind[1717]: Session 23 logged out. Waiting for processes to exit. Feb 13 19:56:01.050099 systemd-logind[1717]: Removed session 23. Feb 13 19:56:06.147884 systemd[1]: Started sshd@21-10.200.20.12:22-10.200.16.10:35302.service - OpenSSH per-connection server daemon (10.200.16.10:35302). 
Feb 13 19:56:06.622346 sshd[4985]: Accepted publickey for core from 10.200.16.10 port 35302 ssh2: RSA SHA256:LTmo/6k/2cyRFZrv4Ga+drA+aFwEaiiiTQilASdJKcU Feb 13 19:56:06.623665 sshd-session[4985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:56:06.628553 systemd-logind[1717]: New session 24 of user core. Feb 13 19:56:06.634873 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 19:56:07.033224 sshd[4987]: Connection closed by 10.200.16.10 port 35302 Feb 13 19:56:07.033065 sshd-session[4985]: pam_unix(sshd:session): session closed for user core Feb 13 19:56:07.037049 systemd[1]: sshd@21-10.200.20.12:22-10.200.16.10:35302.service: Deactivated successfully. Feb 13 19:56:07.040160 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 19:56:07.041170 systemd-logind[1717]: Session 24 logged out. Waiting for processes to exit. Feb 13 19:56:07.042431 systemd-logind[1717]: Removed session 24. Feb 13 19:56:07.120953 systemd[1]: Started sshd@22-10.200.20.12:22-10.200.16.10:35308.service - OpenSSH per-connection server daemon (10.200.16.10:35308). Feb 13 19:56:07.569112 sshd[5000]: Accepted publickey for core from 10.200.16.10 port 35308 ssh2: RSA SHA256:LTmo/6k/2cyRFZrv4Ga+drA+aFwEaiiiTQilASdJKcU Feb 13 19:56:07.570453 sshd-session[5000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:56:07.574591 systemd-logind[1717]: New session 25 of user core. Feb 13 19:56:07.584821 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 19:56:09.949498 containerd[1753]: time="2025-02-13T19:56:09.949442987Z" level=info msg="StopContainer for \"22c1a1236275515cf6eb04ef519a56d154381cae94e2deddc5f29c8d5ec559fe\" with timeout 30 (s)" Feb 13 19:56:09.955892 containerd[1753]: time="2025-02-13T19:56:09.950022148Z" level=info msg="Stop container \"22c1a1236275515cf6eb04ef519a56d154381cae94e2deddc5f29c8d5ec559fe\" with signal terminated" Feb 13 19:56:09.964808 containerd[1753]: time="2025-02-13T19:56:09.964751010Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:56:09.965785 systemd[1]: cri-containerd-22c1a1236275515cf6eb04ef519a56d154381cae94e2deddc5f29c8d5ec559fe.scope: Deactivated successfully. Feb 13 19:56:09.975803 containerd[1753]: time="2025-02-13T19:56:09.975718987Z" level=info msg="StopContainer for \"18c03a0fc5098419418be825e043da58bd8bd6a20aa725e21cddfc9ce197f8e7\" with timeout 2 (s)" Feb 13 19:56:09.976070 containerd[1753]: time="2025-02-13T19:56:09.976041587Z" level=info msg="Stop container \"18c03a0fc5098419418be825e043da58bd8bd6a20aa725e21cddfc9ce197f8e7\" with signal terminated" Feb 13 19:56:09.986755 systemd-networkd[1346]: lxc_health: Link DOWN Feb 13 19:56:09.986763 systemd-networkd[1346]: lxc_health: Lost carrier Feb 13 19:56:10.000456 systemd[1]: cri-containerd-18c03a0fc5098419418be825e043da58bd8bd6a20aa725e21cddfc9ce197f8e7.scope: Deactivated successfully. Feb 13 19:56:10.001165 systemd[1]: cri-containerd-18c03a0fc5098419418be825e043da58bd8bd6a20aa725e21cddfc9ce197f8e7.scope: Consumed 6.446s CPU time, 124.5M memory peak, 144K read from disk, 12.9M written to disk. Feb 13 19:56:10.007943 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22c1a1236275515cf6eb04ef519a56d154381cae94e2deddc5f29c8d5ec559fe-rootfs.mount: Deactivated successfully. 
Feb 13 19:56:10.027904 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18c03a0fc5098419418be825e043da58bd8bd6a20aa725e21cddfc9ce197f8e7-rootfs.mount: Deactivated successfully. Feb 13 19:56:10.040078 containerd[1753]: time="2025-02-13T19:56:10.039885484Z" level=info msg="shim disconnected" id=22c1a1236275515cf6eb04ef519a56d154381cae94e2deddc5f29c8d5ec559fe namespace=k8s.io Feb 13 19:56:10.040078 containerd[1753]: time="2025-02-13T19:56:10.039949604Z" level=warning msg="cleaning up after shim disconnected" id=22c1a1236275515cf6eb04ef519a56d154381cae94e2deddc5f29c8d5ec559fe namespace=k8s.io Feb 13 19:56:10.040078 containerd[1753]: time="2025-02-13T19:56:10.039957924Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:56:10.040456 containerd[1753]: time="2025-02-13T19:56:10.040259125Z" level=info msg="shim disconnected" id=18c03a0fc5098419418be825e043da58bd8bd6a20aa725e21cddfc9ce197f8e7 namespace=k8s.io Feb 13 19:56:10.040456 containerd[1753]: time="2025-02-13T19:56:10.040320445Z" level=warning msg="cleaning up after shim disconnected" id=18c03a0fc5098419418be825e043da58bd8bd6a20aa725e21cddfc9ce197f8e7 namespace=k8s.io Feb 13 19:56:10.040456 containerd[1753]: time="2025-02-13T19:56:10.040330645Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:56:10.057605 containerd[1753]: time="2025-02-13T19:56:10.057378071Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:56:10Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:56:10.066185 containerd[1753]: time="2025-02-13T19:56:10.066131004Z" level=info msg="StopContainer for \"22c1a1236275515cf6eb04ef519a56d154381cae94e2deddc5f29c8d5ec559fe\" returns successfully" Feb 13 19:56:10.067202 containerd[1753]: time="2025-02-13T19:56:10.067044845Z" level=info msg="StopPodSandbox for \"b0971812217fdda3ed576d333a25e369e4c50a6a7abe8440ed72b140d340daa6\"" Feb 13 19:56:10.067202 containerd[1753]: time="2025-02-13T19:56:10.067094006Z" level=info msg="Container to stop \"22c1a1236275515cf6eb04ef519a56d154381cae94e2deddc5f29c8d5ec559fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:56:10.069826 containerd[1753]: time="2025-02-13T19:56:10.069679809Z" level=info msg="StopContainer for \"18c03a0fc5098419418be825e043da58bd8bd6a20aa725e21cddfc9ce197f8e7\" returns successfully" Feb 13 19:56:10.070587 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b0971812217fdda3ed576d333a25e369e4c50a6a7abe8440ed72b140d340daa6-shm.mount: Deactivated successfully. 
Feb 13 19:56:10.071849 containerd[1753]: time="2025-02-13T19:56:10.071012652Z" level=info msg="StopPodSandbox for \"ef515ac533053ba6a297971b2791a19a74b4573a8de525ca95b0949198e8f404\"" Feb 13 19:56:10.071849 containerd[1753]: time="2025-02-13T19:56:10.071688693Z" level=info msg="Container to stop \"c2a07a265d3fbfac8199d146de185c4a7d8728852c973b4356df2252e32c9c7d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:56:10.071849 containerd[1753]: time="2025-02-13T19:56:10.071715773Z" level=info msg="Container to stop \"2695f55ec8356ebe4b701bb30bb94893735236204f0bf874cef36696ddb65753\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:56:10.071849 containerd[1753]: time="2025-02-13T19:56:10.071727253Z" level=info msg="Container to stop \"f7db6d54dd1dacee48a68222f3675769f6804ac4ad32905412284e4c8578785c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:56:10.071849 containerd[1753]: time="2025-02-13T19:56:10.071735813Z" level=info msg="Container to stop \"18c03a0fc5098419418be825e043da58bd8bd6a20aa725e21cddfc9ce197f8e7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:56:10.071849 containerd[1753]: time="2025-02-13T19:56:10.071744733Z" level=info msg="Container to stop \"0f7c6fe1bdffe72cc3b613fa34d0ff3289da43365755c2b59f880f1cc923b543\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:56:10.077831 systemd[1]: cri-containerd-ef515ac533053ba6a297971b2791a19a74b4573a8de525ca95b0949198e8f404.scope: Deactivated successfully. Feb 13 19:56:10.079846 systemd[1]: cri-containerd-b0971812217fdda3ed576d333a25e369e4c50a6a7abe8440ed72b140d340daa6.scope: Deactivated successfully. Feb 13 19:56:10.119610 containerd[1753]: time="2025-02-13T19:56:10.119398525Z" level=info msg="shim disconnected" id=ef515ac533053ba6a297971b2791a19a74b4573a8de525ca95b0949198e8f404 namespace=k8s.io Feb 13 19:56:10.119610 containerd[1753]: time="2025-02-13T19:56:10.119648925Z" level=warning msg="cleaning up after shim disconnected" id=ef515ac533053ba6a297971b2791a19a74b4573a8de525ca95b0949198e8f404 namespace=k8s.io Feb 13 19:56:10.119610 containerd[1753]: time="2025-02-13T19:56:10.119667325Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:56:10.119610 containerd[1753]: time="2025-02-13T19:56:10.119606445Z" level=info msg="shim disconnected" id=b0971812217fdda3ed576d333a25e369e4c50a6a7abe8440ed72b140d340daa6 namespace=k8s.io Feb 13 19:56:10.119977 containerd[1753]: time="2025-02-13T19:56:10.119734446Z" level=warning msg="cleaning up after shim disconnected" id=b0971812217fdda3ed576d333a25e369e4c50a6a7abe8440ed72b140d340daa6 namespace=k8s.io Feb 13 19:56:10.119977 containerd[1753]: time="2025-02-13T19:56:10.119742526Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:56:10.139646 containerd[1753]: time="2025-02-13T19:56:10.137811153Z" level=info msg="TearDown network for sandbox \"b0971812217fdda3ed576d333a25e369e4c50a6a7abe8440ed72b140d340daa6\" successfully" Feb 13 19:56:10.139646 containerd[1753]: time="2025-02-13T19:56:10.137851913Z" level=info msg="StopPodSandbox for \"b0971812217fdda3ed576d333a25e369e4c50a6a7abe8440ed72b140d340daa6\" returns successfully" Feb 13 19:56:10.139646 containerd[1753]: time="2025-02-13T19:56:10.137844153Z" level=info msg="TearDown network for sandbox \"ef515ac533053ba6a297971b2791a19a74b4573a8de525ca95b0949198e8f404\" successfully" Feb 13 19:56:10.139646 containerd[1753]: 
time="2025-02-13T19:56:10.137915873Z" level=info msg="StopPodSandbox for \"ef515ac533053ba6a297971b2791a19a74b4573a8de525ca95b0949198e8f404\" returns successfully" Feb 13 19:56:10.244816 kubelet[3389]: E0213 19:56:10.244779 3389 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 19:56:10.275647 kubelet[3389]: I0213 19:56:10.274136 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87cv4\" (UniqueName: \"kubernetes.io/projected/c8663a93-7722-4665-9b64-5be417c5f887-kube-api-access-87cv4\") pod \"c8663a93-7722-4665-9b64-5be417c5f887\" (UID: \"c8663a93-7722-4665-9b64-5be417c5f887\") " Feb 13 19:56:10.275647 kubelet[3389]: I0213 19:56:10.274184 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c8663a93-7722-4665-9b64-5be417c5f887-cilium-config-path\") pod \"c8663a93-7722-4665-9b64-5be417c5f887\" (UID: \"c8663a93-7722-4665-9b64-5be417c5f887\") " Feb 13 19:56:10.275647 kubelet[3389]: I0213 19:56:10.274202 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-host-proc-sys-net\") pod \"c8663a93-7722-4665-9b64-5be417c5f887\" (UID: \"c8663a93-7722-4665-9b64-5be417c5f887\") " Feb 13 19:56:10.275647 kubelet[3389]: I0213 19:56:10.274219 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c8663a93-7722-4665-9b64-5be417c5f887-hubble-tls\") pod \"c8663a93-7722-4665-9b64-5be417c5f887\" (UID: \"c8663a93-7722-4665-9b64-5be417c5f887\") " Feb 13 19:56:10.275647 kubelet[3389]: I0213 19:56:10.274243 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-host-proc-sys-kernel\") pod \"c8663a93-7722-4665-9b64-5be417c5f887\" (UID: \"c8663a93-7722-4665-9b64-5be417c5f887\") " Feb 13 19:56:10.275647 kubelet[3389]: I0213 19:56:10.274257 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-xtables-lock\") pod \"c8663a93-7722-4665-9b64-5be417c5f887\" (UID: \"c8663a93-7722-4665-9b64-5be417c5f887\") " Feb 13 19:56:10.275899 kubelet[3389]: I0213 19:56:10.274271 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-lib-modules\") pod \"c8663a93-7722-4665-9b64-5be417c5f887\" (UID: \"c8663a93-7722-4665-9b64-5be417c5f887\") " Feb 13 19:56:10.275899 kubelet[3389]: I0213 19:56:10.274286 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-hostproc\") pod \"c8663a93-7722-4665-9b64-5be417c5f887\" (UID: \"c8663a93-7722-4665-9b64-5be417c5f887\") " Feb 13 19:56:10.275899 kubelet[3389]: I0213 19:56:10.274302 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c8663a93-7722-4665-9b64-5be417c5f887-clustermesh-secrets\") pod \"c8663a93-7722-4665-9b64-5be417c5f887\" (UID: 
\"c8663a93-7722-4665-9b64-5be417c5f887\") " Feb 13 19:56:10.275899 kubelet[3389]: I0213 19:56:10.274317 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-cilium-cgroup\") pod \"c8663a93-7722-4665-9b64-5be417c5f887\" (UID: \"c8663a93-7722-4665-9b64-5be417c5f887\") " Feb 13 19:56:10.275899 kubelet[3389]: I0213 19:56:10.274339 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-bpf-maps\") pod \"c8663a93-7722-4665-9b64-5be417c5f887\" (UID: \"c8663a93-7722-4665-9b64-5be417c5f887\") " Feb 13 19:56:10.275899 kubelet[3389]: I0213 19:56:10.274356 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnph7\" (UniqueName: \"kubernetes.io/projected/f744bc96-f465-451c-a924-5877886bb38c-kube-api-access-xnph7\") pod \"f744bc96-f465-451c-a924-5877886bb38c\" (UID: \"f744bc96-f465-451c-a924-5877886bb38c\") " Feb 13 19:56:10.276024 kubelet[3389]: I0213 19:56:10.274380 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f744bc96-f465-451c-a924-5877886bb38c-cilium-config-path\") pod \"f744bc96-f465-451c-a924-5877886bb38c\" (UID: \"f744bc96-f465-451c-a924-5877886bb38c\") " Feb 13 19:56:10.276024 kubelet[3389]: I0213 19:56:10.274399 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-cni-path\") pod \"c8663a93-7722-4665-9b64-5be417c5f887\" (UID: \"c8663a93-7722-4665-9b64-5be417c5f887\") " Feb 13 19:56:10.276024 kubelet[3389]: I0213 19:56:10.274422 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-etc-cni-netd\") pod \"c8663a93-7722-4665-9b64-5be417c5f887\" (UID: \"c8663a93-7722-4665-9b64-5be417c5f887\") " Feb 13 19:56:10.276024 kubelet[3389]: I0213 19:56:10.274436 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-cilium-run\") pod \"c8663a93-7722-4665-9b64-5be417c5f887\" (UID: \"c8663a93-7722-4665-9b64-5be417c5f887\") " Feb 13 19:56:10.276024 kubelet[3389]: I0213 19:56:10.274493 3389 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c8663a93-7722-4665-9b64-5be417c5f887" (UID: "c8663a93-7722-4665-9b64-5be417c5f887"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:56:10.276233 kubelet[3389]: I0213 19:56:10.276195 3389 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c8663a93-7722-4665-9b64-5be417c5f887" (UID: "c8663a93-7722-4665-9b64-5be417c5f887"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:56:10.276985 kubelet[3389]: I0213 19:56:10.276931 3389 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8663a93-7722-4665-9b64-5be417c5f887-kube-api-access-87cv4" (OuterVolumeSpecName: "kube-api-access-87cv4") pod "c8663a93-7722-4665-9b64-5be417c5f887" (UID: "c8663a93-7722-4665-9b64-5be417c5f887"). InnerVolumeSpecName "kube-api-access-87cv4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 19:56:10.278097 kubelet[3389]: I0213 19:56:10.277819 3389 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8663a93-7722-4665-9b64-5be417c5f887-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c8663a93-7722-4665-9b64-5be417c5f887" (UID: "c8663a93-7722-4665-9b64-5be417c5f887"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 13 19:56:10.278097 kubelet[3389]: I0213 19:56:10.277871 3389 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c8663a93-7722-4665-9b64-5be417c5f887" (UID: "c8663a93-7722-4665-9b64-5be417c5f887"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:56:10.278097 kubelet[3389]: I0213 19:56:10.277890 3389 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c8663a93-7722-4665-9b64-5be417c5f887" (UID: "c8663a93-7722-4665-9b64-5be417c5f887"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:56:10.278097 kubelet[3389]: I0213 19:56:10.277905 3389 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c8663a93-7722-4665-9b64-5be417c5f887" (UID: "c8663a93-7722-4665-9b64-5be417c5f887"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:56:10.278097 kubelet[3389]: I0213 19:56:10.277920 3389 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c8663a93-7722-4665-9b64-5be417c5f887" (UID: "c8663a93-7722-4665-9b64-5be417c5f887"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:56:10.278348 kubelet[3389]: I0213 19:56:10.277936 3389 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-hostproc" (OuterVolumeSpecName: "hostproc") pod "c8663a93-7722-4665-9b64-5be417c5f887" (UID: "c8663a93-7722-4665-9b64-5be417c5f887"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:56:10.278907 kubelet[3389]: I0213 19:56:10.278871 3389 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8663a93-7722-4665-9b64-5be417c5f887-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c8663a93-7722-4665-9b64-5be417c5f887" (UID: "c8663a93-7722-4665-9b64-5be417c5f887"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 19:56:10.278983 kubelet[3389]: I0213 19:56:10.278920 3389 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c8663a93-7722-4665-9b64-5be417c5f887" (UID: "c8663a93-7722-4665-9b64-5be417c5f887"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:56:10.279473 kubelet[3389]: I0213 19:56:10.279330 3389 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-cni-path" (OuterVolumeSpecName: "cni-path") pod "c8663a93-7722-4665-9b64-5be417c5f887" (UID: "c8663a93-7722-4665-9b64-5be417c5f887"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:56:10.279705 kubelet[3389]: I0213 19:56:10.279684 3389 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c8663a93-7722-4665-9b64-5be417c5f887" (UID: "c8663a93-7722-4665-9b64-5be417c5f887"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:56:10.282300 kubelet[3389]: I0213 19:56:10.282263 3389 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f744bc96-f465-451c-a924-5877886bb38c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f744bc96-f465-451c-a924-5877886bb38c" (UID: "f744bc96-f465-451c-a924-5877886bb38c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 13 19:56:10.282530 kubelet[3389]: I0213 19:56:10.282463 3389 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8663a93-7722-4665-9b64-5be417c5f887-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c8663a93-7722-4665-9b64-5be417c5f887" (UID: "c8663a93-7722-4665-9b64-5be417c5f887"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 13 19:56:10.286459 kubelet[3389]: I0213 19:56:10.286411 3389 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f744bc96-f465-451c-a924-5877886bb38c-kube-api-access-xnph7" (OuterVolumeSpecName: "kube-api-access-xnph7") pod "f744bc96-f465-451c-a924-5877886bb38c" (UID: "f744bc96-f465-451c-a924-5877886bb38c"). InnerVolumeSpecName "kube-api-access-xnph7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 19:56:10.374935 kubelet[3389]: I0213 19:56:10.374889 3389 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-etc-cni-netd\") on node \"ci-4230.0.1-a-4092b3335a\" DevicePath \"\"" Feb 13 19:56:10.374935 kubelet[3389]: I0213 19:56:10.374928 3389 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-cilium-run\") on node \"ci-4230.0.1-a-4092b3335a\" DevicePath \"\"" Feb 13 19:56:10.374935 kubelet[3389]: I0213 19:56:10.374938 3389 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-87cv4\" (UniqueName: \"kubernetes.io/projected/c8663a93-7722-4665-9b64-5be417c5f887-kube-api-access-87cv4\") on node \"ci-4230.0.1-a-4092b3335a\" DevicePath \"\"" Feb 13 19:56:10.374935 kubelet[3389]: I0213 19:56:10.374948 3389 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c8663a93-7722-4665-9b64-5be417c5f887-cilium-config-path\") on node \"ci-4230.0.1-a-4092b3335a\" DevicePath \"\"" Feb 13 19:56:10.375157 kubelet[3389]: I0213 19:56:10.374958 3389 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-host-proc-sys-net\") on node \"ci-4230.0.1-a-4092b3335a\" DevicePath \"\"" Feb 13 19:56:10.375157 kubelet[3389]: I0213 19:56:10.374967 3389 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-host-proc-sys-kernel\") on node \"ci-4230.0.1-a-4092b3335a\" DevicePath \"\"" Feb 13 19:56:10.375157 kubelet[3389]: I0213 19:56:10.374976 3389 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-xtables-lock\") on node \"ci-4230.0.1-a-4092b3335a\" DevicePath \"\"" Feb 13 19:56:10.375157 kubelet[3389]: I0213 19:56:10.374983 3389 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c8663a93-7722-4665-9b64-5be417c5f887-hubble-tls\") on node \"ci-4230.0.1-a-4092b3335a\" DevicePath \"\"" Feb 13 19:56:10.375157 kubelet[3389]: I0213 19:56:10.374991 3389 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-lib-modules\") on node \"ci-4230.0.1-a-4092b3335a\" DevicePath \"\"" Feb 13 19:56:10.375157 kubelet[3389]: I0213 19:56:10.374999 3389 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-hostproc\") on node \"ci-4230.0.1-a-4092b3335a\" DevicePath \"\"" Feb 13 19:56:10.375157 kubelet[3389]: I0213 19:56:10.375006 3389 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c8663a93-7722-4665-9b64-5be417c5f887-clustermesh-secrets\") on node \"ci-4230.0.1-a-4092b3335a\" DevicePath \"\"" Feb 13 19:56:10.375157 kubelet[3389]: I0213 19:56:10.375014 3389 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-cilium-cgroup\") on node \"ci-4230.0.1-a-4092b3335a\" DevicePath \"\"" Feb 13 19:56:10.375322 kubelet[3389]: I0213 19:56:10.375024 
3389 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-bpf-maps\") on node \"ci-4230.0.1-a-4092b3335a\" DevicePath \"\"" Feb 13 19:56:10.375322 kubelet[3389]: I0213 19:56:10.375033 3389 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnph7\" (UniqueName: \"kubernetes.io/projected/f744bc96-f465-451c-a924-5877886bb38c-kube-api-access-xnph7\") on node \"ci-4230.0.1-a-4092b3335a\" DevicePath \"\"" Feb 13 19:56:10.375322 kubelet[3389]: I0213 19:56:10.375042 3389 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f744bc96-f465-451c-a924-5877886bb38c-cilium-config-path\") on node \"ci-4230.0.1-a-4092b3335a\" DevicePath \"\"" Feb 13 19:56:10.375322 kubelet[3389]: I0213 19:56:10.375050 3389 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c8663a93-7722-4665-9b64-5be417c5f887-cni-path\") on node \"ci-4230.0.1-a-4092b3335a\" DevicePath \"\"" Feb 13 19:56:10.540738 kubelet[3389]: I0213 19:56:10.540096 3389 scope.go:117] "RemoveContainer" containerID="22c1a1236275515cf6eb04ef519a56d154381cae94e2deddc5f29c8d5ec559fe" Feb 13 19:56:10.544212 containerd[1753]: time="2025-02-13T19:56:10.544171651Z" level=info msg="RemoveContainer for \"22c1a1236275515cf6eb04ef519a56d154381cae94e2deddc5f29c8d5ec559fe\"" Feb 13 19:56:10.546764 systemd[1]: Removed slice kubepods-besteffort-podf744bc96_f465_451c_a924_5877886bb38c.slice - libcontainer container kubepods-besteffort-podf744bc96_f465_451c_a924_5877886bb38c.slice. Feb 13 19:56:10.559300 systemd[1]: Removed slice kubepods-burstable-podc8663a93_7722_4665_9b64_5be417c5f887.slice - libcontainer container kubepods-burstable-podc8663a93_7722_4665_9b64_5be417c5f887.slice. Feb 13 19:56:10.559438 systemd[1]: kubepods-burstable-podc8663a93_7722_4665_9b64_5be417c5f887.slice: Consumed 6.517s CPU time, 124.9M memory peak, 144K read from disk, 12.9M written to disk. 
Feb 13 19:56:10.564061 containerd[1753]: time="2025-02-13T19:56:10.563229880Z" level=info msg="RemoveContainer for \"22c1a1236275515cf6eb04ef519a56d154381cae94e2deddc5f29c8d5ec559fe\" returns successfully"
Feb 13 19:56:10.564061 containerd[1753]: time="2025-02-13T19:56:10.563928721Z" level=error msg="ContainerStatus for \"22c1a1236275515cf6eb04ef519a56d154381cae94e2deddc5f29c8d5ec559fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"22c1a1236275515cf6eb04ef519a56d154381cae94e2deddc5f29c8d5ec559fe\": not found"
Feb 13 19:56:10.565364 kubelet[3389]: I0213 19:56:10.563509 3389 scope.go:117] "RemoveContainer" containerID="22c1a1236275515cf6eb04ef519a56d154381cae94e2deddc5f29c8d5ec559fe"
Feb 13 19:56:10.565364 kubelet[3389]: E0213 19:56:10.564464 3389 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"22c1a1236275515cf6eb04ef519a56d154381cae94e2deddc5f29c8d5ec559fe\": not found" containerID="22c1a1236275515cf6eb04ef519a56d154381cae94e2deddc5f29c8d5ec559fe"
Feb 13 19:56:10.565364 kubelet[3389]: I0213 19:56:10.564498 3389 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"22c1a1236275515cf6eb04ef519a56d154381cae94e2deddc5f29c8d5ec559fe"} err="failed to get container status \"22c1a1236275515cf6eb04ef519a56d154381cae94e2deddc5f29c8d5ec559fe\": rpc error: code = NotFound desc = an error occurred when try to find container \"22c1a1236275515cf6eb04ef519a56d154381cae94e2deddc5f29c8d5ec559fe\": not found"
Feb 13 19:56:10.565364 kubelet[3389]: I0213 19:56:10.564581 3389 scope.go:117] "RemoveContainer" containerID="18c03a0fc5098419418be825e043da58bd8bd6a20aa725e21cddfc9ce197f8e7"
Feb 13 19:56:10.567916 containerd[1753]: time="2025-02-13T19:56:10.567719686Z" level=info msg="RemoveContainer for \"18c03a0fc5098419418be825e043da58bd8bd6a20aa725e21cddfc9ce197f8e7\""
Feb 13 19:56:10.577920 containerd[1753]: time="2025-02-13T19:56:10.577875222Z" level=info msg="RemoveContainer for \"18c03a0fc5098419418be825e043da58bd8bd6a20aa725e21cddfc9ce197f8e7\" returns successfully"
Feb 13 19:56:10.578660 kubelet[3389]: I0213 19:56:10.578318 3389 scope.go:117] "RemoveContainer" containerID="f7db6d54dd1dacee48a68222f3675769f6804ac4ad32905412284e4c8578785c"
Feb 13 19:56:10.579512 containerd[1753]: time="2025-02-13T19:56:10.579486584Z" level=info msg="RemoveContainer for \"f7db6d54dd1dacee48a68222f3675769f6804ac4ad32905412284e4c8578785c\""
Feb 13 19:56:10.591498 containerd[1753]: time="2025-02-13T19:56:10.591308202Z" level=info msg="RemoveContainer for \"f7db6d54dd1dacee48a68222f3675769f6804ac4ad32905412284e4c8578785c\" returns successfully"
Feb 13 19:56:10.592264 kubelet[3389]: I0213 19:56:10.591943 3389 scope.go:117] "RemoveContainer" containerID="0f7c6fe1bdffe72cc3b613fa34d0ff3289da43365755c2b59f880f1cc923b543"
Feb 13 19:56:10.595027 containerd[1753]: time="2025-02-13T19:56:10.594918488Z" level=info msg="RemoveContainer for \"0f7c6fe1bdffe72cc3b613fa34d0ff3289da43365755c2b59f880f1cc923b543\""
Feb 13 19:56:10.603633 containerd[1753]: time="2025-02-13T19:56:10.603522581Z" level=info msg="RemoveContainer for \"0f7c6fe1bdffe72cc3b613fa34d0ff3289da43365755c2b59f880f1cc923b543\" returns successfully"
Feb 13 19:56:10.603914 kubelet[3389]: I0213 19:56:10.603884 3389 scope.go:117] "RemoveContainer" containerID="2695f55ec8356ebe4b701bb30bb94893735236204f0bf874cef36696ddb65753"
Feb 13 19:56:10.605728 containerd[1753]: time="2025-02-13T19:56:10.605471464Z" level=info msg="RemoveContainer for \"2695f55ec8356ebe4b701bb30bb94893735236204f0bf874cef36696ddb65753\""
Feb 13 19:56:10.615409 containerd[1753]: time="2025-02-13T19:56:10.615367799Z" level=info msg="RemoveContainer for \"2695f55ec8356ebe4b701bb30bb94893735236204f0bf874cef36696ddb65753\" returns successfully"
Feb 13 19:56:10.615766 kubelet[3389]: I0213 19:56:10.615727 3389 scope.go:117] "RemoveContainer" containerID="c2a07a265d3fbfac8199d146de185c4a7d8728852c973b4356df2252e32c9c7d"
Feb 13 19:56:10.617062 containerd[1753]: time="2025-02-13T19:56:10.617025921Z" level=info msg="RemoveContainer for \"c2a07a265d3fbfac8199d146de185c4a7d8728852c973b4356df2252e32c9c7d\""
Feb 13 19:56:10.623544 containerd[1753]: time="2025-02-13T19:56:10.623490371Z" level=info msg="RemoveContainer for \"c2a07a265d3fbfac8199d146de185c4a7d8728852c973b4356df2252e32c9c7d\" returns successfully"
Feb 13 19:56:10.623960 kubelet[3389]: I0213 19:56:10.623825 3389 scope.go:117] "RemoveContainer" containerID="18c03a0fc5098419418be825e043da58bd8bd6a20aa725e21cddfc9ce197f8e7"
Feb 13 19:56:10.624151 containerd[1753]: time="2025-02-13T19:56:10.624108212Z" level=error msg="ContainerStatus for \"18c03a0fc5098419418be825e043da58bd8bd6a20aa725e21cddfc9ce197f8e7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"18c03a0fc5098419418be825e043da58bd8bd6a20aa725e21cddfc9ce197f8e7\": not found"
Feb 13 19:56:10.624309 kubelet[3389]: E0213 19:56:10.624278 3389 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"18c03a0fc5098419418be825e043da58bd8bd6a20aa725e21cddfc9ce197f8e7\": not found" containerID="18c03a0fc5098419418be825e043da58bd8bd6a20aa725e21cddfc9ce197f8e7"
Feb 13 19:56:10.624358 kubelet[3389]: I0213 19:56:10.624317 3389 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"18c03a0fc5098419418be825e043da58bd8bd6a20aa725e21cddfc9ce197f8e7"} err="failed to get container status \"18c03a0fc5098419418be825e043da58bd8bd6a20aa725e21cddfc9ce197f8e7\": rpc error: code = NotFound desc = an error occurred when try to find container \"18c03a0fc5098419418be825e043da58bd8bd6a20aa725e21cddfc9ce197f8e7\": not found"
Feb 13 19:56:10.624358 kubelet[3389]: I0213 19:56:10.624344 3389 scope.go:117] "RemoveContainer" containerID="f7db6d54dd1dacee48a68222f3675769f6804ac4ad32905412284e4c8578785c"
Feb 13 19:56:10.624605 containerd[1753]: time="2025-02-13T19:56:10.624549933Z" level=error msg="ContainerStatus for \"f7db6d54dd1dacee48a68222f3675769f6804ac4ad32905412284e4c8578785c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f7db6d54dd1dacee48a68222f3675769f6804ac4ad32905412284e4c8578785c\": not found"
Feb 13 19:56:10.624878 kubelet[3389]: E0213 19:56:10.624851 3389 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f7db6d54dd1dacee48a68222f3675769f6804ac4ad32905412284e4c8578785c\": not found" containerID="f7db6d54dd1dacee48a68222f3675769f6804ac4ad32905412284e4c8578785c"
Feb 13 19:56:10.624953 kubelet[3389]: I0213 19:56:10.624891 3389 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f7db6d54dd1dacee48a68222f3675769f6804ac4ad32905412284e4c8578785c"} err="failed to get container status \"f7db6d54dd1dacee48a68222f3675769f6804ac4ad32905412284e4c8578785c\": rpc error: code = NotFound desc = an error occurred when try to find container \"f7db6d54dd1dacee48a68222f3675769f6804ac4ad32905412284e4c8578785c\": not found"
Feb 13 19:56:10.624953 kubelet[3389]: I0213 19:56:10.624907 3389 scope.go:117] "RemoveContainer" containerID="0f7c6fe1bdffe72cc3b613fa34d0ff3289da43365755c2b59f880f1cc923b543"
Feb 13 19:56:10.625197 containerd[1753]: time="2025-02-13T19:56:10.625134774Z" level=error msg="ContainerStatus for \"0f7c6fe1bdffe72cc3b613fa34d0ff3289da43365755c2b59f880f1cc923b543\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0f7c6fe1bdffe72cc3b613fa34d0ff3289da43365755c2b59f880f1cc923b543\": not found"
Feb 13 19:56:10.625395 kubelet[3389]: E0213 19:56:10.625366 3389 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0f7c6fe1bdffe72cc3b613fa34d0ff3289da43365755c2b59f880f1cc923b543\": not found" containerID="0f7c6fe1bdffe72cc3b613fa34d0ff3289da43365755c2b59f880f1cc923b543"
Feb 13 19:56:10.625440 kubelet[3389]: I0213 19:56:10.625395 3389 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0f7c6fe1bdffe72cc3b613fa34d0ff3289da43365755c2b59f880f1cc923b543"} err="failed to get container status \"0f7c6fe1bdffe72cc3b613fa34d0ff3289da43365755c2b59f880f1cc923b543\": rpc error: code = NotFound desc = an error occurred when try to find container \"0f7c6fe1bdffe72cc3b613fa34d0ff3289da43365755c2b59f880f1cc923b543\": not found"
Feb 13 19:56:10.625440 kubelet[3389]: I0213 19:56:10.625412 3389 scope.go:117] "RemoveContainer" containerID="2695f55ec8356ebe4b701bb30bb94893735236204f0bf874cef36696ddb65753"
Feb 13 19:56:10.625680 containerd[1753]: time="2025-02-13T19:56:10.625644254Z" level=error msg="ContainerStatus for \"2695f55ec8356ebe4b701bb30bb94893735236204f0bf874cef36696ddb65753\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2695f55ec8356ebe4b701bb30bb94893735236204f0bf874cef36696ddb65753\": not found"
Feb 13 19:56:10.625839 kubelet[3389]: E0213 19:56:10.625788 3389 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2695f55ec8356ebe4b701bb30bb94893735236204f0bf874cef36696ddb65753\": not found" containerID="2695f55ec8356ebe4b701bb30bb94893735236204f0bf874cef36696ddb65753"
Feb 13 19:56:10.625879 kubelet[3389]: I0213 19:56:10.625842 3389 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2695f55ec8356ebe4b701bb30bb94893735236204f0bf874cef36696ddb65753"} err="failed to get container status \"2695f55ec8356ebe4b701bb30bb94893735236204f0bf874cef36696ddb65753\": rpc error: code = NotFound desc = an error occurred when try to find container \"2695f55ec8356ebe4b701bb30bb94893735236204f0bf874cef36696ddb65753\": not found"
Feb 13 19:56:10.625879 kubelet[3389]: I0213 19:56:10.625858 3389 scope.go:117] "RemoveContainer" containerID="c2a07a265d3fbfac8199d146de185c4a7d8728852c973b4356df2252e32c9c7d"
Feb 13 19:56:10.626176 containerd[1753]: time="2025-02-13T19:56:10.626072615Z" level=error msg="ContainerStatus for \"c2a07a265d3fbfac8199d146de185c4a7d8728852c973b4356df2252e32c9c7d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c2a07a265d3fbfac8199d146de185c4a7d8728852c973b4356df2252e32c9c7d\": not found"
Feb 13 19:56:10.626225 kubelet[3389]: E0213 19:56:10.626175 3389 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c2a07a265d3fbfac8199d146de185c4a7d8728852c973b4356df2252e32c9c7d\": not found" containerID="c2a07a265d3fbfac8199d146de185c4a7d8728852c973b4356df2252e32c9c7d"
Feb 13 19:56:10.626225 kubelet[3389]: I0213 19:56:10.626196 3389 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c2a07a265d3fbfac8199d146de185c4a7d8728852c973b4356df2252e32c9c7d"} err="failed to get container status \"c2a07a265d3fbfac8199d146de185c4a7d8728852c973b4356df2252e32c9c7d\": rpc error: code = NotFound desc = an error occurred when try to find container \"c2a07a265d3fbfac8199d146de185c4a7d8728852c973b4356df2252e32c9c7d\": not found"
Feb 13 19:56:10.938215 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0971812217fdda3ed576d333a25e369e4c50a6a7abe8440ed72b140d340daa6-rootfs.mount: Deactivated successfully.
Feb 13 19:56:10.938315 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef515ac533053ba6a297971b2791a19a74b4573a8de525ca95b0949198e8f404-rootfs.mount: Deactivated successfully.
Feb 13 19:56:10.938365 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ef515ac533053ba6a297971b2791a19a74b4573a8de525ca95b0949198e8f404-shm.mount: Deactivated successfully.
Feb 13 19:56:10.938429 systemd[1]: var-lib-kubelet-pods-c8663a93\x2d7722\x2d4665\x2d9b64\x2d5be417c5f887-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d87cv4.mount: Deactivated successfully.
Feb 13 19:56:10.938481 systemd[1]: var-lib-kubelet-pods-f744bc96\x2df465\x2d451c\x2da924\x2d5877886bb38c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxnph7.mount: Deactivated successfully.
Feb 13 19:56:10.938530 systemd[1]: var-lib-kubelet-pods-c8663a93\x2d7722\x2d4665\x2d9b64\x2d5be417c5f887-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 13 19:56:10.938577 systemd[1]: var-lib-kubelet-pods-c8663a93\x2d7722\x2d4665\x2d9b64\x2d5be417c5f887-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 13 19:56:11.160686 kubelet[3389]: I0213 19:56:11.159885 3389 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8663a93-7722-4665-9b64-5be417c5f887" path="/var/lib/kubelet/pods/c8663a93-7722-4665-9b64-5be417c5f887/volumes"
Feb 13 19:56:11.160686 kubelet[3389]: I0213 19:56:11.160418 3389 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f744bc96-f465-451c-a924-5877886bb38c" path="/var/lib/kubelet/pods/f744bc96-f465-451c-a924-5877886bb38c/volumes"
Feb 13 19:56:11.950777 sshd[5002]: Connection closed by 10.200.16.10 port 35308
Feb 13 19:56:11.951544 sshd-session[5000]: pam_unix(sshd:session): session closed for user core
Feb 13 19:56:11.955790 systemd[1]: sshd@22-10.200.20.12:22-10.200.16.10:35308.service: Deactivated successfully.
Feb 13 19:56:11.959171 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 19:56:11.959628 systemd[1]: session-25.scope: Consumed 1.454s CPU time, 23.7M memory peak.
Feb 13 19:56:11.960356 systemd-logind[1717]: Session 25 logged out. Waiting for processes to exit.
Feb 13 19:56:11.961411 systemd-logind[1717]: Removed session 25.
Feb 13 19:56:12.045880 systemd[1]: Started sshd@23-10.200.20.12:22-10.200.16.10:54380.service - OpenSSH per-connection server daemon (10.200.16.10:54380).
Feb 13 19:56:12.536786 sshd[5165]: Accepted publickey for core from 10.200.16.10 port 54380 ssh2: RSA SHA256:LTmo/6k/2cyRFZrv4Ga+drA+aFwEaiiiTQilASdJKcU
Feb 13 19:56:12.538087 sshd-session[5165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:56:12.545381 systemd-logind[1717]: New session 26 of user core.
Feb 13 19:56:12.548801 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 19:56:13.759927 kubelet[3389]: I0213 19:56:13.759189 3389 memory_manager.go:355] "RemoveStaleState removing state" podUID="c8663a93-7722-4665-9b64-5be417c5f887" containerName="cilium-agent"
Feb 13 19:56:13.759927 kubelet[3389]: I0213 19:56:13.759255 3389 memory_manager.go:355] "RemoveStaleState removing state" podUID="f744bc96-f465-451c-a924-5877886bb38c" containerName="cilium-operator"
Feb 13 19:56:13.771224 systemd[1]: Created slice kubepods-burstable-pod2d5b84f3_98ed_4ecd_9d53_ee7ea459b740.slice - libcontainer container kubepods-burstable-pod2d5b84f3_98ed_4ecd_9d53_ee7ea459b740.slice.
Feb 13 19:56:13.800773 sshd[5168]: Connection closed by 10.200.16.10 port 54380
Feb 13 19:56:13.801296 sshd-session[5165]: pam_unix(sshd:session): session closed for user core
Feb 13 19:56:13.806303 systemd-logind[1717]: Session 26 logged out. Waiting for processes to exit.
Feb 13 19:56:13.809769 systemd[1]: sshd@23-10.200.20.12:22-10.200.16.10:54380.service: Deactivated successfully.
Feb 13 19:56:13.813342 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 19:56:13.820994 systemd-logind[1717]: Removed session 26.
Feb 13 19:56:13.892082 systemd[1]: Started sshd@24-10.200.20.12:22-10.200.16.10:54390.service - OpenSSH per-connection server daemon (10.200.16.10:54390).
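The "Created slice" entry above shows the systemd cgroup driver's naming convention for the new pod: the pod's QoS class plus its UID with dashes mapped to underscores (the same UID appears dash-separated in the volume entries that follow). A small, hypothetical helper that reproduces the name (illustrative only, not kubelet source):

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName reconstructs a pod cgroup slice name as it appears in the
// journal; systemd unit names cannot contain "-" segments from the UID,
// so the kubelet's systemd driver rewrites them to "_".
func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// Prints "kubepods-burstable-pod2d5b84f3_98ed_4ecd_9d53_ee7ea459b740.slice",
	// matching the Created slice entry above.
	fmt.Println(podSliceName("burstable", "2d5b84f3-98ed-4ecd-9d53-ee7ea459b740"))
}
```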
Feb 13 19:56:13.895449 kubelet[3389]: I0213 19:56:13.895400 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d5b84f3-98ed-4ecd-9d53-ee7ea459b740-xtables-lock\") pod \"cilium-7z2t2\" (UID: \"2d5b84f3-98ed-4ecd-9d53-ee7ea459b740\") " pod="kube-system/cilium-7z2t2"
Feb 13 19:56:13.895449 kubelet[3389]: I0213 19:56:13.895448 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2d5b84f3-98ed-4ecd-9d53-ee7ea459b740-hubble-tls\") pod \"cilium-7z2t2\" (UID: \"2d5b84f3-98ed-4ecd-9d53-ee7ea459b740\") " pod="kube-system/cilium-7z2t2"
Feb 13 19:56:13.895693 kubelet[3389]: I0213 19:56:13.895486 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d5b84f3-98ed-4ecd-9d53-ee7ea459b740-lib-modules\") pod \"cilium-7z2t2\" (UID: \"2d5b84f3-98ed-4ecd-9d53-ee7ea459b740\") " pod="kube-system/cilium-7z2t2"
Feb 13 19:56:13.895693 kubelet[3389]: I0213 19:56:13.895520 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2d5b84f3-98ed-4ecd-9d53-ee7ea459b740-clustermesh-secrets\") pod \"cilium-7z2t2\" (UID: \"2d5b84f3-98ed-4ecd-9d53-ee7ea459b740\") " pod="kube-system/cilium-7z2t2"
Feb 13 19:56:13.895693 kubelet[3389]: I0213 19:56:13.895541 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2d5b84f3-98ed-4ecd-9d53-ee7ea459b740-cilium-run\") pod \"cilium-7z2t2\" (UID: \"2d5b84f3-98ed-4ecd-9d53-ee7ea459b740\") " pod="kube-system/cilium-7z2t2"
Feb 13 19:56:13.895693 kubelet[3389]: I0213 19:56:13.895557 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2d5b84f3-98ed-4ecd-9d53-ee7ea459b740-hostproc\") pod \"cilium-7z2t2\" (UID: \"2d5b84f3-98ed-4ecd-9d53-ee7ea459b740\") " pod="kube-system/cilium-7z2t2"
Feb 13 19:56:13.895693 kubelet[3389]: I0213 19:56:13.895574 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2d5b84f3-98ed-4ecd-9d53-ee7ea459b740-cni-path\") pod \"cilium-7z2t2\" (UID: \"2d5b84f3-98ed-4ecd-9d53-ee7ea459b740\") " pod="kube-system/cilium-7z2t2"
Feb 13 19:56:13.895693 kubelet[3389]: I0213 19:56:13.895589 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2d5b84f3-98ed-4ecd-9d53-ee7ea459b740-host-proc-sys-kernel\") pod \"cilium-7z2t2\" (UID: \"2d5b84f3-98ed-4ecd-9d53-ee7ea459b740\") " pod="kube-system/cilium-7z2t2"
Feb 13 19:56:13.895877 kubelet[3389]: I0213 19:56:13.895604 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2d5b84f3-98ed-4ecd-9d53-ee7ea459b740-cilium-cgroup\") pod \"cilium-7z2t2\" (UID: \"2d5b84f3-98ed-4ecd-9d53-ee7ea459b740\") " pod="kube-system/cilium-7z2t2"
Feb 13 19:56:13.895877 kubelet[3389]: I0213 19:56:13.895647 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2d5b84f3-98ed-4ecd-9d53-ee7ea459b740-bpf-maps\") pod \"cilium-7z2t2\" (UID: \"2d5b84f3-98ed-4ecd-9d53-ee7ea459b740\") " pod="kube-system/cilium-7z2t2"
Feb 13 19:56:13.895877 kubelet[3389]: I0213 19:56:13.895667 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2d5b84f3-98ed-4ecd-9d53-ee7ea459b740-host-proc-sys-net\") pod \"cilium-7z2t2\" (UID: \"2d5b84f3-98ed-4ecd-9d53-ee7ea459b740\") " pod="kube-system/cilium-7z2t2"
Feb 13 19:56:13.895877 kubelet[3389]: I0213 19:56:13.895701 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2d5b84f3-98ed-4ecd-9d53-ee7ea459b740-etc-cni-netd\") pod \"cilium-7z2t2\" (UID: \"2d5b84f3-98ed-4ecd-9d53-ee7ea459b740\") " pod="kube-system/cilium-7z2t2"
Feb 13 19:56:13.895877 kubelet[3389]: I0213 19:56:13.895738 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2d5b84f3-98ed-4ecd-9d53-ee7ea459b740-cilium-ipsec-secrets\") pod \"cilium-7z2t2\" (UID: \"2d5b84f3-98ed-4ecd-9d53-ee7ea459b740\") " pod="kube-system/cilium-7z2t2"
Feb 13 19:56:13.895877 kubelet[3389]: I0213 19:56:13.895754 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgv7q\" (UniqueName: \"kubernetes.io/projected/2d5b84f3-98ed-4ecd-9d53-ee7ea459b740-kube-api-access-tgv7q\") pod \"cilium-7z2t2\" (UID: \"2d5b84f3-98ed-4ecd-9d53-ee7ea459b740\") " pod="kube-system/cilium-7z2t2"
Feb 13 19:56:13.896140 kubelet[3389]: I0213 19:56:13.895771 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2d5b84f3-98ed-4ecd-9d53-ee7ea459b740-cilium-config-path\") pod \"cilium-7z2t2\" (UID: \"2d5b84f3-98ed-4ecd-9d53-ee7ea459b740\") " pod="kube-system/cilium-7z2t2"
Feb 13 19:56:14.076761 containerd[1753]: time="2025-02-13T19:56:14.076437715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7z2t2,Uid:2d5b84f3-98ed-4ecd-9d53-ee7ea459b740,Namespace:kube-system,Attempt:0,}"
Feb 13 19:56:14.143648 containerd[1753]: time="2025-02-13T19:56:14.142730400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:56:14.143648 containerd[1753]: time="2025-02-13T19:56:14.143133240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:56:14.143648 containerd[1753]: time="2025-02-13T19:56:14.143146040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:56:14.143648 containerd[1753]: time="2025-02-13T19:56:14.143230120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:56:14.162861 systemd[1]: Started cri-containerd-11a91b85deb926ce6402974748381e90fe21902c7cc50ea6c806d6cb5c4bc229.scope - libcontainer container 11a91b85deb926ce6402974748381e90fe21902c7cc50ea6c806d6cb5c4bc229.
Feb 13 19:56:14.188764 containerd[1753]: time="2025-02-13T19:56:14.188712658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7z2t2,Uid:2d5b84f3-98ed-4ecd-9d53-ee7ea459b740,Namespace:kube-system,Attempt:0,} returns sandbox id \"11a91b85deb926ce6402974748381e90fe21902c7cc50ea6c806d6cb5c4bc229\""
Feb 13 19:56:14.193512 containerd[1753]: time="2025-02-13T19:56:14.193449344Z" level=info msg="CreateContainer within sandbox \"11a91b85deb926ce6402974748381e90fe21902c7cc50ea6c806d6cb5c4bc229\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 19:56:14.240260 containerd[1753]: time="2025-02-13T19:56:14.240202003Z" level=info msg="CreateContainer within sandbox \"11a91b85deb926ce6402974748381e90fe21902c7cc50ea6c806d6cb5c4bc229\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fe1b76e0ca21b0761467bead3e1471c1b437d6fc3382d1d394fc3ecb8212cb67\""
Feb 13 19:56:14.241846 containerd[1753]: time="2025-02-13T19:56:14.241810766Z" level=info msg="StartContainer for \"fe1b76e0ca21b0761467bead3e1471c1b437d6fc3382d1d394fc3ecb8212cb67\""
Feb 13 19:56:14.265871 systemd[1]: Started cri-containerd-fe1b76e0ca21b0761467bead3e1471c1b437d6fc3382d1d394fc3ecb8212cb67.scope - libcontainer container fe1b76e0ca21b0761467bead3e1471c1b437d6fc3382d1d394fc3ecb8212cb67.
Feb 13 19:56:14.301050 containerd[1753]: time="2025-02-13T19:56:14.300989001Z" level=info msg="StartContainer for \"fe1b76e0ca21b0761467bead3e1471c1b437d6fc3382d1d394fc3ecb8212cb67\" returns successfully"
Feb 13 19:56:14.306152 systemd[1]: cri-containerd-fe1b76e0ca21b0761467bead3e1471c1b437d6fc3382d1d394fc3ecb8212cb67.scope: Deactivated successfully.
Feb 13 19:56:14.337391 sshd[5178]: Accepted publickey for core from 10.200.16.10 port 54390 ssh2: RSA SHA256:LTmo/6k/2cyRFZrv4Ga+drA+aFwEaiiiTQilASdJKcU
Feb 13 19:56:14.339160 sshd-session[5178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:56:14.345767 systemd-logind[1717]: New session 27 of user core.
Feb 13 19:56:14.351883 systemd[1]: Started session-27.scope - Session 27 of User core.
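The mount-cgroup entries above show the standard CRI pair for starting a container: CreateContainer against the sandbox ID returned by RunPodSandbox, then StartContainer with the new container ID. A sketch under the same client assumption (`startContainerInSandbox` is a hypothetical helper; image, command and mounts are omitted because the log does not record them):

```go
package main

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// startContainerInSandbox sketches the CreateContainer/StartContainer pair
// behind the mount-cgroup lines above.
func startContainerInSandbox(ctx context.Context, rs runtimeapi.RuntimeServiceClient,
	sandboxID string, sandboxCfg *runtimeapi.PodSandboxConfig, name string) (string, error) {
	created, err := rs.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID, // "11a91b85deb9..." from the RunPodSandbox reply
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: name, Attempt: 0},
			// Image, Command, Mounts etc. omitted: not recorded in the log.
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		return "", err
	}
	// StartContainer launches the task; on the systemd side this is when the
	// matching transient cri-containerd-<id>.scope unit starts.
	_, err = rs.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId})
	return created.ContainerId, err
}
```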
Feb 13 19:56:14.365988 containerd[1753]: time="2025-02-13T19:56:14.365901883Z" level=info msg="shim disconnected" id=fe1b76e0ca21b0761467bead3e1471c1b437d6fc3382d1d394fc3ecb8212cb67 namespace=k8s.io
Feb 13 19:56:14.365988 containerd[1753]: time="2025-02-13T19:56:14.365982523Z" level=warning msg="cleaning up after shim disconnected" id=fe1b76e0ca21b0761467bead3e1471c1b437d6fc3382d1d394fc3ecb8212cb67 namespace=k8s.io
Feb 13 19:56:14.365988 containerd[1753]: time="2025-02-13T19:56:14.365992963Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:56:14.568287 containerd[1753]: time="2025-02-13T19:56:14.568102780Z" level=info msg="CreateContainer within sandbox \"11a91b85deb926ce6402974748381e90fe21902c7cc50ea6c806d6cb5c4bc229\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 19:56:14.609494 containerd[1753]: time="2025-02-13T19:56:14.609159712Z" level=info msg="CreateContainer within sandbox \"11a91b85deb926ce6402974748381e90fe21902c7cc50ea6c806d6cb5c4bc229\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b61642f24f9657f6efeaab58bf4c2e44b36b15314a7484d94991672fd3100a10\""
Feb 13 19:56:14.611322 containerd[1753]: time="2025-02-13T19:56:14.611274715Z" level=info msg="StartContainer for \"b61642f24f9657f6efeaab58bf4c2e44b36b15314a7484d94991672fd3100a10\""
Feb 13 19:56:14.634892 systemd[1]: Started cri-containerd-b61642f24f9657f6efeaab58bf4c2e44b36b15314a7484d94991672fd3100a10.scope - libcontainer container b61642f24f9657f6efeaab58bf4c2e44b36b15314a7484d94991672fd3100a10.
Feb 13 19:56:14.668016 sshd[5274]: Connection closed by 10.200.16.10 port 54390
Feb 13 19:56:14.668587 sshd-session[5178]: pam_unix(sshd:session): session closed for user core
Feb 13 19:56:14.671552 systemd[1]: cri-containerd-b61642f24f9657f6efeaab58bf4c2e44b36b15314a7484d94991672fd3100a10.scope: Deactivated successfully.
Feb 13 19:56:14.674225 containerd[1753]: time="2025-02-13T19:56:14.673455194Z" level=info msg="StartContainer for \"b61642f24f9657f6efeaab58bf4c2e44b36b15314a7484d94991672fd3100a10\" returns successfully"
Feb 13 19:56:14.676484 systemd[1]: sshd@24-10.200.20.12:22-10.200.16.10:54390.service: Deactivated successfully.
Feb 13 19:56:14.681965 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 19:56:14.683914 systemd-logind[1717]: Session 27 logged out. Waiting for processes to exit.
Feb 13 19:56:14.686943 systemd-logind[1717]: Removed session 27.
Feb 13 19:56:14.711012 containerd[1753]: time="2025-02-13T19:56:14.710945962Z" level=info msg="shim disconnected" id=b61642f24f9657f6efeaab58bf4c2e44b36b15314a7484d94991672fd3100a10 namespace=k8s.io
Feb 13 19:56:14.711012 containerd[1753]: time="2025-02-13T19:56:14.711004122Z" level=warning msg="cleaning up after shim disconnected" id=b61642f24f9657f6efeaab58bf4c2e44b36b15314a7484d94991672fd3100a10 namespace=k8s.io
Feb 13 19:56:14.711012 containerd[1753]: time="2025-02-13T19:56:14.711013362Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:56:14.756922 systemd[1]: Started sshd@25-10.200.20.12:22-10.200.16.10:54404.service - OpenSSH per-connection server daemon (10.200.16.10:54404).
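The recurring "shim disconnected" triples above are containerd's v2 runtime noticing a task's shim exiting and reaping its state; on the systemd side the same exit appears as the container's transient scope deactivating. Both the scope and the per-task rootfs mount units seen throughout this log embed the container ID mechanically, e.g. (naming illustration only, not a containerd API):

```go
package main

import "fmt"

func main() {
	id := "fe1b76e0ca21b0761467bead3e1471c1b437d6fc3382d1d394fc3ecb8212cb67"
	// Both unit names below match entries in this journal verbatim.
	fmt.Println("cri-containerd-" + id + ".scope")
	fmt.Println("run-containerd-io.containerd.runtime.v2.task-k8s.io-" + id + "-rootfs.mount")
}
```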
Feb 13 19:56:15.138659 containerd[1753]: time="2025-02-13T19:56:15.138432785Z" level=info msg="StopPodSandbox for \"ef515ac533053ba6a297971b2791a19a74b4573a8de525ca95b0949198e8f404\""
Feb 13 19:56:15.138659 containerd[1753]: time="2025-02-13T19:56:15.138528865Z" level=info msg="TearDown network for sandbox \"ef515ac533053ba6a297971b2791a19a74b4573a8de525ca95b0949198e8f404\" successfully"
Feb 13 19:56:15.138659 containerd[1753]: time="2025-02-13T19:56:15.138538425Z" level=info msg="StopPodSandbox for \"ef515ac533053ba6a297971b2791a19a74b4573a8de525ca95b0949198e8f404\" returns successfully"
Feb 13 19:56:15.139654 containerd[1753]: time="2025-02-13T19:56:15.139359906Z" level=info msg="RemovePodSandbox for \"ef515ac533053ba6a297971b2791a19a74b4573a8de525ca95b0949198e8f404\""
Feb 13 19:56:15.139654 containerd[1753]: time="2025-02-13T19:56:15.139399346Z" level=info msg="Forcibly stopping sandbox \"ef515ac533053ba6a297971b2791a19a74b4573a8de525ca95b0949198e8f404\""
Feb 13 19:56:15.139654 containerd[1753]: time="2025-02-13T19:56:15.139456226Z" level=info msg="TearDown network for sandbox \"ef515ac533053ba6a297971b2791a19a74b4573a8de525ca95b0949198e8f404\" successfully"
Feb 13 19:56:15.155326 containerd[1753]: time="2025-02-13T19:56:15.155257486Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ef515ac533053ba6a297971b2791a19a74b4573a8de525ca95b0949198e8f404\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:56:15.155484 containerd[1753]: time="2025-02-13T19:56:15.155337486Z" level=info msg="RemovePodSandbox \"ef515ac533053ba6a297971b2791a19a74b4573a8de525ca95b0949198e8f404\" returns successfully"
Feb 13 19:56:15.156158 containerd[1753]: time="2025-02-13T19:56:15.156009327Z" level=info msg="StopPodSandbox for \"b0971812217fdda3ed576d333a25e369e4c50a6a7abe8440ed72b140d340daa6\""
Feb 13 19:56:15.156158 containerd[1753]: time="2025-02-13T19:56:15.156102927Z" level=info msg="TearDown network for sandbox \"b0971812217fdda3ed576d333a25e369e4c50a6a7abe8440ed72b140d340daa6\" successfully"
Feb 13 19:56:15.156158 containerd[1753]: time="2025-02-13T19:56:15.156113567Z" level=info msg="StopPodSandbox for \"b0971812217fdda3ed576d333a25e369e4c50a6a7abe8440ed72b140d340daa6\" returns successfully"
Feb 13 19:56:15.156575 containerd[1753]: time="2025-02-13T19:56:15.156499568Z" level=info msg="RemovePodSandbox for \"b0971812217fdda3ed576d333a25e369e4c50a6a7abe8440ed72b140d340daa6\""
Feb 13 19:56:15.156646 containerd[1753]: time="2025-02-13T19:56:15.156576888Z" level=info msg="Forcibly stopping sandbox \"b0971812217fdda3ed576d333a25e369e4c50a6a7abe8440ed72b140d340daa6\""
Feb 13 19:56:15.156683 containerd[1753]: time="2025-02-13T19:56:15.156649688Z" level=info msg="TearDown network for sandbox \"b0971812217fdda3ed576d333a25e369e4c50a6a7abe8440ed72b140d340daa6\" successfully"
Feb 13 19:56:15.169968 containerd[1753]: time="2025-02-13T19:56:15.169837505Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b0971812217fdda3ed576d333a25e369e4c50a6a7abe8440ed72b140d340daa6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:56:15.169968 containerd[1753]: time="2025-02-13T19:56:15.169906585Z" level=info msg="RemovePodSandbox \"b0971812217fdda3ed576d333a25e369e4c50a6a7abe8440ed72b140d340daa6\" returns successfully"
Feb 13 19:56:15.224101 sshd[5356]: Accepted publickey for core from 10.200.16.10 port 54404 ssh2: RSA SHA256:LTmo/6k/2cyRFZrv4Ga+drA+aFwEaiiiTQilASdJKcU
Feb 13 19:56:15.225652 sshd-session[5356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:56:15.232258 systemd-logind[1717]: New session 28 of user core.
Feb 13 19:56:15.235799 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 19:56:15.246304 kubelet[3389]: E0213 19:56:15.246211 3389 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:56:15.571799 containerd[1753]: time="2025-02-13T19:56:15.571724695Z" level=info msg="CreateContainer within sandbox \"11a91b85deb926ce6402974748381e90fe21902c7cc50ea6c806d6cb5c4bc229\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 19:56:15.618488 containerd[1753]: time="2025-02-13T19:56:15.618433755Z" level=info msg="CreateContainer within sandbox \"11a91b85deb926ce6402974748381e90fe21902c7cc50ea6c806d6cb5c4bc229\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5ca885e87163b17a0a8dfab90e51a7e465a60c9085848d1219121deda640b6bc\""
Feb 13 19:56:15.619494 containerd[1753]: time="2025-02-13T19:56:15.619350396Z" level=info msg="StartContainer for \"5ca885e87163b17a0a8dfab90e51a7e465a60c9085848d1219121deda640b6bc\""
Feb 13 19:56:15.651850 systemd[1]: Started cri-containerd-5ca885e87163b17a0a8dfab90e51a7e465a60c9085848d1219121deda640b6bc.scope - libcontainer container 5ca885e87163b17a0a8dfab90e51a7e465a60c9085848d1219121deda640b6bc.
Feb 13 19:56:15.683586 systemd[1]: cri-containerd-5ca885e87163b17a0a8dfab90e51a7e465a60c9085848d1219121deda640b6bc.scope: Deactivated successfully.
Feb 13 19:56:15.688633 containerd[1753]: time="2025-02-13T19:56:15.686956762Z" level=info msg="StartContainer for \"5ca885e87163b17a0a8dfab90e51a7e465a60c9085848d1219121deda640b6bc\" returns successfully"
Feb 13 19:56:15.747182 containerd[1753]: time="2025-02-13T19:56:15.747057158Z" level=info msg="shim disconnected" id=5ca885e87163b17a0a8dfab90e51a7e465a60c9085848d1219121deda640b6bc namespace=k8s.io
Feb 13 19:56:15.747182 containerd[1753]: time="2025-02-13T19:56:15.747134998Z" level=warning msg="cleaning up after shim disconnected" id=5ca885e87163b17a0a8dfab90e51a7e465a60c9085848d1219121deda640b6bc namespace=k8s.io
Feb 13 19:56:15.747182 containerd[1753]: time="2025-02-13T19:56:15.747145238Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:56:16.001568 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ca885e87163b17a0a8dfab90e51a7e465a60c9085848d1219121deda640b6bc-rootfs.mount: Deactivated successfully.
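The StopPodSandbox / RemovePodSandbox pairs above are the sandbox analogue of the earlier container cleanup: stop tears down the sandbox's network, remove deletes it, and the forcible re-removal tolerates "not found". A sketch under the same client assumption (`removeSandbox` is an illustrative name, not kubelet source):

```go
package main

import (
	"context"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// removeSandbox stops then removes a pod sandbox, treating NotFound on
// removal as success: a prior (or racing) removal already won, which is
// what the "Failed to get podSandbox status ... not found" warnings reflect.
func removeSandbox(ctx context.Context, rs runtimeapi.RuntimeServiceClient, id string) error {
	// Stop first: this is where "TearDown network for sandbox ..." happens.
	if _, err := rs.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: id}); err != nil {
		return err
	}
	if _, err := rs.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id}); err != nil &&
		status.Code(err) != codes.NotFound {
		return err
	}
	return nil
}
```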
Feb 13 19:56:16.576689 containerd[1753]: time="2025-02-13T19:56:16.576529332Z" level=info msg="CreateContainer within sandbox \"11a91b85deb926ce6402974748381e90fe21902c7cc50ea6c806d6cb5c4bc229\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 19:56:16.611097 containerd[1753]: time="2025-02-13T19:56:16.611043616Z" level=info msg="CreateContainer within sandbox \"11a91b85deb926ce6402974748381e90fe21902c7cc50ea6c806d6cb5c4bc229\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5c5b6fe348a883afe4307ee2aad9f27fbaf850b2e598de473534dffa2029856d\""
Feb 13 19:56:16.612837 containerd[1753]: time="2025-02-13T19:56:16.611955417Z" level=info msg="StartContainer for \"5c5b6fe348a883afe4307ee2aad9f27fbaf850b2e598de473534dffa2029856d\""
Feb 13 19:56:16.643832 systemd[1]: Started cri-containerd-5c5b6fe348a883afe4307ee2aad9f27fbaf850b2e598de473534dffa2029856d.scope - libcontainer container 5c5b6fe348a883afe4307ee2aad9f27fbaf850b2e598de473534dffa2029856d.
Feb 13 19:56:16.669732 systemd[1]: cri-containerd-5c5b6fe348a883afe4307ee2aad9f27fbaf850b2e598de473534dffa2029856d.scope: Deactivated successfully.
Feb 13 19:56:16.675144 containerd[1753]: time="2025-02-13T19:56:16.675057217Z" level=info msg="StartContainer for \"5c5b6fe348a883afe4307ee2aad9f27fbaf850b2e598de473534dffa2029856d\" returns successfully"
Feb 13 19:56:16.704490 containerd[1753]: time="2025-02-13T19:56:16.704352934Z" level=info msg="shim disconnected" id=5c5b6fe348a883afe4307ee2aad9f27fbaf850b2e598de473534dffa2029856d namespace=k8s.io
Feb 13 19:56:16.704490 containerd[1753]: time="2025-02-13T19:56:16.704409094Z" level=warning msg="cleaning up after shim disconnected" id=5c5b6fe348a883afe4307ee2aad9f27fbaf850b2e598de473534dffa2029856d namespace=k8s.io
Feb 13 19:56:16.704490 containerd[1753]: time="2025-02-13T19:56:16.704417854Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:56:17.001785 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c5b6fe348a883afe4307ee2aad9f27fbaf850b2e598de473534dffa2029856d-rootfs.mount: Deactivated successfully.
Feb 13 19:56:17.583904 containerd[1753]: time="2025-02-13T19:56:17.583720992Z" level=info msg="CreateContainer within sandbox \"11a91b85deb926ce6402974748381e90fe21902c7cc50ea6c806d6cb5c4bc229\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 19:56:17.634658 containerd[1753]: time="2025-02-13T19:56:17.634542716Z" level=info msg="CreateContainer within sandbox \"11a91b85deb926ce6402974748381e90fe21902c7cc50ea6c806d6cb5c4bc229\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c8e99428c3bf7b404db9fd351e43e42911e816625f584f08f24cbb144e0454b9\""
Feb 13 19:56:17.636457 containerd[1753]: time="2025-02-13T19:56:17.635503357Z" level=info msg="StartContainer for \"c8e99428c3bf7b404db9fd351e43e42911e816625f584f08f24cbb144e0454b9\""
Feb 13 19:56:17.664843 systemd[1]: Started cri-containerd-c8e99428c3bf7b404db9fd351e43e42911e816625f584f08f24cbb144e0454b9.scope - libcontainer container c8e99428c3bf7b404db9fd351e43e42911e816625f584f08f24cbb144e0454b9.
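The clean-cilium-state and cilium-agent entries complete the fixed sequence that began with mount-cgroup: the same create/start pair runs once per container name, in order. Continuing the earlier `startContainerInSandbox` sketch, the whole chain is just a loop (the names are taken from this log; `ctx`, `rs`, `sandboxID`, and `sandboxCfg` remain assumed):

```go
// The container names observed for cilium-7z2t2, in start order; the first
// four are init containers that run to completion one after another, while
// cilium-agent, the last, stays running.
names := []string{"mount-cgroup", "apply-sysctl-overwrites", "mount-bpf-fs",
	"clean-cilium-state", "cilium-agent"}
for _, name := range names {
	// The real kubelet also waits for each init container to exit with
	// status 0 before creating the next one; that wait is elided here.
	if _, err := startContainerInSandbox(ctx, rs, sandboxID, sandboxCfg, name); err != nil {
		return err
	}
}
```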
Feb 13 19:56:17.703269 containerd[1753]: time="2025-02-13T19:56:17.703224763Z" level=info msg="StartContainer for \"c8e99428c3bf7b404db9fd351e43e42911e816625f584f08f24cbb144e0454b9\" returns successfully"
Feb 13 19:56:18.263660 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 19:56:18.604090 kubelet[3389]: I0213 19:56:18.603925 3389 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7z2t2" podStartSLOduration=5.603906268 podStartE2EDuration="5.603906268s" podCreationTimestamp="2025-02-13 19:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:56:18.602753066 +0000 UTC m=+183.579497394" watchObservedRunningTime="2025-02-13 19:56:18.603906268 +0000 UTC m=+183.580650476"
Feb 13 19:56:19.492950 kubelet[3389]: I0213 19:56:19.491832 3389 setters.go:602] "Node became not ready" node="ci-4230.0.1-a-4092b3335a" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:56:19Z","lastTransitionTime":"2025-02-13T19:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 19:56:21.150443 systemd-networkd[1346]: lxc_health: Link UP
Feb 13 19:56:21.163178 systemd-networkd[1346]: lxc_health: Gained carrier
Feb 13 19:56:22.274797 systemd-networkd[1346]: lxc_health: Gained IPv6LL
Feb 13 19:56:26.184120 systemd[1]: run-containerd-runc-k8s.io-c8e99428c3bf7b404db9fd351e43e42911e816625f584f08f24cbb144e0454b9-runc.thMJAe.mount: Deactivated successfully.
Feb 13 19:56:28.502331 sshd[5360]: Connection closed by 10.200.16.10 port 54404
Feb 13 19:56:28.502230 sshd-session[5356]: pam_unix(sshd:session): session closed for user core
Feb 13 19:56:28.505257 systemd-logind[1717]: Session 28 logged out. Waiting for processes to exit.
Feb 13 19:56:28.506605 systemd[1]: sshd@25-10.200.20.12:22-10.200.16.10:54404.service: Deactivated successfully.
Feb 13 19:56:28.508533 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 19:56:28.510209 systemd-logind[1717]: Removed session 28.
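The podStartSLOduration in the latency-tracker entry is simply the observed running time minus podCreationTimestamp; with both pull timestamps at the zero time (no image pulls were needed), the whole 5.6 s is sandbox plus container start time. The arithmetic, using the watchObservedRunningTime printed above:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout accepts an optional fractional second, matching both timestamps.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-02-13 19:56:13 +0000 UTC")
	running, _ := time.Parse(layout, "2025-02-13 19:56:18.603906268 +0000 UTC")
	// Prints 5.603906268s, the podStartSLOduration the kubelet reported.
	fmt.Println(running.Sub(created))
}
```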